Usage of GGUF and finetuned LLaMA Models

Are we allowed to use GGUF-quantized LLaMA3 models found on Hugging Face? The guidelines initially said they can be used, but I was curious whether that also applies to LLaMA3 models from other users (besides TheBloke).

My second question is about already finetuned variants of LLaMA3: are we allowed to use them?


I have the same question, @mohanty.