Usage of GGUF and fine-tuned LLaMA models

Are we allowed to use GGUF-quantized LLaMA 3 models found on Hugging Face? The guidelines initially said that https://huggingface.co/TheBloke/Llama-2-70B-GGML can be used, but I was curious whether the same applies to LLaMA 3 quantizations uploaded by other users (besides TheBloke) too.

My second question is about fine-tuned variants of LLaMA 3, such as https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b. Are we allowed to use those as well?