- ggml-org/Ministral-3-3B-Reasoning-2512-GGUF • Image-Text-to-Text • 3B • 362 downloads • 1 like
- ggml-org/Ministral-3-8B-Reasoning-2512-GGUF • Image-Text-to-Text • 8B • 351 downloads
- ggml-org/Ministral-3-14B-Reasoning-2512-GGUF • Image-Text-to-Text • 14B • 722 downloads • 2 likes
- ggml-org/Ministral-3-3B-Instruct-2512-GGUF • Image-Text-to-Text • 3B • 1.16k downloads • 2 likes

Vision and audio models compatible with llama-server and llama-mtmd-cli
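
As a rough illustration of how the multimodal repos listed above are typically consumed, the sketch below fetches a model file and its multimodal projector from one of the Ministral repos using huggingface_hub. The exact filenames inside the repo are not given on this page, so they are discovered at runtime rather than hard-coded, and the llama-server / llama-mtmd-cli flags mentioned in the comments (-m, --mmproj) are assumptions about common llama.cpp usage, not guarantees for every build.

```python
# Sketch: fetch a vision GGUF plus its multimodal projector from one of the
# repos listed above. Filenames are looked up at runtime because the exact
# names inside each repo are not listed here (assumption: the projector
# file name contains "mmproj").
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ggml-org/Ministral-3-3B-Instruct-2512-GGUF"

files = list_repo_files(repo_id)
model_file = next(f for f in files if f.endswith(".gguf") and "mmproj" not in f.lower())
mmproj_file = next(f for f in files if f.endswith(".gguf") and "mmproj" in f.lower())

model_path = hf_hub_download(repo_id, model_file)
mmproj_path = hf_hub_download(repo_id, mmproj_file)

# These paths would then be handed to llama-server or llama-mtmd-cli,
# e.g. (assumed flags): llama-server -m <model_path> --mmproj <mmproj_path>
print(model_path)
print(mmproj_path)
```
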
Adapters extracted from fine-tuned models using mergekit-extract-lora (see the sketch after this list):
- ggml-org/LoRA-Llama-3-Instruct-abliteration-8B-F16-GGUF • 88.1M • 50 downloads
- ggml-org/LoRA-Qwen2.5-1.5B-Instruct-abliterated-F16-GGUF • 93.6M • 35 downloads • 2 likes
- ggml-org/LoRA-Qwen2.5-3B-Instruct-abliterated-F16-GGUF • 0.1B • 27 downloads • 1 like
- ggml-org/LoRA-Qwen2.5-7B-Instruct-abliterated-v3-F16-GGUF • 90.9M • 32 downloads • 3 likes
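
Since these adapters are distributed as ordinary GGUF files, they can be fetched the same way as a full model. The sketch below downloads one adapter with huggingface_hub; the filename inside the repo is not given here, so it is looked up by extension, and the --lora flag mentioned in the comment reflects common llama.cpp usage for GGUF adapters and should be read as an assumption, not a guarantee.

```python
# Sketch: download one of the extracted LoRA adapters listed above.
# The exact .gguf filename inside the repo is not listed here, so it is
# looked up instead of hard-coded.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ggml-org/LoRA-Qwen2.5-1.5B-Instruct-abliterated-F16-GGUF"

adapter_file = next(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
adapter_path = hf_hub_download(repo_id, adapter_file)

# The adapter would typically be applied on top of the matching base model,
# e.g. (assumed flag): llama-server -m <base>.gguf --lora <adapter_path>
print(adapter_path)
```
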
Collection of models for Gemma 3 270M.
Voice Activity Detection (VAD) models for whisper.cpp.
Models used for presets in llama.cpp.
Recommended models for the llama.vim and llama.vscode plugins (see the sketch after this list):
- ggml-org/Qwen2.5-Coder-0.5B-Q8_0-GGUF • Text Generation • 0.5B • 1.66k downloads • 6 likes
- ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF • Text Generation • 2B • 5.02k downloads • 11 likes
- ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF • Text Generation • 3B • 2.66k downloads • 5 likes
- ggml-org/Qwen2.5-Coder-7B-Q8_0-GGUF • Text Generation • 8B • 3.37k downloads • 6 likes
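
Both editor plugins talk to a locally running llama-server backed by one of the coder models above. The sketch below downloads one of the listed quantizations and launches a server for the plugin to connect to; the server flags (-m, --port) and the port number are assumptions chosen for illustration and may need adjusting to match your llama.cpp build and plugin configuration.

```python
# Sketch: fetch one of the recommended coder models and start a local
# llama-server for llama.vim / llama.vscode to connect to.
import subprocess

from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ggml-org/Qwen2.5-Coder-1.5B-Q8_0-GGUF"

# The exact filename inside the repo is not listed here, so look it up.
gguf_file = next(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
model_path = hf_hub_download(repo_id, gguf_file)

# Assumed llama-server invocation: -m and --port follow common llama.cpp
# usage, and 8012 is an arbitrary local port for the editor plugin to target.
subprocess.run(["llama-server", "-m", model_path, "--port", "8012"])
```
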