Active filter: int4
ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g • Image-Text-to-Text • 5B params • 15.3k downloads • 42 likes
alecccdd/moondream3-preview-4bit • Image-Text-to-Text • 246 downloads • 2 likes
huawei-csl/Kimi-Linear-48B-A3B-Instruct-4bit-SINQ • Text Generation • 27B params • 37 downloads • 2 likes
huawei-csl/Qwen3-Next-80B-A3B-Instruct-4bit-SINQ • Text Generation • 307 downloads • 1 like
Advantech-EIOT/intel_llama-2-chat-7b • Text Generation • 9 downloads
RedHatAI/zephyr-7b-beta-marlin • Text Generation • 1B params • 25 downloads
RedHatAI/TinyLlama-1.1B-Chat-v1.0-marlin • Text Generation • 0.3B params • 2.19k downloads • 2 likes
RedHatAI/OpenHermes-2.5-Mistral-7B-marlin • Text Generation • 1B params • 93 downloads • 2 likes
RedHatAI/Nous-Hermes-2-Yi-34B-marlin • Text Generation • 5B params • 16 downloads • 5 likes
ecastera/ecastera-eva-westlake-7b-spanish-int4-gguf • 7B params • 23 downloads • 2 likes
softmax/Llama-2-70b-chat-hf-marlin • Text Generation • 10B params • 6 downloads
softmax/falcon-180B-chat-marlin • Text Generation • 26B params • 9 downloads
study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4 • Text Generation • 2B params • 4 downloads
study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4 • Text Generation • 11B params • 6 downloads • 6 likes
study-hjt/Meta-Llama-3-70B-Instruct-AWQ • Text Generation • 11B params • 6 downloads
study-hjt/Qwen1.5-110B-Chat-GPTQ-Int4 • Text Generation • 17B params • 15 downloads • 2 likes
study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 • Text Generation • 2B params • 5 downloads
study-hjt/Qwen1.5-110B-Chat-AWQ • Text Generation • 17B params • 6 downloads
modelscope/Yi-1.5-34B-Chat-AWQ • Text Generation • 5B params • 31 downloads • 1 like
modelscope/Yi-1.5-6B-Chat-GPTQ • Text Generation • 1B params • 8 downloads
modelscope/Yi-1.5-6B-Chat-AWQ • Text Generation • 1B params • 10 downloads
modelscope/Yi-1.5-9B-Chat-GPTQ • Text Generation • 2B params • 7 downloads • 1 like
modelscope/Yi-1.5-9B-Chat-AWQ • Text Generation • 2B params • 43 downloads
modelscope/Yi-1.5-34B-Chat-GPTQ • Text Generation • 5B params • 7 downloads • 1 like
jojo1899/Phi-3-mini-128k-instruct-ov-int4 • Text Generation • 20 downloads
jojo1899/Llama-2-13b-chat-hf-ov-int4 • Text Generation • 15 downloads
jojo1899/Mistral-7B-Instruct-v0.2-ov-int4 • Text Generation • 14 downloads
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 2B params • 35 downloads • 6 likes
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 3B params • 40 downloads • 5 likes
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 2B params • 124 downloads • 4 likes
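Most of the GPTQ- and AWQ-quantized checkpoints above can be loaded directly with Hugging Face transformers, which reads the quantization settings from the checkpoint's own config. Below is a minimal sketch, assuming a CUDA GPU and the GPTQ kernels (the auto-gptq or gptqmodel package) installed alongside transformers; the chosen model id is just one example from the listing.

```python
# Minimal sketch: load an int4 GPTQ checkpoint from the listing and run a short generation.
# Assumes: CUDA GPU, transformers with GPTQ support (auto-gptq or gptqmodel installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "study-hjt/Meta-Llama-3-8B-Instruct-GPTQ-Int4"  # example repo from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config ships inside the checkpoint, so no extra quantization
# arguments are needed here; device_map="auto" places the packed weights on the GPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What does int4 quantization trade off?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The GGUF and OpenVINO (`-ov-int4`) entries target other runtimes (llama.cpp and OpenVINO respectively) and are not loaded this way.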