Active filters: Qwen3
All results are tagged Text Generation. Parameter, download, and like counts are as listed; cells the capture did not preserve are marked "-".

| Model | Params | Downloads | Likes |
| --- | --- | --- | --- |
| NVFP4/Qwen3-Coder-30B-A3B-Instruct-FP4 | 16B | 23.9k | 15 |
| nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4 | 241B | 1.43k | 6 |
| nvidia/Qwen3-235B-A22B-Instruct-2507-NVFP4 | 120B | 4.07k | 5 |
| - | - | 94 | 2 |
| QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix | 248B | 662 | 4 |
| - | 5B | 33.6k | 15 |
| litert-community/Qwen3-0.6B | - | 2.72k | 10 |
| nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 | - | 69.4k | 33 |
| DavidAU/Qwen3-8B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF | 8B | 11.2k | 33 |
| DavidAU/Qwen3-24B-A4B-Freedom-Thinking-Abliterated-Heretic-NEO-Imatrix-GGUF | 17B | 3.18k | 25 |
| DavidAU/Qwen3-4B-Gemini-TripleX-High-Reasoning-Thinking-Heretic-Uncensored-GGUF | 4B | 5.19k | 29 |
| Rttrfygguh/DAN-Qwen3-1.7B | 2B | 43 | 1 |
| JunHowie/Qwen3-0.6B-GPTQ-Int4 | 0.6B | 153 | 1 |
| JunHowie/Qwen3-0.6B-GPTQ-Int8 | 0.6B | 17 | - |
| JunHowie/Qwen3-1.7B-GPTQ-Int4 | 2B | 416 | 1 |
| JunHowie/Qwen3-1.7B-GPTQ-Int8 | 2B | 1 | - |
| JunHowie/Qwen3-32B-GPTQ-Int4 | 33B | 3.92k | 4 |
| JunHowie/Qwen3-32B-GPTQ-Int8 | 33B | 1.74k | 4 |
| JunHowie/Qwen3-30B-A3B-GPTQ-Int4 | 5B | 17 | 1 |
| - | - | 80 | - |
| JunHowie/Qwen3-14B-GPTQ-Int8 | 15B | 410 | 1 |
| JunHowie/Qwen3-14B-GPTQ-Int4 | 15B | 2.73k | 4 |
| JunHowie/Qwen3-8B-GPTQ-Int8 | 8B | 120 | - |
| JunHowie/Qwen3-8B-GPTQ-Int4 | 8B | 1.15k | 4 |
| - | - | 35 | 3 |
| JunHowie/Qwen3-4B-GPTQ-Int4 | 4B | 4.82k | 1 |
| JunHowie/Qwen3-4B-GPTQ-Int8 | 4B | 48 | - |
| prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF | 4B | 15 | - |
| steampunque/Qwen3-8B-MP-GGUF | 8B | 51 | - |
| UnfilteredAI/DAN-Qwen3-1.7B | 2B | 1.08k | 32 |
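Several of the repos above follow Qwen3's MoE naming scheme, where a pair like `30B-A3B` denotes roughly 30B total parameters with about 3B active per token. A minimal sketch of extracting both figures from a repo id, using a hypothetical helper `parse_moe_size` (not part of any library here):

```python
import re

def parse_moe_size(model_name: str):
    """Extract (total_params_B, active_params_B) from a Qwen3 MoE repo id
    such as 'nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4'. Returns None
    for names without the '<total>B-A<active>B' marker (dense models)."""
    m = re.search(r"(\d+(?:\.\d+)?)B-A(\d+(?:\.\d+)?)B", model_name)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2))

print(parse_moe_size("nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4"))  # (480.0, 35.0)
print(parse_moe_size("JunHowie/Qwen3-14B-GPTQ-Int4"))                 # None
```

This is why the listed parameter counts can look smaller than the name suggests for some entries: the hub sometimes reports a count that differs from the nominal total in the repo id.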
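The download counts are abbreviated hub-style ("23.9k", "662"). A small sketch for ranking entries by downloads, assuming a hypothetical `parse_count` helper that treats a trailing "k"/"m" as thousands/millions:

```python
def parse_count(s: str) -> int:
    """Convert an abbreviated count like '23.9k' or '662' to an int."""
    s = s.strip().lower()
    if s.endswith("k"):
        return round(float(s[:-1]) * 1_000)
    if s.endswith("m"):
        return round(float(s[:-1]) * 1_000_000)
    return int(s)

# A few rows from the listing above, downloads as scraped.
downloads = {
    "nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4": "69.4k",
    "JunHowie/Qwen3-4B-GPTQ-Int4": "4.82k",
    "QuantTrio/Qwen3-235B-A22B-Instruct-2507-GPTQ-Int4-Int8Mix": "662",
}
ranked = sorted(downloads, key=lambda m: parse_count(downloads[m]), reverse=True)
print(ranked[0])  # nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4
```

`round` is used instead of `int` so that float artifacts (e.g. `23.9 * 1000` landing just under 23900) do not truncate a digit.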