# Catalogue des modèles Unsloth

A directory of Unsloth LLMs covering all of our [Dynamic](https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs) GGUF, 4-bit, and 16-bit models on Hugging Face.

{% tabs %}
{% tab title="• GGUF + 4-bit" %} <a href="#qwen-models" class="button secondary">Qwen</a><a href="#deepseek-models" class="button secondary">DeepSeek</a><a href="#gemma-models" class="button secondary">Gemma</a><a href="#llama-models" class="button secondary">Llama</a><a href="#mistral-models" class="button secondary">Mistral</a><a href="https://unsloth.ai/docs/get-started/unsloth-model-catalog#glm-models" class="button secondary">GLM</a>

**GGUFs** let you run models in tools such as [**Unsloth Studio**](https://unsloth.ai/docs/fr/nouveau/studio)✨, Ollama, and llama.cpp.\
**Instruct (4-bit)** safetensors can be used for inference or for fine-tuning via Unsloth.
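The Hugging Face links in the tables below follow a consistent repository-naming scheme, so repo IDs can be built programmatically when scripting downloads. A minimal sketch (the `unsloth_repo` helper is hypothetical, and the pattern is inferred from the links in this catalog, so check the table before relying on a generated ID):

```python
# Build Hugging Face repo IDs following the naming pattern used by the
# Unsloth uploads in this catalog (pattern inferred from the table links):
#   GGUF quantizations:         unsloth/<model>-GGUF
#   Instruct 4-bit safetensors: unsloth/<model>-unsloth-bnb-4bit

def unsloth_repo(model: str, fmt: str = "gguf") -> str:
    """Return the Hugging Face repo ID for a model name and format."""
    suffixes = {"gguf": "GGUF", "bnb-4bit": "unsloth-bnb-4bit"}
    return f"unsloth/{model}-{suffixes[fmt]}"

print(unsloth_repo("gpt-oss-20b"))              # unsloth/gpt-oss-20b-GGUF
print(unsloth_repo("gpt-oss-20b", "bnb-4bit"))  # unsloth/gpt-oss-20b-unsloth-bnb-4bit
```

Note that a few repos (e.g. the NVFP4 uploads) deviate from this pattern, so it is only a convenience for the common cases.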

#### **New recommended models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [**Qwen3.6**](https://unsloth.ai/docs/fr/modeles/qwen3.6) | 35B-A3B | [link](https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF) | — |
| [**Gemma 4**](https://unsloth.ai/docs/fr/modeles/gemma-4) | 26B-A4B | [link](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF) | — |
| | 31B | [link](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-31B-it-unsloth-bnb-4bit) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-E4B-it-unsloth-bnb-4bit) |
| | E2B | [link](https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-E2B-it-unsloth-bnb-4bit) |
| [**Qwen3.5**](https://github.com/unslothai/docs/blob/main/models/qwen3.5) | 35B-A3B | [link](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF) | — |
| | 27B | [link](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF) | — |
| | 122B-A10B | [link](https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF) | — |
| | 0.8B | [link](https://huggingface.co/unsloth/Qwen3.5-0.8B-GGUF) | — |
| | 2B | [link](https://huggingface.co/unsloth/Qwen3.5-2B-GGUF) | — |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3.5-4B-GGUF) | — |
| | 9B | [link](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF) | — |
| | 397B-A17B | [link](https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF) | — |
| **Qwen3** | [Coder-Next](https://unsloth.ai/docs/fr/modeles/qwen3-coder-next) | [link](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF) | — |
| NVIDIA Nemotron 3 | [Super-120B-A12B](https://unsloth.ai/docs/fr/modeles/nemotron-3/nemotron-3-super) | [link](https://huggingface.co/unsloth/NVIDIA-Nemotron-3-Super-120B-A12B-GGUF) | [link](https://huggingface.co/unsloth/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4) |
| | [Nano-4B](https://unsloth.ai/docs/fr/modeles/nemotron-3) | [link](https://huggingface.co/unsloth/NVIDIA-Nemotron-3-Nano-4B-GGUF) | — |
| **GLM** | [4.7-Flash](https://unsloth.ai/docs/fr/modeles/glm-4.7-flash) | [link](https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF) | — |
| | [5](https://unsloth.ai/docs/fr/modeles/tutorials/glm-5) | [link](https://huggingface.co/unsloth/GLM-5-GGUF) | — |
| **Kimi** | [K2.5](https://unsloth.ai/docs/fr/modeles/kimi-k2.5) | [link](https://huggingface.co/unsloth/Kimi-K2.5-GGUF) | — |
| [**gpt-oss**](https://unsloth.ai/docs/fr/modeles/gpt-oss-how-to-run-and-fine-tune) | 120B | [link](https://huggingface.co/unsloth/gpt-oss-120b-GGUF) | [link](https://huggingface.co/unsloth/gpt-oss-120b-unsloth-bnb-4bit) |
| | 20B | [link](https://huggingface.co/unsloth/gpt-oss-20b-GGUF) | [link](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit) |
| **MiniMax** | [M2.5](https://unsloth.ai/docs/fr/modeles/tutorials/minimax-m25) | [link](https://huggingface.co/unsloth/MiniMax-M2.5-GGUF) | — |
| NVIDIA [Nemotron 3](https://unsloth.ai/docs/fr/modeles/nemotron-3) | 30B | [link](https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF) | — |
| [**Qwen-Image**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen-image-2512) | 2512 | [link](https://huggingface.co/unsloth/Qwen-Image-2512-GGUF) | — |
| | Edit-2511 | [link](https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF) | — |
| [**Ministral 3**](https://unsloth.ai/docs/fr/modeles/tutorials/ministral-3) | 3B | [Instruct](https://huggingface.co/unsloth/Ministral-3-3B-Instruct-2512-GGUF) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-3B-Reasoning-2512-GGUF) | [Instruct](https://huggingface.co/unsloth/Ministral-3-3B-Instruct-2512-unsloth-bnb-4bit) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-3B-Reasoning-2512-unsloth-bnb-4bit) |
| | 8B | [Instruct](https://huggingface.co/unsloth/Ministral-3-8B-Instruct-2512-GGUF) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-8B-Reasoning-2512-GGUF) | [Instruct](https://huggingface.co/unsloth/Ministral-3-8B-Instruct-2512-unsloth-bnb-4bit) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-8B-Reasoning-2512-unsloth-bnb-4bit) |
| | 14B | [Instruct](https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512-GGUF) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF) | [Instruct](https://huggingface.co/unsloth/Ministral-3-14B-Instruct-2512-unsloth-bnb-4bit) • [Reasoning](https://huggingface.co/unsloth/Ministral-3-14B-Reasoning-2512-unsloth-bnb-4bit) |
| [**Devstral 2**](https://unsloth.ai/docs/fr/modeles/tutorials/devstral-2)                                                     | 24B                                                                               | [lien](https://huggingface.co/unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF)                                                                                     | —                                                                                                                                                                                         |
|                                                                                                                               | 123B                                                                              | [lien](https://huggingface.co/unsloth/Devstral-2-123B-Instruct-2512-GGUF)                                                                                          | —                                                                                                                                                                                         |
| **Mistral Large 3**                                                                                                           | 675B                                                                              | [lien](https://huggingface.co/unsloth/Mistral-Large-3-675B-Instruct-2512-GGUF)                                                                                     | [lien](https://huggingface.co/unsloth/Mistral-Large-3-675B-Instruct-2512-NVFP4)                                                                                                           |
| [**Qwen3-Next**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-next)                                                     | 80B-A3B-Instruct                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF)                                                                                            | [lien](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-bnb-4bit/)                                                                                                              |
|                                                                                                                               | 80B-A3B-Thinking                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF)                                                                                            | —                                                                                                                                                                                         |
| [**Qwen3-VL**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune) | 2B-Instruct                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 2B-Thinking                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-2B-Thinking-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-2B-Thinking-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 4B-Instruct                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 4B-Thinking                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-4B-Thinking-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-4B-Thinking-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 8B-Instruct                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 8B-Thinking                                                                       | [lien](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-GGUF)                                                                                                   | [lien](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-unsloth-bnb-4bit)                                                                                                              |
|                                                                                                                               | 30B-A3B-Instruct                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF)                                                                                              | —                                                                                                                                                                                         |
|                                                                                                                               | 30B-A3B-Thinking                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Thinking-GGUF)                                                                                              | —                                                                                                                                                                                         |
|                                                                                                                               | 32B-Instruct                                                                      | [lien](https://huggingface.co/unsloth/Qwen3-VL-32B-Instruct-GGUF)                                                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-VL-32B-Instruct-unsloth-bnb-4bit)                                                                                                             |
|                                                                                                                               | 32B-Thinking                                                                      | [lien](https://huggingface.co/unsloth/Qwen3-VL-32B-Thinking-GGUF)                                                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-VL-32B-Thinking-unsloth-bnb-4bit)                                                                                                             |
|                                                                                                                               | 235B-A22B-Instruct                                                                | [lien](https://huggingface.co/unsloth/Qwen3-VL-235B-A22B-Instruct-GGUF)                                                                                            | —                                                                                                                                                                                         |
|                                                                                                                               | 235B-A22B-Thinking                                                                | [lien](https://huggingface.co/unsloth/Qwen3-VL-235B-A22B-Thinking-GGUF)                                                                                            | —                                                                                                                                                                                         |
| [**Qwen3-2507**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-next)                                                     | 30B-A3B-Instruct                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF)                                                                                            | —                                                                                                                                                                                         |
|                                                                                                                               | 30B-A3B-Thinking                                                                  | [lien](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF)                                                                                            | —                                                                                                                                                                                         |
|                                                                                                                               | 235B-A22B-Instruct                                                                | [lien](https://huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF/)                                                                                         | —                                                                                                                                                                                         |
| [**Qwen3-Coder**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-coder-how-to-run-locally)                                | 30B-A3B                                                                           | [lien](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF)                                                                                           | —                                                                                                                                                                                         |
| [**GLM**](https://unsloth.ai/docs/fr/modeles/tutorials/glm-4.6-how-to-run-locally) | 4.7 | [link](https://huggingface.co/unsloth/GLM-4.7-GGUF) | — |
| | 4.6V-Flash | [link](https://huggingface.co/unsloth/GLM-4.6V-Flash-GGUF) | — |
| [**DeepSeek-V3.1**](https://unsloth.ai/docs/fr/modeles/tutorials/deepseek-v3.1-how-to-run-locally) | Terminus | [link](https://huggingface.co/unsloth/DeepSeek-V3.1-Terminus-GGUF) | — |
| | V3.1 | [link](https://huggingface.co/unsloth/DeepSeek-V3.1-GGUF) | — |
| **Granite-4.0** | H-Small | [link](https://huggingface.co/unsloth/granite-4.0-h-small-GGUF) | [link](https://huggingface.co/unsloth/granite-4.0-h-small-unsloth-bnb-4bit) |
| **Kimi-K2** | Thinking | [link](https://huggingface.co/unsloth/Kimi-K2-Thinking-GGUF) | — |
| | 0905 | [link](https://huggingface.co/unsloth/Kimi-K2-Instruct-0905-GGUF) | — |
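The GGUF links above can be pulled straight into Ollama or llama.cpp by repo id. A minimal sketch, assuming a local install of either tool; the `Q4_K_M` tag is an assumption, so check each repo's file list for the quants actually uploaded:

```shell
# Ollama: pull and run a GGUF directly from Hugging Face (quant tag optional)
ollama run hf.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF:Q4_K_M

# llama.cpp: the same repo via the -hf flag
llama-cli -hf unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF
```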

#### **DeepSeek models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| ----- | ------- | ---- | ---------------- |
| **DeepSeek-V3.1** | Terminus | [link](https://huggingface.co/unsloth/DeepSeek-V3.1-Terminus-GGUF) | — |
| | V3.1 | [link](https://huggingface.co/unsloth/DeepSeek-V3.1-GGUF) | — |
| **DeepSeek-V3** | V3-0324 | [link](https://huggingface.co/unsloth/DeepSeek-V3-0324-GGUF) | — |
| | V3 | [link](https://huggingface.co/unsloth/DeepSeek-V3-GGUF) | — |
| **DeepSeek-R1** | R1-0528 | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528-GGUF) | — |
| | R1-0528-Qwen3-8B | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit) |
| | R1 | [link](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) | — |
| | R1 Zero | [link](https://huggingface.co/unsloth/DeepSeek-R1-Zero-GGUF) | — |
| | Distill Llama 3.1 8B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit) |
| | Distill Llama 3.3 70B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-bnb-4bit) |
| | Distill Qwen 2.5 1.5B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B-unsloth-bnb-4bit) |
| | Distill Qwen 2.5 7B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B-unsloth-bnb-4bit) |
| | Distill Qwen 2.5 14B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit) |
| | Distill Qwen 2.5 32B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF) | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit) |

#### **Llama models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| ----- | ------- | ---- | ---------------- |
| **Llama 4** | Scout 17B-16E | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit) |
| | Maverick 17B-128E | [link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF) | — |
| **Llama 3.3** | 70B | [link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit) |
| **Llama 3.2** | 1B | [link](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-bnb-4bit) |
| | 3B | [link](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-bnb-4bit) |
| | 11B Vision | — | [link](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct-unsloth-bnb-4bit) |
| | 90B Vision | — | [link](https://huggingface.co/unsloth/Llama-3.2-90B-Vision-Instruct-bnb-4bit) |
| **Llama 3.1** | 8B | [link](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit) |
| | 70B | — | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-Instruct-bnb-4bit) |
| | 405B | — | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-405B-Instruct-bnb-4bit) |
| **Llama 3** | 8B | — | [link](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) |
| | 70B | — | [link](https://huggingface.co/unsloth/llama-3-70b-bnb-4bit) |
| **Llama 2** | 7B | — | [link](https://huggingface.co/unsloth/llama-2-7b-chat-bnb-4bit) |
| | 13B | — | [link](https://huggingface.co/unsloth/llama-2-13b-bnb-4bit) |
| **CodeLlama** | 7B | — | [link](https://huggingface.co/unsloth/codellama-7b-bnb-4bit) |
| | 13B | — | [link](https://huggingface.co/unsloth/codellama-13b-bnb-4bit) |
| | 34B | — | [link](https://huggingface.co/unsloth/codellama-34b-bnb-4bit) |

#### **Gemma models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| ----- | ------- | ---- | ---------------- |
| **Gemma 4** | E2B | [link](https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-E2B-it-unsloth-bnb-4bit) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-E4B-it-unsloth-bnb-4bit) |
| | 26B-A4B | [link](https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF) | — |
| | 31B | [link](https://huggingface.co/unsloth/gemma-4-31B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-4-31B-it-unsloth-bnb-4bit) |
| **FunctionGemma** | 270M | [link](https://huggingface.co/unsloth/functiongemma-270m-it-GGUF) | — |
| **Gemma 3n** | E2B | [link](https://huggingface.co/unsloth/gemma-3n-E2B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-3n-E4B-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit) |
| **Gemma 3** | 270M | [link](https://huggingface.co/unsloth/gemma-3-270m-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3-270m-it) |
| | 1B | [link](https://huggingface.co/unsloth/gemma-3-1b-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3-1b-it-unsloth-bnb-4bit) |
| | 4B | [link](https://huggingface.co/unsloth/gemma-3-4b-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit) |
| | 12B | [link](https://huggingface.co/unsloth/gemma-3-12b-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3-12b-it-unsloth-bnb-4bit) |
| | 27B | [link](https://huggingface.co/unsloth/gemma-3-27b-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-3-27b-it-unsloth-bnb-4bit) |
| **MedGemma** | 4B (vision) | [link](https://huggingface.co/unsloth/medgemma-4b-it-GGUF) | [link](https://huggingface.co/unsloth/medgemma-4b-it-unsloth-bnb-4bit) |
| | 27B (vision) | [link](https://huggingface.co/unsloth/medgemma-27b-it-GGUF) | [link](https://huggingface.co/unsloth/medgemma-27b-text-it-unsloth-bnb-4bit) |
| **Gemma 2** | 2B | [link](https://huggingface.co/unsloth/gemma-2-it-GGUF) | [link](https://huggingface.co/unsloth/gemma-2-2b-it-bnb-4bit) |
| | 9B | — | [link](https://huggingface.co/unsloth/gemma-2-9b-it-bnb-4bit) |
| | 27B | — | [link](https://huggingface.co/unsloth/gemma-2-27b-it-bnb-4bit) |

#### **Qwen models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| ----- | ------- | ---- | ---------------- |
| [**Qwen3.5**](https://github.com/unslothai/docs/blob/main/models/qwen3.5) | 35B-A3B | [link](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF) | — |
| | 27B | [link](https://huggingface.co/unsloth/Qwen3.5-27B-GGUF) | — |
| | 122B-A10B | [link](https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF) | — |
| | 0.8B | [link](https://huggingface.co/unsloth/Qwen3.5-0.8B-GGUF) | — |
| | 2B | [link](https://huggingface.co/unsloth/Qwen3.5-2B-GGUF) | — |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3.5-4B-GGUF) | — |
| | 9B | [link](https://huggingface.co/unsloth/Qwen3.5-9B-GGUF) | — |
| | 397B-A17B | [link](https://huggingface.co/unsloth/Qwen3.5-397B-A17B-GGUF) | — |
| **Qwen3** | [Coder-Next](https://unsloth.ai/docs/fr/modeles/qwen3-coder-next) | [link](https://huggingface.co/unsloth/Qwen3-Coder-Next-GGUF) | — |
| [**Qwen-Image**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen-image-2512) | 2512 | [link](https://huggingface.co/unsloth/Qwen-Image-2512-GGUF) | — |
| | Edit-2511 | [link](https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF) | — |
| [**Qwen3-VL**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-how-to-run-and-fine-tune/qwen3-vl-how-to-run-and-fine-tune) | 2B-Instruct | [link](https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-2B-Instruct-unsloth-bnb-4bit) |
| | 2B-Thinking | [link](https://huggingface.co/unsloth/Qwen3-VL-2B-Thinking-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-2B-Thinking-unsloth-bnb-4bit) |
| | 4B-Instruct | [link](https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-unsloth-bnb-4bit) |
| | 4B-Thinking | [link](https://huggingface.co/unsloth/Qwen3-VL-4B-Thinking-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-4B-Thinking-unsloth-bnb-4bit) |
| | 8B-Instruct | [link](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit) |
| | 8B-Thinking | [link](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-unsloth-bnb-4bit) |
| **Qwen3-Coder** | 30B-A3B | [link](https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF) | — |
| | 480B-A35B | [link](https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF) | — |
| [**Qwen3-2507**](https://unsloth.ai/docs/fr/modeles/tutorials/qwen3-next) | 30B-A3B-Instruct | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-GGUF) | — |
| | 30B-A3B-Thinking | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-GGUF) | — |
| | 235B-A22B-Thinking | [link](https://huggingface.co/unsloth/Qwen3-235B-A22B-Thinking-2507-GGUF/) | — |
| | 235B-A22B-Instruct | [link](https://huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-GGUF/) | — |
| **Qwen3** | 0.6B | [link](https://huggingface.co/unsloth/Qwen3-0.6B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-0.6B-unsloth-bnb-4bit) |
| | 1.7B | [link](https://huggingface.co/unsloth/Qwen3-1.7B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-1.7B-unsloth-bnb-4bit) |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3-4B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-4B-unsloth-bnb-4bit) |
| | 8B | [link](https://huggingface.co/unsloth/Qwen3-8B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-8B-unsloth-bnb-4bit) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen3-14B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-14B-unsloth-bnb-4bit) |
| | 30B-A3B | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-bnb-4bit) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen3-32B-GGUF) | [link](https://huggingface.co/unsloth/Qwen3-32B-unsloth-bnb-4bit) |
| | 235B-A22B | [link](https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF) | — |
| **Qwen2.5 Omni** | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-3B-GGUF) | — |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-7B-GGUF) | — |
| **Qwen2.5 VL** | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct-unsloth-bnb-4bit) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct-unsloth-bnb-4bit) |
| | 72B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-72B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-VL-72B-Instruct-unsloth-bnb-4bit) |
| **Qwen2.5** | 0.5B | — | [link](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct-bnb-4bit) |
| | 1.5B | — | [link](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct-bnb-4bit) |
| | 3B | — | [link](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct-bnb-4bit) |
| | 7B | — | [link](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct-bnb-4bit) |
| | 14B | — | [link](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct-bnb-4bit) |
| | 32B | — | [link](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct-bnb-4bit) |
| | 72B | — | [link](https://huggingface.co/unsloth/Qwen2.5-72B-Instruct-bnb-4bit) |
| **Qwen2.5 Coder (128K)** | 0.5B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-Instruct-bnb-4bit) |
| | 1.5B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-bnb-4bit) |
| | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct-bnb-4bit) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-14B-Instruct-bnb-4bit) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF) | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit) |
| **QwQ** | 32B | [link](https://huggingface.co/unsloth/QwQ-32B-GGUF) | [link](https://huggingface.co/unsloth/QwQ-32B-unsloth-bnb-4bit) |
| **QVQ (Preview)** | 72B | — | [link](https://huggingface.co/unsloth/QVQ-72B-Preview-bnb-4bit) |
| **Qwen2 (chat)** | 1.5B | — | [link](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct-bnb-4bit) |
| | 7B | — | [link](https://huggingface.co/unsloth/Qwen2-7B-Instruct-bnb-4bit) |
| | 72B | — | [link](https://huggingface.co/unsloth/Qwen2-72B-Instruct-bnb-4bit) |
| **Qwen2 VL** | 2B | — | [link](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct-unsloth-bnb-4bit) |
| | 7B | — | [link](https://huggingface.co/unsloth/Qwen2-VL-7B-Instruct-unsloth-bnb-4bit) |
| | 72B | — | [link](https://huggingface.co/unsloth/Qwen2-VL-72B-Instruct-bnb-4bit) |

#### **GLM models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| --- | --- | --- | --- |
| **GLM** | [4.7-Flash](https://unsloth.ai/docs/fr/modeles/glm-4.7-flash) | [link](https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF) | — |
| | [5](https://unsloth.ai/docs/fr/modeles/tutorials/glm-5) | [link](https://huggingface.co/unsloth/GLM-5-GGUF) | — |
| | 4.6V-Flash | [link](https://huggingface.co/unsloth/GLM-4.6V-Flash-GGUF) | — |
| | 4.6 | [link](https://huggingface.co/unsloth/GLM-4.6-GGUF) | — |
| | 4.5-Air | [link](https://huggingface.co/unsloth/GLM-4.5-Air-GGUF) | — |

#### **Mistral models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| --- | --- | --- | --- |
| **Magistral** | Small (2509) | [link](https://huggingface.co/unsloth/Magistral-Small-2509-GGUF) | [link](https://huggingface.co/unsloth/Magistral-Small-2509-unsloth-bnb-4bit) |
| | Small (2507) | [link](https://huggingface.co/unsloth/Magistral-Small-2507-GGUF) | [link](https://huggingface.co/unsloth/Magistral-Small-2507-unsloth-bnb-4bit) |
| | Small (2506) | [link](https://huggingface.co/unsloth/Magistral-Small-2506-GGUF) | [link](https://huggingface.co/unsloth/Magistral-Small-2506-unsloth-bnb-4bit) |
| **Mistral Small** | 3.2-24B (2506) | [link](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF) | [link](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-unsloth-bnb-4bit) |
| | 3.1-24B (2503) | [link](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF) | [link](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-unsloth-bnb-4bit) |
| | 3-24B (2501) | [link](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501-GGUF) | [link](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501-unsloth-bnb-4bit) |
| | 22B (2409) | — | [link](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409-bnb-4bit) |
| **Devstral** | Small-24B (2507) | [link](https://huggingface.co/unsloth/Devstral-Small-2507-GGUF) | [link](https://huggingface.co/unsloth/Devstral-Small-2507-unsloth-bnb-4bit) |
| | Small-24B (2505) | [link](https://huggingface.co/unsloth/Devstral-Small-2505-GGUF) | [link](https://huggingface.co/unsloth/Devstral-Small-2505-unsloth-bnb-4bit) |
| **Pixtral** | 12B (2409) | — | [link](https://huggingface.co/unsloth/Pixtral-12B-2409-bnb-4bit) |
| **Mistral NeMo** | 12B (2407) | [link](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407-GGUF) | [link](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit) |
| **Mistral Large** | 2407 | — | [link](https://huggingface.co/unsloth/Mistral-Large-Instruct-2407-bnb-4bit) |
| **Mistral 7B** | v0.3 | — | [link](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3-bnb-4bit) |
| | v0.2 | — | [link](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2-bnb-4bit) |
| **Mixtral** | 8×7B | — | [link](https://huggingface.co/unsloth/Mixtral-8x7B-Instruct-v0.1-unsloth-bnb-4bit) |

#### **Phi models:**

| Model | Variant | GGUF | Instruct (4-bit) |
| --- | --- | --- | --- |
| **Phi-4** | Reasoning-plus | [link](https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF) | [link](https://huggingface.co/unsloth/Phi-4-reasoning-plus-unsloth-bnb-4bit) |
| | Reasoning | [link](https://huggingface.co/unsloth/Phi-4-reasoning-GGUF) | [link](https://huggingface.co/unsloth/phi-4-reasoning-unsloth-bnb-4bit) |
| | Mini-Reasoning | [link](https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF) | [link](https://huggingface.co/unsloth/Phi-4-mini-reasoning-unsloth-bnb-4bit) |
| | Phi-4 (instruct) | [link](https://huggingface.co/unsloth/phi-4-GGUF) | [link](https://huggingface.co/unsloth/phi-4-unsloth-bnb-4bit) |
| | Mini (instruct) | [link](https://huggingface.co/unsloth/Phi-4-mini-instruct-GGUF) | [link](https://huggingface.co/unsloth/Phi-4-mini-instruct-unsloth-bnb-4bit) |
| **Phi-3.5** | Mini | — | [link](https://huggingface.co/unsloth/Phi-3.5-mini-instruct-bnb-4bit) |
| **Phi-3** | Mini | — | [link](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct-bnb-4bit) |
| | Medium | — | [link](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct-bnb-4bit) |

#### **Other models (GLM, Orpheus, SmolLM, LLaVA, etc.):**

| Model | Variant | GGUF | Instruct (4-bit) |
| --- | --- | --- | --- |
| **GLM** | 4.5-Air | [link](https://huggingface.co/unsloth/GLM-4.5-Air-GGUF) | — |
| | 4.5 | [link](https://huggingface.co/unsloth/GLM-4.5-GGUF) | — |
| | 4-32B-0414 | [link](https://huggingface.co/unsloth/GLM-4-32B-0414-GGUF) | — |
| **Grok 2** | 270B | [link](https://huggingface.co/unsloth/grok-2-GGUF) | — |
| **Baidu-ERNIE** | 4.5-21B-A3B-Thinking | [link](https://huggingface.co/unsloth/ERNIE-4.5-21B-A3B-Thinking-GGUF) | — |
| **Hunyuan** | A13B | [link](https://huggingface.co/unsloth/Hunyuan-A13B-Instruct-GGUF) | — |
| **Orpheus** | 0.1-ft (3B) | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-ft-GGUF) | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit) |
| **LLaVA** | 1.5 (7B) | — | [link](https://huggingface.co/unsloth/llava-1.5-7b-hf-bnb-4bit) |
| | 1.6 Mistral (7B) | — | [link](https://huggingface.co/unsloth/llava-v1.6-mistral-7b-hf-bnb-4bit) |
| **TinyLlama** | Chat | — | [link](https://huggingface.co/unsloth/tinyllama-chat-bnb-4bit) |
| **SmolLM 2** | 135M | [link](https://huggingface.co/unsloth/SmolLM2-135M-Instruct-GGUF) | [link](https://huggingface.co/unsloth/SmolLM2-135M-Instruct-bnb-4bit) |
| | 360M | [link](https://huggingface.co/unsloth/SmolLM2-360M-Instruct-GGUF) | [link](https://huggingface.co/unsloth/SmolLM2-360M-Instruct-bnb-4bit) |
| | 1.7B | [link](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct-GGUF) | [link](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct-bnb-4bit) |
| **Zephyr-SFT** | 7B | — | [link](https://huggingface.co/unsloth/zephyr-sft-bnb-4bit) |
| **Yi** | 6B (v1.5) | — | [link](https://huggingface.co/unsloth/Yi-1.5-6B-bnb-4bit) |
| | 6B (v1.0) | — | [link](https://huggingface.co/unsloth/yi-6b-bnb-4bit) |
| | 34B (chat) | — | [link](https://huggingface.co/unsloth/yi-34b-chat-bnb-4bit) |
| | 34B (base) | — | [link](https://huggingface.co/unsloth/yi-34b-bnb-4bit) |
{% endtab %}

{% tab title="• 16-bit Instruct" %}
16-bit and 8-bit Instruct models can be used for inference or fine-tuning in [**Unsloth Studio**](https://unsloth.ai/docs/fr/nouveau/studio):
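Every repository in this catalog lives under the `unsloth` organization on Hugging Face, and the quantized variants simply append a suffix (such as `-GGUF` or `-unsloth-bnb-4bit`) to the base repo name. A minimal sketch of that URL pattern (the helper names `hub_url` and `variant` are illustrative, not part of any Unsloth API):

```python
# Build Hugging Face URLs for repos under the Unsloth organization.
# The repo names below appear in the catalog tables; the helpers
# themselves are hypothetical, shown only to document the pattern.
def hub_url(repo_name: str, org: str = "unsloth") -> str:
    """Return the Hugging Face page URL for a repo."""
    return f"https://huggingface.co/{org}/{repo_name}"

def variant(repo_name: str, suffix: str) -> str:
    """Derive a quantized-variant repo name by appending a suffix."""
    return f"{repo_name}-{suffix}"

print(hub_url("Qwen3-14B"))
# → https://huggingface.co/unsloth/Qwen3-14B
print(hub_url(variant("Qwen3-14B", "GGUF")))
# → https://huggingface.co/unsloth/Qwen3-14B-GGUF
```

Note that a few repos deviate from this pattern (for example `Meta-Llama-3.1-8B-Instruct` or lowercase `phi-4`), so the tables remain the authoritative source for exact names.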

**New models:**

| Model | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **gpt-oss** (new) | 20B | [link](https://huggingface.co/unsloth/gpt-oss-20b) |
| | 120B | [link](https://huggingface.co/unsloth/gpt-oss-120b) |
| **Gemma 3n** | E2B | [link](https://huggingface.co/unsloth/gemma-3n-E2B-it) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-3n-E4B-it) |
| **DeepSeek-R1-0528** | R1-0528-Qwen3-8B | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B) |
| | R1-0528 | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528) |
| **Mistral** | Small 3.2 24B (2506) | [link](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506) |
| | Small 3.1 24B (2503) | [link](https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503) |
| | Small 3.0 24B (2501) | [link](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) |
| | Magistral Small (2506) | [link](https://huggingface.co/unsloth/Magistral-Small-2506) |
| **Qwen 3** | 0.6B | [link](https://huggingface.co/unsloth/Qwen3-0.6B) |
| | 1.7B | [link](https://huggingface.co/unsloth/Qwen3-1.7B) |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3-4B) |
| | 8B | [link](https://huggingface.co/unsloth/Qwen3-8B) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen3-14B) |
| | 30B-A3B | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen3-32B) |
| | 235B-A22B | [link](https://huggingface.co/unsloth/Qwen3-235B-A22B) |
| **Llama 4** | Scout 17B-16E | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct) |
| | Maverick 17B-128E | [link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct) |
| **Qwen 2.5 Omni** | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-3B) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-7B) |
| **Phi-4** | Reasoning-plus | [link](https://huggingface.co/unsloth/Phi-4-reasoning-plus) |
| | Reasoning | [link](https://huggingface.co/unsloth/Phi-4-reasoning) |

**DeepSeek models:**

| Model | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **DeepSeek-V3** | V3-0324 | [link](https://huggingface.co/unsloth/DeepSeek-V3-0324) |
| | V3 | [link](https://huggingface.co/unsloth/DeepSeek-V3) |
| **DeepSeek-R1** | R1-0528 | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528) |
| | R1-0528-Qwen3-8B | [link](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B) |
| | R1 | [link](https://huggingface.co/unsloth/DeepSeek-R1) |
| | R1 Zero | [link](https://huggingface.co/unsloth/DeepSeek-R1-Zero) |
| | Distill Llama 3 8B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-8B) |
| | Distill Llama 3.3 70B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B) |
| | Distill Qwen 2.5 1.5B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-1.5B) |
| | Distill Qwen 2.5 7B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B) |
| | Distill Qwen 2.5 14B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B) |
| | Distill Qwen 2.5 32B | [link](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B) |

**Llama models:**

| Family | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **Llama 4** | Scout 17B-16E | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct) |
| | Maverick 17B-128E | [link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct) |
| **Llama 3.3** | 70B | [link](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct) |
| **Llama 3.2** | 1B | [link](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct) |
| | 3B | [link](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct) |
| | 11B Vision | [link](https://huggingface.co/unsloth/Llama-3.2-11B-Vision-Instruct) |
| | 90B Vision | [link](https://huggingface.co/unsloth/Llama-3.2-90B-Vision-Instruct) |
| **Llama 3.1** | 8B | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) |
| | 70B | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-Instruct) |
| | 405B | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-405B-Instruct) |
| **Llama 3** | 8B | [link](https://huggingface.co/unsloth/llama-3-8b-Instruct) |
| | 70B | [link](https://huggingface.co/unsloth/llama-3-70b-Instruct) |
| **Llama 2** | 7B | [link](https://huggingface.co/unsloth/llama-2-7b-chat) |

**Gemma models:**

| Model | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **Gemma 3n** | E2B | [link](https://huggingface.co/unsloth/gemma-3n-E2B-it) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-3n-E4B-it) |
| **Gemma 3** | 1B | [link](https://huggingface.co/unsloth/gemma-3-1b-it) |
| | 4B | [link](https://huggingface.co/unsloth/gemma-3-4b-it) |
| | 12B | [link](https://huggingface.co/unsloth/gemma-3-12b-it) |
| | 27B | [link](https://huggingface.co/unsloth/gemma-3-27b-it) |
| **Gemma 2** | 2B | [link](https://huggingface.co/unsloth/gemma-2-2b-it) |
| | 9B | [link](https://huggingface.co/unsloth/gemma-2-9b-it) |
| | 27B | [link](https://huggingface.co/unsloth/gemma-2-27b-it) |

**Qwen models:**

| Family | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **Qwen 3** | 0.6B | [link](https://huggingface.co/unsloth/Qwen3-0.6B) |
| | 1.7B | [link](https://huggingface.co/unsloth/Qwen3-1.7B) |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3-4B) |
| | 8B | [link](https://huggingface.co/unsloth/Qwen3-8B) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen3-14B) |
| | 30B-A3B | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen3-32B) |
| | 235B-A22B | [link](https://huggingface.co/unsloth/Qwen3-235B-A22B) |
| **Qwen 2.5 Omni** | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-3B) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-Omni-7B) |
| **Qwen 2.5 VL** | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-32B-Instruct) |
| | 72B | [link](https://huggingface.co/unsloth/Qwen2.5-VL-72B-Instruct) |
| **Qwen 2.5** | 0.5B | [link](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) |
| | 1.5B | [link](https://huggingface.co/unsloth/Qwen2.5-1.5B-Instruct) |
| | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-7B-Instruct) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen2.5-14B-Instruct) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen2.5-32B-Instruct) |
| | 72B | [link](https://huggingface.co/unsloth/Qwen2.5-72B-Instruct) |
| **Qwen 2.5 Coder 128K** | 0.5B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-0.5B-Instruct-128K) |
| | 1.5B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-1.5B-Instruct-128K) |
| | 3B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct-128K) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-128K) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-14B-Instruct-128K) |
| | 32B | [link](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-128K) |
| **QwQ** | 32B | [link](https://huggingface.co/unsloth/QwQ-32B) |
| **QVQ (Preview)** | 72B | — |
| **Qwen 2 (chat)** | 1.5B | [link](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2-7B-Instruct) |
| | 72B | [link](https://huggingface.co/unsloth/Qwen2-72B-Instruct) |
| **Qwen 2 VL** | 2B | [link](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) |
| | 7B | [link](https://huggingface.co/unsloth/Qwen2-VL-7B-Instruct) |
| | 72B | [link](https://huggingface.co/unsloth/Qwen2-VL-72B-Instruct) |

**Mistral models:**

| Model | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **Mistral** | Small 22B (2409) | [link](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) |
| | Large 2407 | [link](https://huggingface.co/unsloth/Mistral-Large-Instruct-2407) |
| | 7B v0.3 | [link](https://huggingface.co/unsloth/mistral-7b-instruct-v0.3) |
| | 7B v0.2 | [link](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) |
| **Pixtral** | 12B (2409) | [link](https://huggingface.co/unsloth/Pixtral-12B-2409) |
| **Mixtral** | 8×7B | [link](https://huggingface.co/unsloth/Mixtral-8x7B-Instruct-v0.1) |
| **Mistral NeMo** | 12B (2407) | [link](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407) |
| **Devstral** | Small 2505 | [link](https://huggingface.co/unsloth/Devstral-Small-2505) |

**Phi models:**

| Model | Variant | Instruct (16-bit) |
| --- | --- | --- |
| **Phi-4** | Reasoning-plus | [link](https://huggingface.co/unsloth/Phi-4-reasoning-plus) |
| | Reasoning | [link](https://huggingface.co/unsloth/Phi-4-reasoning) |
| | Phi-4 (core) | [link](https://huggingface.co/unsloth/Phi-4) |
| | Mini-Reasoning | [link](https://huggingface.co/unsloth/Phi-4-mini-reasoning) |
| | Mini | [link](https://huggingface.co/unsloth/Phi-4-mini) |
| **Phi-3.5** | Mini | [link](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) |
| **Phi-3** | Mini | [link](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) |
| | Medium | [link](https://huggingface.co/unsloth/Phi-3-medium-4k-instruct) |

**Text-to-speech (TTS) models:**

| Model | Instruct (16-bit) |
| --- | --- |
| Orpheus-3B (v0.1 ft) | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-ft) |
| Orpheus-3B (v0.1 pt) | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-pretrained) |
| Sesame-CSM 1B | [link](https://huggingface.co/unsloth/csm-1b) |
| Whisper Large V3 (STT) | [link](https://huggingface.co/unsloth/whisper-large-v3) |
| Llasa-TTS 1B | [link](https://huggingface.co/unsloth/Llasa-1B) |
| Spark-TTS 0.5B | [link](https://huggingface.co/unsloth/Spark-TTS-0.5B) |
| Oute-TTS 1B | [link](https://huggingface.co/unsloth/Llama-OuteTTS-1.0-1B) |
{% endtab %}

{% tab title="• 4-bit & 16-bit Base" %}
Base models are typically used for fine-tuning:

**New models:**

| Model | Variant | Base (16-bit) | Base (4-bit) |
| --- | --- | --- | --- |
| **Gemma 3n** | E2B | [link](https://huggingface.co/unsloth/gemma-3n-E2B) | [link](https://huggingface.co/unsloth/gemma-3n-E2B-unsloth-bnb-4bit) |
| | E4B | [link](https://huggingface.co/unsloth/gemma-3n-E4B) | [link](https://huggingface.co/unsloth/gemma-3n-E4B-unsloth-bnb-4bit) |
| **Qwen 3** | 0.6B | [link](https://huggingface.co/unsloth/Qwen3-0.6B-Base) | [link](https://huggingface.co/unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit) |
| | 1.7B | [link](https://huggingface.co/unsloth/Qwen3-1.7B-Base) | [link](https://huggingface.co/unsloth/Qwen3-1.7B-Base-unsloth-bnb-4bit) |
| | 4B | [link](https://huggingface.co/unsloth/Qwen3-4B-Base) | [link](https://huggingface.co/unsloth/Qwen3-4B-Base-unsloth-bnb-4bit) |
| | 8B | [link](https://huggingface.co/unsloth/Qwen3-8B-Base) | [link](https://huggingface.co/unsloth/Qwen3-8B-Base-unsloth-bnb-4bit) |
| | 14B | [link](https://huggingface.co/unsloth/Qwen3-14B-Base) | [link](https://huggingface.co/unsloth/Qwen3-14B-Base-unsloth-bnb-4bit) |
| | 30B-A3B | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base) | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base-bnb-4bit) |
| **Llama 4** | Scout 17B-16E | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E) | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-unsloth-bnb-4bit) |
| | Maverick 17B-128E | [link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E) | — |

**Llama models:**

| Model | Variant | Base (16-bit) | Base (4-bit) |
| --- | --- | --- | --- |
| **Llama 4** | Scout 17B-16E | [link](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E) | — |
| | Maverick 17B-128E | [link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E) | — |
| **Llama 3.3** | 70B | [link](https://huggingface.co/unsloth/Llama-3.3-70B) | — |
| **Llama 3.2** | 1B | [link](https://huggingface.co/unsloth/Llama-3.2-1B) | — |
| | 3B | [link](https://huggingface.co/unsloth/Llama-3.2-3B) | — |
| | 11B Vision | [link](https://huggingface.co/unsloth/Llama-3.2-11B-Vision) | — |
| | 90B Vision | [link](https://huggingface.co/unsloth/Llama-3.2-90B-Vision) | — |
| **Llama 3.1** | 8B | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) | — |
| | 70B | [link](https://huggingface.co/unsloth/Meta-Llama-3.1-70B) | — |
| **Llama 3** | 8B | [link](https://huggingface.co/unsloth/llama-3-8b) | [link](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit) |
| **Llama 2** | 7B | [link](https://huggingface.co/unsloth/llama-2-7b) | [link](https://huggingface.co/unsloth/llama-2-7b-bnb-4bit) |
| | 13B | [link](https://huggingface.co/unsloth/llama-2-13b) | [link](https://huggingface.co/unsloth/llama-2-13b-bnb-4bit) |

**Qwen models:**

| Model        | Variant  | Base (16-bit)                                             | Base (4-bit)                                                               |
| ------------ | -------- | --------------------------------------------------------- | -------------------------------------------------------------------------- |
| **Qwen 3**   | 0.6 B    | [link](https://huggingface.co/unsloth/Qwen3-0.6B-Base)    | [link](https://huggingface.co/unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit)    |
|              | 1.7 B    | [link](https://huggingface.co/unsloth/Qwen3-1.7B-Base)    | [link](https://huggingface.co/unsloth/Qwen3-1.7B-Base-unsloth-bnb-4bit)    |
|              | 4 B      | [link](https://huggingface.co/unsloth/Qwen3-4B-Base)      | [link](https://huggingface.co/unsloth/Qwen3-4B-Base-unsloth-bnb-4bit)      |
|              | 8 B      | [link](https://huggingface.co/unsloth/Qwen3-8B-Base)      | [link](https://huggingface.co/unsloth/Qwen3-8B-Base-unsloth-bnb-4bit)      |
|              | 14 B     | [link](https://huggingface.co/unsloth/Qwen3-14B-Base)     | [link](https://huggingface.co/unsloth/Qwen3-14B-Base-unsloth-bnb-4bit)     |
|              | 30B-A3B  | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base) | [link](https://huggingface.co/unsloth/Qwen3-30B-A3B-Base-unsloth-bnb-4bit) |
| **Qwen 2.5** | 0.5 B    | [link](https://huggingface.co/unsloth/Qwen2.5-0.5B)       | [link](https://huggingface.co/unsloth/Qwen2.5-0.5B-bnb-4bit)               |
|              | 1.5 B    | [link](https://huggingface.co/unsloth/Qwen2.5-1.5B)       | [link](https://huggingface.co/unsloth/Qwen2.5-1.5B-bnb-4bit)               |
|              | 3 B      | [link](https://huggingface.co/unsloth/Qwen2.5-3B)         | [link](https://huggingface.co/unsloth/Qwen2.5-3B-bnb-4bit)                 |
|              | 7 B      | [link](https://huggingface.co/unsloth/Qwen2.5-7B)         | [link](https://huggingface.co/unsloth/Qwen2.5-7B-bnb-4bit)                 |
|              | 14 B     | [link](https://huggingface.co/unsloth/Qwen2.5-14B)        | [link](https://huggingface.co/unsloth/Qwen2.5-14B-bnb-4bit)                |
|              | 32 B     | [link](https://huggingface.co/unsloth/Qwen2.5-32B)        | [link](https://huggingface.co/unsloth/Qwen2.5-32B-bnb-4bit)                |
|              | 72 B     | [link](https://huggingface.co/unsloth/Qwen2.5-72B)        | [link](https://huggingface.co/unsloth/Qwen2.5-72B-bnb-4bit)                |
| **Qwen 2**   | 1.5 B    | [link](https://huggingface.co/unsloth/Qwen2-1.5B)         | [link](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit)                 |
|              | 7 B      | [link](https://huggingface.co/unsloth/Qwen2-7B)           | [link](https://huggingface.co/unsloth/Qwen2-7B-bnb-4bit)                   |

**Gemma models:**

| Model       | Variant  | Base (16-bit)                                         | Base (4-bit)                                                           |
| ----------- | -------- | ----------------------------------------------------- | ---------------------------------------------------------------------- |
| **Gemma 3** | 1 B      | [link](https://huggingface.co/unsloth/gemma-3-1b-pt)  | [link](https://huggingface.co/unsloth/gemma-3-1b-pt-unsloth-bnb-4bit)  |
|             | 4 B      | [link](https://huggingface.co/unsloth/gemma-3-4b-pt)  | [link](https://huggingface.co/unsloth/gemma-3-4b-pt-unsloth-bnb-4bit)  |
|             | 12 B     | [link](https://huggingface.co/unsloth/gemma-3-12b-pt) | [link](https://huggingface.co/unsloth/gemma-3-12b-pt-unsloth-bnb-4bit) |
|             | 27 B     | [link](https://huggingface.co/unsloth/gemma-3-27b-pt) | [link](https://huggingface.co/unsloth/gemma-3-27b-pt-unsloth-bnb-4bit) |
| **Gemma 2** | 2 B      | [link](https://huggingface.co/unsloth/gemma-2-2b)     | —                                                                      |
|             | 9 B      | [link](https://huggingface.co/unsloth/gemma-2-9b)     | —                                                                      |
|             | 27 B     | [link](https://huggingface.co/unsloth/gemma-2-27b)    | —                                                                      |

**Mistral models:**

| Model       | Variant          | Base (16-bit)                                                      | Base (4-bit)                                                    |
| ----------- | ---------------- | ------------------------------------------------------------------ | --------------------------------------------------------------- |
| **Mistral** | Small 24B 2501   | [link](https://huggingface.co/unsloth/Mistral-Small-24B-Base-2501) | —                                                               |
|             | NeMo 12B 2407    | [link](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407)      | —                                                               |
|             | 7B v0.3          | [link](https://huggingface.co/unsloth/mistral-7b-v0.3)             | [link](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) |
|             | 7B v0.2          | [link](https://huggingface.co/unsloth/mistral-7b-v0.2)             | [link](https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit) |
|             | Pixtral 12B 2409 | [link](https://huggingface.co/unsloth/Pixtral-12B-Base-2409)       | —                                                               |

**Other models (TTS, TinyLlama):**

| Model          | Variant         | Base (16-bit)                                                    | Base (4-bit)                                                                       |
| -------------- | --------------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| **TinyLlama**  | 1.1 B (Base)    | [link](https://huggingface.co/unsloth/tinyllama)                 | [link](https://huggingface.co/unsloth/tinyllama-bnb-4bit)                          |
| **Orpheus-3b** | 0.1 pretrained  | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-pretrained) | [link](https://huggingface.co/unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit)  |
{% endtab %}

{% tab title="• FP8" %}
You can use our FP8 versions for training or for serving/deployment.

FP8 Dynamic offers slightly faster training and lower VRAM usage than FP8 Block, at a small cost in accuracy.

| Model                 | Variant            | FP8 (Dynamic / Block)                                                                                                                                   |
| --------------------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Qwen3**             | Coder-Next         | [Dynamic](https://huggingface.co/unsloth/Qwen3-Coder-Next-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Qwen3-Coder-Next-FP8)                    |
| **GLM**               | 4.7-Flash          | [Dynamic](https://huggingface.co/unsloth/GLM-4.7-Flash-FP8-Dynamic)                                                                                      |
| **Llama 3.3**         | 70B Instruct       | [Dynamic](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-FP8-Block)  |
| **Llama 3.2**         | 1B Base            | [Dynamic](https://huggingface.co/unsloth/Llama-3.2-1B-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.2-1B-FP8-Block)                      |
|                       | 1B Instruct        | [Dynamic](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-FP8-Block)    |
|                       | 3B Base            | [Dynamic](https://huggingface.co/unsloth/Llama-3.2-3B-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.2-3B-FP8-Block)                      |
|                       | 3B Instruct        | [Dynamic](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.2-3B-Instruct-FP8-Block)    |
| **Llama 3.1**         | 8B Base            | [Dynamic](https://huggingface.co/unsloth/Llama-3.1-8B-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.1-8B-FP8-Block)                      |
|                       | 8B Instruct        | [Dynamic](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.1-8B-Instruct-FP8-Block)    |
|                       | 70B Base           | [Dynamic](https://huggingface.co/unsloth/Llama-3.1-70B-FP8-Dynamic) · [Block](https://huggingface.co/unsloth/Llama-3.1-70B-FP8-Block)                    |
| **Qwen3**             | 0.6B                                                                                                                                                                                                                                                                                                                                                                                                                                                          | [FP8](https://huggingface.co/unsloth/Qwen3-0.6B-FP8)                                                                                                            |
|                       | 1.7B                                                                                                                                                                                                                                                                                                                                                                                                                                                          | [FP8](https://huggingface.co/unsloth/Qwen3-1.7B-FP8)                                                                                                            |
|                       | 4B                                                                                                                                                                                                                                                                                                                                                                                                                                                            | [FP8](https://huggingface.co/unsloth/Qwen3-4B-FP8)                                                                                                              |
|                       | 8B                                                                                                                                                                                                                                                                                                                                                                                                                                                            | [FP8](https://huggingface.co/unsloth/Qwen3-8B-FP8)                                                                                                              |
|                       | 14B                                                                                                                                                                                                                                                                                                                                                                                                                                                           | [FP8](https://huggingface.co/unsloth/Qwen3-14B-FP8)                                                                                                             |
|                       | 32B                                                                                                                                                                                                                                                                                                                                                                                                                                                           | [FP8](https://huggingface.co/unsloth/Qwen3-32B-FP8)                                                                                                             |
|                       | 235B-A22B                                                                                                                                                                                                                                                                                                                                                                                                                                                     | [FP8](https://huggingface.co/unsloth/Qwen3-235B-A22B-FP8)                                                                                                       |
| **Qwen3 (2507)**      | 4B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-4B-Instruct-2507-FP8)                                                                                                |
|                       | 4B Thinking                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-4B-Thinking-2507-FP8)                                                                                                |
|                       | 30B-A3B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                              | [FP8](https://huggingface.co/unsloth/Qwen3-30B-A3B-Instruct-2507-FP8)                                                                                           |
|                       | 30B-A3B Thinking                                                                                                                                                                                                                                                                                                                                                                                                                                              | [FP8](https://huggingface.co/unsloth/Qwen3-30B-A3B-Thinking-2507-FP8)                                                                                           |
|                       | 235B-A22B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                            | [FP8](https://huggingface.co/unsloth/Qwen3-235B-A22B-Instruct-2507-FP8)                                                                                         |
|                       | 235B-A22B Thinking                                                                                                                                                                                                                                                                                                                                                                                                                                            | [FP8](https://huggingface.co/unsloth/Qwen3-235B-A22B-Thinking-2507-FP8)                                                                                         |
| **Qwen3-VL**          | 4B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-VL-4B-Instruct-FP8)                                                                                                  |
|                       | 4B Thinking                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-VL-4B-Thinking-FP8)                                                                                                  |
|                       | 8B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-VL-8B-Instruct-FP8)                                                                                                  |
|                       | 8B Thinking                                                                                                                                                                                                                                                                                                                                                                                                                                                   | [FP8](https://huggingface.co/unsloth/Qwen3-VL-8B-Thinking-FP8)                                                                                                  |
| **Qwen3-Coder**       | 480B-A35B Instruct                                                                                                                                                                                                                                                                                                                                                                                                                                            | [FP8](https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-FP8)                                                                                        |
| **Granite 4.0**       | h-tiny                                                                                                                                                                                                                                                                                                                                                                                                                                                        | [FP8 Dynamic](https://huggingface.co/unsloth/granite-4.0-h-tiny-FP8-Dynamic)                                                                                    |
|                       | h-small                                                                                                                                                                                                                                                                                                                                                                                                                                                       | [FP8 Dynamic](https://huggingface.co/unsloth/granite-4.0-h-small-FP8-Dynamic)                                                                                   |
| **Magistral Small**   | 2509                                                                                                                                                                                                                                                                                                                                                                                                                                                          | [FP8 Dynamic](https://huggingface.co/unsloth/Magistral-Small-2509-FP8-Dynamic) · [FP8 torchao](https://huggingface.co/unsloth/Magistral-Small-2509-FP8-torchao) |
| **Mistral Small 3.2** | 24B Instruct-2506                                                                                                                                                                                                                                                                                                                                                                                                                                             | [FP8](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-FP8)                                                                                   |
| **Gemma 3**           | 270M Instruct      | [FP8 Dynamic](https://huggingface.co/unsloth/gemma-3-270m-it-FP8-Dynamic) · [FP8 torchao](https://huggingface.co/unsloth/gemma-3-270m-it-torchao-FP8)    |
|                       | 1B Instruct        | [FP8 Dynamic](https://huggingface.co/unsloth/gemma-3-1b-it-FP8-Dynamic)                                                                                  |
|                       | 4B Instruct        | [FP8 Dynamic](https://huggingface.co/unsloth/gemma-3-4b-it-FP8-Dynamic)                                                                                  |
|                       | 12B Instruct       | [FP8 Dynamic](https://huggingface.co/unsloth/gemma-3-12B-it-FP8-Dynamic)                                                                                 |
|                       | 27B Instruct       | [FP8 Dynamic](https://huggingface.co/unsloth/gemma-3-27b-it-FP8-Dynamic)                                                                                 |
{% endtab %}
{% endtabs %}


---

# Agent Instructions: Querying This Documentation

If you need information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://unsloth.ai/docs/fr/commencer/unsloth-model-catalog.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language (URL-encode it when placing it in the query string).
The response contains a direct answer to the question along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
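As a sketch, the query URL above can be constructed like this. The `build_ask_url` helper is illustrative only (not part of any Unsloth API); it simply percent-encodes the question into the `ask` parameter:

```python
from urllib.parse import urlencode

# Page URL taken from the GET example above.
BASE_URL = "https://unsloth.ai/docs/fr/commencer/unsloth-model-catalog.md"

def build_ask_url(question: str) -> str:
    """Build the documentation-query URL for a natural-language question.

    urlencode percent-encodes spaces and punctuation so the question
    survives as a single `ask` query parameter. Issuing the actual GET
    request (e.g. with urllib.request or httpx) requires network access.
    """
    return f"{BASE_URL}?{urlencode({'ask': question})}"

print(build_ask_url("Which Gemma 3 sizes have FP8 Dynamic quants?"))
```

You can then fetch the resulting URL with any HTTP client (`curl`, `urllib.request`, etc.) to receive the answer and supporting excerpts.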
