🦥 Unsloth Docs
Unsloth is an open-source framework for running and training AI models on your own local hardware. These docs will guide you through running and training your own models locally.
🦥 Why Unsloth?
Unsloth streamlines local training, inference, data preparation, and deployment.
⭐ Features
Unsloth lets you run and train models for text, vision, audio, embeddings, and more. It provides many key features for both inference and training:
Inference
Search for, download, and run any model: GGUFs, LoRA adapters, safetensors.
Self-healing tool calling, web search, and calling OpenAI-compatible APIs.
Automatic inference-parameter tuning and chat-template editing.
Export or save your model to GGUF, 16-bit safetensors, and more.
Compare outputs from two different models side by side.
Training
Train 500+ models ~2x faster with ~70% less VRAM (no accuracy loss).
Supports full fine-tuning, pre-training, and 4-bit, 16-bit, and FP8 training.
Auto-create datasets from PDF, CSV, and DOCX files, and edit data in a visual node workflow.
Observability: monitor training live, track loss and GPU usage, and customize graphs.
The most efficient reinforcement-learning library, using ~80% less VRAM for GRPO, FP8, and more.
Multi-GPU training works today, with a much-improved version coming soon!
Quickstart
Unsloth supports macOS, Linux, and Windows, on NVIDIA GPU or CPU-only setups. See: Unsloth Requirements
macOS, Linux, WSL:
Windows PowerShell:
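The platform-specific commands are not shown above; assuming the standard pip-based setup, both platforms use the same command (a minimal sketch, not the full per-platform requirements):

```shell
pip install unsloth
```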
Docker
Use our official Docker image, unsloth/unsloth, which currently works on Windows, WSL, and Linux. macOS support is coming soon.
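A minimal launch sketch for the image named above, assuming GPU passthrough via the NVIDIA Container Toolkit (the flags here are illustrative assumptions, not confirmed by this page):

```shell
# Run the official image interactively with all GPUs visible to the container.
docker run -it --gpus all unsloth/unsloth
```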
Launch Unsloth
macOS, Linux, WSL:
Windows:
New Models
What is Fine-tuning and RL? Why?
Fine-tuning an LLM customizes its behavior, enhances domain knowledge, and optimizes performance for specific tasks. By fine-tuning a pre-trained model (e.g. Llama-3.1-8B) on a dataset, you can:
Update Knowledge: Introduce new domain-specific information.
Customize Behavior: Adjust the model’s tone, personality, or response style.
Optimize for Tasks: Improve accuracy and relevance for specific use cases.
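As a concrete illustration of "fine-tuning a pre-trained model on a dataset", raw question/answer pairs are commonly converted into chat-style message records before training. A minimal sketch (the field names and chat format are common conventions, not a specific Unsloth API):

```python
# Convert raw Q&A pairs into chat-style training examples.
# The "messages" structure mirrors the chat format most fine-tuning
# pipelines expect; the field names here are illustrative.

def to_chat_example(question: str, answer: str,
                    system: str = "You are a helpful assistant.") -> dict:
    """Wrap one Q&A pair in a chat-messages record."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

raw_pairs = [
    ("What does LoRA stand for?", "Low-Rank Adaptation."),
    ("What is GGUF?", "A file format for quantized models."),
]

dataset = [to_chat_example(q, a) for q, a in raw_pairs]
print(dataset[0]["messages"][1]["content"])  # → What does LoRA stand for?
```

A dataset of such records can then be tokenized with the model's chat template and passed to a trainer.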
Reinforcement Learning (RL) is a training method in which an "agent" learns to make decisions by interacting with an environment and receiving feedback in the form of rewards or penalties. Its key components are:
Action: What the model generates (e.g. a sentence).
Reward: A signal indicating how good or bad the model's action was (e.g. did the response follow instructions? was it helpful?).
Environment: The scenario or task the model is working on (e.g. answering a user’s question).
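The action/reward loop above can be made concrete with a toy reward function of the kind used in GRPO-style training: the "action" is the model's generated response, and the reward scores how well it followed instructions. The scoring rules below are illustrative assumptions, not a built-in Unsloth reward:

```python
# Toy reward function: score a generated response on whether it
# followed two simple instructions. Rules are illustrative assumptions.

def reward(response: str, max_words: int = 20) -> float:
    """Return a score in [0, 1]: +0.5 if the response ends with
    sentence punctuation, +0.5 if it stays within the word limit."""
    score = 0.0
    if response.strip().endswith((".", "!", "?")):
        score += 0.5  # followed the "finish your sentence" instruction
    if len(response.split()) <= max_words:
        score += 0.5  # followed the length constraint
    return score

print(reward("Paris is the capital of France."))  # → 1.0
print(reward("Paris is the capital of"))          # → 0.5
```

In real RL fine-tuning, reward functions like this (or learned reward models) drive the policy update so the model generates higher-scoring responses over time.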
Example fine-tuning or RL use-cases:
Sentiment analysis: predict whether a news headline impacts a company positively or negatively.
Customer support: draw on historical customer interactions for more accurate, customized responses.
Legal: fine-tune an LLM on legal texts for contract analysis, case-law research, and compliance.
You can think of a fine-tuned model as a specialized agent designed to do specific tasks more effectively and efficiently. Fine-tuning can replicate all of RAG's capabilities, but not vice versa.
