Unsloth Studio Installation
Learn how to install Unsloth Studio on your local device.
Unsloth Studio works on Windows, Linux, WSL, and macOS. The installation process is the same on every platform, although system requirements differ by device.
Training: supported on NVIDIA GPUs (e.g. RTX 3090, Blackwell 50-series, DGX Spark).
Mac: like CPU, Chat inference only works for now. MLX training is coming very soon.
CPU: Unsloth still works without a GPU, but only for Chat inference.
Coming soon: support for Apple MLX, AMD, and Intel.
Install Instructions
Remember, the install instructions are the same across every device:
Setup Unsloth Studio
unsloth studio setup

Setup automatically installs Node.js (via nvm), builds the frontend, installs all Python dependencies, and builds llama.cpp with CUDA support.

The first install may take 5-10 minutes. This is normal, as llama.cpp needs to compile binaries; do not cancel it. We're working on precompiled binaries so future installs won't take as long.
WSL users: you will be prompted for your sudo password to install build dependencies (cmake, git, libcurl4-openssl-dev).
Start training and running
Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:
Get Started

System Requirements
Windows
Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:
Requirements
Windows 10 or Windows 11 (64-bit)
NVIDIA GPU with drivers installed
App Installer (includes winget): here
Git: winget install --id Git.Git -e --source winget
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
macOS
Unsloth Studio works on Mac devices for Chat with GGUF models. MLX training is coming soon!
macOS 12 Monterey or newer (Intel or Apple Silicon)
Install Homebrew: here
Git: brew install git
cmake: brew install cmake
openssl: brew install openssl
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
Linux & WSL
Ubuntu 20.04+ or similar distro (64-bit)
NVIDIA GPU with drivers installed
CUDA toolkit (12.4+ recommended, 12.8+ for Blackwell)
Git: sudo apt install git
Python: version 3.11 up to, but not including, 3.14
Work inside a Python environment such as uv, venv, or conda/mamba
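To confirm that the active interpreter falls in the supported range before running setup, a quick one-liner like the following can help (a minimal sketch, not part of Unsloth Studio; the interpreter may be called `python` or `py` instead of `python3` on Windows):

```shell
# Prints "supported" if the active Python is >= 3.11 and < 3.14.
python3 -c 'import sys; print("supported" if (3,11) <= sys.version_info[:2] < (3,14) else "unsupported")'
```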
Docker
Pull our latest Unsloth container image:
docker pull unsloth/unsloth

Run the container via:
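As an illustrative sketch only (the flags and port mapping here are assumptions based on Studio's default port, not the official invocation; see the linked docs for the exact command), a GPU-enabled container might be started with:

```shell
# Illustrative sketch, not the official command.
# --gpus all requires the NVIDIA Container Toolkit on the host;
# -p 8000:8000 maps the Studio web UI (default port) to the host.
docker run -it --gpus all -p 8000:8000 unsloth/unsloth
```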
For more information, see here.
Access your Studio instance at http://localhost:8000 or, via an external IP address, at http://external_ip_address:8000/
CPU only
On CPU-only devices, Unsloth Studio supports Chat inference with GGUF models only.
Requirements are the same as those listed above for Linux and macOS, minus the NVIDIA GPU drivers.
Google Colab
We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should pop up after installation.
llama.cpp takes 40+ minutes to compile on a T4 GPU, so we recommend using a bigger GPU for faster setup.
Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left:

Troubleshooting
Python version error
Install a supported Python (version 3.11 up to, but not including, 3.14), e.g. sudo apt install python3.12 python3.12-venv
nvidia-smi not found
Install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx
nvcc not found (CUDA)
sudo apt install nvidia-cuda-toolkit or add /usr/local/cuda/bin to PATH
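If the toolkit is already installed under /usr/local/cuda, exporting its bin directory is usually enough (a sketch assuming the default install location; append the export line to ~/.bashrc or ~/.zshrc to make it permanent):

```shell
# Add the CUDA toolkit's compiler to PATH for the current shell session.
export PATH=/usr/local/cuda/bin:$PATH
```

Afterwards, `nvcc --version` should print the CUDA compiler version.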
llama-server build failed
Non-fatal: Studio still works, but GGUF inference won't be available. Install cmake and re-run setup to fix.
cmake not found
sudo apt install cmake
git not found
sudo apt install git
Build failed
Delete ~/.unsloth/llama.cpp and re-run setup
