Unsloth Studio Installation

Learn how to install Unsloth Studio on your local device.

Unsloth Studio works on Windows, Linux, WSL and MacOS. You should use the same installation process on every device, although the system requirements may differ by device.

Windows · MacOS · Linux & WSL · Docker

  • Training: Supported on NVIDIA GPUs such as the RTX 3090, Blackwell 50-series, and DGX Spark.

  • Mac: Like CPU, only Chat inference works for now; MLX training is coming very soon.

  • CPU: Unsloth still works without a GPU, but only for Chat inference.

  • Coming soon: Support for Apple MLX, AMD, and Intel.

Install Instructions

Remember, the install instructions are the same across every device:

1

Install Unsloth

Firstly, install Unsloth with just one command:

pip install unsloth

Or install the latest code from source via pip with:

pip install git+https://github.com/unslothai/unsloth

Or you can download Studio directly from source.
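Whichever route you choose, a quick sanity check is to confirm the package resolves in your environment. A minimal sketch using only the standard library (the helper name is ours, not part of Unsloth):

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None

# After a successful `pip install unsloth`, this should print True.
print(is_installed("unsloth"))
```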

2

Setup Unsloth Studio

unsloth studio setup

Setup automatically installs Node.js (via nvm), builds the frontend, installs all Python dependencies, and builds llama.cpp with CUDA support.


WSL users: you will be prompted for your sudo password to install build dependencies (cmake, git, libcurl4-openssl-dev).

3

Launch

Launch Unsloth Studio via:

unsloth studio -H 0.0.0.0 -p 8888

Then open http://localhost:8888 in your browser.
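The -H and -p flags set the bind address and port. As a sketch (these helpers are our own, not part of the Studio CLI), you can build the browser URL and probe whether the server is answering:

```python
import urllib.error
import urllib.request

def studio_url(host: str = "localhost", port: int = 8888) -> str:
    """Build the browser URL for a Studio instance launched with -H/-p."""
    return f"http://{host}:{port}"

def is_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at `url`."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

# When bound to 0.0.0.0, browse via localhost on the same machine.
print(studio_url())
```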

4

Onboarding

On first launch you will need to create a password to secure your account and sign in again later. You’ll then see a brief onboarding wizard to choose a model, dataset, and basic settings. You can skip it at any time.

5

Start training and running

Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:

Get Started

System Requirements

Windows

Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:

Requirements

  • Windows 10 or Windows 11 (64-bit)

  • NVIDIA GPU with drivers installed

  • App Installer (includes winget): here

  • Git: winget install --id Git.Git -e --source winget

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba

MacOS

Unsloth Studio works on Mac devices for Chat with GGUF models. MLX training is coming soon!

  • macOS 12 Monterey or newer (Intel or Apple Silicon)

  • Install Homebrew: here

  • Git: brew install git

  • cmake: brew install cmake

  • openssl: brew install openssl

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba

Linux & WSL

  • Ubuntu 20.04+ or similar distro (64-bit)

  • NVIDIA GPU with drivers installed

  • CUDA toolkit (12.4+ recommended, 12.8+ for Blackwell)

  • Git: sudo apt install git

  • Python: version 3.11 up to, but not including, 3.14

  • Work inside a Python environment such as uv, venv, or conda/mamba
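Every platform above requires Python 3.11 up to, but not including, 3.14. A quick self-check of the running interpreter, using only the standard library:

```python
import sys

def python_version_supported(version=sys.version_info) -> bool:
    """Unsloth Studio needs Python >= 3.11 and < 3.14."""
    return (3, 11) <= (version[0], version[1]) < (3, 14)

print(sys.version.split()[0], "supported:", python_version_supported())
```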

Docker

  • Pull our latest Unsloth container image: docker pull unsloth/unsloth

  • Run the container via:

For more information, see here.

  • Access your Studio instance at http://localhost:8000, or from another machine at http://external_ip_address:8000/

CPU only

On CPU-only devices, Unsloth Studio supports Chat with GGUF models only.

  • Same as the Linux and MacOS requirements above, minus the NVIDIA GPU and drivers.

Google Colab

We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should pop up after installation.


Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box.

Troubleshooting

  • Python version error: sudo apt install python3.12 python3.12-venv (Studio requires Python 3.11 up to, but not including, 3.14)

  • nvidia-smi not found: install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx

  • nvcc not found (CUDA): sudo apt install nvidia-cuda-toolkit, or add /usr/local/cuda/bin to PATH

  • llama-server build failed: non-fatal; Studio still works, but GGUF inference won't be available. Install cmake and re-run setup to fix.

  • cmake not found: sudo apt install cmake

  • git not found: sudo apt install git

  • Build failed: delete ~/.unsloth/llama.cpp and re-run setup
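Several of these fixes boil down to a missing command-line tool. A small stdlib sketch that checks the usual suspects on PATH before you re-run setup (the helper name is ours):

```python
import shutil

def tool_on_path(name: str) -> bool:
    """Return True if `name` resolves to an executable on PATH."""
    return shutil.which(name) is not None

for tool in ("git", "cmake", "nvcc", "nvidia-smi"):
    status = "found" if tool_on_path(tool) else "missing"
    print(f"{tool}: {status}")
```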
