# Unsloth Studio Installation

Unsloth Studio works on Windows, Linux, WSL and MacOS. The installation process is the same on every platform, although the system requirements differ by device.

<a href="#windows" class="button secondary" data-icon="windows">Windows</a><a href="#macos" class="button secondary" data-icon="apple">MacOS</a><a href="#linux-and-wsl" class="button secondary" data-icon="linux">Linux & WSL</a><a href="#docker" class="button secondary" data-icon="docker">Docker</a><a href="#developer-installation-advanced" class="button secondary" data-icon="screwdriver-wrench">Developer Install</a>

* **Mac:** Like CPU, [Chat](https://unsloth.ai/docs/new/chat#using-unsloth-studio-chat) + [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) work for now. **MLX** training coming very soon.
* **CPU: Unsloth still works without a GPU**, but only for Chat + Data Recipes.
* **Training:** Works on **NVIDIA**: RTX 30, 40, 50, Blackwell, DGX Spark/Station etc. + **Intel** GPUs
* **Coming soon:** Support for **Apple MLX** and **AMD**.

## Install Instructions

Remember, the install instructions are the same on every device:

{% stepper %}
{% step %}

#### Install Unsloth

**MacOS, Linux, WSL:**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

**Windows PowerShell:**

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

{% hint style="success" %}
**The first install is now 6x faster and 50% smaller thanks to precompiled llama.cpp binaries.**
{% endhint %}

{% hint style="info" %}
**WSL users:** you will be prompted for your `sudo` password to install build dependencies (`cmake`, `git`, `libcurl4-openssl-dev`).
{% endhint %}
{% endstep %}

{% step %}

#### Launch Unsloth Studio

```bash
unsloth studio -H 0.0.0.0 -p 8888
```

<div data-with-frame="true"><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fd1yMMNa65Ccz50Ke0E7r%2FScreenshot%202026-03-17%20at%2012.32.38%E2%80%AFAM.png?alt=media&#x26;token=9369cfe7-35b1-4955-b8cb-42f7ecb43780" alt="" width="375"><figcaption></figcaption></figure></div>

**Then open `http://localhost:8888` in your browser.**
{% endstep %}

{% step %}

#### Onboarding

On first launch you will need to create a password to secure your account; you'll use it to sign in later. You'll then see a brief onboarding wizard to choose a model, dataset, and basic settings. You can skip it at any time.
{% endstep %}

{% step %}

#### Start training and running

Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:

{% content-ref url="start" %}
[start](https://unsloth.ai/docs/new/studio/start)
{% endcontent-ref %}
{% endstep %}
{% endstepper %}

### Update Unsloth Studio

Use the same install commands to update.

#### **MacOS, Linux, WSL:**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

#### **Windows PowerShell:**

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

Or use the built-in updater (currently does not work on Windows):

{% code overflow="wrap" %}

```bash
unsloth studio update 
```

{% endcode %}

## System Requirements

### <i class="fa-windows">:windows:</i> Windows

Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:

**Requirements**

* Windows 10 or Windows 11 (64-bit)
* NVIDIA GPU with drivers installed
* **App Installer** (includes `winget`): [here](https://learn.microsoft.com/en-us/windows/msix/app-installer/install-update-app-installer)
* **Git**: `winget install --id Git.Git -e --source winget`
* **Python**: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**

### <i class="fa-apple">:apple:</i> MacOS

Unsloth Studio works on Mac devices for [Chat](#run-models-locally) with GGUF models and [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) ([Export](https://unsloth.ai/docs/new/studio/export) coming very soon). **MLX training coming soon!**

* macOS 12 Monterey or newer (Intel or Apple Silicon)
* Install Homebrew: [here](https://brew.sh/)
* Git: `brew install git`
* cmake: `brew install cmake`
* openssl: `brew install openssl`
* Python: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**

### <i class="fa-linux">:linux:</i> Linux & WSL

* Ubuntu 20.04+ or similar distro (64-bit)
* NVIDIA GPU with drivers installed
* CUDA toolkit (12.4+ recommended, 12.8+ for Blackwell)
* Git: `sudo apt install git`
* Python: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**
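The requirements above can be checked in one pass. Here is a minimal preflight sketch, assuming a POSIX shell; `nvidia-smi` and `nvcc` will be missing on CPU-only machines, which is fine for Chat + Data Recipes:

```shell
# Report which of the Linux/WSL prerequisites are already installed.
# Prints one line per tool and always exits 0, so you see the full report.
for tool in git cmake python3 nvidia-smi nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK       $tool ($(command -v "$tool"))"
  else
    echo "MISSING  $tool"
  fi
done
```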

### <i class="fa-docker">:docker:</i> Docker

{% hint style="success" %}
Our Docker image now works for Studio! We're working on Mac compatibility.
{% endhint %}

* Pull our latest Unsloth container image: `docker pull unsloth/unsloth`
* Run the container via:

```bash
docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 8000:8000 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth
```

For more information, [see here](https://hub.docker.com/r/unsloth/unsloth#unsloth-docker-image).

* Access your Studio instance at `http://localhost:8000`, or remotely at your external IP address: `http://external_ip_address:8000/`

### <i class="fa-microchip">:microchip:</i> CPU only

Unsloth Studio supports CPU-only devices for [Chat](#run-models-locally) with GGUF models and [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) ([Export](https://unsloth.ai/docs/new/studio/export) coming very soon).

* The requirements are the same as those listed above for Linux and MacOS, except NVIDIA GPU drivers are not needed.

## Developer Installation (Advanced)

#### **macOS, Linux, WSL developer installs:**

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv unsloth_studio --python 3.13
source unsloth_studio/bin/activate
uv pip install unsloth --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
```

#### **Windows PowerShell developer installs:**

```powershell
winget install -e --id Python.Python.3.13
winget install --id=astral-sh.uv  -e
uv venv unsloth_studio --python 3.13
.\unsloth_studio\Scripts\activate
uv pip install unsloth --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
```

#### **Nightly - MacOS, Linux, WSL:**

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone --filter=blob:none https://github.com/unslothai/unsloth.git unsloth_studio
cd unsloth_studio
uv venv --python 3.13
source .venv/bin/activate
uv pip install -e . --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
```

Then to launch every time:

```bash
cd unsloth_studio
source .venv/bin/activate
unsloth studio -H 0.0.0.0 -p 8888
```

#### **Nightly - Windows:**

Run in Windows PowerShell:

```powershell
winget install -e --id Python.Python.3.13
winget install --id=astral-sh.uv  -e
git clone --filter=blob:none https://github.com/unslothai/unsloth.git unsloth_studio
cd unsloth_studio
uv venv --python 3.13
.\.venv\Scripts\activate
uv pip install -e . --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
```

Then to launch every time:

```bash
cd unsloth_studio
.\.venv\Scripts\activate
unsloth studio -H 0.0.0.0 -p 8888
```

### Uninstall

You can uninstall Unsloth Studio by deleting its install folder, usually located at `$HOME/.unsloth/studio` on Mac/Linux/WSL and `%USERPROFILE%\.unsloth\studio` on Windows. Or run:

* **MacOS, WSL, Linux:** `rm -rf ~/.unsloth/studio`
* **Windows (PowerShell):** `Remove-Item -Recurse -Force "$HOME\.unsloth\studio"`
* **Optional:** remove `$HOME\.unsloth` on Windows or `~/.unsloth` on MacOS/Linux/WSL if you want to delete all Unsloth files

{% hint style="warning" %}
Note: Using the `rm -rf` commands will **delete everything**, including your history, cache, chats etc.
{% endhint %}
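If you want to keep your history and chats, archive the folder before deleting it. A hedged sketch for Mac/Linux/WSL; the backup filename is just an example, and you would then run the `rm -rf` command above:

```shell
# Archive ~/.unsloth (history, cache, chats) before removing the Studio install.
# Does nothing if the folder does not exist.
if [ -d "$HOME/.unsloth" ]; then
  tar -czf "$HOME/unsloth-backup.tar.gz" -C "$HOME" .unsloth
  echo "Backed up to $HOME/unsloth-backup.tar.gz"
fi
```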

### **Deleting model files**

You can delete old model files either from the bin icon in model search or by removing the relevant cached model folder from the default Hugging Face cache directory. By default, Hugging Face uses `~/.cache/huggingface/hub/` on macOS/Linux/WSL and `C:\Users\<username>\.cache\huggingface\hub\` on Windows.

* **MacOS, Linux, WSL:** `~/.cache/huggingface/hub/`
* **Windows:** `%USERPROFILE%\.cache\huggingface\hub\`

If `HF_HUB_CACHE` or `HF_HOME` is set, use that location instead. On Linux and WSL, `XDG_CACHE_HOME` can also change the default cache root.
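The precedence above can be resolved in one line. A sketch for Mac/Linux/WSL, assuming the standard Hugging Face defaults (`XDG_CACHE_HOME` is not handled here):

```shell
# Effective hub cache: HF_HUB_CACHE wins, then HF_HOME/hub, then the default.
hf_cache="${HF_HUB_CACHE:-${HF_HOME:-$HOME/.cache/huggingface}/hub}"
echo "Cached models live under: $hf_cache"
ls "$hf_cache" 2>/dev/null || echo "(cache directory not found)"
```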

### Using old / existing GGUF models

{% columns %}
{% column %}
**Apr 1 update:** You can now select an existing folder for Unsloth to detect from.

**Mar 27 update:** Unsloth Studio now **automatically detects older / pre-existing models** downloaded from Hugging Face, LM Studio etc.
{% endcolumn %}

{% column %}

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBn3Fs1cchFchl328wSOs%2FScreenshot%202026-04-05%20at%205.43.57%E2%80%AFAM.png?alt=media&#x26;token=cc57ec6e-653a-4824-8e8d-a6bfbcd27493" alt=""><figcaption></figcaption></figure>
{% endcolumn %}
{% endcolumns %}

**Manual instructions:** Unsloth Studio detects models downloaded to your Hugging Face Hub cache (`C:\Users\{your_username}\.cache\huggingface\hub`). If you have GGUF models downloaded through LM Studio, note that these are stored in `C:\Users\{your_username}\.cache\lm-studio\models` ***OR*** `C:\Users\{your_username}\lm-studio\models` and are not visible to llama.cpp by default. You will need to move or copy those `.gguf` files into your Hugging Face Hub cache directory (or another path accessible to llama.cpp) for Unsloth Studio to load them.
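On Mac/Linux/WSL the equivalent copy can be scripted. A hedged sketch: `SRC` is an assumption, so point it at wherever your `.gguf` files actually live:

```shell
# Copy stray GGUF files into the Hugging Face hub cache so Studio can see them.
SRC="$HOME/.cache/lm-studio/models"                # assumption: adjust to your setup
DST="${HF_HUB_CACHE:-$HOME/.cache/huggingface/hub}"
mkdir -p "$DST"
if [ -d "$SRC" ]; then
  find "$SRC" -name '*.gguf' -exec cp -v {} "$DST" \;
else
  echo "No LM Studio folder at $SRC; set SRC to your GGUF location"
fi
```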

After fine-tuning a model or adapter in Studio, you can export it to GGUF and run local inference with **llama.cpp** directly in Studio Chat. Unsloth Studio is powered by llama.cpp and Hugging Face.

### <i class="fa-google">:google:</i> Google Colab notebook

We’ve created a [free Google Colab notebook](https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb) so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click **Run all** and the UI should pop up after installation.

{% columns %}
{% column %}
{% embed url="https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb" %}

Once installation is complete, scroll to **Start Unsloth Studio** and click **Open Unsloth Studio** in the white box shown on the left:

**Scroll further down to see the actual UI.**
{% endcolumn %}

{% column %}

<div data-with-frame="true"><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FkYitMrK55Ic6eIGqiKEJ%2FScreenshot%202026-03-16%20at%2011.21.16%E2%80%AFPM.png?alt=media&#x26;token=4388c309-a598-41f3-9301-e434c334ac1c" alt=""><figcaption></figcaption></figure></div>
{% endcolumn %}
{% endcolumns %}

{% hint style="warning" %}
We now precompile llama.cpp binaries for much faster install speeds.

Sometimes the Studio link may return an error. This can happen if you are using an adblocker, or because Google Colab expects you to stay on the Colab page; if it detects inactivity, it may shut down the GPU session. If the link fails, scroll down a bit in the notebook to reach the embedded UI.
{% endhint %}

## Troubleshooting

<table><thead><tr><th width="211.5999755859375">Problem</th><th>Fix</th></tr></thead><tbody><tr><td>Python version error</td><td><code>sudo apt install python3.12 python3.12-venv</code> (any version from 3.11 up to, but not including, 3.14 works)</td></tr><tr><td><code>nvidia-smi not found</code></td><td>Install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx</td></tr><tr><td><code>nvcc not found</code> (CUDA)</td><td><code>sudo apt install nvidia-cuda-toolkit</code> or add <code>/usr/local/cuda/bin</code> to PATH</td></tr><tr><td>llama-server build failed</td><td>Non-fatal: Studio still works, but GGUF inference won't be available. Install <code>cmake</code> and re-run setup to fix.</td></tr><tr><td><code>cmake not found</code></td><td><code>sudo apt install cmake</code></td></tr><tr><td><code>git not found</code></td><td><code>sudo apt install git</code></td></tr><tr><td>Build failed</td><td>Delete <code>~/.unsloth/llama.cpp</code> and re-run setup</td></tr></tbody></table>
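For the `nvcc not found` row, the PATH fix can look like this, assuming the toolkit was installed to `/usr/local/cuda` (the default for NVIDIA's installer):

```shell
# Put the CUDA toolkit binaries on PATH for this shell session.
# Append the export line to ~/.bashrc to make it permanent.
export PATH="/usr/local/cuda/bin:$PATH"
command -v nvcc >/dev/null 2>&1 && nvcc --version \
  || echo "nvcc still not found; check where the toolkit was installed"
```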
