# Unsloth Studio Installation

Unsloth Studio works on Windows, Linux, WSL and MacOS. You should use the same installation process on every device, although the system requirements may differ by device.

<a href="#windows" class="button secondary" data-icon="windows">Windows</a><a href="#macos" class="button secondary" data-icon="apple">MacOS</a><a href="#linux-and-wsl" class="button secondary" data-icon="linux">Linux & WSL</a><a href="#docker" class="button secondary" data-icon="docker">Docker</a><a href="#developer-installation-advanced" class="button secondary" data-icon="screwdriver-wrench">Developer Install</a>

* **Mac:** Runs like CPU for now - [Chat](https://unsloth.ai/docs/new/chat#using-unsloth-studio-chat) + [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) work. **MLX** training coming very soon.
* **CPU: Unsloth still works without a GPU**, but only for Chat + Data Recipes.
* **Training:** Works on **NVIDIA** (RTX 30, 40, and 50 series, Blackwell, DGX Spark/Station etc.) and **Intel** GPUs.
* **Coming soon:** Support for **Apple MLX** and **AMD**.

## Install Instructions

Remember, the install instructions are the same on every device:

{% stepper %}
{% step %}

#### Install Unsloth

**MacOS, Linux, WSL:**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

**Windows PowerShell:**

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

{% hint style="success" %}
**The first install should now be 6x faster, with a 50% smaller download, thanks to precompiled llama.cpp binaries.**
{% endhint %}

{% hint style="info" %}
**WSL users:** you will be prompted for your `sudo` password to install build dependencies (`cmake`, `git`, `libcurl4-openssl-dev`).
{% endhint %}
{% endstep %}

{% step %}

#### Launch Unsloth Studio

```bash
unsloth studio -H 0.0.0.0 -p 8888
```

<div data-with-frame="true"><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2Fd1yMMNa65Ccz50Ke0E7r%2FScreenshot%202026-03-17%20at%2012.32.38%E2%80%AFAM.png?alt=media&#x26;token=9369cfe7-35b1-4955-b8cb-42f7ecb43780" alt="" width="375"><figcaption></figcaption></figure></div>

**Then open `http://localhost:8888` in your browser.**
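If you are scripting the launch, the UI can take a few seconds to come up. A small helper (a sketch, not part of the Unsloth CLI; the URL and retry count are assumptions) that polls until Studio answers:

```shell
# Poll a URL until it responds, up to N tries, one second apart.
wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    # curl exits 0 as soon as the server answers
    curl -fsS --max-time 2 "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to 30 s for Studio on the default port
# wait_for_url "http://localhost:8888" 30 && echo "Studio is up"
```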
{% endstep %}

{% step %}

#### Onboarding

On first launch you will need to create a password to secure your account and sign in again later. You’ll then see a brief onboarding wizard to choose a model, dataset, and basic settings. You can skip it at any time.
{% endstep %}

{% step %}

#### Start training and running

Start fine-tuning and building datasets immediately after launching. See our step-by-step guide to get started with Unsloth Studio:

{% content-ref url="start" %}
[start](https://unsloth.ai/docs/new/studio/start)
{% endcontent-ref %}
{% endstep %}
{% endstepper %}

### Update Unsloth Studio

To update Unsloth Studio, run:

{% code overflow="wrap" %}

```bash
unsloth studio update 
```

{% endcode %}

If that does not work, re-run the install script:

#### **MacOS, Linux, WSL:**

```bash
curl -fsSL https://unsloth.ai/install.sh | sh
```

#### **Windows PowerShell:**

```powershell
irm https://unsloth.ai/install.ps1 | iex
```

## System Requirements

### <i class="fa-windows">:windows:</i> Windows

Unsloth Studio works directly on Windows without WSL. To train models, make sure your system satisfies these requirements:

**Requirements**

* Windows 10 or Windows 11 (64-bit)
* NVIDIA GPU with drivers installed
* **App Installer** (includes `winget`): [here](https://learn.microsoft.com/en-us/windows/msix/app-installer/install-update-app-installer)
* **Git**: `winget install --id Git.Git -e --source winget`
* **Python**: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**

### <i class="fa-apple">:apple:</i> MacOS

Unsloth Studio works on Mac devices for [Chat](#run-models-locally) with GGUF models and [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) ([Export](https://unsloth.ai/docs/new/studio/export) coming very soon). **MLX training coming soon!**

* macOS 12 Monterey or newer (Intel or Apple Silicon)
* Install Homebrew: [here](https://brew.sh/)
* Git: `brew install git`
* cmake: `brew install cmake`
* openssl: `brew install openssl`
* Python: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**

### <i class="fa-linux">:linux:</i> Linux & WSL

* Ubuntu 20.04+ or similar distro (64-bit)
* NVIDIA GPU with drivers installed
* CUDA toolkit (12.4+ recommended; 12.8+ required for Blackwell)
* Git: `sudo apt install git`
* Python: version 3.11 up to, but not including, 3.14
* Work inside a Python environment such as **uv**, **venv**, or **conda/mamba**
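The Python requirement above can be verified quickly. A small illustrative helper (not part of Unsloth) that reports whether a version is in the supported range:

```shell
# Supported range: 3.11 up to, but not including, 3.14.
python_ok() {
  case "$1" in
    3.11|3.12|3.13) echo "supported" ;;
    *) echo "unsupported" ;;
  esac
}

# Check the interpreter on PATH
python_ok "$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')"
```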

### <i class="fa-docker">:docker:</i> Docker

{% hint style="success" %}
Our Docker image now works for Studio! We're working on Mac compatibility.
{% endhint %}

* Pull our latest Unsloth container image: `docker pull unsloth/unsloth`
* Run the container via:

```bash
docker run -d -e JUPYTER_PASSWORD="mypassword" \
  -p 8888:8888 -p 8000:8000 -p 2222:22 \
  -v $(pwd)/work:/workspace/work \
  --gpus all \
  unsloth/unsloth
```

For more information, [see here](https://hub.docker.com/r/unsloth/unsloth#unsloth-docker-image).

* Access your Studio instance at `http://localhost:8000`, or via your external IP address at `http://external_ip_address:8000/`
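If you script container launches, the `docker run` flags above can be assembled from variables. A hypothetical helper (the function name and parameters are ours, not part of Unsloth or Docker):

```shell
# Build the docker run command from a Jupyter password and a host port
# for the Studio UI (container port 8000, as in the example above).
studio_docker_cmd() {
  pw="$1"
  host_port="${2:-8000}"
  printf 'docker run -d -e JUPYTER_PASSWORD="%s" -p 8888:8888 -p %s:8000 -p 2222:22 -v $(pwd)/work:/workspace/work --gpus all unsloth/unsloth\n' "$pw" "$host_port"
}

# Example: print the command, then run it with eval if it looks right
# eval "$(studio_docker_cmd mypassword 8000)"
```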

### <i class="fa-microchip">:microchip:</i> CPU only

Unsloth Studio supports CPU devices for [Chat](#run-models-locally) with GGUF models and [Data Recipes](https://unsloth.ai/docs/new/studio/data-recipe) ([Export](https://unsloth.ai/docs/new/studio/export) coming very soon).

* Same as the ones mentioned above for Linux (except for NVIDIA GPU drivers) and MacOS.

## Developer Installation (Advanced)

### **Install from Main Repo**

#### **macOS, Linux, WSL developer installs:**

```bash
git clone https://github.com/unslothai/unsloth
cd unsloth
./install.sh --local
unsloth studio -H 0.0.0.0 -p 8888
```

#### **Windows PowerShell developer installs:**

```powershell
winget install -e --id Python.Python.3.13 --source winget
winget install --id=astral-sh.uv  -e --source winget
winget install --id Git.Git -e --source winget
git clone https://github.com/unslothai/unsloth
cd unsloth
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\install.ps1 --local
unsloth studio -H 0.0.0.0 -p 8888
```

### **Nightly Install**

#### **Nightly - MacOS, Linux, WSL:**

```bash
git clone https://github.com/unslothai/unsloth
cd unsloth
git checkout nightly
./install.sh --local
```

Then to launch every time:

```bash
unsloth studio -H 0.0.0.0 -p 8888
```

#### **Nightly - Windows:**

Run in Windows Powershell:

```powershell
winget install -e --id Python.Python.3.13 --source winget
winget install --id=astral-sh.uv  -e --source winget
winget install --id Git.Git -e --source winget
git clone https://github.com/unslothai/unsloth
cd unsloth
git checkout nightly
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\install.ps1 --local
```

Then to launch every time:

```bash
unsloth studio -H 0.0.0.0 -p 8888
```

### Uninstall

To uninstall Unsloth Studio, follow these 4 steps:

#### **1. Remove the application**

* MacOS, WSL, Linux: `rm -rf ~/.unsloth/studio/unsloth_studio`
* Windows (PowerShell): `Remove-Item -Recurse -Force "$HOME\.unsloth\studio\unsloth_studio"`

This removes the application but keeps your model checkpoints, exports, history, cache, and chats intact.

#### **2. Remove shortcuts and symlinks**

**macOS:**

```bash
rm -rf ~/Applications/Unsloth\ Studio.app ~/Desktop/Unsloth\ Studio
```

**Linux:**

```bash
rm -f ~/.local/share/applications/unsloth-studio.desktop ~/Desktop/unsloth-studio.desktop
```

**WSL / Windows (PowerShell):**

```powershell
Remove-Item -Force "$HOME\Desktop\Unsloth Studio.lnk"
Remove-Item -Force "$env:APPDATA\Microsoft\Windows\Start Menu\Programs\Unsloth Studio.lnk"
```

#### **3. Remove the CLI command**

**macOS, Linux, WSL:**

```bash
rm -f ~/.local/bin/unsloth
```

**Windows (PowerShell):** The installer added the venv's `Scripts` directory to your User PATH. To remove it, open Settings → System → About → Advanced system settings → Environment Variables, find `Path` under User variables, and remove the entry pointing to `.unsloth\studio\...\Scripts`.

#### **4. Remove everything (optional)**

To also delete history, cache, chats, model checkpoints, and model exports, delete the entire Unsloth folder:

* MacOS, WSL, Linux: `rm -rf ~/.unsloth`
* Windows (PowerShell): `Remove-Item -Recurse -Force "$HOME\.unsloth"`

Note that downloaded HF model files are stored separately in the Hugging Face cache — none of the steps above will remove them. See **Deleting model files** below if you want to reclaim that disk space.

{% hint style="warning" %}
Note: Using the `rm -rf` commands will **delete everything**, including your history, cache, chats etc.
{% endhint %}

### **Deleting cached HF model files**

You can delete old model files either from the bin icon in model search, or by removing the relevant cached model folder from the default Hugging Face cache directory:

* **MacOS, Linux, WSL:** `~/.cache/huggingface/hub/`
* **Windows:** `%USERPROFILE%\.cache\huggingface\hub\`

If `HF_HUB_CACHE` or `HF_HOME` is set, use that location instead. On Linux and WSL, `XDG_CACHE_HOME` can also change the default cache root.
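That resolution order can be sketched as a small shell helper (illustrative only; `XDG_CACHE_HOME` handling is omitted for brevity):

```shell
# Resolve the HF hub cache: HF_HUB_CACHE wins, then HF_HOME/hub,
# then the ~/.cache/huggingface/hub default.
hf_hub_cache() {
  if [ -n "$HF_HUB_CACHE" ]; then
    echo "$HF_HUB_CACHE"
  elif [ -n "$HF_HOME" ]; then
    echo "$HF_HOME/hub"
  else
    echo "$HOME/.cache/huggingface/hub"
  fi
}
```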

### Using old / existing GGUF models

{% columns %}
{% column %}
**Apr 1 update:** You can now select an existing folder for Unsloth to detect from.

**Mar 27 update:** Unsloth Studio now **automatically detects older / pre-existing models** downloaded from Hugging Face, LM Studio etc.
{% endcolumn %}

{% column %}

<figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FBn3Fs1cchFchl328wSOs%2FScreenshot%202026-04-05%20at%205.43.57%E2%80%AFAM.png?alt=media&#x26;token=cc57ec6e-653a-4824-8e8d-a6bfbcd27493" alt=""><figcaption></figcaption></figure>
{% endcolumn %}
{% endcolumns %}

**Manual instructions:** Unsloth Studio detects models downloaded to your Hugging Face Hub cache (`C:\Users\{your_username}\.cache\huggingface\hub`). If you have GGUF models downloaded through LM Studio, note that these are stored in `C:\Users\{your_username}\.cache\lm-studio\models` ***or*** `C:\Users\{your_username}\lm-studio\models`. If they are not visible, move or copy those `.gguf` files into your Hugging Face Hub cache directory (or another path accessible to llama.cpp) so Unsloth Studio can load them.
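To find stray GGUF files before copying them, something like this works (a sketch; the example paths follow the LM Studio defaults mentioned above, so adjust to your setup):

```shell
# Print every .gguf file under the given directories.
find_ggufs() {
  find "$@" -name '*.gguf' -type f 2>/dev/null
}

# Example: scan the common LM Studio locations
# find_ggufs ~/.cache/lm-studio/models ~/lm-studio/models
```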

After fine-tuning a model or adapter in Studio, you can export it to GGUF and run local inference with **llama.cpp** directly in Studio Chat. Unsloth Studio is powered by llama.cpp and Hugging Face.

### <i class="fa-google">:google:</i> Google Colab notebook

We’ve created a [free Google Colab notebook](https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb) so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI should pop up after installation.

{% columns %}
{% column %}
{% embed url="https://colab.research.google.com/github/unslothai/unsloth/blob/main/studio/Unsloth_Studio_Colab.ipynb" %}

Once installation is complete, scroll to **Start Unsloth Studio** and click **Open Unsloth Studio** in the white box shown on the left:

**Scroll further down to see the actual UI.**
{% endcolumn %}

{% column %}

<div data-with-frame="true"><figure><img src="https://3215535692-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FxhOjnexMCB3dmuQFQ2Zq%2Fuploads%2FkYitMrK55Ic6eIGqiKEJ%2FScreenshot%202026-03-16%20at%2011.21.16%E2%80%AFPM.png?alt=media&#x26;token=4388c309-a598-41f3-9301-e434c334ac1c" alt=""><figcaption></figcaption></figure></div>
{% endcolumn %}
{% endcolumns %}

{% hint style="warning" %}
Sometimes the Studio link may return an error. This happens if cookies are disabled or you're using an ad blocker or Mozilla Firefox. You can still access the UI by scrolling below the button.

Google Colab also expects you to stay on the Colab page; if it detects inactivity, it may shut down the GPU session.
{% endhint %}

## Troubleshooting

<table><thead><tr><th width="211.5999755859375">Problem</th><th>Fix</th></tr></thead><tbody><tr><td>Python version error</td><td>Install a supported Python (3.11 up to, but not including, 3.14), e.g. <code>sudo apt install python3.12 python3.12-venv</code></td></tr><tr><td><code>nvidia-smi not found</code></td><td>Install NVIDIA drivers from https://www.nvidia.com/Download/index.aspx</td></tr><tr><td><code>nvcc not found</code> (CUDA)</td><td><code>sudo apt install nvidia-cuda-toolkit</code> or add <code>/usr/local/cuda/bin</code> to PATH</td></tr><tr><td>llama-server build failed</td><td>Non-fatal: Studio still works, but GGUF inference won't be available. Install <code>cmake</code> and re-run setup to fix.</td></tr><tr><td><code>cmake not found</code></td><td><code>sudo apt install cmake</code></td></tr><tr><td><code>git not found</code></td><td><code>sudo apt install git</code></td></tr><tr><td>Build failed</td><td>Delete <code>~/.unsloth/llama.cpp</code> and re-run setup</td></tr></tbody></table>
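For the `nvcc not found` row, the PATH fix can be applied like this (append the line to `~/.bashrc` to make it permanent; this assumes the standard `/usr/local/cuda` install location):

```shell
# Make nvcc from the CUDA toolkit visible in the current shell
export PATH="/usr/local/cuda/bin:$PATH"
```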

