# How to Run Local AI Models with OpenCode

This guide walks you through connecting **OpenCode Desktop** to [Unsloth](https://github.com/unslothai/unsloth) to run open LLMs **entirely locally**. OpenCode is an **open-source AI coding agent** that reads, modifies, and executes code across your project using a connected model.

This works with any **local model** exposed through Unsloth’s **OpenAI-compatible API**, including DeepSeek, Qwen, Gemma, and more. OpenCode acts as the client, while Unsloth loads and serves models via a local API.

After setup, OpenCode connects to Unsloth, where you can select a loaded model and use it as a **coding agent**.

<a href="#installing-opencode-desktop" class="button primary">OpenCode Setup</a><a href="/pages/qaA8ZjTxsH2GTuBOHyra#quickstart" class="button primary">Quickstart</a>

{% hint style="info" %}
In this tutorial, we’ll use `Unsloth-gpt-oss-20b` loaded in Unsloth and access it directly inside OpenCode. Prefer a different model? Swap in any other model by loading it in Unsloth.
{% endhint %}

### Installing OpenCode Desktop

Download and install [**OpenCode**](https://opencode.ai/download) Desktop for your platform (Windows, macOS, or Linux).

<div data-with-frame="true"><figure><img src="/files/85Ryh9ej5xNvjDIzbqiQ" alt="" width="563"><figcaption></figcaption></figure></div>

**Windows**

* Run the downloaded `.exe`
* If prompted by SmartScreen, click **More info → Run anyway**
* After installation completes, click **Finish** (leave “Run OpenCode” enabled to launch immediately).

<div data-with-frame="true"><figure><img src="/files/85wCfvL52fcuq8o9zESp" alt="" width="375"><figcaption></figcaption></figure></div>

**macOS**

* Open the `.dmg` and drag **OpenCode** to Applications
* On first launch, you may need to **right-click → Open**

**Linux**

* `.deb`: `sudo dpkg -i <file>`
* `.rpm`: `sudo rpm -i <file>`

Once installed, launch OpenCode Desktop.

You should see the main interface open with an input bar at the bottom. If the app doesn’t open, try reinstalling or check your system’s security settings.

## Installing Unsloth

### ⚡ Quickstart

After installing OpenCode, we'll need to install Unsloth Studio, which loads local models and serves them over an API that OpenCode can use for inference.

1. **Install or update Unsloth Studio.** Earlier versions don't expose the external API. See Installation.
2. **Launch Unsloth.** Note the port it starts on (usually `8000` or `8888`). You'll see it in the terminal output and in the browser URL (`http://localhost:PORT`).
3. **Load a model.** Click **New Chat**, pick or search a model (GGUF), and wait for it to finish loading.
4. **Create an API key.** In Unsloth, click your **Unsloth** avatar in the bottom-left → **Settings** → **API** → type a key name → **Create**. Copy the `sk-unsloth-…` value that appears; Unsloth only shows it once.
5. **Point your client at Unsloth.** Use `http://localhost:PORT` as the base URL and your `sk-unsloth-…` key for auth. Jump to the recipe for your tool below.
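You can sanity-check the setup above with a quick request to the local API before touching OpenCode. A minimal sketch using only Python's standard library (the `8888` port and the placeholder key are assumptions; substitute your own values):

```python
import json
import urllib.request

# Assumed values -- replace with your Unsloth port and the key you created.
BASE_URL = "http://localhost:8888/v1"   # keep the /v1 suffix
API_KEY = "sk-unsloth-REPLACE_ME"

def build_models_request(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a GET /v1/models request against the local Unsloth API."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_models(base_url: str = BASE_URL, api_key: str = API_KEY) -> list:
    """Return the IDs of the models Unsloth is currently serving."""
    with urllib.request.urlopen(build_models_request(base_url, api_key)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]

if __name__ == "__main__":
    print(list_models())  # e.g. ['gpt-oss-20b-GGUF']
```

If this prints your loaded model's ID, the API and your key are working; a `401 Unauthorized` means the key is wrong or revoked.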

### 🔑 Creating an API key

1. Open the sidebar and click your **Unsloth** avatar at the bottom-left.
2. Go to **Settings** → **API**.
3. Enter a friendly name (e.g. `claude-code-macbook`).
4. *(Optional)* Set an expiry.
5. Click **Create**.
6. **Copy the key immediately.** Unsloth stores only a hash and you won't be able to view it again.

<div data-with-frame="true"><figure><img src="/files/h74Myk0Gm7aygIC5vAsH" alt="" width="375"><figcaption></figcaption></figure></div>

All keys start with the `sk-unsloth-` prefix. Revoke a key from the same page at any time. Requests made with a revoked key will fail with `401 Unauthorized`.

{% hint style="warning" %}
Treat your API key like a password. Anyone with the key and network access to your Unsloth instance can send requests to your loaded model.
{% endhint %}

## 🖇️ Connecting Unsloth to OpenCode Desktop

**OpenCode** supports any OpenAI-compatible provider, so you can wire Unsloth in as a **Custom** provider. Setup is a one-time flow inside OpenCode's **Connect provider** dialog.

**1. Open the provider picker.** In opencode, type `/model` (or click the model selector at the bottom of the input).

<div data-with-frame="true"><figure><img src="/files/Q144MyFkMUTuCWXdO0bq" alt=""><figcaption></figcaption></figure></div>

Then click **Connect provider** at the top-right of the select model dialog.

<div data-with-frame="true"><figure><img src="/files/Fhnitdj2Rq7AzgpHfipg" alt="" width="375"><figcaption></figcaption></figure></div>

**2. Choose "Custom".** In the provider list, scroll to **Other** and pick **Custom**.

<div data-with-frame="true"><figure><img src="/files/EZU9UHZXpWKXkn93SuIc" alt="" width="375"><figcaption></figcaption></figure></div>

**3. Fill in the custom provider form:**

| Field            | Value                                                                                             |
| ---------------- | ------------------------------------------------------------------------------------------------- |
| **Provider ID**  | `unsloth-studio` *(lowercase, hyphens allowed)*                                                   |
| **Display name** | `Unsloth Studio`                                                                                  |
| **Base URL**     | `http://localhost:8888/v1/` *(replace `8888` with your* Unsloth *port; keep the trailing `/v1/`)* |
| **API key**      | Your `sk-unsloth-…` key                                                                           |

In the **Models** section, add one row per model you want to expose. The left field is the model ID as Unsloth serves it; the right field is what opencode will display:

| Model ID (left)                                                       | Display name (right)                            |
| --------------------------------------------------------------------- | ----------------------------------------------- |
| `gpt-oss-20b-GGUF` *(the exact name of the model as shown in Studio)* | `Unsloth-gpt-oss-20b` *(shown inside opencode)* |

Leave **Headers** empty unless you're proxying Unsloth through an auth layer that needs custom headers.

<div data-with-frame="true"><figure><img src="/files/vFQFTjaKt6k2ZPqGaywH" alt="" width="375"><figcaption></figcaption></figure></div>

**4. Click Submit.** You should see an *"Unsloth Studio connected. Unsloth models are now available to use"* toast.

<figure><img src="/files/Wjq1iVj625zoboIFfGVq" alt="" width="375"><figcaption></figcaption></figure>

{% hint style="warning" %}
**Restart opencode after adding the provider.** The new provider only becomes selectable after a restart.
{% endhint %}

**5. Select your Unsloth model.** Once opencode is back up, type `/model`, search `unsloth`, and pick the model under the **Unsloth Studio** group. It'll be active on your next message.

<div data-with-frame="true"><figure><img src="/files/b1NTBKWQVy7AkgSWhZji" alt="" width="375"><figcaption></figcaption></figure></div>
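If you prefer configuration files over the dialog, opencode can also read custom providers from an `opencode.json` config file. A sketch of an entry equivalent to the form above (field names such as `npm`, `options`, and `baseURL` follow opencode's config schema; verify them against your installed version's documentation):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "unsloth-studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Unsloth Studio",
      "options": {
        "baseURL": "http://localhost:8888/v1"
      },
      "models": {
        "gpt-oss-20b-GGUF": {
          "name": "Unsloth-gpt-oss-20b"
        }
      }
    }
  }
}
```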

Unsloth's local API also works with both the OpenAI and Anthropic Python SDKs, so you can call the same endpoint from your own scripts.
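Because the endpoint speaks the OpenAI wire format, you can also call it without any SDK at all. A minimal chat-completions sketch using only Python's standard library (the port, key, and model ID are the example values used elsewhere in this guide; substitute your own):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str,
                       model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST /v1/chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def ask_model(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send the prompt to the locally served model and return its reply text."""
    req = build_chat_request(base_url, api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_model("http://localhost:8888/v1", "sk-unsloth-REPLACE_ME",
                    "gpt-oss-20b-GGUF", "Say hello in one word."))
```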


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://unsloth.ai/docs/integrations/opencode.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
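A client can construct such a query with standard URL encoding; a minimal Python sketch (the question text is illustrative):

```python
import urllib.parse

def build_ask_url(page_url: str, question: str) -> str:
    """URL-encode a natural-language question as the `ask` query parameter."""
    return f"{page_url}?ask={urllib.parse.quote(question)}"

url = build_ask_url(
    "https://unsloth.ai/docs/integrations/opencode.md",
    "How do I revoke an Unsloth API key?",
)
print(url)
```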
