# How to Run Local AI Models with OpenClaw

This guide shows you how to run open LLMs locally with **OpenClaw** by connecting it to **Unsloth**. OpenClaw is an **open-source AI agent** interface that connects to a model to run tasks across your project.

OpenClaw works with any local model, including DeepSeek, Qwen, Gemma, and more, by connecting through **Unsloth’s OpenAI-compatible API**. OpenClaw acts as the client, while Unsloth loads and serves models via a **local API**.

After setup, OpenClaw will run against your local model through Unsloth, letting you use it directly as an **AI agent**.

<a href="/pages/CwQEpEmkKPmyEYdnEngt#connecting-to-openclaw" class="button primary" data-icon="lobster">Connecting to OpenClaw</a><a href="/pages/CwQEpEmkKPmyEYdnEngt#quickstart" class="button primary">Quickstart</a>

{% hint style="info" %}
In this tutorial, we’ll use `Unsloth-gpt-oss-120b` in Unsloth and access it through OpenClaw. Prefer a different model? Swap in any other model by loading it in Unsloth and updating the configuration.
{% endhint %}

### Installing OpenClaw

{% tabs %}
{% tab title="macOS, Linux, WSL" %}
Install OpenClaw using the official installer:

`curl -fsSL https://openclaw.ai/install.sh | bash`

This sets up OpenClaw and guides you through initial setup.
{% endtab %}

{% tab title="Windows (PowerShell)" %}
Install OpenClaw using the official installer:

`iwr -useb https://openclaw.ai/install.ps1 | iex`

This sets up OpenClaw and guides you through initial setup.
{% endtab %}
{% endtabs %}

{% hint style="info" %}
API access is part of **Unsloth (Beta)**. Make sure you're on the latest version; earlier builds don't expose the external API. See Installation to install or update.
{% endhint %}

### ⚡ Quickstart

After installing OpenClaw, you'll need to install Unsloth Studio, which loads local models and serves them for OpenClaw to run inference against.

1. **Install or update** [**Unsloth Studio**](/docs/new/studio.md)**.** Earlier versions don't expose the external API. See Installation.
2. **Launch Unsloth.** Note the port it starts on (usually `8000` or `8888`); you'll see it in the terminal output and in the browser URL (`http://localhost:PORT`).
3. **Load a model.** Click **New Chat**, pick or search a model (GGUF), and wait for it to finish loading.
4. **Create an API key.** In Unsloth, click your **Unsloth** avatar in the bottom-left → **Settings** → **API Keys** → type a key name → **Create**. Copy the `sk-unsloth-…` value that appears. Unsloth only shows it once.
5. **Point your client at Unsloth.** Use `http://localhost:PORT` as the base URL and your `sk-unsloth-…` key for auth. Jump to the recipe for your tool below.
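Before wiring up OpenClaw, you can sanity-check that Unsloth is reachable. The sketch below assumes Unsloth exposes the standard OpenAI-compatible `/v1/models` listing route; the port and key are placeholders — substitute your own values:

```python
import json
import urllib.request
import urllib.error


def list_models(base_url="http://localhost:8000", api_key="sk-unsloth-xxxxxxxxxxxx"):
    """Query the OpenAI-compatible /v1/models endpoint.

    Returns a list of model ids, or None if the server isn't reachable.
    """
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            data = json.load(resp)
            return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError) as exc:
        print(f"Unsloth not reachable: {exc}")
        return None
```

If this returns your loaded model's id, the base URL and key are correct and OpenClaw should be able to connect with the same values.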

### 🔑 Creating an API key

Keys are created from **Unsloth → Settings → API Keys**.

1. Open the sidebar and click your **Unsloth** avatar at the bottom-left.
2. Go to **Settings** → **API Keys**.
3. Enter a friendly name (e.g. `claude-code-macbook`).
4. *(Optional)* Set an expiry.
5. Click **Create**.
6. **Copy the key immediately.** Unsloth stores only a hash and you won't be able to view it again.

<div data-with-frame="true"><figure><img src="/files/h74Myk0Gm7aygIC5vAsH" alt="" width="375"><figcaption></figcaption></figure></div>

All keys start with the `sk-unsloth-` prefix. Revoke a key from the same page at any time. Requests made with a revoked key will fail with `401 Unauthorized`.

{% hint style="warning" %}
Treat your API key like a password. Anyone with the key and network access to your Unsloth instance can send requests to your loaded model.
{% endhint %}
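One way to avoid committing the key to config files or shell history is to keep it in an environment variable and read it at startup. A minimal sketch — the variable name `UNSLOTH_API_KEY` is our own convention, not something Unsloth requires:

```python
import os


def load_api_key(var="UNSLOTH_API_KEY"):
    """Read the Unsloth API key from the environment rather than a config file.

    Returns the key string, or None if the variable is unset.
    """
    return os.environ.get(var)
```

Your client code can then fail fast with a clear message when the key is missing, instead of sending unauthenticated requests that come back `401 Unauthorized`.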

### Connecting to OpenClaw

OpenClaw reads its config from `~/.openclaw/openclaw.json`. Add (or merge) a `models` block with a `unsloth` provider pointing at Unsloth's Anthropic Messages API.

<div data-with-frame="true"><figure><img src="/files/eGHdaU73Dw3RPGRj07KM" alt="" width="375"><figcaption></figcaption></figure></div>

{% code title="~/.openclaw/openclaw.json" %}

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "unsloth": {
        "baseUrl": "http://localhost:8888/v1",
        "apiKey": "sk-unsloth-xxxxxxxxxxxx",
        "api": "anthropic-messages",
        "models": [
          {
            "id": "unsloth/gpt-oss-120b-GGUF",
            "name": "unsloth/gpt-oss-120b-GGUF"
          }
        ],
        "authHeader": true
      }
    }
  }
}
```

{% endcode %}

**Notes:**

* `baseUrl` must end in `/v1`.
* `api: "anthropic-messages"` tells OpenClaw to talk to Unsloth's `/v1/messages` endpoint.
* `authHeader: true` sends your key as `Authorization: Bearer …`.
* Set each model's `id` and `name` to the name you chose when loading the model in Unsloth.
* If you're running Unsloth on a remote machine, replace `localhost:8888` with that machine's address (e.g. `http://10.0.0.42:8888/v1`).
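Because the config above points OpenClaw at Unsloth's `/v1/messages` endpoint, you can exercise that same endpoint directly to confirm it works end to end. The sketch below assumes the endpoint accepts Anthropic-style Messages request bodies, as the `api: "anthropic-messages"` setting implies; the key, port, and model id are placeholders from the example config:

```python
import json
import urllib.request
import urllib.error


def send_message(prompt,
                 base_url="http://localhost:8888/v1",
                 api_key="sk-unsloth-xxxxxxxxxxxx",
                 model="unsloth/gpt-oss-120b-GGUF"):
    """POST a single-turn request to the Anthropic-style /v1/messages endpoint.

    Returns the parsed JSON response, or {"error": ...} on failure.
    """
    body = json.dumps({
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/messages",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as exc:
        # A revoked or mistyped key comes back as 401 Unauthorized.
        return {"error": exc.code}
    except (urllib.error.URLError, OSError) as exc:
        return {"error": str(exc)}
```

If this returns a completion, OpenClaw's `unsloth` provider block should work with the same base URL, key, and model id.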


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://unsloth.ai/docs/integrations/openclaw.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
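Since the question is passed as a query parameter, it should be URL-encoded before being appended. A small helper for building the request URL (the function name is our own; the page URL is the one given above):

```python
import urllib.parse


def ask_url(question, page="https://unsloth.ai/docs/integrations/openclaw.md"):
    """Build a documentation-query URL with the question URL-encoded."""
    return f"{page}?{urllib.parse.urlencode({'ask': question})}"
```

For example, `ask_url("What ports does Unsloth use?")` percent-encodes the spaces and punctuation so the question survives transport intact.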
