How to Run Local LLMs with Claude Code

A guide to using open models with Claude Code on your local device.

This step-by-step guide shows you how to connect open LLMs and APIs to Claude Code entirely locally, complete with screenshots. Run any open model, such as Qwen3.5, DeepSeek, or Gemma.

For this tutorial, we'll use Qwen3.5 and GLM-4.7-Flash, the strongest ~35B MoE agentic & coding models as of Mar 2026 (they work great on a 24GB RAM/unified-memory device), and use them to autonomously fine-tune an LLM with Unsloth. You can swap in any other model; just update the model names in your scripts.

Jump to: Qwen3.5 Tutorial · GLM-4.7-Flash Tutorial · Claude Code Tutorial

For model quants, we will utilize Unsloth Dynamic GGUFs to run any LLM quantized, while retaining as much accuracy as possible.

Claude Code has changed quite a lot since Jan 2026. There are many more settings and necessary features you will need to toggle.

📖 LLM Setup Tutorials

Before we begin, we first need to complete setup for the specific model you're going to use. We use llama.cpp, an open-source framework for running LLMs on Mac, Linux, Windows, and other devices. Llama.cpp includes llama-server, which lets you serve and deploy LLMs efficiently. The model will be served on port 8001, with all agent tools routed through a single OpenAI-compatible endpoint.

Qwen3.5 Tutorial

We'll be using Qwen3.5-35B-A3B with specific settings for fast, accurate coding tasks. If you want a smarter model or don't have enough VRAM, Qwen3.5-27B is a great choice (though ~2x slower), or you can use smaller Qwen3.5 variants like 9B, 4B, or 2B.

Use Qwen3.5-27B if you want a smarter model or if you don't have enough VRAM; it will be ~2x slower than 35B-A3B, however. Alternatively, Qwen3-Coder-Next is fantastic if you have enough VRAM.

1

Install llama.cpp

We need to install llama.cpp to deploy/serve local LLMs for use in Claude Code and other tools. We follow the official build instructions for correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference. For Apple Mac / Metal devices, set -DGGML_CUDA=OFF and continue as usual; Metal support is on by default.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
2

Download and use models locally

Download the model via the huggingface_hub CLI (after installing it via pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here. If downloads get stuck, see the Hugging Face Hub XET debugging guide.

hf download unsloth/Qwen3.5-35B-A3B-GGUF \
    --local-dir unsloth/Qwen3.5-35B-A3B-GGUF \
    --include "*UD-Q4_K_XL*" # Use "*UD-Q2_K_XL*" for Dynamic 2bit
3

Start the Llama-server

To deploy Qwen3.5 for agentic workloads, we use llama-server. We apply Qwen's recommended sampling parameters for thinking mode: temp 0.6, top_p 0.95, top_k 20. Keep in mind these numbers change for non-thinking mode or other tasks.

Run this command in a new terminal (use tmux or open a new terminal window). The command below should fit perfectly in a 24GB GPU (RTX 4090), using about 23GB. --fit on will also auto-offload layers, but if you see poor performance, reduce --ctx-size.

./llama.cpp/llama-server \
    --model unsloth/Qwen3.5-35B-A3B-GGUF/Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
    --alias "unsloth/Qwen3.5-35B-A3B" \
    --temp 0.6 \
    --top-p 0.95 \
    --top-k 20 \
    --min-p 0.00 \
    --port 8001 \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on --fit on \
    --ctx-size 131072 # change as required
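Once the server is up, you can sanity-check the OpenAI-compatible endpoint with a quick request. This is a minimal sketch: it assumes the server above is running on port 8001, and uses the alias we passed via --alias as the model name.

```shell
# Smoke test: ask the llama-server (assumed running on port 8001) for a short completion
curl http://127.0.0.1:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "unsloth/Qwen3.5-35B-A3B",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 32
  }'
```

If you get a JSON response with a choices array back, the endpoint is ready for Claude Code.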

GLM-4.7-Flash Tutorial

1

Install llama.cpp

We need to install llama.cpp to deploy/serve local LLMs for use in Claude Code and other tools. We follow the official build instructions for correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference. For Apple Mac / Metal devices, set -DGGML_CUDA=OFF and continue as usual; Metal support is on by default.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
2

Download and use models locally

Download the model via huggingface_hub in Python (after installing it via pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here. If downloads get stuck, see the Hugging Face Hub XET debugging guide.

import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "unsloth/GLM-4.7-Flash-GGUF",
    local_dir = "unsloth/GLM-4.7-Flash-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)
3

Start the Llama-server

To deploy GLM-4.7-Flash for agentic workloads, we use llama-server. We apply Z.ai's recommended sampling parameters (temp 1.0, top_p 0.95).

Run this command in a new terminal (use tmux or open a new terminal window). The command below should fit perfectly in a 24GB GPU (RTX 4090), using about 23GB. --fit on will also auto-offload layers, but if you see poor performance, reduce --ctx-size.

./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-UD-Q4_K_XL.gguf \
    --alias "unsloth/GLM-4.7-Flash" \
    --temp 1.0 \
    --top-p 0.95 \
    --min-p 0.01 \
    --port 8001 \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on --fit on \
    --batch-size 4096 --ubatch-size 1024 \
    --ctx-size 131072 # change as required

Claude Code Tutorial

Once you are done with the first steps of setting up your local LLM, it's time to set up Claude Code. Claude Code is Anthropic's agentic coding tool that lives in your terminal, understands your codebase, and handles complex Git workflows via natural language.

Install Claude Code and run it locally
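If you haven't installed Claude Code yet, the npm package is the usual route. A minimal sketch, assuming Node.js 18+ is already available:

```shell
# Install Claude Code globally via npm, then verify it is on PATH
npm install -g @anthropic-ai/claude-code
claude --version
```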

Configure

Set the ANTHROPIC_BASE_URL environment variable to redirect Claude Code to your local llama.cpp server.

You may also need to set ANTHROPIC_API_KEY, depending on the server. For example:
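As a sketch, assuming the llama-server from the earlier steps is listening locally on port 8001:

```shell
export ANTHROPIC_BASE_URL="http://127.0.0.1:8001"
export ANTHROPIC_API_KEY="sk-no-key-required"  # llama-server accepts any dummy key
```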

Session vs Persistent: these export commands apply to the current terminal only. To persist them across new terminals, add the export lines to ~/.bashrc (bash) or ~/.zshrc (zsh).


Missing API key

If you see a missing API key error, set export ANTHROPIC_API_KEY='sk-no-key-required' (or any dummy value such as 'sk-1234').


If Claude Code still asks you to sign in on first run, add "hasCompletedOnboarding": true and "primaryApiKey": "sk-dummy-key" to ~/.claude.json. For the VS Code extension, also enable Disable Login Prompt in settings (or add "claudeCode.disableLoginPrompt": true to settings.json).
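As a sketch, assuming jq is installed, you can merge those two keys into an existing ~/.claude.json without clobbering your other settings:

```shell
# Merge the onboarding keys into ~/.claude.json, keeping any existing settings
[ -f ~/.claude.json ] || echo '{}' > ~/.claude.json
jq '. + {"hasCompletedOnboarding": true, "primaryApiKey": "sk-dummy-key"}' ~/.claude.json \
  > ~/.claude.json.tmp && mv ~/.claude.json.tmp ~/.claude.json
```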

🕵️ Fixing 90% slower inference in Claude Code


To fix this slowdown, edit ~/.claude/settings.json and add CLAUDE_CODE_ATTRIBUTION_HEADER set to "0" inside the "env" section.


Using export CLAUDE_CODE_ATTRIBUTION_HEADER=0 DOES NOT work!

For example, run cat > ~/.claude/settings.json, paste the configuration, then press ENTER and CTRL+D to save it. If you already have a ~/.claude/settings.json file, just add "CLAUDE_CODE_ATTRIBUTION_HEADER": "0" to the "env" section, and leave the rest of the settings file unchanged.
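For a fresh setup (assuming no existing settings file), the whole file can be written in one go with a heredoc:

```shell
# Write a minimal settings.json containing only the attribution-header fix
mkdir -p ~/.claude
cat > ~/.claude/settings.json << 'EOF'
{
  "env": {
    "CLAUDE_CODE_ATTRIBUTION_HEADER": "0"
  }
}
EOF
```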

🌟 Running Claude Code locally on Linux / Mac / Windows


Navigate to your project folder (mkdir project ; cd project) and run:
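A minimal sketch: it assumes the ANTHROPIC_BASE_URL export described earlier, and uses ANTHROPIC_MODEL to tell Claude Code which model name to request (this must match the --alias we passed to llama-server).

```shell
export ANTHROPIC_BASE_URL="http://127.0.0.1:8001"
export ANTHROPIC_MODEL="unsloth/GLM-4.7-Flash"   # must match the server's --alias
claude
```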

To use Qwen3.5-35B-A3B, simply change it to:
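For example, assuming you started the Qwen server from the earlier tutorial with the alias we set there:

```shell
export ANTHROPIC_MODEL="unsloth/Qwen3.5-35B-A3B"
```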

To make Claude Code execute commands without any approvals, use the following (BEWARE: this lets Claude Code run and execute code however it likes, without any approvals!):
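Claude Code's --dangerously-skip-permissions flag does this:

```shell
claude --dangerously-skip-permissions   # skips ALL permission prompts; use with care
```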

Try this prompt to install and run a simple Unsloth finetune:

After waiting a bit, Unsloth will be installed in a venv via uv, and loaded up:

and finally you will see a successfully finetuned model with Unsloth!

IDE Extension (VS Code / Cursor)

You can also use Claude Code directly inside your editor via the official extension: press Ctrl+Shift+X (Windows/Linux) or Cmd+Shift+X (Mac), search for Claude Code, and click Install.

