How to Run Local LLMs with Claude Code

A guide to using Claude Code on your local device with open models.

This step-by-step guide shows you how to connect open LLMs and local APIs to Claude Code, running entirely on your own device. You can use any open model, such as DeepSeek, Qwen, or Gemma.

For this tutorial, we'll use GLM-4.7-Flash, the strongest 30B MoE agentic and coding model as of Jan 2026 (it works great on a device with 24 GB of RAM or unified memory), to autonomously fine-tune an LLM with Unsloth. You can swap in any other model; just update the model names in your scripts.


We use llama.cpp, an open-source framework for running LLMs on Mac, Linux, and Windows devices. llama.cpp includes llama-server, which lets you serve and deploy LLMs efficiently. The model will be served on port 8001, with all agent tools routed through a single OpenAI-compatible endpoint.

For model quants, we use Unsloth Dynamic GGUFs, which let you run any LLM quantized while retaining as much accuracy as possible.

📖 #1: Setup Tutorial

1. Install llama.cpp

We need to install llama.cpp to deploy and serve local LLMs for use in Claude Code. We follow the official build instructions to get correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or only want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
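
To confirm the build succeeded and the binaries were copied, print the version info (a quick sanity check; --version is supported by recent llama.cpp builds):

./llama.cpp/llama-server --version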
2. Download and use models locally

Download the model via huggingface_hub in Python (after installing it via pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here. If downloads get stuck, see the Hugging Face Hub XET debugging guide.

import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "unsloth/GLM-4.7-Flash-GGUF",
    local_dir = "unsloth/GLM-4.7-Flash-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)
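
To confirm the quant downloaded, list the directory recursively; you should see one or more GLM-4.7-Flash-UD-Q4_K_XL*.gguf files (Unsloth sometimes places quants in a subfolder named after the quant, so adjust the --model path below accordingly):

ls -R unsloth/GLM-4.7-Flash-GGUF/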
3. Start llama-server

To deploy GLM-4.7-Flash for agentic workloads, we use llama-server. We apply Z.ai's recommended sampling parameters (temp 1.0, top_p 0.95) and enable --jinja for proper tool calling support.

Run this command in a new terminal (use tmux or open a new terminal window). The configuration below fits in a 24 GB GPU such as an RTX 4090 (it uses about 23 GB). --fit on will also auto-offload, but if you see poor performance, reduce --ctx-size. We use --cache-type-k q8_0 --cache-type-v q8_0 to quantize the KV cache and reduce VRAM usage.

./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-UD-Q4_K_XL.gguf \
    --alias "unsloth/GLM-4.7-Flash" \
    --jinja \
    --temp 1.0 \
    --top-p 0.95 \
    --min-p 0.01 \
    --port 8001 \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on \
    --batch-size 4096 --ubatch-size 1024 \
    --ctx-size 131072
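Once the server is running, you can sanity-check the OpenAI-compatible endpoint from another terminal. A minimal check using the standard /v1/models and /v1/chat/completions routes that llama-server exposes (the model name matches the --alias set above):

curl http://localhost:8001/v1/models
curl http://localhost:8001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "unsloth/GLM-4.7-Flash", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'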

📖 #2: Claude Code Tutorial

Claude Code is Anthropic's agentic coding tool that lives in your terminal, understands your codebase, and handles complex Git workflows via natural language.

Install Claude Code and run it locally

curl -fsSL https://claude.ai/install.sh | bash
# Or via Homebrew: brew install --cask claude-code
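
To verify the install, check that the claude binary is on your PATH:

claude --version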

Configure

Set the ANTHROPIC_BASE_URL environment variable to redirect Claude Code to your local llama.cpp server:

export ANTHROPIC_BASE_URL="http://localhost:8001"

You might also need to set ANTHROPIC_API_KEY, depending on the server. For example:

export ANTHROPIC_API_KEY='sk-no-key-required' ## or 'sk-1234'

Session vs persistent: the commands above apply to the current terminal only. To persist across new terminals, add the export lines to ~/.bashrc (bash) or ~/.zshrc (zsh), as shown below.
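
For example (assuming zsh; use ~/.bashrc instead if your shell is bash):

echo 'export ANTHROPIC_BASE_URL="http://localhost:8001"' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY="sk-no-key-required"' >> ~/.zshrc
source ~/.zshrc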

If you see Unable to connect to API (ConnectionRefused), check that llama-server is still running, or unset ANTHROPIC_BASE_URL (via unset ANTHROPIC_BASE_URL) to point Claude Code back at Anthropic's API.

Missing API key

If you see this error, set export ANTHROPIC_API_KEY='sk-no-key-required' (or any placeholder such as 'sk-1234').

🌟 Running Claude Code locally on Linux / Mac / Windows


Navigate to your project folder (mkdir project; cd project) and launch Claude Code:
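
claude

If the command isn't found, restart your shell so the installer's PATH changes take effect.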

To make Claude Code execute commands without any approvals, launch it with the flag below (BEWARE: this lets Claude Code run and execute whatever code it likes without asking for approval!):
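
claude --dangerously-skip-permissions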

Try a prompt like the following to install and run a simple Unsloth finetune:
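
The exact prompt isn't reproduced here; something along these lines works (the model name is just an illustrative small model from the Unsloth Hugging Face org):

Install Unsloth in a fresh venv using uv, then fine-tune unsloth/Llama-3.2-1B-Instruct with LoRA on a tiny sample dataset and save the adapters. Do everything yourself without asking for confirmation.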

After waiting a bit, Unsloth will be installed in a venv via uv and loaded up.

Finally, you will see a successfully fine-tuned model with Unsloth!

IDE Extension (VS Code / Cursor)

You can also use Claude Code directly inside your editor via the official extension: running claude in the editor's integrated terminal will typically install it automatically.

Alternatively, press Ctrl+Shift+X (Windows/Linux) or Cmd+Shift+X (Mac) to open the Extensions panel, search for Claude Code, and click Install.

