How to Run Local LLMs with Claude Code & OpenAI Codex

A guide to running Claude Code and OpenAI Codex on your local device.

This step-by-step guide shows you how to connect open LLMs to Claude Code and Codex entirely locally. It works with any open model, such as DeepSeek, Qwen, or Gemma.

For this tutorial, we’ll use GLM-4.7-Flash (the strongest 30B MoE agentic & coding model as of Jan 2026) to autonomously fine-tune an LLM with Unsloth. You can swap in any other model; just update the model names in your scripts.


We use llama.cpp, an open-source framework for running LLMs on Mac, Linux, and Windows devices. llama.cpp includes llama-server, which lets you serve and deploy LLMs efficiently. The model will be served on port 8000, with all agent tools routed through a single OpenAI-compatible endpoint.

For model quants, we use Unsloth Dynamic GGUFs, which let you run any LLM quantized while retaining as much accuracy as possible.

📖 Step #1: Install llama.cpp

1. Build llama.cpp from source

We need to install llama.cpp to deploy and serve local LLMs for use in Claude Code, Codex, and other agents. We follow the official build instructions for correct GPU bindings and maximum performance. Change -DGGML_CUDA=ON to -DGGML_CUDA=OFF if you don't have a GPU or just want CPU inference.

apt-get update
apt-get install pciutils build-essential cmake curl libcurl4-openssl-dev git-all -y
git clone https://github.com/ggml-org/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-cli llama-mtmd-cli llama-server llama-gguf-split
cp llama.cpp/build/bin/llama-* llama.cpp
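
Once the build finishes, you can confirm the binaries were copied correctly (a quick sanity check; --version is a standard flag across the llama.cpp tools):

./llama.cpp/llama-cli --version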

2. Download and use models locally

Download the model via huggingface_hub in Python (after installing it with pip install huggingface_hub hf_transfer). We use the UD-Q4_K_XL quant for the best size/accuracy balance. You can find all Unsloth GGUF uploads in our Collection here.

import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id = "unsloth/GLM-4.7-Flash-GGUF",
    local_dir = "unsloth/GLM-4.7-Flash-GGUF",
    allow_patterns = ["*UD-Q4_K_XL*"],
)

3. Start llama-server

To deploy GLM-4.7-Flash for agentic workloads, we use llama-server. We apply Z.ai's recommended sampling parameters (temp 1.0, top_p 0.95) and enable --jinja for proper tool calling support.

Run this command in a new terminal (use tmux or open a fresh terminal window). The setup below fits on a 24GB GPU such as an RTX 4090 (it uses about 23GB). --fit on will also auto-offload layers, but if you see poor performance, reduce --ctx-size. We use --cache-type-k q8_0 --cache-type-v q8_0 for KV cache quantization to reduce VRAM usage.

./llama.cpp/llama-server \
    --model unsloth/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-UD-Q4_K_XL.gguf \
    --alias "unsloth/GLM-4.7-Flash" \
    --fit on \
    --temp 1.0 \
    --top-p 0.95 \
    --min-p 0.01 \
    --port 8000 \
    --jinja \
    --kv-unified \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    --flash-attn on \
    --batch-size 4096 --ubatch-size 1024 \
    --ctx-size 131072
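
Once the server is running, you can sanity-check the OpenAI-compatible API before connecting any agent (the host and port below match the --port flag above):

curl http://localhost:8000/v1/models

curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "unsloth/GLM-4.7-Flash", "messages": [{"role": "user", "content": "Hello!"}]}'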

👾 Claude Code Tutorial

Claude Code is Anthropic's agentic coding tool that lives in your terminal, understands your codebase, and handles complex Git workflows via natural language.

Install Claude Code and run it locally

curl -fsSL https://claude.ai/install.sh | bash
# Or via Homebrew: brew install --cask claude-code
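
To verify the install, claude --version should print the installed version:

claude --version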

Configure

Set the ANTHROPIC_BASE_URL environment variable to redirect Claude Code to your local llama.cpp server:

export ANTHROPIC_BASE_URL="http://localhost:8000"

Session vs Persistent: The command above applies to the current terminal only. To persist it across new terminals:

Add the export line to ~/.bashrc (bash) or ~/.zshrc (zsh).
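
For example, with bash (use ~/.zshrc instead if you use zsh):

echo 'export ANTHROPIC_BASE_URL="http://localhost:8000"' >> ~/.bashrc
source ~/.bashrc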

🌟 Running Claude Code locally on Linux / Mac / Windows

Navigate to your project folder (mkdir project ; cd project) and run:
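
claude

Claude Code picks up ANTHROPIC_BASE_URL from your environment and routes all requests to the local llama.cpp server.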

To set Claude Code to execute commands without any approvals, run the following (BEWARE: this will let Claude Code run and execute whatever code it likes without any approvals!):
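
claude --dangerously-skip-permissions   # skips all approval prompts; use with care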

Try a prompt along the following lines to install and run a simple Unsloth finetune (the exact wording is up to you):
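
Install Unsloth into a fresh virtual environment using uv, then write and run a short Python script that fine-tunes a small model (for example unsloth/Qwen3-4B) on a tiny dataset for a few steps and saves the LoRA adapters.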

After waiting a bit, Unsloth will be installed in a venv via uv and the model will be loaded up.

Finally, you will see a successfully fine-tuned model trained with Unsloth!

IDE Extension (VS Code / Cursor)

You can also use Claude Code directly inside your editor via the official extension.

To install it, press Ctrl+Shift+X (Windows/Linux) or Cmd+Shift+X (Mac), search for Claude Code, and click Install.

👾 OpenAI Codex CLI Tutorial

Codex is OpenAI's official coding agent that runs locally. While designed for ChatGPT, it supports custom API endpoints, making it perfect for local LLMs. See https://developers.openai.com/codex/windows/ for installing on Windows; it's best to use WSL.

Install

Mac (Homebrew):
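
brew install codex   # the Codex CLI formula; check the Codex docs if the name differs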

Universal (npm), for Linux and other platforms:
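
npm install -g @openai/codex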

Configure

First run codex to log in and set things up, then create or edit the configuration file at ~/.codex/config.toml (Mac/Linux) or %USERPROFILE%\.codex\config.toml (Windows).

Use cat > ~/.codex/config.toml for Linux / Mac:
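
A minimal configuration along these lines points Codex at the local llama.cpp server; the provider id llamacpp is arbitrary, and the model name must match the --alias passed to llama-server. Double-check the exact keys against the Codex configuration docs for your version:

cat > ~/.codex/config.toml << 'EOF'
# Use the local llama.cpp server via its OpenAI-compatible API
model = "unsloth/GLM-4.7-Flash"
model_provider = "llamacpp"

[model_providers.llamacpp]
name = "llama.cpp"
base_url = "http://localhost:8000/v1"
wire_api = "chat"
EOF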

Navigate to your project folder (mkdir project ; cd project) and run:
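
codex

This starts an interactive Codex session using the local model provider defined in config.toml.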

Or, to allow any code to execute without approvals, run the following (BEWARE: this will make Codex run and execute whatever code it likes without any approvals!):
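
codex --dangerously-bypass-approvals-and-sandbox   # no approvals, no sandbox; check codex --help if this flag differs in your version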

You will then see the Codex interface start up, connected to your local model.

Try the same kind of prompt as in the Claude Code section above to install and run a simple Unsloth finetune. You will see Codex install Unsloth and start the training run, and if you wait a little longer, you will end up with a successfully fine-tuned model.
