Export models with Unsloth Studio
Learn how to export your Safetensors or LoRA model files to GGUF or other formats.
Use Unsloth Studio to export, save, or convert models to GGUF, Safetensors, or LoRA for deployment, sharing, or local inference in Unsloth, llama.cpp, Ollama, vLLM, and more. Export a trained checkpoint or convert any existing model.

Export Methods
Depending on your workflow, you can export a merged model, the LoRA adapter weights alone, or a GGUF model for local inference. Each method produces a different version of the model depending on how you plan to run or share it; the options below describe what each one exports.
Merged Model: A 16-bit model with the LoRA adapter merged into the base weights.
LoRA Only: Only the adapter weights; requires the original base model at load time.
GGUF / llama.cpp: The model converted to GGUF format for inference with Unsloth, llama.cpp, Ollama, or LM Studio.
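The difference between the merged and LoRA-only exports comes down to one equation: merging folds the low-rank adapter update, scaled by alpha / r, into the frozen base weights, so the exported checkpoint no longer needs a separate adapter file. A minimal pure-Python sketch of that arithmetic (the function names here are illustrative, not part of any Unsloth API):

```python
# Illustrative sketch, not Unsloth Studio's actual code: merging a LoRA
# adapter means adding the scaled low-rank update B @ A into the base
# weight matrix W, so inference needs only the merged weights.

def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the merged weight matrix."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Toy shapes: W is 2x2, rank r = 1, so A is 1x2 and B is 2x1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]         # r x d_in
B = [[0.5], [0.25]]      # d_out x r
merged = merge_lora(W, A, B, alpha=1.0, r=1)
# merged == [[1.5, 1.0], [0.25, 1.5]]
```

A "LoRA Only" export, by contrast, ships just A and B (plus the config), which is tiny but useless without the original base weights W.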
Export / Save Locally
When exporting a model, you choose where the resulting files are saved: download them directly to your machine, or push them to the Hugging Face Hub for hosting and sharing. Saving locally is useful for running the model offline, distributing files manually, or integrating with local inference tools.
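After saving a GGUF export locally, a quick sanity check is to confirm the file really is a GGUF container: every valid GGUF file begins with the 4-byte ASCII magic `GGUF`, followed by a version field. A small stand-alone sketch (the helper name and the demo filename are ours, not part of llama.cpp or Unsloth):

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Return True if the file begins with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo against a stand-in file; a real export would be something like
# model-Q4_K_M.gguf produced by the GGUF / llama.cpp export option.
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))  # magic + little-endian version
print(looks_like_gguf("demo.gguf"))  # True
```

This catches the common failure mode of pointing a local runner at a Safetensors or partially downloaded file: tools like llama.cpp reject anything without this header.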

