🦥 Introducing Unsloth Studio

Run and train AI models locally with Unsloth Studio.

Today, we’re launching Unsloth Studio (Beta): an open-source, no-code web UI for training, running and exporting open models in one unified local interface.


  • Run GGUF and safetensors models locally on Mac, Windows, and Linux.

  • Train 500+ models 2x faster with 70% less VRAM (no accuracy loss).

  • Run and train text, vision, TTS/audio, and embedding models.

⭐ Features

Run models locally

Search and run GGUF and safetensors models with self-healing tool calling, web search, automatic inference-parameter tuning, code execution, and APIs. Upload images, docs, audio, and code files.

Battle models side by side. Powered by llama.cpp + Hugging Face, we support multi-GPU inference and most models.

No-code training

Upload PDF, CSV, or JSON docs, or YAML configs, and start training instantly on NVIDIA GPUs. Unsloth’s kernels optimize LoRA, FP8, full fine-tuning (FFT), and pretraining (PT) across 500+ text, vision, TTS/audio, and embedding models.
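For the YAML route, a config might look like the following minimal sketch. The field names below are illustrative assumptions for this example, not Studio's exact schema; check the Installation and training docs for the real keys.

```yaml
# Hypothetical training config — field names are illustrative only
model: unsloth/Qwen3-4B        # any supported base model
load_in_4bit: true
max_seq_length: 2048
lora:
  r: 16
  alpha: 16
  dropout: 0.0
training:
  learning_rate: 2.0e-4
  num_epochs: 1
  per_device_batch_size: 2
  gradient_accumulation_steps: 4
```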

Fine-tune the latest LLMs like Qwen3.5 and NVIDIA Nemotron 3. Multi-GPU works automatically, with a new version coming.

Data Recipes

Data Recipes transforms your docs into usable / synthetic datasets via a graph-node workflow. Upload unstructured or structured files such as PDFs, CSVs, and JSON. Unsloth Data Recipes, powered by NVIDIA Data Designer, automatically turns documents into your desired formats.
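To make the doc-to-dataset idea concrete, here is a minimal sketch in plain Python (not Studio's actual pipeline) that turns a question/answer CSV into chat-format JSONL lines, the kind of structured output a recipe might produce. The `question`/`answer` column names are assumptions for this example.

```python
import csv
import io
import json

def csv_to_chat_jsonl(csv_text: str) -> list[str]:
    """Turn question/answer CSV rows into chat-format JSONL lines."""
    rows = csv.DictReader(io.StringIO(csv_text))
    lines = []
    for row in rows:
        example = {"messages": [
            {"role": "user", "content": row["question"]},
            {"role": "assistant", "content": row["answer"]},
        ]}
        lines.append(json.dumps(example))
    return lines

sample = "question,answer\nWhat is Unsloth?,A fine-tuning library.\n"
print(csv_to_chat_jsonl(sample)[0])
```

Real recipes add cleaning, deduplication, and synthetic expansion on top of this kind of format conversion.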

Observability

Gain complete visibility into and control over your training runs. Track training loss, gradient norms, and GPU utilization in real time, and customize to your liking.

You can even view the training progress on other devices like your phone.
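Under the hood, this kind of observability reduces to recording a few scalars per optimizer step. The sketch below is a hypothetical illustration of that idea, not Studio's implementation: a tiny tracker plus a global L2 gradient norm, the metric usually plotted as "grad norm".

```python
import math

class RunTracker:
    """Minimal training-metrics tracker: lists of (step, value), keyed by metric name."""
    def __init__(self):
        self.history = {}

    def log(self, step: int, **metrics: float):
        for name, value in metrics.items():
            self.history.setdefault(name, []).append((step, value))

    def latest(self, name: str) -> float:
        return self.history[name][-1][1]

def grad_norm(grads: list[float]) -> float:
    """Global L2 norm over a flat list of gradient values."""
    return math.sqrt(sum(g * g for g in grads))

tracker = RunTracker()
tracker.log(step=1, loss=2.31, grad_norm=grad_norm([0.3, -0.4]))  # norm = 0.5
tracker.log(step=2, loss=1.97, grad_norm=grad_norm([0.1, 0.2]))
print(tracker.latest("loss"))  # 1.97
```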

Export / Save models

Export any model, including your fine-tuned models, to safetensors or GGUF for use with llama.cpp, vLLM, Ollama, LM Studio, and more.

Studio stores your training history, so you can revisit runs, export again, and experiment.

Model Arena

Chat with and compare 2 different models, such as a base model and a fine-tuned one, to see how their outputs differ.

Just load your first GGUF/model, then the second, and voilà! Inference loads for the first model, then for the second.

Privacy first + Secure

Unsloth Studio can be used 100% offline and locally on your computer.

Its token-based authentication, including password login and JWT access/refresh flows, keeps your data secure and under your control.
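As a rough illustration of what a JWT access/refresh flow involves (this is a hypothetical sketch using only the standard library, not Studio's auth code): a token is a signed header/payload pair, and verification recomputes the HMAC signature with a constant-time comparison.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; a real deployment uses a random key

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWTs use."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes = SECRET) -> str:
    """JWT-style token: b64(header).b64(payload).b64(HMAC-SHA256 signature)."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes = SECRET) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Short-lived access token (15 min); a refresh token would just have a longer exp.
access = sign_token({"sub": "local-user", "exp": int(time.time()) + 900})
print(verify_token(access))                                    # True
print(verify_token(sign_token({"sub": "x"}, b"wrong-key")))    # False: bad key
```

In a real access/refresh flow the server also checks the `exp` claim and issues a new access token when a still-valid refresh token is presented.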


⚡ Quickstart

Unsloth Studio works on Windows, Linux, WSL, and macOS (currently chat only on macOS).

  • CPU: Unsloth still works without a GPU, but only for Chat inference.

  • Training: Works on NVIDIA GPUs: RTX 30/40/50 series, Blackwell, DGX Spark/Station, etc.

  • Mac: Like CPU, Chat only works for now. MLX training is coming very soon.

  • Coming soon: Support for Apple MLX, AMD, and Intel.

  • Multi-GPU: Works already, with a major upgrade on the way.

Windows, macOS, Linux, WSL:

Our Docker image is still in the works and will be available later today: unsloth/unsloth. Read our Docker guide.


Install from source with Git:

For more details about installation please visit the Unsloth Studio Install section. You can also view NVIDIA's Video Tutorial here.

Installation

Google Colab notebook

We’ve created a free Google Colab notebook so you can explore all of Unsloth’s features on Colab’s T4 GPUs. You can train and run most models up to 22B parameters, and switch to a larger GPU for bigger models. Just click 'Run all' and the UI will pop up after installation.


Once installation is complete, scroll to Start Unsloth Studio and click Open Unsloth Studio in the white box shown on the left:

🌱 Workflow

Here is a typical Unsloth Studio workflow to get you started:

  1. Launch Studio from install instructions.

  2. Load a model from local files or a supported integration.

  3. Import training data from PDFs, CSVs, or JSONL files, or build a dataset from scratch.

  4. Clean, refine, and expand your dataset in Data Recipes.

  5. Start training with recommended presets or customize the config yourself.

  6. Chat with the trained model and compare its outputs against the base model.

  7. Save or export locally to the stack you already use.

You can also read our individual deep dives into each section of Unsloth Studio.

🎥 Video Tutorials

Here is a video tutorial created by NVIDIA to get you started with Studio:

❓ FAQ

Does Unsloth collect or store data? We do not collect usage telemetry. We only collect the minimal hardware information required for compatibility, such as GPU type and device (e.g. Mac). Unsloth Studio runs 100% offline and locally.

Is Unsloth now licensed under AGPL-3.0? No. The main Unsloth package is still licensed under Apache 2.0. Only certain optional components, such as the Unsloth Studio UI, are under the AGPL-3.0 open-source license. Unsloth now has dual-licensing where some parts of the codebase are licensed Apache 2.0, while others are licensed AGPL-3.0. This structure helps support ongoing Unsloth development while keeping the project open-source and enabling the ecosystem to grow.

Does Studio only support LLMs? No. Studio supports a range of Transformers-compatible model families, including text, multimodal, text-to-speech/audio, embedding, and BERT-style models.

Can I use my own training config? Yes. Import a YAML config and Studio will pre-fill the relevant settings.

Do you need to train models to use the UI? No. You can download and run any GGUF or other model without fine-tuning anything.

Future of Unsloth

We're working hard to make open-source AI as accessible as possible. Next up for Unsloth and Unsloth Studio, we're releasing official support for multi-GPU, Apple Silicon/MLX, AMD, and Intel. Reminder: this is the beta version of Unsloth Studio, so expect many announcements and improvements in the coming weeks. We’re also working closely with NVIDIA on multi-GPU support to deliver the best and simplest experience possible.

Acknowledgements

A huge thank you to NVIDIA and Hugging Face for being part of our launch. Also thanks to all of our early beta testers for Unsloth Studio, we truly appreciate your time and feedback. We’d also like to thank llama.cpp, PyTorch and open model labs for providing the infrastructure that made Unsloth Studio possible.
