Fine-tune gpt-oss with Unsloth

Aug 5, 2025 • By Daniel & Michael

gpt-oss are OpenAI's new open models, achieving SOTA performance in text, reasoning, math and code. gpt-oss-120b, trained with RL and advanced OpenAI insights, rivals o4-mini in reasoning while running on a single 80 GB GPU. gpt-oss-20b matches o3-mini benchmarks and fits in 16 GB memory. Both models excel at function calling and CoT reasoning, outperforming proprietary models like o1 and GPT-4o.

Please note: we're still working on fine-tuning support for gpt-oss, but you can run the models now.
✨ gpt-oss Fine-tuning

🌡 Fine-tuning gpt-oss-20b:


gpt-oss-20b fine-tuning fits with Unsloth in under 24GB of VRAM! It's also 1.6x faster, and by default uses Unsloth dynamic 4-bit quants for superior accuracy!
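For reference, here's a minimal loading sketch using Unsloth's standard `FastLanguageModel` loader. The model id `unsloth/gpt-oss-20b` is an assumption on our part, since gpt-oss support is still rolling out:

```python
from unsloth import FastLanguageModel

# Load gpt-oss-20b in 4-bit with Unsloth.
# NOTE: "unsloth/gpt-oss-20b" is an assumed Hugging Face model id.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gpt-oss-20b",
    max_seq_length = 2048,  # raise this for longer-context fine-tuning
    load_in_4bit = True,    # dynamic 4-bit quants keep the 20b model under 24GB VRAM
)
```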

Performance benchmarks

| Model | VRAM | 🦥 Unsloth speed | 🦥 VRAM reduction | 🦥 Longer context | 🤗 Hugging Face + FA2 |
| --- | --- | --- | --- | --- | --- |
| gpt-oss-20b | 24GB | 1.5x | >50% | 5x longer | 1x |
We tested using the Alpaca dataset with a batch size of 2, gradient accumulation steps of 4, LoRA rank = 32, and QLoRA applied to all linear layers (q, k, v, o, gate, up, down).
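If it helps, here's a sketch of that configuration in Python, continuing from the loading snippet above. The dataset id `yahma/alpaca-cleaned`, the prompt template, and the learning rate / step count are illustrative assumptions, not our exact benchmark script:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Attach LoRA adapters at rank 32 to all linear projection layers (QLoRA).
model = FastLanguageModel.get_peft_model(
    model,
    r = 32,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 32,
    lora_dropout = 0,
    bias = "none",
)

# Alpaca-style dataset; id and prompt template below are assumptions.
dataset = load_dataset("yahma/alpaca-cleaned", split = "train")

def to_text(example):
    return {"text":
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Input:\n{example['input']}\n\n"
        f"### Response:\n{example['output']}" + tokenizer.eos_token}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    args = SFTConfig(
        dataset_text_field = "text",
        per_device_train_batch_size = 2,   # batch size of 2
        gradient_accumulation_steps = 4,   # gradient accumulation steps of 4
        max_steps = 60,
        learning_rate = 2e-4,
        output_dir = "outputs",
    ),
)
trainer.train()
```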
💕 Thank you!
A huge thank you to everyone for using & supporting Unsloth - we really appreciate it. 🙏

As always, be sure to join our Reddit page and Discord server for help or just to show your support! You can also follow us on Twitter and join our newsletter.
Thank you for reading!
Daniel & Michael Han 🦥
Aug 5, 2025
