unsloth Blog
DeepSeek R1 - Run & Finetune (Jan 20, 2025)
Phi-4 Finetuning + Bug Fixes (Jan 10, 2025)
Fine-tune Llama 3.3 (Dec 10, 2024)
Unsloth - Dynamic 4-bit Quantization (Dec 4, 2024)
Llama 3.2 Vision Finetuning (Nov 19, 2024)
Gradient Accumulation by Unsloth (Oct 15, 2024)
Unsloth Roadmap Update (Sep 5, 2024)
Finetune & Run Llama 3.1 with Unsloth (Jul 23, 2024)
Mistral NeMo Bug Fixes & Ollama Support (Jul 18, 2024)
Gemma 2 - 2x faster + 63% less VRAM (Jul 3, 2024)
Continued Pretraining with Unsloth (Jun 4, 2024)
Phi-3, Mistral v0.3 and Llama 3 Bug Fixes (May 23, 2024)
Llama 3 - 2x faster + 68% less VRAM (Apr 23, 2024)
4x longer context windows & 1.7x larger batch sizes (Apr 9, 2024)
Unsloth Google Gemma Bug Fixes (Mar 6, 2024)
2.4x faster Gemma + 58% less VRAM (Feb 26, 2024)
387% faster TinyLlama + 6x faster GGUF (Jan 18, 2024)
Hugging Face + Unsloth 2024 Collab (Jan 10, 2024)
New Mistral Support + Benchmarks (Dec 14, 2023)
Introducing Unsloth (Dec 1, 2023)