Gemma 4 Fine-tuning Guide
Fine-tune Google's Gemma 4 with Unsloth.
pip install --upgrade --force-reinstall --no-cache-dir unsloth unsloth_zoo
Quickstart
🦥 Unsloth Studio Guide

🦥 Unsloth Core (code-based) Guide
MoE fine-tuning (26B-A4B)
Multimodal fine-tuning (E2B / E4B)
Gemma 4 Multimodal LoRA example:
Image example format
Audio example format
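The image and audio example formats above can be sketched as plain message dictionaries. This is a minimal illustration, not the documented Gemma 4 schema: the field names (`"type"`, `"image"`, `"audio"`, `"text"`) and the file paths are assumptions, so verify them against the processor's chat template before training.

```python
# Hedged sketch of one plausible multimodal message layout.
# Field names ("type", "image", "audio", "text") are assumptions,
# not the confirmed Gemma 4 schema.
image_example = {
    "role": "user",
    "content": [
        {"type": "image", "image": "photo.png"},  # media entry before the text
        {"type": "text", "text": "Describe this image."},
    ],
}

audio_example = {
    "role": "user",
    "content": [
        {"type": "audio", "audio": "clip.wav"},
        {"type": "text", "text": "Transcribe this audio clip."},
    ],
}
```

Both examples follow the same shape: a single user turn whose `content` list leads with the media item and ends with the text instruction.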
Saving / export fine-tuned model
Save to GGUF
Gemma 4 data best practices
1. Use standard chat roles
2. Thinking mode is explicit
3. Multi-turn rule
4. Multimodal content should come first
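The data best practices above can be sketched as a single multi-turn training example. This is an illustrative sketch only: the message keys and the exact multi-turn convention (optional system turn, then alternating user/assistant turns) are assumptions to check against the guide's own examples.

```python
# Hedged sketch of a training example following the listed practices:
# standard chat roles, media before text in a user turn, alternating
# turns after an optional system message. Keys are assumptions.
conversation = [
    {"role": "system",
     "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {"role": "user",
     "content": [
         {"type": "image", "image": "chart.png"},  # multimodal content first
         {"type": "text", "text": "What does this chart show?"},
     ]},
    {"role": "assistant",
     "content": [{"type": "text", "text": "A steadily rising revenue trend."}]},
    {"role": "user",
     "content": [{"type": "text", "text": "Summarize it in one sentence."}]},
]

# Multi-turn rule: roles should alternate user/assistant after the system turn.
roles = [turn["role"] for turn in conversation]
```

A dataset of such conversations can then be rendered to token sequences with the tokenizer's chat template before fine-tuning.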