Deploying with Hugging Face Jobs

Use Hugging Face Jobs to fine-tune Liquid LFM with Codex or Claude Code using a skill.

This guide covers how to use Unsloth and Liquid LFM2.5 for fast LLM fine-tuning through coding agents like Claude Code and OpenAI Codex. Unsloth provides ~2x faster training and ~60% less VRAM usage compared to standard methods.

You will need

  • A Hugging Face account, with pay-as-you-go billing enabled or prepaid credits for Jobs

  • A coding agent: Claude Code or OpenAI Codex

  • The hf CLI installed and authenticated

Installing the Skill

Claude Code

Claude Code discovers skills through its plugin system.

  1. Add the marketplace:

/plugin marketplace add huggingface/skills

  2. Browse available skills in the Discover tab:

/plugin

  3. Install the model trainer skill:

/plugin install hugging-face-model-trainer@huggingface-skills

For more details, see the Claude Code plugins docs and the Skills docs.

Codex

Codex discovers skills through AGENTS.md files and .agents/skills/ directories.

Install individual skills with $skill-installer.

For more details, see the Codex Skills docs and the AGENTS.md guide.

Quick Start

Once the skill is installed, ask your coding agent to train a model. In this example, we're fine-tuning Liquid LFM2.5.

The agent will generate a training script based on an example in the skill, submit the training to HF Jobs, and provide a monitoring link via Trackio.

Using Hugging Face Jobs

Training jobs will run on Hugging Face Jobs, a service providing fully managed cloud GPUs. If you are familiar with Google Colab credits, Hugging Face Jobs offers a similar system: billing is pay-as-you-go, or you can purchase credits in advance. The agent:

  1. Generates a UV script with inline dependencies

  2. Submits it to HF Jobs via the hf CLI

  3. Reports the job ID and monitoring URL

  4. Once training completes, the trained model is pushed to your Hugging Face Hub repository
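As a rough sketch of step 2, the submission the agent makes boils down to a single hf CLI invocation. The script name (train.py) and flavor below are illustrative, and the exact flags may vary between huggingface_hub versions:

```python
# Sketch of how the agent submits a generated UV script to HF Jobs.
# Assumes the generated script was saved as `train.py` (hypothetical name).
import subprocess

def build_submit_command(script: str, flavor: str = "t4-small") -> list[str]:
    """Assemble the `hf jobs uv run` invocation for a given GPU flavor."""
    return ["hf", "jobs", "uv", "run", "--flavor", flavor, script]

cmd = build_submit_command("train.py", flavor="t4-medium")
print(" ".join(cmd))  # hf jobs uv run --flavor t4-medium train.py

# To actually submit (requires `hf auth login` first):
# subprocess.run(cmd, check=True)
```

The `--flavor` argument selects the GPU tier; see the pricing table in this guide for which flavor suits which model size.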

Example Training Script

The skill generates scripts like this:
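The original example script is not reproduced here, but as a rough, hypothetical sketch, a generated UV script combines inline dependency metadata with an Unsloth + TRL training loop. The model ID, dataset, and hyperparameters below are placeholders; the real skill output will differ:

```
# /// script
# dependencies = ["unsloth", "trl", "datasets", "trackio"]
# ///
# Hypothetical sketch of a generated UV script. The inline metadata
# block above lets `uv run` (and HF Jobs) resolve dependencies.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder model ID; substitute the LFM checkpoint you want to tune.
model, tokenizer = FastLanguageModel.from_pretrained(
    "LiquidAI/LFM2.5-1.2B",
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=16)  # LoRA adapters

dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="lfm-capybara-sft",
        push_to_hub=True,     # step 4: push the result to the Hub
        report_to="trackio",  # real-time loss curves via Trackio
    ),
)
trainer.train()
```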

The approximate cost of training with Hugging Face Jobs:

| Model Size | Recommended GPU | Approx. Cost/hr |
| --- | --- | --- |
| <1B params | t4-small | ~$0.40 |
| 1-3B params | t4-medium | ~$0.60 |
| 3-7B params | a10g-small | ~$1.00 |
| 7-13B params | a10g-large | ~$3.00 |
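For a quick back-of-the-envelope check before launching a job, the approximate hourly rates above translate directly into a cost estimate (rates are approximate and may change):

```python
# Rough cost estimate from the approximate hourly rates listed above.
HOURLY_RATE_USD = {
    "t4-small": 0.40,
    "t4-medium": 0.60,
    "a10g-small": 1.00,
    "a10g-large": 3.00,
}

def estimate_cost(flavor: str, hours: float) -> float:
    """Approximate job cost in USD for a GPU flavor and training duration."""
    return round(HOURLY_RATE_USD[flavor] * hours, 2)

# e.g. a 3-hour fine-tune of a ~1-3B model on t4-medium:
print(estimate_cost("t4-medium", 3))  # → 1.8
```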

For a full overview of Hugging Face GPU pricing, see the pricing guide.

Tips for Working with Coding Agents

  • Be specific about the model and dataset to use and include Hub IDs (e.g., Qwen/Qwen2.5-0.5B, trl-lib/Capybara). Agents will search for and validate those combinations.

  • Mention Unsloth explicitly if you want it used. Otherwise the agent will choose a framework based on the model and budget.

  • Ask for cost estimates before launching large jobs

  • Request Trackio monitoring for real-time loss curves

  • Check job status by asking the agent to inspect logs after submission
