# LoRA Hot-Swapping Guide

### :shaved\_ice: vLLM LoRA Hot-Swapping / Dynamic LoRA

To enable LoRA serving with hot-swapping of up to 4 LoRA adapters at a time, first set an environment flag to allow runtime LoRA updates:

```bash
export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True
```

Then start the server with LoRA support:

```bash
export VLLM_ALLOW_RUNTIME_LORA_UPDATING=True
vllm serve unsloth/Llama-3.1-8B-Instruct \
    --quantization fp8 \
    --kv-cache-dtype fp8 \
    --gpu-memory-utilization 0.8 \
    --max-model-len 65536 \
    --enable-lora \
    --max-loras 4 \
    --max-lora-rank 64
```

To load a LoRA dynamically (assigning it a name in the same call), run:

```bash
curl -X POST http://localhost:8000/v1/load_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME",
        "lora_path": "/path/to/LORA"
    }'
```
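For scripted workflows, the same request can be issued from Python. Below is a minimal sketch, assuming the server is running at vLLM's default address `http://localhost:8000`; the helper names `load_lora_payload` and `load_lora_adapter` are ours for illustration, not part of any vLLM client API:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumption: default vLLM serve address

def load_lora_payload(name: str, path: str) -> str:
    """Build the JSON body expected by /v1/load_lora_adapter."""
    return json.dumps({"lora_name": name, "lora_path": path})

def load_lora_adapter(name: str, path: str, base_url: str = BASE_URL):
    """POST the adapter registration to a running server; returns the HTTP response."""
    req = urllib.request.Request(
        f"{base_url}/v1/load_lora_adapter",
        data=load_lora_payload(name, path).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```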

To remove it from the pool:

```bash
curl -X POST http://localhost:8000/v1/unload_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME"
    }'
```
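To confirm which adapters are currently in the pool, the server's OpenAI-compatible `/v1/models` endpoint lists loaded LoRA adapters alongside the base model. A sketch, again assuming the default local address (`adapter_ids` and `list_served_models` are hypothetical helper names):

```python
import json
import urllib.request

def adapter_ids(models_response: dict) -> list:
    """Extract model/adapter ids from a parsed /v1/models response."""
    return [m["id"] for m in models_response["data"]]

def list_served_models(base_url: str = "http://localhost:8000") -> list:
    """GET /v1/models from a running vLLM server."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return adapter_ids(json.loads(resp.read()))
```

After an unload, the adapter's name should no longer appear in the returned list.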

For example, when fine-tuning with Unsloth:

{% code overflow="wrap" %}

```python
from unsloth import FastLanguageModel
import torch

# Load the base model in 4-bit for memory-efficient fine-tuning
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.1-8B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach LoRA adapters (Unsloth provides defaults for rank, alpha, and target modules)
model = FastLanguageModel.get_peft_model(model)
```

{% endcode %}

Then, after training, we save the LoRA:

```python
model.save_pretrained("finetuned_lora")
tokenizer.save_pretrained("finetuned_lora")
```

We can then load that LoRA:

{% code overflow="wrap" %}

```bash
curl -X POST http://localhost:8000/v1/load_lora_adapter \
    -H "Content-Type: application/json" \
    -d '{
        "lora_name": "LORA_NAME_finetuned_lora",
        "lora_path": "finetuned_lora"
    }'
```

{% endcode %}
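Once loaded, the adapter can be targeted by its `lora_name` via the `model` field of the OpenAI-compatible chat completions API. A minimal sketch, assuming the server above; the helper names `chat_payload` and `chat` are ours for illustration:

```python
import json
import urllib.request

def chat_payload(model: str, prompt: str) -> dict:
    """Build a /v1/chat/completions body that targets a loaded LoRA by name."""
    return {
        "model": model,  # the lora_name registered at load time
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model: str, prompt: str, base_url: str = "http://localhost:8000") -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Requests that set `model` to the base model name continue to bypass the adapter, so one server can serve both fine-tuned and base behavior side by side.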
