huggingface package usage notes


Keywords: huggingface

Components

Hugging Face - Documentation

transformers

Model library

PEFT

Parameter-Efficient Fine-Tuning (PEFT)

Fine-tune only a subset of a model's parameters during training. Common methods: LoRA, P-Tuning, Prefix Tuning, etc.

LoRA: LoRA: Low-Rank Adaptation of Large Language Models

Prefix Tuning: Prefix-Tuning: Optimizing Continuous Prompts for Generation

P-Tuning v2: P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks

P-Tuning: GPT Understands, Too

Prompt Tuning: The Power of Scale for Parameter-Efficient Prompt Tuning

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning
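The low-rank update at the heart of LoRA can be sketched in a few lines of NumPy. The dimensions, rank, and scaling factor below are illustrative assumptions, not values from any specific model or the paper:

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_out, d_in, r = 512, 512, 8   # frozen weight is (d_out, d_in); LoRA rank r
alpha = 16                     # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-initialized, so the update starts at 0

x = rng.standard_normal(d_in)

# LoRA forward pass: y = W x + (alpha / r) * B A x
y = W @ x + (alpha / r) * (B @ (A @ x))

# Trainable parameters drop from d_out * d_in to r * (d_in + d_out).
full_params = d_out * d_in       # 262144
lora_params = r * (d_in + d_out) # 8192
print(full_params, lora_params)
```

In practice the peft library wraps this pattern: you describe the adapter with a `LoraConfig` and attach it to a transformers model via `get_peft_model`, which freezes the base weights and injects the trainable A/B pairs.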

Examples
