
Fine-tuning the large model Qwen2.5-3B under the Trainer with DeepSpeed

As model parameter counts keep growing, a GPU with limited memory can no longer run full-parameter fine-tuning directly. In that case we can bring in DeepSpeed, which shards training state across GPUs, to make the fine-tuning fit.
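To get a feel for why a single card runs out of memory, a back-of-the-envelope estimate helps. The numbers below are assumptions for illustration: roughly 3 billion parameters, the common mixed-precision Adam layout of 16 bytes per parameter (bf16 weights and gradients plus fp32 master weights and two Adam moments), and activations ignored entirely:

```python
params = 3e9  # Qwen2.5-3B: roughly 3 billion parameters (assumption)

# per-parameter bytes under mixed-precision Adam:
#   2 (bf16 weights) + 2 (bf16 grads)
#   + 4 (fp32 master weights) + 4 + 4 (fp32 Adam moments)
bytes_per_param = 2 + 2 + 4 + 4 + 4
total_gb = params * bytes_per_param / 1024**3
print(f"single-GPU footprint: ~{total_gb:.0f} GB")  # ~45 GB, before activations

# ZeRO stage 1 shards only the 12 bytes of optimizer state across GPUs;
# weights and gradients stay replicated on every card
gpus = 4
zero1_gb = params * (2 + 2 + 12 / gpus) / 1024**3
print(f"per-GPU with ZeRO-1 on {gpus} GPUs: ~{zero1_gb:.0f} GB")  # ~20 GB
```

Even this rough sketch shows why ZeRO stage 1 (used in the config below) is enough to bring a 3B-parameter run within reach of four mid-sized GPUs.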

1. The DeepSpeed configuration file:

$ more deepspeed.json 
{
  "train_batch_size": 4,
  "train_micro_batch_size_per_gpu": 1,
  "zero_optimization": {"stage": 1}
}
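DeepSpeed validates that these values are consistent: `train_batch_size` must equal the per-GPU micro-batch size times the gradient-accumulation steps (which defaults to 1 when omitted, as here) times the number of GPUs. With the four GPUs exposed in the launch script in the next step, the arithmetic works out:

```python
micro_batch_per_gpu = 1   # "train_micro_batch_size_per_gpu" in deepspeed.json
grad_accum_steps = 1      # gradient_accumulation_steps defaults to 1 when omitted
num_gpus = 4              # CUDA_VISIBLE_DEVICES=0,1,2,3 in the launch script

train_batch_size = micro_batch_per_gpu * grad_accum_steps * num_gpus
print(train_batch_size)   # 4, matching "train_batch_size" in deepspeed.json
```

If the three factors do not multiply out to `train_batch_size`, DeepSpeed refuses to start, so it is worth rechecking this whenever you change the GPU count.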

2. The launch script: run_deepspeed

$ more run_deepspeed 
export TRANSFORMERS_OFFLINE=1          # run fully offline against local caches
export HF_DATASETS_OFFLINE=1
# note: PYTORCH_CUDA_ALLOC_CONF is exported twice below; the second export
# overwrites the first, so only expandable_segments:True takes effect.
# To apply both options, set them comma-separated in a single export.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
export CUDA_VISIBLE_DEVICES=0,1,2,3    # use all four GPUs
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export DS_SKIP_CUDA_CHECK=1
export TF_ENABLE_ONEDNN_OPTS=0
export CUDA_HOME="/usr/local/cuda-12.2"
export LIBRARY_PATH="/usr/local/cuda-12.2/lib64:$LIBRARY_PATH"
# launch in the background; the deepspeed launcher starts one process per visible GPU
nohup deepspeed train.py > logd.txt 2>&1 &
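One detail of this script is easy to miss: `PYTORCH_CUDA_ALLOC_CONF` is exported twice, and in the shell a later assignment simply replaces the earlier one, so only the second value ever reaches the training processes. A quick sketch of the behavior:

```shell
# a later export of the same variable silently replaces the earlier value
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
echo "$PYTORCH_CUDA_ALLOC_CONF"   # only expandable_segments:True survives
```

If both allocator options are wanted, they should go comma-separated into a single export instead.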

3. The actual training script: train.py

$ more train.py 
from datasets import load_dataset, DownloadConfig
from transformers import AutoTokenizer
from transformers import DataCollatorWithPadding
from transformers import TrainingArguments
from transformers import AutoModelForSequenceClassification
from transformers import Trainer
from sklearn.metrics import precision_score

# load the dataset from a local directory, fully offline
download_config = DownloadConfig(local_files_only=True)
cache_dir = '/data1/dataset_cache_dir'
path = '/data1/data_0616'
raw_datasets = load_dataset(path=path, download_config=download_config, cache_dir=cache_dir)
print(raw_datasets)

model_name = "/data1/model/Qwen2.5-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
print(tokenizer.pad_token)

# tokenize title/text pairs, truncating to at most 512 tokens
def tokenize_function(batch):
    return tokenizer(batch["title"], batch["text"], truncation=True, padding=True, max_length=512)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, padding='max_length', max_length=512)

output_dir = "/data1/result_0704"
training_args = TrainingArguments(
    output_dir=output_dir,
    evaluation_strategy="steps",
    num_train_epochs=100,
    learning_rate=5e-6,
    save_strategy="steps",
    greater_is_better=True,
    metric_for_best_model="precision",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    deepspeed="deepspeed.json",   # hook the Trainer up to the DeepSpeed config
    load_best_model_at_end=True,
    local_rank=0,
    save_total_limit=10,
)

# two-class sequence classification head on top of Qwen2.5-3B
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
print(model.config.eos_token_id)
# make sure the model config has a pad token id by reusing the EOS token id
model.config.pad_token_id = model.config.eos_token_id

# report precision on label 0 only
def compute_metrics(pred):
    labels = pred.label_ids
    preds = pred.predictions.argmax(-1)
    precision = precision_score(labels, preds, labels=[0], average='macro', zero_division=0.0)
    print('precision:', precision)
    return {"precision": precision}

trainer = Trainer(
    model,
    training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
print("train end")
results = trainer.evaluate()
print(results)
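The `compute_metrics` function above reports precision for label 0 only, because `labels=[0]` restricts the macro average to that single class. A toy example (the label and prediction arrays here are made up for illustration) shows what the Trainer will log:

```python
from sklearn.metrics import precision_score

labels = [0, 0, 1, 1, 0]   # ground truth (toy data)
preds  = [0, 1, 1, 0, 0]   # model predictions (toy data)

# three examples are predicted as class 0 (indices 0, 3, 4);
# two of them are truly class 0, so precision for class 0 is 2/3
p = precision_score(labels, preds, labels=[0], average='macro', zero_division=0.0)
print(p)  # 0.666...
```

`zero_division=0.0` keeps the metric at 0 instead of raising a warning when no example is predicted as class 0, which can happen in early evaluation steps.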
