AUTO-DL 910B + mindspeed-llm: Fine-tuning a 4-layer DeepSeek V3
1. Environment Setup
Create a new 910B instance; CANN comes pre-installed by default. The image info is as follows:
PyTorch 2.1.0, Python 3.10 (ubuntu22.04), CANN 8.0.0, torch-npu==2.1.0.post6
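Before installing anything else, a quick sanity check of the image (a minimal sketch; importing torch_npu patches the torch.npu namespace):
npu-smi info   # confirm the 910B device is visible
python -c "import torch, torch_npu; print(torch.__version__, torch_npu.__version__, torch.npu.is_available())"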
Compared with the mindspeed-llm environment requirements, apex still needs to be installed; follow the reference documentation, and note that the Python version should be 3.10.
(1) Download the latest mindspeed-llm source code and run pip install -r requirements.txt
(2) Download mindspeed-core-0.8.0 and megatron-core-0.8.0 into this directory, copy mindspeed-core's requirements.txt to the repository root, rename it requirements_mindspeed.txt, and run: pip install -r requirements_mindspeed.txt
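A minimal sketch of steps (1) and (2), assuming the MindSpeed-LLM and MindSpeed repositories on Gitee and Megatron-LM on GitHub; the repository URLs and branch names below are assumptions, so match them to the 0.8.0 core versions you actually need:
# (1) mindspeed-llm source and its requirements
git clone https://gitee.com/ascend/MindSpeed-LLM.git
cd MindSpeed-LLM
pip install -r requirements.txt
# (2) MindSpeed (core 0.8.0) and Megatron-LM (core 0.8.0) inside this directory
git clone https://gitee.com/ascend/MindSpeed.git
git clone https://github.com/NVIDIA/Megatron-LM.git
(cd Megatron-LM && git checkout core_r0.8.0)   # assumed branch name
(cd MindSpeed && git checkout core_r0.8.0)     # assumed branch name
# copy MindSpeed's requirements.txt to the repository root and install it
cp MindSpeed/requirements.txt requirements_mindspeed.txt
pip install -r requirements_mindspeed.txt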
2. Data Preparation
(1) Download the file
cd dataset/
wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet
If the download fails, you can also open the link through a VPN and fetch the file manually.
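If neither route works, the same path can usually be fetched through a Hugging Face mirror (hf-mirror.com is used here as an example; that it carries this dataset is an assumption):
wget https://hf-mirror.com/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet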
(2) Data conversion
sh examples/mcore/deepseek3/data_convert_deepseek3_instruction.sh
The script is configured as follows:
# Modify the set_env.sh path according to your actual environment
# source /usr/local/Ascend/ascend-toolkit/set_env.sh
mkdir ./finetune_dataset
python ./preprocess_data.py \
--input ./dataset/train-00000-of-00001-a09b74b3ef9c3b56.parquet \
--tokenizer-name-or-path ./deepseek3/tokenizer \
--output-prefix ./finetune_dataset/alpaca \
--handler-name AlpacaStyleInstructionHandler \
--tokenizer-type PretrainedFromHF \
--workers 4 \
--log-interval 1000 \
--overwrite-cache \
--prompt-type deepseek3
Note: the --tokenizer-name-or-path directory holds a subset of the files downloaded from the original model repository (the tokenizer files).
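To confirm the conversion succeeded, list the output directory. The file names below are an assumption based on the alpaca output prefix and the instruction handler; MindSpeed-LLM writes paired .bin/.idx index files:
ls -lh finetune_dataset/
# expected, roughly (names assumed):
# alpaca_packed_input_ids_document.bin / .idx
# alpaca_packed_attention_mask_document.bin / .idx
# alpaca_packed_labels_document.bin / .idx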
3. Fine-tuning Test
(1) Copy tune_deepseek3_671b_4k_lora_ptd.sh to 4layer_lora.sh and change the number of layers to 4.
Key changes to note:
- DATA_PATH="finetune_dataset/alpaca"; alpaca is the file prefix produced by the data conversion step
- Set TP/EP/PP all to 1
- Set the number of nodes and the number of NPUs per node both to 1
- Change --topk-group from 4 to 1, presumably because only 1 of these 4 layers is an MoE layer?
- CKPT_LOAD_DIR does not need to be configured
- Remove --load and use random initialization
- Remove --num-layer-list 7,7,7,8,8,8,8,8; with only one card there is no PP split to make
- Change GBS to 8 to shrink the global batch size
- Change lora-r to 4 and lora-alpha to 8
The full configuration is as follows:
#!/bin/bash
export CUDA_DEVICE_MAX_CONNECTIONS=1
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export HCCL_CONNECT_TIMEOUT=3600
NPUS_PER_NODE=1
MASTER_ADDR=localhost # master node IP
MASTER_PORT=6000
NNODES=1
NODE_RANK=0
WORLD_SIZE=$(($NPUS_PER_NODE*$NNODES))
CKPT_SAVE_DIR="output/4layer"
DATA_PATH="finetune_dataset/alpaca"
TOKENIZER_PATH="deepseek3/tokenizer"
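# CKPT_LOAD_DIR is only a placeholder: --load is not passed below, so the 4-layer model starts from random initialization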
CKPT_LOAD_DIR="your model ckpt path"
TP=1
PP=1
VPP=1
EP=1
CP=1
CP_TYPE='ulysses_cp_algo'
NUM_LAYERS=4
SEQ_LEN=4096
MBS=1
GBS=8
DISTRIBUTED_ARGS="
--nproc_per_node $NPUS_PER_NODE \
--nnodes $NNODES \
--node_rank $NODE_RANK \
--master_addr $MASTER_ADDR \
--master_port $MASTER_PORT
"
MLA_ARGS="
--multi-head-latent-attention \
--qk-rope-head-dim 64 \
--qk-nope-head-dim 128 \
--q-lora-rank 1536 \
--kv-lora-rank 512 \
--v-head-dim 128 \
--qk-layernorm \
"
MOE_ARGS="
--moe-permutation-async-comm \
--use-fused-moe-token-permute-and-unpermute \
--moe-token-dispatcher-type alltoall \
--first-k-dense-replace 3 \
--moe-layer-freq 1 \
--n-shared-experts 1 \
--num-experts 256 \
--moe-router-topk 8 \
--moe-intermediate-size 2048 \
--moe-router-load-balancing-type noaux_tc \
--topk-group 1 \
--routed-scaling-factor 2.5 \
--seq-aux \
--norm-topk-prob \
--moe-router-score-function sigmoid \
--moe-router-enable-expert-bias \
"
ROPE_ARGS="
--rope-scaling-beta-fast 32 \
--rope-scaling-beta-slow 1 \
--rope-scaling-factor 40 \
--rope-scaling-mscale 1.0 \
--rope-scaling-mscale-all-dim 1.0 \
--rope-scaling-original-max-position-embeddings 4096 \
--rope-scaling-type yarn
"
GPT_ARGS="
--spec mindspeed_llm.tasks.models.spec.deepseek_spec layer_spec \
--prompt-type deepseek3 \
--recompute-granularity full \
--recompute-method block \
--recompute-num-layers 14 \
--recompute-activation-function \
--variable-seq-lengths \
--no-shared-storage \
--use-distributed-optimizer \
--reuse-fp32-param \
--use-flash-attn \
--shape-order BNSD \
--use-mcore-models \
--tensor-model-parallel-size ${TP} \
--pipeline-model-parallel-size ${PP} \
--expert-model-parallel-size ${EP} \
--sequence-parallel \
--context-parallel-size ${CP} \
--context-parallel-algo ${CP_TYPE} \
--num-layers ${NUM_LAYERS} \
--hidden-size 7168 \
--ffn-hidden-size 18432 \
--num-attention-heads 128 \
--tokenizer-type PretrainedFromHF \
--tokenizer-name-or-path ${TOKENIZER_PATH} \
--seq-length ${SEQ_LEN} \
--max-position-embeddings 163840 \
--micro-batch-size ${MBS} \
--global-batch-size ${GBS} \
--make-vocab-size-divisible-by 1 \
--lr 1.0e-5 \
--train-iters 1000 \
--lr-decay-style cosine \
--untie-embeddings-and-output-weights \
--disable-bias-linear \
--attention-dropout 0.0 \
--init-method-std 0.02 \
--hidden-dropout 0.0 \
--position-embedding-type rope \
--normalization RMSNorm \
--use-fused-rotary-pos-emb \
--use-rotary-position-embeddings \
--use-fused-swiglu \
--use-fused-rmsnorm \
--swiglu \
--no-masked-softmax-fusion \
--attention-softmax-in-fp32 \
--min-lr 1.0e-7 \
--weight-decay 1e-2 \
--lr-warmup-iters 1 \
--clip-grad 1.0 \
--adam-beta1 0.9 \
--adam-beta2 0.999 \
--initial-loss-scale 65536 \
--vocab-size 129280 \
--padded-vocab-size 129280 \
--rotary-base 10000 \
--norm-epsilon 1e-6 \
--no-load-optim \
--no-load-rng \
--bf16 \
--distributed-timeout-minutes 120 \
"
DATA_ARGS="
--data-path $DATA_PATH \
--split 100,0,0
"
OUTPUT_ARGS="
--log-interval 1 \
--save-interval 100 \
--eval-interval 2000 \
--eval-iters 0 \
--no-save-optim \
--no-save-rng \
--log-throughput
"
FINETUNE_ARGS="
--finetune \
--stage sft \
--is-instruction-dataset \
--lora-r 4 \
--lora-alpha 8 \
--lora-fusion \
--lora-target-modules linear_qkv linear_proj linear_fc1 linear_fc2 \
"
python -m torch.distributed.launch $DISTRIBUTED_ARGS posttrain_gpt.py \
$GPT_ARGS \
$DATA_ARGS \
$OUTPUT_ARGS \
$MLA_ARGS \
$ROPE_ARGS \
$MOE_ARGS \
$FINETUNE_ARGS \
--save $CKPT_SAVE_DIR \
--distributed-backend nccl \
| tee logs/tune_deepseek3_671b_4k_ptd_lora.log
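Create the log and output directories and launch from the repository root (the script path below is an assumption; use wherever you saved 4layer_lora.sh):
mkdir -p logs output/4layer
bash examples/mcore/deepseek3/4layer_lora.sh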
As the training log shows, training proceeds normally: each step takes about 10 s, and a checkpoint is saved every 100 steps.
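As a rough estimate from these numbers: 1000 train iterations × ~10 s/step ≈ 10,000 s, i.e. a little under 3 hours end to end, with a LoRA checkpoint landing in output/4layer roughly every 17 minutes (--save-interval 100) and 8 × 1000 = 8,000 training samples consumed in total (GBS × train-iters).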