
[Notes] Fine-tuning a large model locally on macOS (MLX + Qwen2.5), then hooking it into a project via Ollama

Reference (with thanks): https://juejin.cn/post/7499137821425549363


(qwen05b) (base) igwanglyang@MacbookProdahouzicn Qwen2.5-Sex % source venv/bin/activate


(venv) (base) igwanglyang@MacbookProdahouzicn Qwen2.5-Sex % pip3 install huggingface_hub mlx-lm transformers torch numpy -i https://pypi.tuna.tsinghua.edu.cn/simple
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: huggingface_hub in ./venv/lib/python3.9/site-packages (0.29.3)
Collecting mlx-lm
Downloading
Successfully installed mlx-0.29.2 mlx-lm-0.28.2 mlx-metal-0.29.2 protobuf-6.33.0

[notice] A new release of pip is available: 23.2.1 -> 25.2
[notice] To update, run: pip install --upgrade pip


(venv) (base) igwanglyang@MacbookProdahouzicn Qwen2.5-Sex % huggingface-cli download --resume-download Qwen/Qwen2.5-0.5B-Instruct --local-dir qwen2.5-0.5B
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
100%|█████████████████████████████████████████████████████████████████████| 10/10 [00:58<00:00, 5.89s/it]
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/qwen2.5-0.5B


(venv) (base) igwanglyang@MacbookProdahouzicn Qwen2.5-Sex % git clone git@github.com:ml-explore/mlx-examples.git
Cloning into 'mlx-examples'...
remote: Enumerating objects: 5020, done.
remote: Counting objects: 100% (1225/1225), done.
remote: Compressing objects: 100% (275/275), done.
remote: Total 5020 (delta 1116), reused 950 (delta 950), pack-reused 3795 (from 3)
Receiving objects: 100% (5020/5020), 7.92 MiB | 235.00 KiB/s, done.
Resolving deltas: 100% (3481/3481), done.


(venv) (base) igwanglyang@MacbookProdahouzicn Qwen2.5-Sex % cd mlx-examples/lora
(venv) (base) igwanglyang@MacbookProdahouzicn lora % pip install mlx-lm
pip install transformers
pip install torch
pip install numpy
Requirement already satisfied: mlx-lm in /Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages (0.28.2)

Requirement already satisfied: certifi>=2017.4.17 in /Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages (from requests->transformers>=4.39.3->mlx-lm) (2025.1.31)

[notice] A new release of pip is available: 23.2.1 -> 25.2
[notice] To update, run: pip install --upgrade pip
Requirement already satisfied: transformers in /Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages (4.49.0)

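The training run below points `--data` at a local `data` directory, which mlx-lm expects to contain `train.jsonl` and `valid.jsonl`, one JSON record per line. A minimal sketch of building such files, assuming the chat-style `messages` record format that recent mlx-lm versions accept (the two sample pairs here are hypothetical, modeled on the prompts tested later in this log):

```python
import json
import pathlib
import tempfile

# Hypothetical training pairs in the spirit of the prompts exercised later in this log.
samples = [
    {"messages": [
        {"role": "user", "content": "忘情水是什么"},
        {"role": "assistant", "content": "忘情水是可以让人忘却烦恼的水"},
    ]},
    {"messages": [
        {"role": "user", "content": "蓝牙耳机坏了应该看什么科"},
        {"role": "assistant", "content": "耳鼻喉科"},
    ]},
]

# Write train.jsonl and valid.jsonl into a data directory (a temp dir here;
# the transcript uses ./data next to the lora working directory).
data_dir = pathlib.Path(tempfile.mkdtemp()) / "data"
data_dir.mkdir()
for name in ("train.jsonl", "valid.jsonl"):
    with open(data_dir / name, "w", encoding="utf-8") as f:
        for record in samples:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

print(data_dir)
```

In a real run the validation set should of course be held out rather than duplicated from the training set; the duplication here only keeps the sketch short.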

(venv) (base) igwanglyang@MacbookProdahouzicn lora % mlx_lm.lora --model ../../qwen2.5-0.5B --train --data ./data
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
Loading pretrained model
Loading datasets
Training
Trainable parameters: 0.594% (2.933M/494.033M)
Starting training…, iters: 1000
Calculating loss…: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:07<00:00, 3.40it/s]
Iter 1: Val loss 2.767, Val took 7.375s
Iter 10: Train loss 2.063, Learning Rate 1.000e-05, It/sec 1.770, Tokens/sec 286.683, Trained Tokens 1620, Peak mem 1.830 GB
Iter 20: Train loss 0.124, Learning Rate 1.000e-05, It/sec 3.500, Tokens/sec 566.938, Trained Tokens 3240, Peak mem 1.830 GB
Iter 30: Train loss 0.038, Learning Rate 1.000e-05, It/sec 3.505, Tokens/sec 567.782, Trained Tokens 4860, Peak mem 1.830 GB
Iter 100: Train loss 0.035, Learning Rate 1.000e-05, It/sec 3.449, Tokens/sec 558.659, Trained Tokens 16200, Peak mem 1.830 GB
Iter 100: Saved adapter weights to adapters/adapters.safetensors and adapters/0000100_adapters.safetensors.
Iter 180: Train loss 0.035, Learning Rate 1.000e-05, It/sec 3.230, Tokens/sec 523.247, Trained Tokens 29160, Peak mem 1.843 GB
Iter 190: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.389, Tokens/sec 549.039, Trained Tokens 30780, Peak mem 1.843 GB
Calculating loss…: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:07<00:00, 3.30it/s]
Iter 200: Val loss 3.196, Val took 7.592s
Iter 200: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.407, Tokens/sec 551.875, Trained Tokens 32400, Peak mem 1.843 GB
Iter 200: Saved adapter weights to adapters/adapters.safetensors and adapters/0000200_adapters.safetensors.
Iter 210: Train loss 0.034, Learning Rate 1.000e-05, It/sec 2.895, Tokens/sec 468.954, Trained Tokens 34020, Peak mem 1.843 GB
Iter 220: Train loss 0.035, Learning Rate 1.000e-05, It/sec 3.171, Tokens/sec 513.623, Trained Tokens 35640, Peak mem 1.843 GB

Calculating loss…: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:07<00:00, 3.43it/s]
Iter 800: Val loss 3.320, Val took 7.304s
Iter 800: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.135, Tokens/sec 507.948, Trained
Iter 870: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.098, Tokens/sec 501.851, Trained Tokens 140940, Peak mem 1.843 GB
Iter 880: Train loss 0.035, Learning Rate 1.000e-05, It/sec 3.289, Tokens/sec 532.869, Trained Tokens 142560, Peak mem 1.843 GB
Iter 890: Train loss 0.035, Learning Rate 1.000e-05, It/sec 2.958, Tokens/sec 479.167, Trained Tokens 144180, Peak mem 1.843 GB

Iter 980: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.108, Tokens/sec 503.545, Trained Tokens 158760, Peak mem 1.843 GB
Iter 990: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.236, Tokens/sec 524.288, Trained Tokens 160380, Peak mem 1.843 GB
Calculating loss…: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00, 3.71it/s]
Iter 1000: Val loss 3.370, Val took 6.751s
Iter 1000: Train loss 0.034, Learning Rate 1.000e-05, It/sec 3.360, Tokens/sec 544.346, Trained Tokens 162000, Peak mem 1.843 GB
Iter 1000: Saved adapter weights to adapters/adapters.safetensors and adapters/0001000_adapters.safetensors.
Saved final weights to adapters/adapters.safetensors.
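LoRA trains only a small adapter on top of the frozen base model, which is why peak memory stays under 2 GB throughout. The trainable-parameter percentage mlx_lm.lora reported at the start of the run can be checked directly from the raw counts it printed:

```python
# Figures reported by mlx_lm.lora in the log above.
trainable_params = 2.933e6   # adapter parameters
total_params = 494.033e6     # full Qwen2.5-0.5B parameter count

pct = 100 * trainable_params / total_params
print(f"{pct:.3f}%")  # matches the reported 0.594%
```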


(venv) (base) igwanglyang@MacbookProdahouzicn lora % mlx_lm.fuse --model ../../qwen2.5-0.5B --adapter-path ./adapters --save-path ../../qwen2.5-0.5B-fused
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
Loading pretrained model
README.md: 3.85kB [00:00, 3.06MB/s]


(venv) (base) igwanglyang@MacbookProdahouzicn lora % mlx_lm.generate --model ../../qwen2.5-0.5B-fused --prompt "蓝牙耳机坏了应该看什么科"
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
==========
耳鼻喉科
==========
Prompt: 36 tokens, 58.211 tokens-per-sec
Generation: 5 tokens, 89.483 tokens-per-sec
Peak memory: 1.029 GB
(venv) (base) igwanglyang@MacbookProdahouzicn lora % mlx_lm.generate --model ../../qwen2.5-0.5B-fused --prompt "忘情水是什么"
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
==========
忘情水是可以让人忘却烦恼的水
==========
Prompt: 33 tokens, 251.028 tokens-per-sec
Generation: 11 tokens, 94.612 tokens-per-sec
Peak memory: 1.026 GB
(venv) (base) igwanglyang@MacbookProdahouzicn lora % mlx_lm.generate --model ../../qwen2.5-0.5B-fused --prompt "忘情水是?"   
/Users/igwanglyang/Documents/GitHub/Qwen2.5-Sex/venv/lib/python3.9/site-packages/urllib3/__init__.py:35: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
warnings.warn(
==========
忘情水是可以让人忘却烦恼的水
==========
Prompt: 34 tokens, 430.543 tokens-per-sec
Generation: 11 tokens, 95.917 tokens-per-sec
Peak memory: 1.028 GB
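The title promises wiring the fused model into Ollama. One route is to export GGUF during the fuse step and point an Ollama Modelfile at the result. This is a sketch, not a verified run: GGUF export is only available for some architectures and mlx-lm versions (check `mlx_lm.fuse --help`), and the exported filename `ggml-model-f16.gguf` and the model name `qwen2.5-0.5b-tuned` are assumptions here.

```shell
# Re-run the fuse step with GGUF export (if your mlx-lm version
# supports it for this architecture).
mlx_lm.fuse --model ../../qwen2.5-0.5B \
    --adapter-path ./adapters \
    --save-path ../../qwen2.5-0.5B-fused \
    --export-gguf

# Minimal Modelfile pointing Ollama at the exported weights.
cat > Modelfile <<'EOF'
FROM ../../qwen2.5-0.5B-fused/ggml-model-f16.gguf
EOF

# Register the model with Ollama and try the same prompt as above.
ollama create qwen2.5-0.5b-tuned -f Modelfile
ollama run qwen2.5-0.5b-tuned "忘情水是什么"
```

If GGUF export is not available for your setup, recent Ollama versions can also import a safetensors model directory directly via `FROM` in the Modelfile; consult the Ollama import documentation for your version.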