Deploying DeepSeek with Ollama on CentOS 7.5
[root@localhost software]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
# Create a directory for the downloads
mkdir -p /opt/software
cd /opt/software
# Prerequisites
If you are using an NVIDIA GPU, you must download and install the CUDA driver before the GPU can be used for compute. If you are running on CPU only (no GPU), skip this step.
https://developer.nvidia.com/cuda-downloads
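A quick check for whether an NVIDIA GPU is present at all (lspci comes from the pciutils package; nvidia-smi only works once the driver is installed):
lspci | grep -i nvidia
nvidia-smi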
# Install ollama
Official ollama site: https://ollama.com/
Linux ollama install command: https://ollama.com/download/linux
# Run the Linux ollama install command
curl -fsSL https://ollama.com/install.sh | sh
If you get the error: curl: (35) Peer reports incompatible or unsupported protocol version.
Cause: the installed curl does not support the TLS version the server requires; the server may only accept newer TLS versions.
Fix: update curl and its TLS libraries:
yum update -y nss curl libcurl
After updating curl, run the Linux ollama install command again.
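A quick way to verify that the updated curl can now complete the TLS handshake with the server (any HTTP status code printed means the handshake succeeded):
curl -sS -o /dev/null -w 'HTTP %{http_code}\n' https://ollama.com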
Note: the bundle is large, so if the online install keeps failing partway through, an unstable network connection is the likely cause. In that case you can try an alternative approach: download the release archive separately, then run the install locally. Steps:
# First check the CPU architecture:
[root@localhost software]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
From https://github.com/ollama/ollama/releases/ download the archive that matches the CPU, ollama-linux-amd64.tgz (you can also download it on another machine with a download tool and copy it over):
wget --no-check-certificate https://github.com/ollama/ollama/releases/download/v0.5.11/ollama-linux-amd64.tgz
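Before installing, it is worth confirming the archive is complete, since a truncated download is a common cause of errors later (compare the size with the one shown on the GitHub release page):
ls -lh ollama-linux-amd64.tgz
tar -tzf ollama-linux-amd64.tgz > /dev/null && echo "archive OK"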
# Download the install.sh script
wget https://ollama.com/install.sh
chmod +x install.sh
vim install.sh
Edit the install script and find this block:
status "Downloading Linux ${ARCH} bundle"
curl --fail --show-error --location --progress-bar \
"https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" | \
$SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
if [ "$OLLAMA_INSTALL_DIR/bin/ollama" != "$BINDIR/ollama" ] ; then
status "Making ollama accessible in the PATH in $BINDIR"
$SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
fi
Change it so that, instead of downloading, it extracts directly from the local file ollama-linux-amd64.tgz:
status "Downloading Linux ${ARCH} bundle"
#curl --fail --show-error --location --progress-bar \
# "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" | \
# $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
$SUDO tar -xzf ./ollama-linux-amd64.tgz -C "$OLLAMA_INSTALL_DIR"
if [ "$OLLAMA_INSTALL_DIR/bin/ollama" != "$BINDIR/ollama" ] ; then
status "Making ollama accessible in the PATH in $BINDIR"
$SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
fi
# Run the ollama install script:
./install.sh
The install script also creates the ollama user and group, creates the ollama systemd service, and so on. If you do not use install.sh, you have to perform those steps by hand.
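For reference, a rough sketch of the manual equivalent (simplified from what install.sh reports; the exact flags, unit file contents, and group handling should be taken from the script itself):
# extract the bundle into /usr/local
mkdir -p /usr/local
tar -xzf ./ollama-linux-amd64.tgz -C /usr/local
# create a dedicated service user and group
useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
usermod -a -G ollama $(whoami)
# write an ollama.service unit like the one shown further below, then:
systemctl daemon-reload
systemctl enable --now ollama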
A successful install produces output like this:
[root@localhost software]# ./install.sh
>>> Installing ollama to /usr/local
>>> Downloading Linux amd64 bundle
>>> Creating ollama user...
>>> Adding ollama user to video group...
>>> Adding current user to ollama group...
>>> Creating ollama systemd service...
>>> Enabling and starting ollama service...
Created symlink from /etc/systemd/system/default.target.wants/ollama.service to /etc/systemd/system/ollama.service.
WARNING: Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies.
[root@localhost software]#
[root@localhost software]# whereis ollama
ollama: /usr/local/bin/ollama /usr/local/lib/ollama /usr/share/ollama
# Check the ollama service status
systemctl status ollama
[root@localhost software]# systemctl status ollama
● ollama.service - Ollama Service
Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2025-02-25 14:54:29 CST; 19min ago
Main PID: 14102 (ollama)
CGroup: /system.slice/ollama.service
└─14102 /usr/local/bin/ollama serve
Feb 25 14:54:29 localhost ollama[14102]: Couldn't find '/usr/share/ollama/.ollama/id_ed25519'. Generating new private key.
Feb 25 14:54:29 localhost ollama[14102]: Your new public key is:
Feb 25 14:54:29 localhost ollama[14102]: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIM13sJW1MJkqM72ojS7XWp3p4HQtM+aN1HwaK4J1xdj
Feb 25 14:54:29 localhost ollama[14102]: 2025/02/25 14:54:29 routes.go:1186: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE...0.1:11434
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.944+08:00 level=INFO source=images.go:432 msg="total blobs: 0"
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.944+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.945+08:00 level=INFO source=routes.go:1237 msg="Listening on 127.0.0.1:11434 (version 0.5.11)"
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.945+08:00 level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.947+08:00 level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
Feb 25 14:54:29 localhost ollama[14102]: time=2025-02-25T14:54:29.947+08:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant=""..."23.8 GiB"
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost software]#
By default the ollama service listens on 127.0.0.1:11434.
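You can confirm this from the machine itself (ss is part of the iproute package on CentOS 7):
ss -tlnp | grep 11434
curl http://127.0.0.1:11434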
# Allow external access to ollama (edit the ollama service file to set the listen address and port)
vim /etc/systemd/system/ollama.service
The original service file looks like this:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/jvm/java8/bin:/opt/jvm/java8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
[Install]
WantedBy=default.target
Under the [Service] section, add the line Environment="OLLAMA_HOST=0.0.0.0:11434" (adjust the port as needed):
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
The modified service file:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/jvm/java8/bin:/opt/jvm/java8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
[Install]
WantedBy=default.target
# Reload systemd and restart the ollama service
systemctl daemon-reload
systemctl restart ollama
systemctl status ollama
Confirm that the new listen address has taken effect:
Feb 25 15:18:00 localhost ollama[16169]: time=2025-02-25T15:18:00.040+08:00 level=INFO source=routes.go:1237 msg="Listening on [::]:11434 (version 0.5.11)"
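If firewalld is running (the CentOS 7 default), the port also has to be opened before other hosts can reach the service:
firewall-cmd --permanent --add-port=11434/tcp
firewall-cmd --reload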
# Change the model storage location
On Linux the default model path is /usr/share/ollama/.ollama/models. Here the model storage path is changed to /data/ollama/models:
mkdir -p /data/ollama/models
chown -R ollama:ollama /data/ollama/models
chmod -R 777 /data/ollama/models
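If models have already been pulled to the default path, they can optionally be moved over instead of re-downloaded (this assumes the service is stopped while the files are moved; it is restarted again in the steps below):
systemctl stop ollama
mv /usr/share/ollama/.ollama/models/* /data/ollama/models/
chown -R ollama:ollama /data/ollama/models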
vi /etc/systemd/system/ollama.service
Edit the file and add the environment variable (change the path to the one you created; everything else stays the same; the only addition here is Environment="OLLAMA_MODELS=/data/ollama/models"):
The modified service file:
[Unit]
Description=Ollama Service
After=network-online.target
[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/opt/jvm/java8/bin:/opt/jvm/java8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/data/ollama/models"
[Install]
WantedBy=default.target
# Reload systemd and restart the ollama service
systemctl daemon-reload
systemctl restart ollama
systemctl status ollama
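To confirm the new settings are in effect, you can inspect the environment systemd passes to the service and check the new directory after a model has been pulled:
systemctl show ollama --property=Environment
ls /data/ollama/models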
# Download and run a model
ollama run deepseek-r1:1.5b
Pick a model that fits your hardware (a quick way to check available memory follows the list):
deepseek-r1:1.5b: 1-2 GB of VRAM
deepseek-r1:7b: 6-8 GB of VRAM
deepseek-r1:8b: 8 GB of VRAM
deepseek-r1:14b: 10-12 GB of VRAM
deepseek-r1:32b: 24-48 GB of VRAM
deepseek-r1:70b: 96-128 GB of VRAM
deepseek-r1:671b: at least 496 GB
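A quick way to see how much memory is actually available before choosing (the nvidia-smi line only applies if an NVIDIA GPU and driver are installed; on a CPU-only machine the model has to fit in system RAM instead):
free -h
nvidia-smi --query-gpu=memory.total --format=csv,noheader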
Once the model has finished downloading and is running, you can start using it:
[root@localhost blobs]# ollama run deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100%██████████████▏ 1.1 GB
verifying sha256 digest
writing manifest
success
>>> Introduce two popular tourist attractions in Beijing
................................
>>> /bye
For convenience, you can create a small launcher script (note that the shebang must be the very first line of the file):
cat > /usr/local/bin/ollamaStart.sh <<'EOF'
#!/bin/bash
ollama run deepseek-r1:1.5b
EOF
chmod +x /usr/local/bin/ollamaStart.sh
Afterwards you can start the model with ollamaStart.sh:
[root@localhost software]# ollamaStart.sh
>>> /bye
[root@localhost software]#
# Common ollama commands
ollama help: show help for any command
ollama -v: show the version
ollama list: list the locally installed models
ollama show: show information about a model, e.g. ollama show deepseek-r1:1.5b
ollama pull: pull a model
ollama push: push a model
ollama cp: copy a model
ollama rm: remove a model, e.g. ollama rm deepseek-r1:1.5b
ollama run: run a model, e.g. ollama run deepseek-r1:1.5b
ollama serve: start the ollama server, default port 11434
ollama create: create a model from a Modelfile
# Check that ollama is running
curl http://192.168.100.247:11434
Ollama is running
Visiting http://192.168.100.247:11434 in a browser shows the same "Ollama is running" message.
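The HTTP API also exposes a couple of read-only endpoints that are handy for checking the installation:
curl http://192.168.100.247:11434/api/version
curl http://192.168.100.247:11434/api/tags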
# Test the API from another machine
curl http://192.168.100.247:11434/api/chat -d '{
  "model": "deepseek-r1:1.5b",
  "messages": [
    {
      "role": "user",
      "content": "Introduce two popular tourist attractions in Beijing"
    }
  ],
  "stream": false
}'
Set "stream": true to stream the reply as it is generated; with "stream": false the full result is returned in a single response once generation finishes.
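For a single-turn completion without chat history, the /api/generate endpoint works the same way (shown here with streaming enabled, so the reply arrives as one JSON object per line):
curl http://192.168.100.247:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Introduce two popular tourist attractions in Beijing",
  "stream": true
}'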
# If a GNOME desktop environment is available, you can also deploy Chatbox locally and connect it to ollama
Official Chatbox download page: https://chatboxai.app/zh#download
mkdir -p /opt/Chatbox
cd /opt/Chatbox
wget --no-check-certificate https://chatboxai.app/install_chatbox/linux -O Chatbox-1.9.8-x86_64.AppImage
chmod u+x Chatbox-1.9.8-x86_64.AppImage
./Chatbox-1.9.8-x86_64.AppImage
If you hit the error: error while loading shared libraries: libatk-bridge-2.0.so.0: cannot open shared object file: No such file or directory
it means the libatk-bridge-2.0.so.0 library is missing, usually because some GNOME desktop components or the related packages are not installed.
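If a GNOME desktop is otherwise present and only these libraries are missing, installing them from the base repositories usually resolves it (the package names below are the usual CentOS 7 providers, listed as an assumption rather than an exhaustive set):
yum install -y at-spi2-atk gtk3 libXScrnSaver alsa-lib nss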
If GNOME is not installed, skip Chatbox and just call the API from scripts instead.