AI Server Info

CPU: 2x (64 cores / 128 threads, 2.9 GHz base, 3.5 GHz boost)
Motherboard: Supermicro X12 workstation motherboard
Memory: Samsung RECC DDR4 32 GB 3200 server ECC memory x 4
Storage: Kingston 1 TB NVMe PCIe 4.0 SSD
GPU: NVIDIA GeForce RTX 4090 24 GB x 2

1. Server info

# see if x86_64
uname -m
# see GPU
lspci | grep VGA
# output shows an NVIDIA GeForce RTX 4090, not an AMD GPU
# 31:00.0 VGA compatible controller: NVIDIA Corporation AD102 [GeForce RTX 4090] (rev a1)
# see codename and more; here Codename: noble
cat /etc/os-release
lsb_release -a
hostnamectl
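Optionally, a quick sketch to confirm the NVIDIA driver sees both 4090s (this assumes the proprietary driver and nvidia-smi are already installed on the box):

# check driver version and that both RTX 4090s are visible
nvidia-smi
# compact view: one line per GPU with index, name and total memory
nvidia-smi --query-gpu=index,name,memory.total --format=csv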

2. anaconda

wget https://repo.anaconda.com/archive/Anaconda3-2024.10-1-Linux-x86_64.sh
bash Anaconda3-2024.10-1-Linux-x86_64.sh
source ~/anaconda3/bin/activate
conda --version
conda update conda
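As a hedged example (the environment name and Python version below are arbitrary choices, not part of the original setup), a dedicated conda environment keeps AI tooling separate from the base install:

# create and activate an isolated environment ("ai" is just an example name)
conda create -n ai python=3.11 -y
conda activate ai
python --version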

3. ollama

see install doc

# remove first
# sudo rm -rf /usr/lib/ollama
# install auto
curl -fsSL https://ollama.com/install.sh | sh
# or install manually
# using NVIDIA GeForce RTX 4090, no need install ROCm
curl -L https://ollama.com/download/ollama-linux-amd64.tgz -o ollama-linux-amd64.tgz
# or copy a tarball downloaded on another machine over to the server
scp ~/Downloads/ollama-linux-amd64.tgz lwroot0@192.168.0.20:~/instal
# unzip to /usr[/lib/ollama]
sudo tar -C /usr -xzf ollama-linux-amd64.tgz
# start
ollama serve
# status
ollama -v
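To confirm the server is actually reachable before wiring anything else to it, a quick check against the default port (11434 is Ollama's default; adjust if you change OLLAMA_HOST):

# the root endpoint replies with "Ollama is running"
curl http://localhost:11434/
# report the installed server version
curl http://localhost:11434/api/version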
Adding Ollama as a startup service

Create a user and group for Ollama:

sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
sudo usermod -a -G ollama $(whoami)

Create a service file in /etc/systemd/system/ollama.service:

[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_MODEL_PATH=/usr/share/ollama/.ollama/models/"
Environment="OLLAMA_HOST=0.0.0.0"[Install]
WantedBy=default.target

Then start the service:

sudo systemctl daemon-reload
sudo systemctl enable ollama
sudo systemctl start ollama
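A quick way to confirm the unit is running and to watch its output (standard systemd commands, nothing Ollama-specific):

# is the service active?
systemctl status ollama
# follow the service logs
journalctl -u ollama -f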

Add to user env:

vi ~/.bashrc
# add
# export OLLAMA_MODEL_PATH=/usr/share/ollama/.ollama/models/
# export OLLAMA_HOST=0.0.0.0
source ~/.bashrc
echo $OLLAMA_MODEL_PATH
Run AI models

You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
See the Ollama model library.
Models are saved in ~/.ollama/models/ or in the path set by OLLAMA_MODEL_PATH.

Model         Size      Command
deepseek-r1   14b       ollama run deepseek-r1:14b
deepseek-r1   32b       ollama run deepseek-r1:32b
deepseek-v2   16b       ollama run deepseek-v2
qwen2.5       14b       ollama run qwen2.5:14b
phi4          14b only  ollama run phi4
glm4          9b only   ollama run glm4
llama3.1      8b        ollama run llama3.1
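For example, pulling one of the smaller entries and then calling the HTTP API (the model choice here is arbitrary; /api/generate is Ollama's standard generate route):

# download and chat interactively
ollama run llama3.1
# or call the HTTP API non-interactively
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'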

4. docker

see doc

# update
sudo apt update
sudo apt upgrade
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# aliyun mirror
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo tee /etc/apt/keyrings/docker.asc
sudo sh -c 'echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list'
sudo apt-get update
# install latest version
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# add mirror
sudo vi /etc/docker/daemon.json
{
"registry-mirrors": [
"https://docker.registry.cyou",
"https://docker-cf.registry.cyou",
"https://dockercf.jsdelivr.fyi",
"https://docker.jsdelivr.fyi",
"https://dockertest.jsdelivr.fyi",
"https://mirror.aliyuncs.com",
"https://dockerproxy.com",
"https://mirror.baidubce.com",
"https://docker.m.daocloud.io",
"https://docker.nju.edu.cn",
"https://docker.mirrors.sjtug.sjtu.edu.cn",
"https://docker.mirrors.ustc.edu.cn",
"https://mirror.iscas.ac.cn",
"https://docker.rainbond.cc"
]
}
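After editing daemon.json the daemon has to be restarted for the mirrors to take effect; a quick verification sketch:

# reload config and restart the docker daemon
sudo systemctl daemon-reload
sudo systemctl restart docker
# confirm the mirrors were picked up
sudo docker info | grep -A 15 "Registry Mirrors"
# optional sanity check that pulls work
sudo docker run --rm hello-world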
5. MaxKB

Model overview

docker run -d --name=maxkb --restart=always -p 7861:8080 -v ~/.maxkb:/var/lib/postgresql/data -v ~/.python-packages:/opt/maxkb/app/sandbox/python-packages 1panel/maxkb
# test the connection to ollama
sudo docker exec -it maxkb bash
curl http://192.168.0.20:11434/
# output: Ollama is running
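If the web UI does not come up, the container logs are the first place to look (standard Docker commands; the container name matches the docker run above):

# container state and exposed ports
sudo docker ps --filter name=maxkb
# follow application logs
sudo docker logs -f maxkb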

Visit: http://your_ip:7861
Default account (the system forces a password change on first login):
username: admin
password: MaxKB@123…
