
Installing Ollama offline on Linux and deploying the deepseek-r1 model: a guide

This article covers two things:
(1) how to deploy Ollama in an offline environment;
(2) how to configure a large model in that offline environment, which splits into:

 1) deploying the full DeepSeek model, e.g. deepseek-r1:32B;
 2) deploying a distilled model, e.g. DeepSeek-R1-Distill-Llama-32B-Q5_K_M.gguf.
(deepseek-r1:32B is the complete 32B (32-billion-parameter) DeepSeek-R1 model; DeepSeek-R1-Distill-Llama-32B-Q5_K_M.gguf is quantized, runs more efficiently, and suits machines with limited resources, such as small GPU memory. For the highest inference quality, try the full deepseek-r1:32B, but it may require higher-end hardware. For concrete GPU requirements, see this post: https://blog.csdn.net/xiangerersheng/article/details/145640259.)

(1) Deploying Ollama offline

Step 1. Open the official Ollama install script (https://ollama.com/install.sh) in a browser, copy its entire contents, then on the Linux machine run vi install.sh and paste them in.
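Alternatively, if the internet-connected machine has a shell, you can fetch the script with curl and copy it over instead of pasting by hand (the destination user, host, and path here are placeholders for your own):

curl -fsSL https://ollama.com/install.sh -o install.sh
scp install.sh user@offline-server:/opt/ollama/    # adjust user, host and path
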
Step 2. Modify install.sh. The complete modified script is below, so you can also skip step 1 and write the code below straight into install.sh. The key change: the section that downloads Ollama over the network is replaced with installation from a local file.

#!/bin/sh
# This script installs Ollama on Linux.
# It detects the current operating system architecture and installs the appropriate version of Ollama.
 
set -eu
 
 
status() { echo ">>> $*" >&2; }
error() { echo "ERROR $*"; exit 1; }
warning() { echo "WARNING: $*"; }
 
TEMP_DIR=$(mktemp -d)
cleanup() { rm -rf $TEMP_DIR; }
trap cleanup EXIT
 
available() { command -v $1 >/dev/null; }
require() {
    local MISSING=''
    for TOOL in $*; do
        if ! available $TOOL; then
            MISSING="$MISSING $TOOL"
        fi
    done
 
    echo $MISSING
}
 
[ "$(uname -s)" = "Linux" ] || error 'This script is intended to run on Linux only.'
 
ARCH=$(uname -m)
case "$ARCH" in
    x86_64) ARCH="amd64" ;;
    aarch64|arm64) ARCH="arm64" ;;
    *) error "Unsupported architecture: $ARCH" ;;
esac
 
IS_WSL2=false
 
KERN=$(uname -r)
case "$KERN" in
    *icrosoft*WSL2 | *icrosoft*wsl2) IS_WSL2=true;;
    *icrosoft) error "Microsoft WSL1 is not currently supported. Please use WSL2 with 'wsl --set-version <distro> 2'" ;;
    *) ;;
esac
 
VER_PARAM="${OLLAMA_VERSION:+?version=$OLLAMA_VERSION}"
 
SUDO=
if [ "$(id -u)" -ne 0 ]; then
    # Not running as root, so sudo is required for the privileged steps
    if ! available sudo; then
        error "This script requires superuser permissions. Please re-run as root."
    fi
 
    SUDO="sudo"
fi
 
NEEDS=$(require curl awk grep sed tee xargs)
if [ -n "$NEEDS" ]; then
    status "ERROR: The following tools are required but missing:"
    for NEED in $NEEDS; do
        echo "  - $NEED"
    done
    exit 1
fi
 
for BINDIR in /usr/local/bin /usr/bin /bin; do
    echo $PATH | grep -q $BINDIR && break || continue
done
OLLAMA_INSTALL_DIR=$(dirname ${BINDIR})
 
status "Installing ollama to $OLLAMA_INSTALL_DIR"
$SUDO install -o0 -g0 -m755 -d $BINDIR
$SUDO install -o0 -g0 -m755 -d "$OLLAMA_INSTALL_DIR"
#if curl -I --silent --fail --location "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" >/dev/null ; then
# --- The original network download below is commented out ---
#    status "Downloading Linux ${ARCH} bundle"
#    curl --fail --show-error --location --progress-bar \
#        "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" | \
#        $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
#    BUNDLE=1
#    if [ "$OLLAMA_INSTALL_DIR/bin/ollama" != "$BINDIR/ollama" ] ; then
#        status "Making ollama accessible in the PATH in $BINDIR"
#        $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
#    fi
#else
#    status "Downloading Linux ${ARCH} CLI"
#    curl --fail --show-error --location --progress-bar -o "$TEMP_DIR/ollama"\
#    "https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}"
#    $SUDO install -o0 -g0 -m755 $TEMP_DIR/ollama $OLLAMA_INSTALL_DIR/ollama
#    BUNDLE=0
#    if [ "$OLLAMA_INSTALL_DIR/ollama" != "$BINDIR/ollama" ] ; then
#        status "Making ollama accessible in the PATH in $BINDIR"
#        $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
#    fi
#fi
# --- Added: install from a local tarball instead of downloading ---
# (note: the URL query suffix ${VER_PARAM} must not be appended to a local filename)
LOCAL_OLLAMA_TGZ="./ollama-linux-${ARCH}.tgz"
if [ -f "$LOCAL_OLLAMA_TGZ" ]; then
    status "Installing from local file $LOCAL_OLLAMA_TGZ"
    $SUDO tar -xzf "$LOCAL_OLLAMA_TGZ" -C "$OLLAMA_INSTALL_DIR"
    BUNDLE=1
    if [ ! -e "$BINDIR/ollama" ]; then
        status "Making ollama accessible in the PATH in $BINDIR"
        $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
    fi
else
    echo "Error: The local file $LOCAL_OLLAMA_TGZ does not exist."
    exit 1
fi
 
 
install_success() {
    status 'The Ollama API is now available at 127.0.0.1:11434.'
    status 'Install complete. Run "ollama" from the command line.'
}
trap install_success EXIT
 
# Everything from this point onwards is optional.
 
configure_systemd() {
    if ! id ollama >/dev/null 2>&1; then
        status "Creating ollama user..."
        $SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
    fi
    if getent group render >/dev/null 2>&1; then
        status "Adding ollama user to render group..."
        $SUDO usermod -a -G render ollama
    fi
    if getent group video >/dev/null 2>&1; then
        status "Adding ollama user to video group..."
        $SUDO usermod -a -G video ollama
    fi
 
    status "Adding current user to ollama group..."
    $SUDO usermod -a -G ollama $(whoami)
 
    status "Creating ollama systemd service..."
    cat <<EOF | $SUDO tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target
 
[Service]
ExecStart=$BINDIR/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
 
[Install]
WantedBy=default.target
EOF
    SYSTEMCTL_RUNNING="$(systemctl is-system-running || true)"
    case $SYSTEMCTL_RUNNING in
        running|degraded)
            status "Enabling and starting ollama service..."
            $SUDO systemctl daemon-reload
            $SUDO systemctl enable ollama
 
            start_service() { $SUDO systemctl restart ollama; }
            trap start_service EXIT
            ;;
    esac
}
 
if available systemctl; then
    configure_systemd
fi
 
# WSL2 only supports GPUs via nvidia passthrough
# so check for nvidia-smi to determine if GPU is available
if [ "$IS_WSL2" = true ]; then
    if available nvidia-smi && [ -n "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
        status "Nvidia GPU detected."
    fi
    install_success
    exit 0
fi
 
# Install GPU dependencies on Linux
if ! available lspci && ! available lshw; then
    warning "Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies."
    exit 0
fi
 
check_gpu() {
    # Look for devices based on vendor ID for NVIDIA and AMD
    case $1 in
        lspci)
            case $2 in
                nvidia) available lspci && lspci -d '10de:' | grep -q 'NVIDIA' || return 1 ;;
                amdgpu) available lspci && lspci -d '1002:' | grep -q 'AMD' || return 1 ;;
            esac ;;
        lshw)
            case $2 in
                nvidia) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[10DE\]' || return 1 ;;
                amdgpu) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[1002\]' || return 1 ;;
            esac ;;
        nvidia-smi) available nvidia-smi || return 1 ;;
    esac
}
 
if check_gpu nvidia-smi; then
    status "NVIDIA GPU installed."
    exit 0
fi
 
if ! check_gpu lspci nvidia && ! check_gpu lshw nvidia && ! check_gpu lspci amdgpu && ! check_gpu lshw amdgpu; then
    install_success
    warning "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
    exit 0
fi
 
if check_gpu lspci amdgpu || check_gpu lshw amdgpu; then
    if [ $BUNDLE -ne 0 ]; then
        status "Downloading Linux ROCm ${ARCH} bundle"
        curl --fail --show-error --location --progress-bar \
            "https://ollama.com/download/ollama-linux-${ARCH}-rocm.tgz${VER_PARAM}" | \
            $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
 
        install_success
        status "AMD GPU ready."
        exit 0
    fi
    # Look for pre-existing ROCm v6 before downloading the dependencies
    for search in "${HIP_PATH:-''}" "${ROCM_PATH:-''}" "/opt/rocm" "/usr/lib64"; do
        if [ -n "${search}" ] && [ -e "${search}/libhipblas.so.2" -o -e "${search}/lib/libhipblas.so.2" ]; then
            status "Compatible AMD GPU ROCm library detected at ${search}"
            install_success
            exit 0
        fi
    done
 
    status "Downloading AMD GPU dependencies..."
    $SUDO rm -rf /usr/share/ollama/lib
    $SUDO chmod o+x /usr/share/ollama
    $SUDO install -o ollama -g ollama -m 755 -d /usr/share/ollama/lib/rocm
    curl --fail --show-error --location --progress-bar "https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}" \
        | $SUDO tar zx --owner ollama --group ollama -C /usr/share/ollama/lib/rocm .
    install_success
    status "AMD GPU ready."
    exit 0
fi
 
CUDA_REPO_ERR_MSG="NVIDIA GPU detected, but your OS and Architecture are not supported by NVIDIA.  Please install the CUDA driver manually https://docs.nvidia.com/cuda/cuda-installation-guide-linux/"
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-7-centos-7
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-8-rocky-8
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-9-rocky-9
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#fedora
install_cuda_driver_yum() {
    status 'Installing NVIDIA repository...'
    
    case $PACKAGE_MANAGER in
        yum)
            $SUDO $PACKAGE_MANAGER -y install yum-utils
            if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
        dnf)
            if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
    esac
 
    case $1 in
        rhel)
            status 'Installing EPEL repository...'
            # EPEL is required for third-party dependencies such as dkms and libvdpau
            $SUDO $PACKAGE_MANAGER -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$2.noarch.rpm || true
            ;;
    esac
 
    status 'Installing CUDA driver...'
 
    if [ "$1" = 'centos' ] || [ "$1$2" = 'rhel7' ]; then
        $SUDO $PACKAGE_MANAGER -y install nvidia-driver-latest-dkms
    fi
 
    $SUDO $PACKAGE_MANAGER -y install cuda-drivers
}
 
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#debian
install_cuda_driver_apt() {
    status 'Installing NVIDIA repository...'
    if curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb" >/dev/null ; then
        curl -fsSL -o $TEMP_DIR/cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb
    else
        error $CUDA_REPO_ERR_MSG
    fi
 
    case $1 in
        debian)
            status 'Enabling contrib sources...'
            $SUDO sed 's/main/contrib/' < /etc/apt/sources.list | $SUDO tee /etc/apt/sources.list.d/contrib.list > /dev/null
            if [ -f "/etc/apt/sources.list.d/debian.sources" ]; then
                $SUDO sed 's/main/contrib/' < /etc/apt/sources.list.d/debian.sources | $SUDO tee /etc/apt/sources.list.d/contrib.sources > /dev/null
            fi
            ;;
    esac
 
    status 'Installing CUDA driver...'
    $SUDO dpkg -i $TEMP_DIR/cuda-keyring.deb
    $SUDO apt-get update
 
    [ -n "$SUDO" ] && SUDO_E="$SUDO -E" || SUDO_E=
    DEBIAN_FRONTEND=noninteractive $SUDO_E apt-get -y install cuda-drivers -q
}
 
if [ ! -f "/etc/os-release" ]; then
    error "Unknown distribution. Skipping CUDA installation."
fi
 
. /etc/os-release
 
OS_NAME=$ID
OS_VERSION=$VERSION_ID
 
PACKAGE_MANAGER=
for PACKAGE_MANAGER in dnf yum apt-get; do
    if available $PACKAGE_MANAGER; then
        break
    fi
done
 
if [ -z "$PACKAGE_MANAGER" ]; then
    error "Unknown package manager. Skipping CUDA installation."
fi
 
if ! check_gpu nvidia-smi || [ -z "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
    case $OS_NAME in
        centos|rhel) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -d '.' -f 1) ;;
        rocky) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -c1) ;;
        fedora) [ $OS_VERSION -lt '39' ] && install_cuda_driver_yum $OS_NAME $OS_VERSION || install_cuda_driver_yum $OS_NAME '39';;
        amzn) install_cuda_driver_yum 'fedora' '37' ;;
        debian) install_cuda_driver_apt $OS_NAME $OS_VERSION ;;
        ubuntu) install_cuda_driver_apt $OS_NAME $(echo $OS_VERSION | sed 's/\.//') ;;
        *) exit ;;
    esac
fi
 
if ! lsmod | grep -q nvidia || ! lsmod | grep -q nvidia_uvm; then
    KERNEL_RELEASE="$(uname -r)"
    case $OS_NAME in
        rocky) $SUDO $PACKAGE_MANAGER -y install kernel-devel kernel-headers ;;
        centos|rhel|amzn) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE kernel-headers-$KERNEL_RELEASE ;;
        fedora) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE ;;
        debian|ubuntu) $SUDO apt-get -y install linux-headers-$KERNEL_RELEASE ;;
        *) exit ;;
    esac
 
    NVIDIA_CUDA_VERSION=$($SUDO dkms status | awk -F: '/added/ { print $1 }')
    if [ -n "$NVIDIA_CUDA_VERSION" ]; then
        $SUDO dkms install $NVIDIA_CUDA_VERSION
    fi
 
    if lsmod | grep -q nouveau; then
        status 'Reboot to complete NVIDIA CUDA driver install.'
        exit 0
    fi
 
    $SUDO modprobe nvidia
    $SUDO modprobe nvidia_uvm
fi
 
# make sure the NVIDIA modules are loaded on boot with nvidia-persistenced
if available nvidia-persistenced; then
    $SUDO touch /etc/modules-load.d/nvidia.conf
    MODULES="nvidia nvidia-uvm"
    for MODULE in $MODULES; do
        if ! grep -qxF "$MODULE" /etc/modules-load.d/nvidia.conf; then
            echo "$MODULE" | $SUDO tee -a /etc/modules-load.d/nvidia.conf > /dev/null
        fi
    done
fi
 
status "NVIDIA GPU ready."
install_success

Step 3. Download the Ollama release tarball
Download page: https://github.com/ollama/ollama/releases
To decide which file you need, run lscpu and check your CPU architecture:

x86_64 CPU: download ollama-linux-amd64.tgz
aarch64 / arm64 CPU: download ollama-linux-arm64.tgz
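For example (the output naturally varies by machine):

lscpu | grep Architecture
# Architecture:        x86_64    -> use ollama-linux-amd64.tgz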

Step 4. Run install.sh to install
Go to the directory containing install.sh:

chmod +x install.sh    # make the script executable
./install.sh
# If you hit "bash: ./install.sh: /bin/sh^M: bad interpreter: No such file or directory",
# the file has Windows line endings; strip them with:
sed -i 's/\r$//' install.sh
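Before executing the script, make sure the tarball from step 3 sits next to install.sh, since the modified script looks for ./ollama-linux-<arch>.tgz. A quick sanity check on an amd64 box might look like:

ls
# install.sh  ollama-linux-amd64.tgz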

Step 5. Configure Ollama
You will typically want to set the listen host, the model download path, and multi-CUDA variables. Run:

vim /etc/systemd/system/ollama.service

Under [Service], add the new path you created and the environment variables you need.
The host setting matters for Open WebUI later on, which otherwise may fail to reach the server.
The CUDA setting here uses two GPUs, preferring GPU 3. You can also delete the Environment="CUDA_VISIBLE_DEVICES=3,2" line to use all GPUs by default, or list them all explicitly; e.g. if your server has 4 GPUs (IDs 0,1,2,3), use Environment="CUDA_VISIBLE_DEVICES=0,1,2,3". Then add:

Environment="OLLAMA_PARALLEL=true"

This parameter lets Ollama parallelize automatically across multiple GPUs, which is useful for large models such as DeepSeek-R1-32B.
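Putting it together, the [Service] section might end up looking like the sketch below; the binary path, PATH value, model path, and GPU IDs are placeholders for your own values (OLLAMA_HOST and OLLAMA_MODELS are standard Ollama variables, OLLAMA_PARALLEL is the variable added above):

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/data/ollama/models"
Environment="CUDA_VISIBLE_DEVICES=3,2"
Environment="OLLAMA_PARALLEL=true"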
After saving the file and exiting, restart the ollama service:

sudo systemctl daemon-reload
sudo systemctl restart ollama
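To confirm the service came back up:

systemctl status ollama
curl http://127.0.0.1:11434    # should answer "Ollama is running"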

Note that the OLLAMA_MODELS directory you created needs 777 permissions, otherwise the service may fail to start:

sudo chmod 777 /XXX
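chmod 777 works but is wide open; a tighter alternative, assuming the service runs as the ollama user that the install script creates, is to hand ownership of the directory to that user:

sudo chown -R ollama:ollama /XXX
sudo chmod -R 755 /XXX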

(2) Configuring a large model in an offline environment

1) Deploying the full DeepSeek model, e.g. deepseek-r1:32B
On a machine that has internet access, run:

ollama run deepseek-r1:32B
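
If you only want to download the model without opening a chat session, ollama pull fetches the same files:

ollama pull deepseek-r1:32B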

Once the download finishes, locate the models directory (it should contain the two subfolders blobs and manifests) and copy the entire directory to the path you created on the offline server.
Restart ollama and run ollama list; if the model shows up, the copy worked.
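A typical transfer, assuming the default Linux model path on the online machine and a hypothetical destination directory (adjust both):

scp -r /usr/share/ollama/.ollama/models user@offline-server:/data/ollama/
# or, resumable: rsync -avP /usr/share/ollama/.ollama/models user@offline-server:/data/ollama/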
Start the model:

ollama run deepseek-r1:32B

On success, you get an interactive command-line session.
Note that the default locations of Ollama's model directory are listed below, but in my experience the models do not always land there, so it is best to set an explicit model path when installing Ollama on the online machine; the downloaded model files then end up under that path, where they are easy to find:

macOS: ~/.ollama/models
Linux: /usr/share/ollama/.ollama/models
Windows: C:\Users\<username>\.ollama\models
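If you cannot find the directory, searching for the manifests folder usually locates it (a plain find; sudo may be needed to traverse other users' homes):

sudo find / -type d -name manifests -path "*ollama*" 2>/dev/null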

2) Deploying a distilled model, e.g. DeepSeek-R1-Distill-Llama-32B-Q5_K_M.gguf
Step 1. Get the model file in GGUF format, either from a direct download or via the ModelScope platform. Note that the example below fetches the 70B variant; swap in the repo and file names of the model you actually want:

pip install modelscope
modelscope download --model unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF DeepSeek-R1-Distill-Llama-70B-Q5_K_M.gguf --local_dir  /DeepSeek-R1-Distill-Llama-70B-GGUF
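
The GGUF file is tens of gigabytes, so after copying it to the offline server it is worth checking that the transfer was not corrupted; run the same checksum on both machines and compare:

sha256sum DeepSeek-R1-Distill-Llama-70B-Q5_K_M.gguf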

Step 2. Create the Modelfile configuration:

vi Modelfile

and fill it in as follows:

# Path to the downloaded gguf file; replace with your own path and filename
FROM /home/DeepSeek-R1-Distill-Llama-32B-GGUF/DeepSeek-R1-Distill-Llama-32B-Q5_K_M.gguf

# Chat template configuration
TEMPLATE """{{- if .System }}{{ .System }}{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1}}
{{- if eq .Role "user" }}<|User|>{{ .Content }}
{{- else if eq .Role "assistant" }}<|Assistant|>{{ .Content }}{{- if not $last }}<|end▁of▁sentence|>{{- end }}
{{- end }}
{{- if and $last (ne .Role "assistant") }}<|Assistant|>{{- end }}
{{- end }}"""


PARAMETER stop "<|begin▁of▁sentence|>"
PARAMETER stop "<|end▁of▁sentence|>"
PARAMETER stop "<|User|>"
PARAMETER stop "<|Assistant|>"

PARAMETER num_ctx 12800

The FROM line is required; the rest can be tuned as you like. For details, see: https://github.com/ollama/ollama/blob/main/docs/modelfile.md#format

Step 3. Load and run the model
Import the model, replacing the Modelfile path with the one you set up earlier:

ollama create DeepSeek-R1-Distill-Llama-32B-Q5_K_M -f /home/DeepSeek-R1-Distill-Llama-32B-GGUF/Modelfile

Run ollama list; if the model appears as expected, the import succeeded.
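The listing should look roughly like this; the ID, size, and timestamp below are illustrative:

ollama list
# NAME                                          ID              SIZE     MODIFIED
# DeepSeek-R1-Distill-Llama-32B-Q5_K_M:latest   a1b2c3d4e5f6    23 GB    2 minutes ago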
Start the model:

ollama run DeepSeek-R1-Distill-Llama-32B-Q5_K_M:latest

On success, you get an interactive command-line session.

For now, the deployment runs only in the Linux command line, which is a poor interactive experience. A follow-up post will cover installing a web front end to improve usability.

The deployment drew on the following posts:

  1. Installing Ollama offline on Linux and configuring a large model (no network): https://blog.csdn.net/m0_71142057/article/details/143186418
  2. A complete guide to deploying the deepseek-r1 70B model with Ollama: https://zhuanlan.zhihu.com/p/20441592371
  3. Offline Linux deployment of Ollama + DeepSeek R1 + Open WebUI: https://blog.csdn.net/MrIqzd/article/details/145394179
