TiDB v8.5.3 Single-Machine Cluster Deployment Guide

Preface

I have recently been running TiDB recovery drills, which required deploying a minimal yet complete TiDB cluster topology on a single Linux server. This post records the installation process.

Environment Preparation

Before deploying the TiDB cluster, prepare a deployment host and make sure its software meets the requirements:

  • CentOS 7.3 or later is recommended
  • The environment must be able to access the internet to download TiDB and related software packages

Note: Starting with v8.5.1, TiDB has been re-adapted to glibc 2.17, restoring compatibility with CentOS Linux 7.
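
Before proceeding, you can confirm the host's glibc version (the output below is what a typical CentOS 7.9 box reports; 2.17 is exactly the floor this note refers to):

[root@test ~]# ldd --version | head -1
ldd (GNU libc) 2.17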

Environment Information

The minimal TiDB cluster topology consists of the following instances:

Component   Count  IP             Ports
---------   -----  --             -----
PD          1      192.168.31.79  2379/2380
TiDB        1      192.168.31.79  4000/10080
TiKV        3      192.168.31.79  20160-20162/20180-20182
TiFlash     1      192.168.31.79  9000/3930/20170/20292/8234/8123
Prometheus  1      192.168.31.79  9090/12020
Grafana     1      192.168.31.79  3000

Install Dependencies

Dependency libraries required to compile and build TiDB (quick version checks for these follow the list):

  • Golang 1.23 or later
  • Rust nightly-2023-12-28 or later
  • LLVM 17.0 or later
  • sshpass 1.06 or later
  • GCC 7.x (not met on this host)
  • glibc 2.28-151.el8 (not met on this host)
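
To see where the current host stands against this list, a few standard version checks help (each command simply prints the installed version, or nothing if the tool is absent):

[root@test ~]# gcc --version | head -1
[root@test ~]# ldd --version | head -1
[root@test ~]# go version 2>/dev/null
[root@test ~]# rustc --version 2>/dev/null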

Download the required packages:

  • Rust download: https://forge.rust-lang.org/infra/other-installation-methods.html
  • Golang download: https://go.dev/dl/
  • sshpass download: https://sourceforge.net/projects/sshpass/files/latest/download

Install Golang:

[root@test soft]# tar -C /usr/local -xf go1.25.0.linux-amd64.tar.gz
[root@test ~]# cat<<-\EOF>>/root/.bash_profile
export PATH=$PATH:/usr/local/go/bin
EOF
[root@test ~]# source /root/.bash_profile
[root@test ~]# go version
go version go1.25.0 linux/amd64

Install Rust:

[root@test soft]# tar -xf rust-1.89.0-x86_64-unknown-linux-gnu.tar.gz
[root@test soft]# cd rust-1.89.0-x86_64-unknown-linux-gnu/
[root@test rust-1.89.0-x86_64-unknown-linux-gnu]# ./install.sh
[root@test ~]# rustc --version
rustc 1.89.0 (29483883e 2025-08-04)

Install sshpass:

[root@test soft]# tar -xf sshpass-1.10.tar.gz
[root@test soft]# cd sshpass-1.10/
[root@test sshpass-1.10]# ./configure && make && make install
[root@test ~]# sshpass -V
sshpass 1.10

Disable the Firewall

[root@test ~]# systemctl stop firewalld.service
[root@test ~]# systemctl disable firewalld.service
[root@test ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Check and Disable Swap

[root@test ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf
[root@test ~]# swapoff -a
[root@test ~]# sysctl -p
vm.swappiness = 0

Remember to update /etc/fstab and comment out the swap partition:

#/dev/mapper/centos-swap swap                    swap    defaults        0 0
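
A quick sanity check that swap is fully off (free is part of procps; all three swap columns should read 0B once swapoff has taken effect):

[root@test ~]# free -h | grep -i swap
Swap:            0B          0B          0B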

Check and Configure OS Optimization Parameters

[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@test ~]# cat<<EOF>>/etc/sysctl.conf
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
EOF
[root@test ~]# sysctl -p
[root@test ~]# cat<<EOF>>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
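
Note that the two transparent_hugepage echoes above do not persist across a reboot. One common way to make them stick, assuming your system still executes rc.local at boot, is:

[root@test ~]# cat<<-\EOF>>/etc/rc.d/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
[root@test ~]# chmod +x /etc/rc.d/rc.local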

Adjust MaxSessions

Since this simulates a multi-machine deployment on a single host, the connection limit of the sshd service must be raised as the root user:

[root@test ~]# vim /etc/ssh/sshd_config
## Set MaxSessions to 20
[root@test ~]# systemctl restart sshd.service
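
If you prefer a non-interactive edit, a sed one-liner achieves the same change (this assumes the stock commented-out MaxSessions line is present; sshd -T prints the effective value for verification):

[root@test ~]# sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
[root@test ~]# systemctl restart sshd.service
[root@test ~]# sshd -T | grep -i maxsessions
maxsessions 20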

Create the TiDB User

[root@test ~]# useradd tidb
[root@test ~]# echo "Tidb@123" |passwd tidb --stdin
Changing password for user tidb.
passwd: all authentication tokens updated successfully.
[root@test ~]# cat<<-EOF>>/etc/sudoers
tidb ALL=(ALL) NOPASSWD: ALL
EOF
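
Appending to /etc/sudoers directly works, but visudo is the safer route since it validates syntax before saving. Either way, a one-line check confirms passwordless sudo for the tidb user:

[root@test ~]# su - tidb -c 'sudo whoami'
root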

Deployment

This deployment runs in an air-gapped environment, so instead of the official online repository, a local mirror is used. For setting up the local mirror, see: Deploying TiUP Components Offline for TiDB.
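
For reference, the usual offline flow (following the official offline deployment steps; the package name below assumes the v8.5.3 server bundle) looks like this:

[root@test ~]# tar -xzf tidb-community-server-v8.5.3-linux-amd64.tar.gz
[root@test ~]# sh tidb-community-server-v8.5.3-linux-amd64/local_install.sh
[root@test ~]# source /root/.bash_profile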

TiUP has already been installed:

[root@test ~]# tiup mirror show
/root/tidb-community-server-v8.5.3-linux-amd64
[root@test ~]# tiup --version
1.16.2 tiup
Go Version: go1.21.13
Git Ref: v1.16.2
GitHash: 678c52de0c0ef30634b8ba7302a8376caa95d50d
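
You can also confirm that the target version is actually present in the local mirror; tiup list queries whichever mirror is currently active:

[root@test ~]# tiup list tidb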

Create and start the cluster:

[root@test ~]# cat<<-\EOF>topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 11122
 deploy_dir: "/data/tidb-deploy"
 data_dir: "/data/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   instance.tidb_slow_log_threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.31.79

tidb_servers:
 - host: 192.168.31.79

tikv_servers:
 - host: 192.168.31.79
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }
 - host: 192.168.31.79
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }
 - host: 192.168.31.79
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.31.79

monitoring_servers:
 - host: 192.168.31.79

grafana_servers:
 - host: 192.168.31.79
EOF

Run the pre-deployment check:

[root@test ~]# tiup cluster check topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
  - Getting system info of 192.168.31.79:11122 ... Done
+ Check time zone
  - Checking node 192.168.31.79 ... Done
+ Check system requirements
  - Checking node 192.168.31.79 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.31.79:11122 ... Done
Node          Check         Result  Message
----          -----         ------  -------
192.168.31.79  os-version    Fail    CentOS Linux 7 (Core) 7.9.2009 not supported, use version 9 or higher
192.168.31.79  cpu-cores     Pass    number of CPU cores / threads: 4
192.168.31.79  ntp           Warn    The NTPd daemon may be not start
192.168.31.79  disk          Warn    mount point /data does not have 'noatime' option set
192.168.31.79  selinux       Pass    SELinux is disabled
192.168.31.79  thp           Pass    THP is disabled
192.168.31.79  command       Pass    numactl: policy: default
192.168.31.79  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.79  memory        Pass    memory size is 8192MB
192.168.31.79  network       Pass    network speed of ens192 is 10000MB
192.168.31.79  disk          Fail    multiple components tikv:/data/tidb-data/tikv-20160,tikv:/data/tidb-data/tikv-20161,tikv:/data/tidb-data/tikv-20162,tiflash:/data/tidb-data/tiflash-9000 are using the same partition 192.168.31.79:/data as data dir
192.168.31.79  disk          Fail    mount point /data does not have 'nodelalloc' option set
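
Many of the Warn items can be repaired automatically; tiup cluster check supports an --apply flag for that. The two disk Fail results (shared data partition and missing nodelalloc mount option) are expected on a single-machine lab box and are tolerated here:

[root@test ~]# tiup cluster check topo.yaml --apply --user root -p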

Deploy the cluster:

[root@test ~]# tiup cluster deploy lucifer v8.5.3 topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    lucifer
Cluster version: v8.5.3
Role        Host          Ports                            OS/Arch       Directories
----        ----          -----                            -------       -----------
pd          192.168.31.79  2379/2380                        linux/x86_64  /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv        192.168.31.79  20160/20180                      linux/x86_64  /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv        192.168.31.79  20161/20181                      linux/x86_64  /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv        192.168.31.79  20162/20182                      linux/x86_64  /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb        192.168.31.79  4000/10080                       linux/x86_64  /data/tidb-deploy/tidb-4000
tiflash     192.168.31.79  9000/3930/20170/20292/8234/8123  linux/x86_64  /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus  192.168.31.79  9090/12020                       linux/x86_64  /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana     192.168.31.79  3000                             linux/x86_64  /data/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v8.5.3 (linux/amd64) ... Done
  - Download tikv:v8.5.3 (linux/amd64) ... Done
  - Download tidb:v8.5.3 (linux/amd64) ... Done
  - Download tiflash:v8.5.3 (linux/amd64) ... Done
  - Download prometheus:v8.5.3 (linux/amd64) ... Done
  - Download grafana:v8.5.3 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.31.79:11122 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tidb -> 192.168.31.79 ... Done
  - Copy tiflash -> 192.168.31.79 ... Done
  - Copy prometheus -> 192.168.31.79 ... Done
  - Copy grafana -> 192.168.31.79 ... Done
  - Deploy node_exporter -> 192.168.31.79 ... Done
  - Deploy blackbox_exporter -> 192.168.31.79 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.31.79:2379 ... Done
  - Generate config tikv -> 192.168.31.79:20160 ... Done
  - Generate config tikv -> 192.168.31.79:20161 ... Done
  - Generate config tikv -> 192.168.31.79:20162 ... Done
  - Generate config tidb -> 192.168.31.79:4000 ... Done
  - Generate config tiflash -> 192.168.31.79:9000 ... Done
  - Generate config prometheus -> 192.168.31.79:9090 ... Done
  - Generate config grafana -> 192.168.31.79:3000 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.31.79 ... Done
  - Generate config blackbox_exporter -> 192.168.31.79 ... Done
Enabling component pd
        Enabling instance 192.168.31.79:2379
        Enable instance 192.168.31.79:2379 success
Enabling component tikv
        Enabling instance 192.168.31.79:20162
        Enabling instance 192.168.31.79:20160
        Enabling instance 192.168.31.79:20161
        Enable instance 192.168.31.79:20162 success
        Enable instance 192.168.31.79:20161 success
        Enable instance 192.168.31.79:20160 success
Enabling component tidb
        Enabling instance 192.168.31.79:4000
        Enable instance 192.168.31.79:4000 success
Enabling component tiflash
        Enabling instance 192.168.31.79:9000
        Enable instance 192.168.31.79:9000 success
Enabling component prometheus
        Enabling instance 192.168.31.79:9090
        Enable instance 192.168.31.79:9090 success
Enabling component grafana
        Enabling instance 192.168.31.79:3000
        Enable instance 192.168.31.79:3000 success
Enabling component node_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Enabling component blackbox_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Cluster `lucifer` deployed successfully, you can start it with command: `tiup cluster start lucifer --init`

Start the cluster:

[root@test ~]# tiup cluster start lucifer --init
Starting cluster lucifer...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.31.79:2379
        Start instance 192.168.31.79:2379 success
Starting component tikv
        Starting instance 192.168.31.79:20162
        Starting instance 192.168.31.79:20160
        Starting instance 192.168.31.79:20161
        Start instance 192.168.31.79:20162 success
        Start instance 192.168.31.79:20161 success
        Start instance 192.168.31.79:20160 success
Starting component tidb
        Starting instance 192.168.31.79:4000
        Start instance 192.168.31.79:4000 success
Starting component tiflash
        Starting instance 192.168.31.79:9000
        Start instance 192.168.31.79:9000 success
Starting component prometheus
        Starting instance 192.168.31.79:9090
        Start instance 192.168.31.79:9090 success
Starting component grafana
        Starting instance 192.168.31.79:3000
        Start instance 192.168.31.79:3000 success
Starting component node_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
Starting component blackbox_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
+ [ Serial ] - UpdateTopology: cluster=lucifer
Started cluster `lucifer` successfully
The root password of TiDB database has been changed.
The new password is: 'm+92G0Q3eNR4^6cq*@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

View the cluster list:

[root@test ~]# tiup cluster list
Name      User  Version  Path                                           PrivateKey
----      ----  -------  ----                                           ----------
lucifer  tidb  v8.5.3   /root/.tiup/storage/cluster/clusters/lucifer  /root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa

Check the cluster status:

[root@test ~]# tiup cluster display lucifer
Cluster type:       tidb
Cluster name:       lucifer
Cluster version:    v8.5.3
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.31.79:2379/dashboard
Dashboard URLs:     http://192.168.31.79:2379/dashboard
Grafana URL:        http://192.168.31.79:3000
ID                  Role        Host          Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                  ----        ----          -----                            -------       ------   --------                         ----------
192.168.31.79:3000   grafana     192.168.31.79  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
192.168.31.79:2379   pd          192.168.31.79  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
192.168.31.79:9090   prometheus  192.168.31.79  9090/12020                       linux/x86_64  Up       /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
192.168.31.79:4000   tidb        192.168.31.79  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
192.168.31.79:9000   tiflash     192.168.31.79  9000/3930/20170/20292/8234/8123  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
192.168.31.79:20160  tikv        192.168.31.79  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
192.168.31.79:20161  tikv        192.168.31.79  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
192.168.31.79:20162  tikv        192.168.31.79  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8
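
Beyond tiup cluster display, the TiDB server exposes a status endpoint on its status port (10080 here) that returns a small JSON document with the server version and connection count, handy as a scripted liveness probe:

[root@test ~]# curl -s http://192.168.31.79:10080/status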

Install the MySQL Client

TiDB is compatible with the MySQL protocol, so a MySQL client is required to connect. CentOS 7 systems ship with MariaDB preinstalled by default, which must be removed first:

[root@test ~]# rpm -e --nodeps $(rpm -qa | grep mariadb)

Download the packages on a machine with internet access:

[root@lucifer ~]# wget https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
[root@lucifer ~]# wget http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

Install the MySQL client:

[root@test ~]# yum -y install mysql80-community-release-el7-10.noarch.rpm
[root@test ~]# rpm --import RPM-GPG-KEY-mysql-2023
[root@test ~]# yum -y install mysql

Connect to the database:

## The initial root password here is the one printed in the cluster initialization log: m+92G0Q3eNR4^6cq*@
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p
mysql> show databases;

Change the initial root password:

mysql> use mysql
mysql> alter user 'root'@'%' identified by 'tidb';
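
A quick reconnect confirms the new password works; tidb_version() is a TiDB built-in that reports the server build (the inline password is for brevity only; avoid it on shared hosts):

[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -ptidb -e "select tidb_version()\G"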

Cluster monitoring:

  • Dashboard: http://192.168.31.79:2379/dashboard (log in with root/tidb)
  • Grafana: http://192.168.31.79:3000 (default credentials: admin/admin)
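
Both monitoring endpoints also expose simple health URLs that are convenient for scripted checks (these are standard Prometheus and Grafana endpoints, not TiDB-specific):

[root@test ~]# curl -s http://192.168.31.79:9090/-/healthy
[root@test ~]# curl -s http://192.168.31.79:3000/api/health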

Final Thoughts

With that, the single-machine TiDB cluster deployment is complete. It is suitable for development, testing, and self-study; for production, follow the officially recommended multi-machine deployment topology.
