
A Containerized, Cloud-Native Highly Available and Automated MySQL and Middleware Cluster Project

1 Project Overview

    This project builds a highly available, high-performance MySQL cluster capable of handling large-scale concurrent business traffic. Containerized deployment, multi-level caching, and complete monitoring and backup strategies ensure database service continuity and data safety.

Architecture Overview

Goals

Database service availability of 99.99%

Support for thousands of concurrent requests per second

Second-level failure detection and automatic failover

Backup RPO < 5 minutes and RTO < 30 minutes

A complete monitoring and alerting system

2 Environment Preparation

2.1 Hardware Requirements

Role                    Suggested configuration                Count
MySQL master            1 CPU core / 2 GB RAM / 50 GB SSD      1
MySQL slave             1 CPU core / 2 GB RAM / 50 GB SSD      2
MyCat node              1 CPU core / 2 GB RAM                  2
MHA manager node        1 CPU core / 2 GB RAM                  1
Redis cluster           1 CPU core / 2 GB RAM                  3
Monitoring node         2 CPU cores / 6 GB RAM / 50 GB disk    1
Backup node             1 CPU core / 2 GB RAM / 100 GB disk    1
Load-testing node       1 CPU core / 2 GB RAM                  1
Ansible control node    1 CPU core / 2 GB RAM                  1
Nginx node              1 CPU core / 2 GB RAM                  2
app-server node         1 CPU core / 2 GB RAM                  2

2.2 Network Plan

Hostname          IP address         Role                                            VIP
windows-client    192.168.121.68     Client                                          -
mycat1            192.168.121.180    MyCat node 1                                    192.168.121.188
mycat2            192.168.121.190    MyCat node 2                                    192.168.121.199
mha-manager       192.168.121.220    MHA manager node                                -
mysql-master      192.168.121.221    MySQL master                                    192.168.121.200
mysql-slave1      192.168.121.222    MySQL slave 1 (candidate master)                192.168.121.200
mysql-slave2      192.168.121.223    MySQL slave 2                                   -
ansible-server    192.168.121.210    Ansible control node / backup server / CI/CD    -
sysbench-server   192.168.121.66     Load-testing server                             -
monitor-server    192.168.121.125    Prometheus + Grafana + ELK + Alertmanager       -
redis1            192.168.121.171    Redis node 1                                    -
redis2            192.168.121.172    Redis node 2                                    -
redis3            192.168.121.173    Redis node 3                                    -
nginx1            192.168.121.70     Nginx primary load balancer                     192.168.121.88
nginx2            192.168.121.71     Nginx backup load balancer                      192.168.121.88
app-server1       192.168.121.80     Application server (primary)                    -
app-server2       192.168.121.81     Application server (backup)                     -

2.3 Software Versions

Software         Version
OS               CentOS 7.9
Docker           26.1.4
Docker Compose   1.29.2
Ansible          2.9.27
MySQL            8.0.28
MyCat2           1.21
MHA              0.58
Redis            6.2.6
Prometheus       2.33.5
Grafana          8.4.5
ELK              7.17.0
Keepalived       1.3.5
Sysbench         1.0.20
Nginx            1.27

3 Base Environment Deployment

3.1 OS Initialization

Run the following on all nodes:

# Set a static IP according to the network plan
vi /etc/sysconfig/network-scripts/ifcfg-ens32   # the interface may be ens33 instead of ens32; adjust to match
BOOTPROTO=static            # static IP addressing
NAME=ens32                  # network interface name
DEVICE=ens32                # network interface device name
ONBOOT=yes                  # bring the interface up at boot
IPADDR=192.168.121.180      # static IP address
NETMASK=255.255.255.0       # subnet mask
GATEWAY=192.168.121.2       # gateway; see the VMware virtual network editor settings
DNS1=114.114.114.114        # primary DNS server
DNS2=8.8.8.8                # secondary DNS server

# Restart the network service
systemctl restart network

# Map IPs to hostnames in /etc/hosts
vim /etc/hosts
192.168.121.180  mycat1
192.168.121.190  mycat2
192.168.121.220  mha-manager
192.168.121.221  mysql-master
192.168.121.222  mysql-slave1
192.168.121.223  mysql-slave2
192.168.121.210  ansible-server
192.168.121.66   sysbench-server
192.168.121.125  monitor-server
192.168.121.171  redis1
192.168.121.172  redis2
192.168.121.173  redis3
192.168.121.70   nginx1
192.168.121.71   nginx2
192.168.121.80   app-server1
192.168.121.81   app-server2

# Set the hostname according to the network plan
hostnamectl set-hostname <hostname>
su

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Install basic tools
yum install -y vim net-tools wget curl lrzsz telnet

# Configure time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Reboot so disabling SELinux takes full effect
reboot

# Check the SELinux status
sestatus

If it prints "disabled", SELinux was disabled successfully.

3.2 Deploy Docker


Run on every node that will run containers (except the DNS server node):

# One-step installation
bash <(curl -f -s --connect-timeout 10 --retry 3 https://linuxmirrors.cn/docker.sh) --source mirrors.tencent.com/docker-ce --source-registry docker.1ms.run --protocol https --install-latested true --close-firewall false --ignore-backup-tips

# One-step registry mirror configuration, to avoid image pull timeouts
bash <(curl -sSL https://n3.ink/helper)

# Install docker-compose
yum install -y gcc python3-devel rust cargo
pip3 install --upgrade pip
pip3 install setuptools-rust
pip3 install docker-compose
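A quick way to confirm the toolchain is in place before moving on (version numbers will vary with the install date):

# Verify the Docker and docker-compose installations
docker --version
docker-compose --version
systemctl is-active docker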

3.3 Deploy the Ansible Control Node

Run on ansible-server (192.168.121.210):

yum install -y epel-release
yum install -y ansible

# Configure the Ansible inventory
cat > /etc/ansible/hosts << EOF
[mysql]
192.168.121.221
192.168.121.222
192.168.121.223

[mycat]
192.168.121.180
192.168.121.190

[mha]
192.168.121.220

[redis]
192.168.121.171
192.168.121.172
192.168.121.173

[monitor]
192.168.121.125

[backup]
192.168.121.210

[sysbench]
192.168.121.66

[nginx]
192.168.121.70
192.168.121.71

[app-server]
192.168.121.80
192.168.121.81
EOF

# Set up passwordless SSH
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Distribute the public key to all nodes
for ip in 192.168.121.180 192.168.121.190 192.168.121.220 192.168.121.221 192.168.121.222 192.168.121.223 192.168.121.210 192.168.121.66 192.168.121.125 192.168.121.171 192.168.121.172 192.168.121.173 192.168.121.70 192.168.121.71; do
  ssh-copy-id root@$ip
done

# Test Ansible connectivity
ansible all -m ping
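Besides ping, an ad-hoc command is a convenient way to confirm that the hostname and chrony steps from section 3.1 took effect everywhere; a minimal check:

# Verify hostnames and NTP sync status on all nodes
ansible all -m shell -a 'hostname && chronyc tracking | head -3'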

4 MySQL Cluster Deployment

4.1 Prepare the MySQL Docker Image

Create the Dockerfile on ansible-server:

mkdir -p /data/docker/mysql
cd /data/docker/mysql

[root@ansible-server tasks]# cat /data/docker/mysql/Dockerfile
FROM docker.1ms.run/mysql:8.0.28
# Install basic tools
# Note: adjust the package manager to match the base image; the official mysql images are not
# CentOS-based (Debian-based tags use apt-get, Oracle Linux-based tags use microdnf)
RUN yum clean all && \
    yum makecache fast && \
    yum install -y \
        vim \
        net-tools \
        iputils && \
    yum clean all && \
    rm -rf /var/cache/yum/*
# MySQL configuration
COPY my.cnf /etc/mysql/conf.d/my.cnf
# MHA-related scripts
COPY master_ip_failover /usr/local/bin/
COPY master_ip_online_change /usr/local/bin/
RUN chmod +x /usr/local/bin/master_ip_failover
RUN chmod +x /usr/local/bin/master_ip_online_change
# Time zone
ENV TZ=Asia/Shanghai

# Distribute the Dockerfile to the three MySQL servers
mkdir -p /data/docker/mysql    # create the directory on the three database servers
scp /data/docker/mysql/Dockerfile mysql-master:/data/docker/mysql/Dockerfile
scp /data/docker/mysql/Dockerfile mysql-slave1:/data/docker/mysql/Dockerfile
scp /data/docker/mysql/Dockerfile mysql-slave2:/data/docker/mysql/Dockerfile

Create the MySQL configuration template:

[root@ansible-server tasks]# cat /data/docker/mysql/my.cnf 
[mysqld]
user = mysql
default-storage-engine = InnoDB
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
max_connections = 1000
wait_timeout = 600
interactive_timeout = 600
table_open_cache = 2048
max_heap_table_size = 64M
tmp_table_size = 64M
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
server-id = {{ server_id }}
log_bin = /var/lib/mysql/mysql-bin
binlog_format = row
binlog_rows_query_log_events = 1
expire_logs_days = 7
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
relay_log_recovery = 1
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1
loose_rpl_semi_sync_master_timeout = 1000

[mysqld_safe]
log-error = /var/log/mysql/error.log

Create the MHA scripts:

[root@ansible-server tasks]# cat /data/docker/mysql/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.121.200';                            # the VIP address; set it yourself
my $brdc = '192.168.121.255';                           # the VIP broadcast address
my $ifdev = 'ens32';                                    # the NIC the VIP binds to
my $key = '1';                                          # the virtual-interface index for the VIP
my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";   # expands to: ifconfig ens32:1 192.168.121.200
my $ssh_stop_vip = "/sbin/ifconfig ens32:$key down";    # expands to: ifconfig ens32:1 down
my $exit_code = 0;                                      # default exit code

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $host = $orig_master_host;
        my $ip = $orig_master_ip;
        print "Disabling the VIP on old master: $host \n";
        &stop_vip();
        $exit_code = 0;
    }
    elsif ( $command eq "start" ) {
        my $host = $new_master_host;
        my $ip = $new_master_ip;
        print "Enabling the VIP - $vip on the new master - $host \n";
        &start_vip();
        $exit_code = 0;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        $exit_code = 0;
    }
    else {
        &usage();
        $exit_code = 1;
    }
    return $exit_code;
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($orig_master_host);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

4.2 Deploy the MySQL Cluster with Ansible

Create the Ansible playbook:

mkdir -p /data/ansible/roles/mysql/tasks
cd /data/ansible/roles/mysql/tasks

[root@ansible-server tasks]# pwd
/data/ansible/roles/mysql/tasks
[root@ansible-server tasks]# cat main.yml
- name: Create the MySQL data directory
  file:
    path: /data/mysql/data
    state: directory
    mode: '0755'

- name: Create the MySQL log directory
  file:
    path: /data/mysql/logs
    state: directory
    mode: '0755'

- name: Copy the MySQL configuration file
  template:
    src: /data/docker/mysql/my.cnf
    dest: /data/mysql/my.cnf
    mode: '0644'

- name: Build the MySQL image
  docker_image:
    name: docker.1ms.run/mysql:8.0.28
    build:
      path: /data/docker/mysql
    source: build

- name: Start the MySQL container
  docker_container:
    name: mysql
    image: docker.1ms.run/mysql:8.0.28
    state: started
    restart_policy: always
    ports:
      - "3306:3306"
    volumes:
      - /data/mysql/data:/var/lib/mysql
      - /data/mysql/logs:/var/log/mysql
      - /data/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
    env:
      MYSQL_ROOT_PASSWORD: "{{ mysql_root_password }}"
    privileged: yes

# Create the top-level playbook
[root@ansible-server tasks]# cd /data/ansible
[root@ansible-server ansible]# cat deploy_mysql.yml
- hosts: mysql
  vars:
    mysql_root_password: "123456"
    server_id: "{{ 221 if inventory_hostname == '192.168.121.221' else 222 if inventory_hostname == '192.168.121.222' else 223 }}"
  tasks:
    - include_role:
        name: mysql

Generate a distinct configuration file for each of the three MySQL nodes (the server-id values match the ones set in the playbook):

# Generate the master's configuration
sed 's/{{ server_id }}/221/' /data/docker/mysql/my.cnf > /data/mysql/master_my.cnf
scp /data/mysql/master_my.cnf root@192.168.121.221:/data/mysql/my.cnf

# Generate slave 1's configuration
sed 's/{{ server_id }}/222/' /data/docker/mysql/my.cnf > /data/mysql/slave1_my.cnf
scp /data/mysql/slave1_my.cnf root@192.168.121.222:/data/mysql/my.cnf

# Generate slave 2's configuration
sed 's/{{ server_id }}/223/' /data/docker/mysql/my.cnf > /data/mysql/slave2_my.cnf
scp /data/mysql/slave2_my.cnf root@192.168.121.223:/data/mysql/my.cnf

Run the deployment:

ansible-playbook /data/ansible/deploy_mysql.yml
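To double-check that the containers actually came up, a hedged verification from ansible-server:

# Confirm the mysql container is running on every [mysql] host
ansible mysql -m shell -a 'docker ps --filter name=mysql'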

All tasks reporting "ok" means the deployment is complete.

4.3 Configure MySQL Master-Slave Replication

On the master (192.168.121.221) and the slaves (192.168.121.222, 192.168.121.223):

# Note: auto.cnf stores each instance's UUID, the database's unique identifier
[root@mysql-master]# rm -rf /data/mysql/data/auto.cnf
[root@mysql-slave1]# rm -rf /data/mysql/data    # remove the slave's data directory
[root@mysql-slave2]# rm -rf /data/mysql/data    # remove the slave's data directory

# Seed the slaves with the master's data
[root@mysql-master]# scp -r /data/mysql/data mysql-slave1:/data/mysql/
[root@mysql-master]# scp -r /data/mysql/data mysql-slave2:/data/mysql/

# Enter the container
[root@mysql-master]# docker exec -it mysql bash

# Log in to MySQL
mysql -uroot -p123456

# Create the replication user
CREATE USER 'copy'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE ON *.* TO 'copy'@'%';
FLUSH PRIVILEGES;

# MySQL 8.0 and later default to the caching_sha2_password authentication plugin, which requires
# an encrypted (SSL) connection, or special configuration, before it allows unencrypted
# connections. The slaves connect to the master without SSL and the master does not relax this,
# so authentication would fail. Switch the user to the mysql_native_password plugin instead:
ALTER USER 'copy'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
FLUSH PRIVILEGES;

# Check the master status and record the File and Position values
mysql> show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000003 |      902 |              |                  | 965d216d-7d64-11f0-8771-000c29111b7d:1-8 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

On slave 1 (192.168.121.222):

# Enter the container
docker exec -it mysql bash

# Log in to MySQL
mysql -uroot -p123456

# Stop the slave
mysql> STOP SLAVE;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

# Configure replication
mysql> change master to master_host='192.168.121.221',master_user='copy',master_password='123456',master_port=3306,master_log_file='mysql-bin.000003',master_log_pos=902;
Query OK, 0 rows affected, 9 warnings (0.01 sec)

Parameter notes:
master_host       the master's IP
master_user       the replication user
master_password   the replication user's password
master_port       the master's port
master_log_file   the binlog file the master is currently writing (recorded earlier)
master_log_pos    the master's current binlog write position

# Start the slave
mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

# Check the slave status; both Slave_IO_Running and Slave_SQL_Running must be Yes
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.221
                  Master_User: chenjun
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 1354
               Relay_Log_File: mysql-slave1-relay-bin.000003
                Relay_Log_Pos: 366
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1354
              Relay_Log_Space: 1204
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 221
                  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10
            Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10,
966066a6-7d64-11f0-9760-000c29236169:1-6
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.00 sec)

On slave 2 (192.168.121.223):

# Stop the slave
mysql> stop slave;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

# Configure replication
mysql> change master to
    -> master_host='192.168.121.221',
    -> master_user='copy',
    -> master_password='123456',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000003',
    -> master_log_pos=902;

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.221
                  Master_User: copy
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 1354
               Relay_Log_File: mysql-slave2-relay-bin.000002
                Relay_Log_Pos: 326
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: No    # No: replication did not start
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 1396
                   Last_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '965d216d-7d64-11f0-8771-000c29111b7d:9' at master log mysql-bin.000003, end_log_pos 1187. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 902
              Relay_Log_Space: 995
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 1396
               Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '965d216d-7d64-11f0-8771-000c29111b7d:9' at master log mysql-bin.000003, end_log_pos 1187. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 221
                  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: 
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 250820 01:59:29
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10
            Executed_Gtid_Set: 96621d3c-7d64-11f0-9a8b-000c290f45a7:1-5
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.00 sec)

While setting up the second slave, a small problem appeared: after the first slave was configured, the master's Position had changed, which is why the second slave shows Slave_SQL_Running: No.

Fetch the current File and Position from the master again. On the master (192.168.121.221):

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000003 |     1354 |              |                  | 965d216d-7d64-11f0-8771-000c29111b7d:1-10 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)
# The position changed from the original 902 to 1354
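Because gtid_mode is ON in my.cnf, this position chase can be avoided entirely with GTID auto-positioning, the same mechanism section 5.4.5 uses when reattaching the old master. A hedged alternative for slave 2:

# GTID-based setup: no need to track the master's current file/position
mysql -uroot -p123456 -e "STOP SLAVE; \
  CHANGE MASTER TO master_host='192.168.121.221', master_user='copy', \
  master_password='123456', master_port=3306, master_auto_position=1; \
  START SLAVE;"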

On slave 2 (192.168.121.223):

# Stop the slave
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

# Clear the old replication settings
mysql> reset slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

# Reconfigure replication
mysql> change master to
    ->        master_host='192.168.121.221',
    ->        master_user='copy',
    ->        master_password='123456',
    ->        master_port=3306,
    ->        master_log_file='mysql-bin.000003',
    ->        master_log_pos=1354;    # note: use the master's current position
Query OK, 0 rows affected, 9 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.221
                  Master_User: chenjun
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 1354
               Relay_Log_File: mysql-slave2-relay-bin.000002
                Relay_Log_Pos: 326
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes    # replication now works
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1354
              Relay_Log_Space: 543
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 221
                  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 96621d3c-7d64-11f0-9a8b-000c290f45a7:1-5
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.00 sec)

4.4 Configure Semi-Synchronous Replication

Edit the master's configuration file. On the master (192.168.121.221):

vim /data/mysql/my.cnf

# Add the following parameters under the [mysqld] section:
[mysqld]
# Enable semi-sync on the master (core parameter)
rpl_semi_sync_master_enabled = 1
# Semi-sync timeout in milliseconds (default 10000 ms = 10 s; tune it to your network latency).
# If no slave acknowledges within the timeout, the master degrades to asynchronous replication.
rpl_semi_sync_master_timeout = 30000

Edit the slaves' configuration files. On slave 1 (192.168.121.222) and slave 2 (192.168.121.223):

vim /data/mysql/my.cnf

# Likewise, add the following parameters under the [mysqld] section:
[mysqld]
# Enable semi-sync on the slave (core parameter)
rpl_semi_sync_slave_enabled = 1
# Optional: debug trace level for the semi-sync slave plugin (32 enables detailed tracing)
rpl_semi_sync_slave_trace_level = 32

Restart the master and slave services:

docker restart mysql
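If the plugins were not loaded at startup (for example, the plugin-load line was missing from my.cnf), they can also be installed at runtime before setting the variables; a minimal sketch:

# Load the semi-sync plugins at runtime
# on the master:
mysql -uroot -p123456 -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"
# on each slave:
mysql -uroot -p123456 -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"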

Verify that the persisted configuration took effect:

-- On the master
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_enabled';  -- should return ON
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_timeout';  -- should return the configured timeout

-- On the slaves
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_slave_enabled';   -- should return ON

Confirm that the semi-sync status is active:

-- Master
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status';  -- should return ON

-- Slaves
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_slave_status';   -- should return ON

If it reports an error:

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_slave_status';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | OFF   |
+----------------------------+-------+
1 row in set (0.01 sec)

mysql> STOP SLAVE;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> START SLAVE;
ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO

Reconfigure both slaves' connection to the master; the cause is that the master's binlog changed after the restart:

mysql> change master to
    -> master_host='192.168.121.221',
    -> master_user='copy',
    -> master_password='123456',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000005',
    -> master_log_pos=197;
Query OK, 0 rows affected, 9 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.221
                  Master_User: chenjun
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 197
               Relay_Log_File: mysql-slave1-relay-bin.000002
                Relay_Log_Pos: 326
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 197
              Relay_Log_Space: 543
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 221
                  Master_UUID: ebd87b10-7d6c-11f0-965d-000c29111b7d
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-5
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.00 sec)

Test replication:

# On the master
create database test;

# On the slaves
show databases;

Semi-sync test

On the master (192.168.121.221):

# Record the initial transaction counters for later comparison
mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_yes_tx';    # transactions committed after a slave acknowledgement
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_yes_tx | 6     |
+-----------------------------+-------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_no_tx';     # transactions committed without waiting for an acknowledgement (asynchronous)
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_master_no_tx | 0     |
+----------------------------+-------+
1 row in set (0.00 sec)

# Create a test database and table on the master and insert data
mysql> CREATE DATABASE IF NOT EXISTS test;
Query OK, 1 row affected (0.00 sec)

mysql> USE test;
Database changed
mysql> CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY, val VARCHAR(50));
Query OK, 0 rows affected (0.01 sec)

# Run a transaction
mysql> INSERT INTO t VALUES (1, 'semi-sync-test');
Query OK, 1 row affected (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_yes_tx | 9     |
+-----------------------------+-------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_master_no_tx | 0     |
+----------------------------+-------+
1 row in set (0.00 sec)

# If yes_tx increased, transactions commit only after a slave acknowledgement: semi-sync works.
# If no_tx increased, semi-sync is not in effect (check the slave connections and configuration).
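To observe the degradation path as well, a hedged follow-up experiment: stop the IO thread on both replicas and insert again. The statement should stall for roughly rpl_semi_sync_master_timeout (30 s here) before committing asynchronously, with no_tx increasing:

# On both slaves: stop receiving binlog events
mysql -uroot -p123456 -e "STOP SLAVE IO_THREAD;"

# On the master: this insert should block for ~30 s, then fall back to async
mysql -uroot -p123456 -e "INSERT INTO test.t VALUES (2, 'timeout-test');"
mysql -uroot -p123456 -e "SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_no_tx';"

# Restore replication on both slaves afterwards
mysql -uroot -p123456 -e "START SLAVE IO_THREAD;"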

5 MHA Deployment and Configuration

5.1 Deploy the MHA Nodes

On mha-manager (192.168.121.220):

# Install dependencies (EPEL first; it provides several of the Perl modules)
yum install -y epel-release
yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

# Install MHA Node
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm

# Install MHA Manager
wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

# Create the MHA directories
mkdir -p /etc/mha/mysql_cluster
mkdir -p /var/log/mha/mysql_cluster

# Create the MHA configuration file
cat > /etc/mha/mysql_cluster.cnf << EOF
[server default]
manager_workdir=/var/log/mha/mysql_cluster
manager_log=/var/log/mha/mysql_cluster/manager.log
master_binlog_dir=/var/lib/mysql
user=mha
password=123456
ping_interval=1
remote_workdir=/tmp
repl_user=mha
repl_password=123456
ssh_user=root
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
secondary_check_script=masterha_secondary_check -s 192.168.121.222 -s 192.168.121.223
shutdown_script=""

[server1]
hostname=192.168.121.221
port=3306
candidate_master=1

[server2]
hostname=192.168.121.222
port=3306
candidate_master=1

[server3]
hostname=192.168.121.223
port=3306
candidate_master=0
EOF

# Fetch the VIP management scripts from ansible-server
scp root@192.168.121.210:/data/docker/mysql/master_ip_failover /usr/local/bin/
scp root@192.168.121.210:/data/docker/mysql/master_ip_online_change /usr/local/bin/
chmod +x /usr/local/bin/master_ip_failover
chmod +x /usr/local/bin/master_ip_online_change

Install MHA Node on all MySQL nodes:

# Run on ansible-server
ansible mysql -m shell -a 'yum install -y perl-DBD-MySQL'
cd /data/docker
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
ansible mysql -m copy -a 'src=/data/docker/mha4mysql-node-0.58-0.el7.centos.noarch.rpm dest=/tmp/'
ansible mysql -m shell -a 'rpm -ivh /tmp/mha4mysql-node-0.58-0.el7.centos.noarch.rpm'

5.2 Configure the MySQL Monitoring User

Create the MHA monitoring user on the master (replication propagates it to the slaves):

# Log in to MySQL
mysql -uroot -p123456

# Create the monitoring user
CREATE USER 'mha'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'mha'@'%';
ALTER USER 'mha'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
FLUSH PRIVILEGES;

5.3 Test the MHA Configuration

# Test SSH connectivity; this may fail if passwordless SSH has not been configured
[root@mha-manager mha]# masterha_check_ssh --conf=/etc/mha/mysql_cluster.cnf
Thu Aug 21 15:40:49 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 21 15:40:49 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:49 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:49 2025 - [info] Starting SSH connection tests..
Thu Aug 21 15:40:50 2025 - [debug] 
Thu Aug 21 15:40:49 2025 - [debug]  Connecting via SSH from root@192.168.121.221(192.168.121.221:22) to root@192.168.121.222(192.168.121.222:22)..
Thu Aug 21 15:40:50 2025 - [debug]   ok.
Thu Aug 21 15:40:50 2025 - [debug]  Connecting via SSH from root@192.168.121.221(192.168.121.221:22) to root@192.168.121.223(192.168.121.223:22)..
Thu Aug 21 15:40:50 2025 - [debug]   ok.
Thu Aug 21 15:40:50 2025 - [debug] 
Thu Aug 21 15:40:50 2025 - [debug]  Connecting via SSH from root@192.168.121.222(192.168.121.222:22) to root@192.168.121.221(192.168.121.221:22)..
Thu Aug 21 15:40:50 2025 - [debug]   ok.
Thu Aug 21 15:40:50 2025 - [debug]  Connecting via SSH from root@192.168.121.222(192.168.121.222:22) to root@192.168.121.223(192.168.121.223:22)..
Thu Aug 21 15:40:50 2025 - [debug]   ok.
Thu Aug 21 15:40:51 2025 - [debug] 
Thu Aug 21 15:40:50 2025 - [debug]  Connecting via SSH from root@192.168.121.223(192.168.121.223:22) to root@192.168.121.221(192.168.121.221:22)..
Thu Aug 21 15:40:51 2025 - [debug]   ok.
Thu Aug 21 15:40:51 2025 - [debug]  Connecting via SSH from root@192.168.121.223(192.168.121.223:22) to root@192.168.121.222(192.168.121.222:22)..
Thu Aug 21 15:40:51 2025 - [debug]   ok.
Thu Aug 21 15:40:51 2025 - [info] All SSH connection tests passed successfully.

# Test MySQL replication
[root@mha-manager mha]# masterha_check_repl --conf=/etc/mha/mysql_cluster.cnf
Thu Aug 21 15:40:33 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 21 15:40:33 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:33 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:33 2025 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 21 15:40:34 2025 - [info] GTID failover mode = 1
Thu Aug 21 15:40:34 2025 - [info] Dead Servers:
Thu Aug 21 15:40:34 2025 - [info] Alive Servers:
Thu Aug 21 15:40:34 2025 - [info]   192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info]   192.168.121.222(192.168.121.222:3306)
Thu Aug 21 15:40:34 2025 - [info]   192.168.121.223(192.168.121.223:3306)
Thu Aug 21 15:40:34 2025 - [info] Alive Slaves:
Thu Aug 21 15:40:34 2025 - [info]   192.168.121.222(192.168.121.222:3306)  Version=8.0.28 (oldest major version between slaves) log-bin:enabled
Thu Aug 21 15:40:34 2025 - [info]     GTID ON
Thu Aug 21 15:40:34 2025 - [info]     Replicating from 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info]     Primary candidate for the new Master (candidate_master is set)
Thu Aug 21 15:40:34 2025 - [info]   192.168.121.223(192.168.121.223:3306)  Version=8.0.28 (oldest major version between slaves) log-bin:enabled
Thu Aug 21 15:40:34 2025 - [info]     GTID ON
Thu Aug 21 15:40:34 2025 - [info]     Replicating from 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] Current Alive Master: 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] Checking slave configurations..
Thu Aug 21 15:40:34 2025 - [info]  read_only=1 is not set on slave 192.168.121.222(192.168.121.222:3306).
Thu Aug 21 15:40:34 2025 - [info]  read_only=1 is not set on slave 192.168.121.223(192.168.121.223:3306).
Thu Aug 21 15:40:34 2025 - [info] Checking replication filtering settings..
Thu Aug 21 15:40:34 2025 - [info]  binlog_do_db= , binlog_ignore_db= 
Thu Aug 21 15:40:34 2025 - [info]  Replication filtering check ok.
Thu Aug 21 15:40:34 2025 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 21 15:40:34 2025 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 21 15:40:34 2025 - [info] HealthCheck: SSH to 192.168.121.221 is reachable.
Thu Aug 21 15:40:34 2025 - [info] 
192.168.121.221(192.168.121.221:3306) (current master)
 +--192.168.121.222(192.168.121.222:3306)
 +--192.168.121.223(192.168.121.223:3306)

Thu Aug 21 15:40:34 2025 - [info] Checking replication health on 192.168.121.222..
Thu Aug 21 15:40:34 2025 - [info]  ok.
Thu Aug 21 15:40:34 2025 - [info] Checking replication health on 192.168.121.223..
Thu Aug 21 15:40:34 2025 - [info]  ok.
Thu Aug 21 15:40:34 2025 - [info] Checking master_ip_failover_script status:
Thu Aug 21 15:40:34 2025 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.121.221 --orig_master_ip=192.168.121.221 --orig_master_port=3306 

IN SCRIPT TEST====/sbin/ifconfig ens32:1 down==/sbin/ifconfig ens32:1 192.168.121.200===

Checking the Status of the script.. OK 
Thu Aug 21 15:40:34 2025 - [info]  OK.
Thu Aug 21 15:40:34 2025 - [warning] shutdown_script is not defined.
Thu Aug 21 15:40:34 2025 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

# Start MHA Manager
nohup masterha_manager --conf=/etc/mha/mysql_cluster.cnf > /var/log/mha/mysql_cluster/manager.log 2>&1 &

# Check the MHA status
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:2942) is running(0:PING_OK), master:192.168.121.221

The PING_OK line above indicates the manager started successfully.
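Two management details worth noting, hedged from MHA's usual behavior: masterha_stop shuts the monitor down cleanly, and after a completed failover MHA leaves an <app>.failover.complete lock file in manager_workdir that must be removed before it will monitor the new topology again:

# Stop the manager cleanly
masterha_stop --conf=/etc/mha/mysql_cluster.cnf

# Remove the failover lock file before restarting monitoring after a failover
rm -f /var/log/mha/mysql_cluster/mysql_cluster.failover.complete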

5.4 Failover Test: Simulate a mysql-master Outage and Promote mysql-slave1 to the New Master

5.4.1 Manually bring up the VIP on the MySQL master node

ifconfig ens32:1 192.168.121.200/24

5.4.2 Tail the monitoring log on the mha-manager node

[root@mha-manager mha]# tail -f /var/log/mha/mysql_cluster/manager.log 

5.4.3 Simulate the mysql-master outage by stopping the master

[root@mysql-master ~]# docker stop mysql
mysql

Check whether the VIP has drifted to mysql-slave1.

5.4.4 Check that the reported master is now slave1's IP

[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf 
mysql_cluster (pid:4680) is running(0:PING_OK), master:192.168.121.222

Also check mysql-slave2's view of its master node.

The test is complete: on failure, the master role switched over automatically to slave1, and slave2 now also points at slave1 as its master.

5.4.5 Recover the original mysql-master node

[root@mysql-master ~]# docker start mysql
mysql
[root@mysql-master ~]# docker exec -it mysql bash
root@mysql-master:/# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.28 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> change master to master_host='192.168.121.222',master_user='mha',master_password='123456',master_port=3306,master_auto_position=1;
Query OK, 0 rows affected, 8 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.222
                  Master_User: mha
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000008
          Read_Master_Log_Pos: 1855
               Relay_Log_File: mysql-master-relay-bin.000002
                Relay_Log_Pos: 420
        Relay_Master_Log_File: mysql-bin.000008
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1855
              Relay_Log_Space: 637
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 222
                  Master_UUID: e6b13ba9-7d6c-11f0-8a0b-000c29236169
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-10,
e6b13ba9-7d6c-11f0-8a0b-000c29236169:1-4,
ebd87b10-7d6c-11f0-965d-000c29111b7d:1-56
                Auto_Position: 1
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set, 1 warning (0.00 sec)

5.4.6 Restart the MHA manager and check the current master node

[root@mha-manager mha]# systemctl restart mha
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf 
mysql_cluster (pid:5425) is running(0:PING_OK), master:192.168.121.222

5.5 Configure MHA to Start Automatically

[root@mha-manager mha]# nohup masterha_manager --conf=/etc/mha/mysql_cluster.cnf > /var/log/mha/mysql_cluster/manager.log 2>&1 &
[1] 3978
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:3978) is running(0:PING_OK), master:192.168.121.221
[root@mha-manager mha]# vim /etc/systemd/system/mha.service
[root@mha-manager mha]# systemctl daemon-reload
[root@mha-manager mha]# systemctl enable mha
Created symlink from /etc/systemd/system/multi-user.target.wants/mha.service to /etc/systemd/system/mha.service.
[root@mha-manager mha]# systemctl start mha
[root@mha-manager mha]# systemctl status mha
● mha.service - MHA Manager for MySQL Cluster
   Loaded: loaded (/etc/systemd/system/mha.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2025-08-20 02:24:22 CST; 4s ago
 Main PID: 4164 (perl)
    Tasks: 1
   Memory: 16.9M
   CGroup: /system.slice/mha.service
           └─4164 perl /usr/bin/masterha_manager --conf=/etc/mha/mysql_cluster.cnf

Aug 20 02:24:22 mha-manager systemd[1]: Started MHA Manager for MySQL Cluster.
Aug 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Aug 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
Aug 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
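The content of /etc/systemd/system/mha.service is not shown above; a minimal sketch consistent with the systemctl status output (the ExecStart path matches the CGroup line; the restart policy is an assumption):

cat > /etc/systemd/system/mha.service << 'EOF'
[Unit]
Description=MHA Manager for MySQL Cluster
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/masterha_manager --conf=/etc/mha/mysql_cluster.cnf
# Restart policy is an assumption, not from the original article
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF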

6 MyCat2 Deployment and Configuration

6.1 Install MyCat2

6.1.1 Deploy the MySQL databases that MyCat2 needs on the mycat1 and mycat2 servers

[root@mycat1 ~]# docker run -d --name mysql --restart=always -p 3306:3306 -v /data/mysql/data:/var/lib/mysql -v /data/mysql/logs:/var/log/mysql -e MYSQL_ROOT_PASSWORD=123456 docker.1ms.run/mysql:8.0.28

[root@mycat2 ~]# docker run -d --name mysql --restart=always -p 3306:3306 -v /data/mysql/data:/var/lib/mysql -v /data/mysql/logs:/var/log/mysql -e MYSQL_ROOT_PASSWORD=123456 docker.1ms.run/mysql:8.0.28

6.1.2 Install the Java environment

# Install the Java environment MyCat needs
[root@mycat1 ~]# yum install -y java
[root@mycat2 ~]# yum install -y java

6.1.3 Download the MyCat installation package and jar


Link: https://pan.baidu.com/s/1w9hr2EH9Cpqt6LFjn8MPrw?pwd=63hu  Extraction code: 63hu

6.1.4 Unzip the MyCat ZIP package

[root@mycat1 ~]# yum install -y unzip
[root@mycat2 ~]# yum install -y unzip

[root@mycat1 ~]# unzip mycat2-install-template-1.21.zip
[root@mycat2 ~]# unzip mycat2-install-template-1.21.zip

6.1.5 Move the unzipped mycat directory to /usr/local/

[root@mycat1 ~]# mv mycat /usr/local/
[root@mycat2 ~]# mv mycat /usr/local/

6.1.6 Put the jar into /usr/local/mycat/lib

[root@mycat1 ~]# mv mycat2-1.22-release-jar-with-dependencies-2022-10-13.jar /usr/local/mycat/lib/
[root@mycat2 ~]# mv mycat2-1.22-release-jar-with-dependencies-2022-10-13.jar /usr/local/mycat/lib/

6.1.7 Make the files in the bin directory executable, to avoid startup errors

[root@mycat1 ~]# cd /usr/local/mycat/
[root@mycat1 mycat]# ls
bin  conf  lib  logs
[root@mycat1 mycat]# chmod +x bin/*
[root@mycat1 mycat]# cd bin/
[root@mycat1 bin]# ll
total 2588
-rwxr-xr-x 1 root root  15666 Mar  5 2021 mycat
-rwxr-xr-x 1 root root   3916 Mar  5 2021 mycat.bat
-rwxr-xr-x 1 root root 281540 Mar  5 2021 wrapper-aix-ppc-32
-rwxr-xr-x 1 root root 319397 Mar  5 2021 wrapper-aix-ppc-64
-rwxr-xr-x 1 root root 253808 Mar  5 2021 wrapper-hpux-parisc-64
-rwxr-xr-x 1 root root 140198 Mar  5 2021 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Mar  5 2021 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Mar  5 2021 wrapper-linux-x86-64
-rwxr-xr-x 1 root root 114052 Mar  5 2021 wrapper-macosx-ppc-32
-rwxr-xr-x 1 root root 233604 Mar  5 2021 wrapper-macosx-universal-32
-rwxr-xr-x 1 root root 253432 Mar  5 2021 wrapper-macosx-universal-64
-rwxr-xr-x 1 root root 112536 Mar  5 2021 wrapper-solaris-sparc-32
-rwxr-xr-x 1 root root 148512 Mar  5 2021 wrapper-solaris-sparc-64
-rwxr-xr-x 1 root root 110992 Mar  5 2021 wrapper-solaris-x86-32
-rwxr-xr-x 1 root root 204800 Mar  5 2021 wrapper-windows-x86-32.exe
-rwxr-xr-x 1 root root 220672 Mar  5 2021 wrapper-windows-x86-64.exe

[root@mycat2 ~]# cd /usr/local/mycat/
[root@mycat2 mycat]# ls
bin  conf  lib  logs
[root@mycat2 mycat]# chmod +x bin/*
[root@mycat2 mycat]# cd bin/
[root@mycat2 bin]# ll
total 2588
-rwxr-xr-x 1 root root  15666 Mar  5 2021 mycat
-rwxr-xr-x 1 root root   3916 Mar  5 2021 mycat.bat
-rwxr-xr-x 1 root root 281540 Mar  5 2021 wrapper-aix-ppc-32
-rwxr-xr-x 1 root root 319397 Mar  5 2021 wrapper-aix-ppc-64
-rwxr-xr-x 1 root root 253808 Mar  5 2021 wrapper-hpux-parisc-64
-rwxr-xr-x 1 root root 140198 Mar  5 2021 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Mar  5 2021 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Mar  5 2021 wrapper-linux-x86-64
-rwxr-xr-x 1 root root 114052 Mar  5 2021 wrapper-macosx-ppc-32
-rwxr-xr-x 1 root root 233604 Mar  5 2021 wrapper-macosx-universal-32
-rwxr-xr-x 1 root root 253432 Mar  5 2021 wrapper-macosx-universal-64
-rwxr-xr-x 1 root root 112536 Mar  5 2021 wrapper-solaris-sparc-32
-rwxr-xr-x 1 root root 148512 Mar  5 2021 wrapper-solaris-sparc-64
-rwxr-xr-x 1 root root 110992 Mar  5 2021 wrapper-solaris-x86-32
-rwxr-xr-x 1 root root 204800 Mar  5 2021 wrapper-windows-x86-32.exe
-rwxr-xr-x 1 root root 220672 Mar  5 2021 wrapper-windows-x86-64.exe

6.1.8 Add mycat2 to the PATH environment variable

[root@mycat1 ~]# echo "PATH=/usr/local/mycat/bin/:$PATH" >> /root/.bashrc
[root@mycat1 ~]# PATH=/usr/local/mycat/bin/:$PATH

[root@mycat2 ~]# echo "PATH=/usr/local/mycat/bin/:$PATH" >> /root/.bashrc
[root@mycat2 ~]# PATH=/usr/local/mycat/bin/:$PATH

6.1.9 Edit the default datasource file prototypeDs.datasource.json, then start MyCat (it connects to the local MySQL running in Docker)

[root@mycat1 ~]# cd /usr/local/mycat/
[root@mycat1 mycat]# ls
bin  conf  lib  logs
[root@mycat1 mycat]# cd conf/datasources/
[root@mycat1 datasources]# ls
prototypeDs.datasource.json
[root@mycat1 datasources]# vim prototypeDs.datasource.json

[root@mycat2 ~]# cd /usr/local/mycat/
[root@mycat2 mycat]# ls
bin  conf  lib  logs
[root@mycat2 mycat]# cd conf/datasources/
[root@mycat2 datasources]# ls
prototypeDs.datasource.json
[root@mycat2 datasources]# vim prototypeDs.datasource.json

{
  "dbType":"mysql",
  "idleTimeout":60000,
  "initSqls":[],
  "initSqlsGetConnection":true,
  "instanceType":"READ_WRITE",
  "maxCon":1000,
  "maxConnectTimeout":3000,
  "maxRetryCount":5,
  "minCon":1,
  "name":"prototypeDs",
  "password":"123456",    # the local MySQL password
  "type":"JDBC",
  "url":"jdbc:mysql://localhost:3306/mysql?useUnicode=true&serverTimezone=Asia/Shanghai&characterEncoding=UTF-8",    # localhost means the local machine; change only if needed
  "user":"root",    # the local MySQL user
  "weight":0
}

[root@mycat2 datasources]# mycat start
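To confirm MyCat came up on each node, a hedged check of the wrapper status and the data port (8066, as used throughout this article):

# Verify the MyCat process and listening port
mycat status
ss -lntp | grep 8066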

6.2 Create an authorized user on the MySQL cluster (mysql-master, mysql-slave1, mysql-slave2) so MyCat2 can connect

# Log in on mysql-master
mysql -uroot -p123456

mysql> CREATE USER 'mycat2'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mycat2'@'%';
Query OK, 0 rows affected (0.00 sec)

# The client and server authentication plugins can be incompatible: MySQL 8.0 and later default
# to caching_sha2_password, which older clients and middleware (such as MyCat) may not support.
# Switch the user's authentication plugin:
mysql> ALTER USER 'mycat2'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.01 sec)

mysql> select user,host from mysql.user;
+------------------+-----------+
| user             | host      |
+------------------+-----------+
| chenjun          | %         |
| mha              | %         |
| mycat2           | %         |
| root             | %         |
| test             | %         |
| mysql.infoschema | localhost |
| mysql.session    | localhost |
| mysql.sys        | localhost |
| root             | localhost |
+------------------+-----------+
9 rows in set (0.00 sec)

Master-slave replication is already in place across the MySQL cluster, so creating the authorized user on the mysql-master node alone is enough.

6.3 Verify database access

[root@mycat1 datasources]# mysql -umycat2 -p123456 -h 192.168.121.221
[root@mycat1 datasources]# mysql -umycat2 -p123456 -h 192.168.121.222
[root@mycat1 datasources]# mysql -umycat2 -p123456 -h 192.168.121.223

6.4 Log in to the MyCat2 client and create the logical database testdb (first create the testdb database on mysql-master and insert one test row)

----------------------- mysql-master ---------------------------------
mysql> create database testdb;
Query OK, 1 row affected (0.01 sec)

mysql> use testdb;
Database changed

mysql> create table test_table(id int, name varchar(50),age int);
Query OK, 0 rows affected (0.01 sec)

mysql> insert into test_table(id,name,age) values(1,'test',20);
Query OK, 1 row affected (0.00 sec)

mysql> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| test_table       |
+------------------+
1 row in set (0.00 sec)
----------------------- mysql-master ---------------------------------
----------------------- mycat1 / mycat2 ------------------------------
[root@mycat1 datasources]# mysql -uroot -p123456 -P8066 -h 192.168.121.180
# Create the logical database testdb
create database testdb;
[root@mycat2 datasources]# mysql -uroot -p123456 -P8066 -h 192.168.121.190
# Create the logical database testdb
create database testdb;

After creation, a logical-schema file testdb.schema.json is generated:

[root@mycat1 conf]# cd /usr/local/mycat/conf/schemas/
[root@mycat1 schemas]# ls
information_schema.schema.json  mysql.schema.json  testdb.schema.json

[root@mycat2 conf]# cd /usr/local/mycat/conf/schemas/
[root@mycat2 schemas]# ls
information_schema.schema.json  mysql.schema.json  testdb.schema.json

6.5 Edit testdb.schema.json to map the logical database to the backend cluster

[root@mycat1 schemas]# vim testdb.schema.json
[root@mycat2 schemas]# vim testdb.schema.json

{
  "customTables":{},
  "globalTables":{},
  "normalProcedures":{},
  "normalTables":{},
  "schemaName":"testdb",
  "targetName":"cluster",    # add this line; it points at the backend datasource cluster named "cluster"
  "shardingTables":{},
  "views":{}
}

# Restart mycat
[root@mycat1 schemas]# mycat restart
[root@mycat2 schemas]# mycat restart

6.6 MyCat2 also supports configuring datasources and clusters through SQL annotations, which is more flexible than the traditional JSON configuration files. The full steps to add the datasources and cluster via annotations:

# Run on both mycat servers; using Navicat to execute these statements is recommended
# Log in to mycat
[root@mycat1 ~]# mysql -uroot -p123456 -P8066 -h 192.168.121.190

# Add the read-write datasource, i.e. the MySQL master. The earlier MHA failover test already
# promoted slave1, so the read-write datasource points at the VIP address 192.168.121.200.
/*+ mycat:createDataSource{
"name":"rw",
"url":"jdbc:mysql://192.168.121.200:3306/testdb?useSSL=false&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true",
"user":"mycat2",
"password":"123456"
} */;

# Add a read-only datasource for the original MySQL master 192.168.121.221
/*+ mycat:createDataSource{
"name":"r1",
"url":"jdbc:mysql://192.168.121.221:3306/testdb?useSSL=false&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true",
"user":"mycat2",
"password":"123456"
} */;

# Add a read-only datasource for the MySQL slave 192.168.121.223
/*+ mycat:createDataSource{
"name":"r2",
"url":"jdbc:mysql://192.168.121.223:3306/testdb?useSSL=false&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true",
"user":"mycat2",
"password":"123456"
} */;

# List the datasources
/*+ mycat:showDataSources{} */;

# Create the cluster
/*!mycat:createCluster{"name":"cluster","masters":["rw"],"replicas":["r1","r2"]} */;

# Show the cluster
/*+ mycat:showClusters{} */;

6.7 Verify Read/Write Splitting

[root@mycat1 schemas]# mysql -uroot -p123456 -P8066 -h192.168.121.180
[root@mycat2 schemas]# mysql -uroot -p123456 -P8066 -h192.168.121.190

---------------------------mycat1-------------------------------------------
[root@mycat1 datasources]# mysql -uroot -p123456 -P8066 -h192.168.121.180
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.33-mycat-2.0 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| testdb             |
+--------------------+
4 rows in set (0.01 sec)

MySQL [(none)]> use testdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [testdb]> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| test_table       |
+------------------+
1 row in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+------+------+
| id   | name | age  |
+------+------+------+
|    1 | test |   20 |
+------+------+------+
1 row in set (0.07 sec)
---------------------------mycat1-------------------------------------------

---------------------------mycat2-------------------------------------------
[root@mycat2 datasources]# mysql -uroot -p123456 -P8066 -h192.168.121.190
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.7.33-mycat-2.0 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| testdb             |
+--------------------+
4 rows in set (0.11 sec)

MySQL [(none)]> use testdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [testdb]> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| test_table       |
+------------------+
1 row in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+------+------+
| id   | name | age  |
+------+------+------+
|    1 | test |   20 |
+------+------+------+
1 row in set (0.07 sec)
---------------------------mycat2-------------------------------------------

Both MyCat nodes' logical database testdb is now successfully backed by the real MySQL databases.

6.7.1 To make the read/write-splitting effect more visible, stop replication on the replicas one at a time

# On mysql-slave2: stop replication
mysql> stop slave;

# On mysql-slave1 (the current master): insert a row
mysql> insert into test_table(id,name,age)values(3,'li',23);
Query OK, 1 row affected (0.00 sec)

# On mysql-master (the original master, now a slave): stop replication
mysql> stop slave;

# On mysql-slave1 (the current master): insert another row
mysql> insert into test_table(id,name,age)values(4,'w',24);
Query OK, 1 row affected (0.01 sec)

# Log in to the MyCat client and query the test_table data
[root@mycat1 datasources]# mysql -uroot -p123456 -P8066 -h 192.168.121.180
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.33-mycat-2.0 MySQL Community Server - GPL

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use testdb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
|    4 | w        |   24 |
+------+----------+------+
4 rows in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
|    4 | w        |   24 |
+------+----------+------+
4 rows in set (0.01 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
+------+----------+------+
3 rows in set (0.01 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
+------+----------+------+
3 rows in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
|    4 | w        |   24 |
|    5 | r        |   25 |
+------+----------+------+
5 rows in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
|    4 | w        |   24 |
+------+----------+------+
4 rows in set (0.00 sec)

MySQL [testdb]> select * from test_table;
+------+----------+------+
| id   | name     | age  |
+------+----------+------+
|    1 | test     |   20 |
|    2 | chenzong |   21 |
|    3 | li       |   23 |
+------+----------+------+
3 rows in set (0.01 sec)

The returned rows keep changing between queries: this is exactly the read load balancing we want.

Reads against the MyCat logical table test_table are load-balanced across the three real backend databases,

while writes to it are forwarded only to the backend master.
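Another quick way to watch the balancing without stopping replication is to compare @@server_id across repeated reads through the Keepalived VIP, since each backend was given a distinct server-id earlier. This is a hedged sketch: depending on the MyCat2 version, system-variable reads may be answered by the proxy itself rather than routed to a backend, in which case fall back to the stop-replication method above.

# Repeated reads; a changing value indicates reads rotating across backends (-N suppresses the header)
for i in $(seq 1 6); do
  mysql -uroot -p123456 -P8066 -h 192.168.121.188 -N -e 'select @@server_id;'
done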

7 Configure Keepalived for MyCat High Availability

7.1 Install Keepalived on both MyCat nodes (run on ansible-server):

[root@ansible-server ~]# ansible mycat -m shell -a 'yum install -y keepalived'
[WARNING]: Consider using the yum module rather than running 'yum'.  If you need to use
command because yum is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
192.168.121.180 | CHANGED | rc=0 >>
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
正在解决依赖关系
--> 正在检查事务
---> 软件包 keepalived.x86_64.0.1.3.5-19.el7 将被 安装
--> 正在处理依赖关系 libnetsnmpmibs.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在处理依赖关系 libnetsnmpagent.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在处理依赖关系 libnetsnmp.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在检查事务
---> 软件包 net-snmp-agent-libs.x86_64.1.5.7.2-49.el7_9.4 将被 安装
--> 正在处理依赖关系 libsensors.so.4()(64bit),它被软件包 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64 需要
---> 软件包 net-snmp-libs.x86_64.1.5.7.2-49.el7_9.4 将被 安装
--> 正在检查事务
---> 软件包 lm_sensors-libs.x86_64.0.3.4.0-8.20160601gitf9185e5.el7_9.1 将被 安装
--> 解决依赖关系完成依赖关系解决================================================================================Package             架构   版本                                  源       大小
================================================================================
正在安装:keepalived          x86_64 1.3.5-19.el7                          base    332 k
为依赖而安装:lm_sensors-libs     x86_64 3.4.0-8.20160601gitf9185e5.el7_9.1    updates  42 knet-snmp-agent-libs x86_64 1:5.7.2-49.el7_9.4                    updates 707 knet-snmp-libs       x86_64 1:5.7.2-49.el7_9.4                    updates 752 k事务概要
================================================================================
安装  1 软件包 (+3 依赖软件包)总下载量:1.8 M
安装大小:6.0 M
Downloading packages:
--------------------------------------------------------------------------------
总计                                               164 kB/s | 1.8 MB  00:11     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction正在安装    : 1:net-snmp-libs-5.7.2-49.el7_9.4.x86_64                     1/4 正在安装    : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7_9.1.x86_64   2/4 正在安装    : 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64               3/4 正在安装    : keepalived-1.3.5-19.el7.x86_64                              4/4 验证中      : 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64               1/4 验证中      : keepalived-1.3.5-19.el7.x86_64                              2/4 验证中      : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7_9.1.x86_64   3/4 验证中      : 1:net-snmp-libs-5.7.2-49.el7_9.4.x86_64                     4/4 已安装:keepalived.x86_64 0:1.3.5-19.el7                                              作为依赖被安装:lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7_9.1                   net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.4                                 net-snmp-libs.x86_64 1:5.7.2-49.el7_9.4                                       完毕!
192.168.121.190 | CHANGED | rc=0 >>
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
正在解决依赖关系
--> 正在检查事务
---> 软件包 keepalived.x86_64.0.1.3.5-19.el7 将被 安装
--> 正在处理依赖关系 libnetsnmpmibs.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在处理依赖关系 libnetsnmpagent.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在处理依赖关系 libnetsnmp.so.31()(64bit),它被软件包 keepalived-1.3.5-19.el7.x86_64 需要
--> 正在检查事务
---> 软件包 net-snmp-agent-libs.x86_64.1.5.7.2-49.el7_9.4 将被 安装
--> 正在处理依赖关系 libsensors.so.4()(64bit),它被软件包 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64 需要
---> 软件包 net-snmp-libs.x86_64.1.5.7.2-49.el7_9.4 将被 安装
--> 正在检查事务
---> 软件包 lm_sensors-libs.x86_64.0.3.4.0-8.20160601gitf9185e5.el7_9.1 将被 安装
--> 解决依赖关系完成依赖关系解决================================================================================Package             架构   版本                                  源       大小
================================================================================
正在安装:keepalived          x86_64 1.3.5-19.el7                          base    332 k
为依赖而安装:lm_sensors-libs     x86_64 3.4.0-8.20160601gitf9185e5.el7_9.1    updates  42 knet-snmp-agent-libs x86_64 1:5.7.2-49.el7_9.4                    updates 707 knet-snmp-libs       x86_64 1:5.7.2-49.el7_9.4                    updates 752 k事务概要
================================================================================
安装  1 软件包 (+3 依赖软件包)总下载量:1.8 M
安装大小:6.0 M
Downloading packages:
--------------------------------------------------------------------------------
总计                                               291 kB/s | 1.8 MB  00:06     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : 1:net-snmp-libs-5.7.2-49.el7_9.4.x86_64                     1/4
  正在安装    : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7_9.1.x86_64   2/4
  正在安装    : 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64               3/4
  正在安装    : keepalived-1.3.5-19.el7.x86_64                              4/4
  验证中      : 1:net-snmp-agent-libs-5.7.2-49.el7_9.4.x86_64               1/4
  验证中      : keepalived-1.3.5-19.el7.x86_64                              2/4
  验证中      : lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7_9.1.x86_64   3/4
  验证中      : 1:net-snmp-libs-5.7.2-49.el7_9.4.x86_64                     4/4

已安装:
  keepalived.x86_64 0:1.3.5-19.el7

作为依赖被安装:
  lm_sensors-libs.x86_64 0:3.4.0-8.20160601gitf9185e5.el7_9.1
  net-snmp-agent-libs.x86_64 1:5.7.2-49.el7_9.4
  net-snmp-libs.x86_64 1:5.7.2-49.el7_9.4

完毕!

7.2 Create the keepalived_script user:

[root@ansible-server ~]# ansible mycat -m shell -a 'useradd -r -s /sbin/nologin keepalived_script'
192.168.121.190 | CHANGED | rc=0 >>

192.168.121.180 | CHANGED | rc=0 >>

7.3 Configure mycat1 (192.168.121.180):

vim /etc/keepalived/keepalived.conf    # type dG first to delete the existing content, then paste:

! Configuration File for keepalived
# Keepalived configuration for MyCat high availability:
# VRRP floats the VIP between the master and backup nodes.

# Global definitions
global_defs {
    router_id MYCAT1                # unique router ID, usually the hostname; must differ per node
    script_user keepalived_script   # run check scripts as this unprivileged user
    enable_script_security          # refuse to run scripts with unsafe permissions
}

# VRRP health-check script definition
vrrp_script check_mycat {
    script "/etc/keepalived/check_mycat.sh"  # path to the MyCat health-check script
    interval 2                               # run the check every 2 seconds
    weight 2                                 # priority +2 while the script returns 0 (healthy)
}

# VRRP instance (the HA cluster definition)
vrrp_instance VI_1 {
    state MASTER               # node role: MASTER or BACKUP
    interface ens32            # NIC to bind; change to your actual interface (ens32, ens33, ...)
    virtual_router_id 51       # virtual router ID (0-255); must match on master and backup
    priority 100               # node priority (1-254); MASTER higher than BACKUP (e.g. 100 vs 90)
    advert_int 1               # VRRP advertisement interval (seconds); must match on both nodes
    authentication {           # must be identical on master and backup
        auth_type PASS         # password authentication
        auth_pass 1111         # password (max 8 characters)
    }
    virtual_ipaddress {        # the VIP that floats between the nodes on failover
        192.168.121.188        # VIP used by clients
    }
    track_script {             # reference the health-check script defined above
        check_mycat
    }
}

7.4 Create the health-check script

vim /etc/keepalived/check_mycat.sh

#!/bin/bash
# Stop keepalived (releasing the VIP) when MyCat's port 8066 is closed
if ! nc -z 127.0.0.1 8066; then
    systemctl stop keepalived
fi

chmod +x /etc/keepalived/check_mycat.sh
chown keepalived_script:keepalived_script /etc/keepalived/check_mycat.sh

7.5 Configure mycat2 (192.168.121.190):

# Copy the config file and health-check script to mycat2
[root@mycat1 keepalived]# scp keepalived.conf check_mycat.sh  mycat2:/etc/keepalived/
keepalived.conf                                               100% 1766     2.0MB/s   00:00
check_mycat.sh

# Then edit the copied config on mycat2:

! Configuration File for keepalived
# Keepalived configuration for MyCat high availability (backup node)

global_defs {
    router_id MYCAT2                # changed: unique router ID per node
    script_user keepalived_script
    enable_script_security
}

vrrp_script check_mycat {
    script "/etc/keepalived/check_mycat.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP               # changed to BACKUP
    interface ens32
    virtual_router_id 51
    priority 90                # lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.121.188        # same VIP as on the master
    }
    track_script {
        check_mycat
    }
}

7.6 Start keepalived

From the Ansible server, enable and start keepalived on both nodes in one batch. A status of "running" means the start succeeded; run ip a on mycat1 and confirm the extra .188 VIP is present.

[root@ansible-server ~]# ansible mycat -m shell -a 'systemctl enable keepalived'
192.168.121.180 | CHANGED | rc=0 >>

192.168.121.190 | CHANGED | rc=0 >>

[root@ansible-server ~]# ansible mycat -m shell -a 'systemctl start keepalived'
192.168.121.180 | CHANGED | rc=0 >>

192.168.121.190 | CHANGED | rc=0 >>

[root@ansible-server ~]# ansible mycat -m shell -a 'systemctl status keepalived'
192.168.121.180 | CHANGED | rc=0 >>
192.168.121.180 | CHANGED | rc=0 >>
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2025-08-25 10:57:00 CST; 58s ago
 Main PID: 3110 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─3110 /usr/sbin/keepalived -D
           ├─3111 /usr/sbin/keepalived -D
           └─3112 /usr/sbin/keepalived -D

8月 25 10:57:40 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:42 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:44 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:46 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:48 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:50 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:52 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:54 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:56 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:58 mycat1 Keepalived_vrrp[3112]: /etc/keepalived/check_mycat.sh exited with status 1
192.168.121.190 | CHANGED | rc=0 >>
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 一 2025-08-25 10:57:00 CST; 58s ago
 Main PID: 3150 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─3150 /usr/sbin/keepalived -D
           ├─3151 /usr/sbin/keepalived -D
           └─3153 /usr/sbin/keepalived -D

8月 25 10:57:40 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:42 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:44 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:46 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:48 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:50 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:52 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:54 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:56 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1
8月 25 10:57:58 mycat2 Keepalived_vrrp[3153]: /etc/keepalived/check_mycat.sh exited with status 1

7.7 Simulate node failure and VIP failover

7.7.1 Simulate a power-off of the master

# Shut down the master, i.e. the server currently holding the VIP
[root@mycat1 keepalived]# shutdown -h now

On mycat2 you can now see that the VIP has floated over.

Once mycat1 is back and keepalived is started again, the VIP automatically returns to the master (its priority 100 preempts the backup's 90):

[root@mycat1 ~]# systemctl start keepalived
[root@mycat1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:75:16:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.180/24 brd 192.168.121.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.121.188/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe75:16d7/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:33:2c:be:1a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:33ff:fe2c:be1a/64 scope link
       valid_lft forever preferred_lft forever
5: veth44727c1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether ba:b1:cf:2f:6c:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b8b1:cfff:fe2f:6c1d/64 scope link
       valid_lft forever preferred_lft forever
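To quantify the failover time (the project targets second-level detection and switchover), a simple probe loop like the following can be run from any LAN host while mycat1 is powered off. This is a test sketch, not part of the original setup:

#!/bin/bash
# Probe the MyCat VIP once per second and log state changes,
# giving a rough measure of the VIP failover time.
VIP=192.168.121.188
PORT=8066
last=""
while true; do
    if nc -z -w 1 "$VIP" "$PORT" 2>/dev/null; then state=UP; else state=DOWN; fi
    if [ "$state" != "$last" ]; then
        echo "$(date '+%F %T') VIP $VIP:$PORT is $state"
        last=$state
    fi
    sleep 1
done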

8 Backup System Deployment

8.1 Configure real-time backup with rsync + sersync

On the backup server (192.168.121.210):

# Install rsync
[root@ansible-server ]# yum install -y rsync

# Configure the rsync daemon
[root@ansible-server ]# vim /etc/rsyncd.conf
uid = root
gid = root
use chroot = no
max connections = 200
timeout = 300
pid file = /var/run/rsyncd.pid
lock file = /var/run/rsync.lock
log file = /var/log/rsyncd.log

[mysql_backup]
path = /data/backup/mysql
comment = MySQL backup
read only = no
list = yes
hosts allow = 192.168.121.0/24
auth users = backup
secrets file = /etc/rsync.passwd

# Create the backup directory
[root@ansible-server ]# mkdir -p /data/backup/mysql

# Configure the rsync credentials (daemon side: user:password)
[root@ansible-server ]# echo "backup:123456" > /etc/rsync.passwd
[root@ansible-server ]# chmod 600 /etc/rsync.passwd

# Start the rsync daemon
[root@ansible-server ]# systemctl start rsyncd
[root@ansible-server ]# systemctl enable rsyncd

Install sersync on the MySQL master (192.168.121.221):

Shared via netdisk: sersync2.5.4_64bit_binary_stable_final.tar.gz
Link: https://pan.baidu.com/s/1uWRYq1IMAEQ8g4o9J_34bw?pwd=ea9p  code: ea9p

[root@mysql-master]# yum install -y inotify-tools rsync

# Upload the package to the server and unpack it
[root@mysql-master]# tar xzf sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@mysql-master]# rm -rf sersync2.5.4_64bit_binary_stable_final.tar.gz
[root@mysql-master]# mv GNU-Linux-x86/ /usr/local/sersync
[root@mysql-master]# cd /usr/local/sersync
[root@mysql-master]# vim confxml.xml

<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="false"/>
    <filter start="false">
        <exclude expression="(.*)\.svn"></exclude>
        <exclude expression="(.*)\.gz"></exclude>
        <exclude expression="^info/*"></exclude>
        <exclude expression="^static/*"></exclude>
    </filter>
    <inotify>
        <delete start="true"/>
        <createFolder start="true"/>
        <createFile start="true"/>
        <closeWrite start="true"/>
        <moveFrom start="true"/>
        <moveTo start="true"/>
        <attrib start="true"/>
        <modify start="true"/>
    </inotify>
    <sersync>
        <localpath watch="/data/mysql/data">    <!-- local directory to watch -->
            <remote ip="192.168.121.210" name="mysql_backup"/>    <!-- target server IP and its rsync [module] -->
        </localpath>
        <rsync>
            <commonParams params="-artuz"/>
            <!-- rsync authentication: users and passwordfile must match the daemon;
                 the password file must be mode 600 -->
            <auth start="true" users="backup" passwordfile="/etc/rsync.passwd"/>
            <userDefinedPort start="false" port="873"/>
            <timeout start="true" time="100"/>
            <ssh start="false"/>
        </rsync>
        <failLog path="/tmp/rsync_fail_log.sh" timeToExecute="60"/>
        <crontab start="true" schedule="600">
            <crontabfilter start="false">
                <exclude expression="*.php"></exclude>
                <exclude expression="info/*"></exclude>
            </crontabfilter>
        </crontab>
        <plugin start="false" name="command"/>
    </sersync>
    <plugin name="command">
        <param prefix="/bin/sh" suffix="" ignoreError="true"/>
        <filter start="false">
            <include expression="(.*)\.php"/>
            <include expression="(.*)\.sh"/>
        </filter>
    </plugin>
    <plugin name="socket">
        <localpath watch="/opt/tongbu">
            <deshost ip="192.168.138.20" port="8009"/>
        </localpath>
    </plugin>
    <plugin name="refreshCDN">
        <localpath watch="/data0/htdocs/cms.xoyo.com/site/">
            <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
            <sendurl base="http://pic.xoyo.com/cms"/>
            <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
        </localpath>
    </plugin>
</head>

# Configure the rsync password (client side: the password only)
[root@mysql-master]# echo "123456" > /etc/rsync.passwd
[root@mysql-master]# chmod 600 /etc/rsync.passwd

# Create a systemd unit for sersync
[root@mysql-master]# vim /etc/systemd/system/sersync.service

[Unit]
Description=Sersync for real-time file synchronization
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/sersync/sersync2 -d -r -o /usr/local/sersync/confxml.xml
Restart=on-failure
RestartSec=5
ExecStop=/bin/pkill -f 'sersync2 -d -r -o /usr/local/sersync/confxml.xml'

[Install]
WantedBy=multi-user.target

# Start sersync
[root@mysql-master]# systemctl daemon-reload
[root@mysql-master]# systemctl enable sersync
[root@mysql-master]# systemctl start sersync

# Run one manual full sync to test
[root@mysql-master data]# cd /data/mysql/data && rsync -artuz -R --delete ./ backup@192.168.121.210::mysql_backup --password-file=/etc/rsync.passwd

If the files appear on the backup server, real-time synchronization is working.
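Beyond eyeballing the directory, a dry-run rsync makes a handy consistency check: if the source and the backup module are in sync, it reports no differing paths. A small sketch, assuming the same module and credentials as above:

#!/bin/bash
# Compare /data/mysql/data on the master against the rsync module on the
# backup server; with -n (dry run) nothing is transferred, and every line
# printed is a path that would still need syncing.
diff_lines=$(rsync -artuzn --delete --out-format='%n' \
    /data/mysql/data/ \
    backup@192.168.121.210::mysql_backup \
    --password-file=/etc/rsync.passwd | wc -l)
if [ "$diff_lines" -eq 0 ]; then
    echo "backup is in sync"
else
    echo "WARNING: $diff_lines paths differ"
fi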

9 Redis Cache Cluster Deployment

9.1 Prepare the Redis Docker image

On ansible-server:

# Create the working directory
mkdir -p /data/docker/redis
cd /data/docker/redis

# Pull the image
[root@ansible-server redis]# docker pull docker.1ms.run/redis:6.2.6
6.2.6: Pulling from redis
1fe172e4850f: Pull complete 
6fbcd347bf99: Pull complete 
993114c67627: Pull complete 
2a560260ca39: Pull complete 
b7179886a292: Pull complete 
8901ffe2be84: Pull complete 
Digest: sha256:b7fd1a2c89d09a836f659d72c52d27b9f71202c97014a47639f87c992e8c0f1b
Status: Downloaded newer image for docker.1ms.run/redis:6.2.6
docker.1ms.run/redis:6.2.6
[root@ansible-server redis]# docker save -o redis6.2.6.tar docker.1ms.run/redis:6.2.6  # save the image to a local archive
[root@ansible-server redis]# ls
Dockerfile  redis6.2.6.tar
# Save download time by shipping the archive instead of pulling three times
[root@ansible-server redis]# scp redis6.2.6.tar redis1:~/    # distribute the image archive to redis1
redis6.2.6.tar                                                                                         100%  111MB 144.7MB/s   00:00
[root@ansible-server redis]# scp redis6.2.6.tar redis2:~/    # distribute the image archive to redis2
redis6.2.6.tar                                                                                         100%  111MB  89.5MB/s   00:01
[root@ansible-server redis]# scp redis6.2.6.tar redis3:~/    # distribute the image archive to redis3
redis6.2.6.tar                                                                                         100%  111MB  94.5MB/s   00:01
# Load the archive on all nodes in one batch
[root@ansible-server redis]# ansible redis -m shell -a 'docker load -i redis6.2.6.tar'
192.168.121.171 | CHANGED | rc=0 >>
Loaded image: docker.1ms.run/redis:6.2.6
192.168.121.173 | CHANGED | rc=0 >>
Loaded image: docker.1ms.run/redis:6.2.6
192.168.121.172 | CHANGED | rc=0 >>
Loaded image: docker.1ms.run/redis:6.2.6
[root@ansible-server redis]# ansible redis -m shell -a 'docker images'
192.168.121.171 | CHANGED | rc=0 >>
REPOSITORY             TAG       IMAGE ID       CREATED       SIZE
docker.1ms.run/redis   6.2.6     3c3da61c4be0   3 years ago   113MB
192.168.121.172 | CHANGED | rc=0 >>
REPOSITORY             TAG       IMAGE ID       CREATED       SIZE
docker.1ms.run/redis   6.2.6     3c3da61c4be0   3 years ago   113MB
192.168.121.173 | CHANGED | rc=0 >>
REPOSITORY             TAG       IMAGE ID       CREATED       SIZE
docker.1ms.run/redis   6.2.6     3c3da61c4be0   3 years ago   113MB

# Write the Redis config on redis1
[root@redis1 conf]# vim /data/redis/conf/redis.conf
bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile /data/redis-server.log
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-disable-tcp-nodelay no
replica-priority 100
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage yes
cluster-replica-no-failover no
requirepass 123456

# Copy the config to the other two nodes
scp /data/redis/conf/redis.conf redis2:/data/redis/conf/redis.conf
scp /data/redis/conf/redis.conf redis3:/data/redis/conf/redis.conf
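The config file and the playbook below both expect /data/redis/conf and /data/redis/data to exist on every node. If editing or copying redis.conf fails with "No such file or directory", create the directories on all three nodes first; a one-off ad-hoc sketch from ansible-server:

# Create the config and data directories on all redis hosts at once
ansible redis -m file -a 'path=/data/redis/conf state=directory mode=0755'
ansible redis -m file -a 'path=/data/redis/data state=directory mode=0755'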

9.2 Deploy the Redis cluster

Create the Ansible role and playbook:

[root@ansible-server redis]# mkdir -p /data/ansible/roles/redis/tasks
[root@ansible-server redis]# cd /data/ansible/roles/redis/tasks/
[root@ansible-server tasks]# vim main.yml

- name: Create the redis data directory
  file:
    path: /data/redis/data
    state: directory
    mode: '0755'

- name: Render the redis config file
  template:
    src: /data/docker/redis/redis.conf
    dest: /data/redis/conf/redis.conf
    mode: '0644'

- name: Start the Redis container
  docker_container:
    name: redis
    image: docker.1ms.run/redis:6.2.6
    state: started
    restart_policy: always
    ports:              # note: port mappings are ignored in host network mode
      - "6379:6379"
      - "16379:16379"
    volumes:
      - /data/redis/data:/data
      - /data/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf
    network_mode: host
    # Note: the container runs with these CLI flags; the mounted redis.conf
    # only takes effect if the command starts with its path
    # (e.g. "redis-server /usr/local/etc/redis/redis.conf").
    command:
      --requirepass "123456"
      --masterauth "123456"
      --cluster-enabled yes

[root@ansible-server tasks]# cd /data/ansible
[root@ansible-server ansible]# vim deploy_redis.yml

- hosts: redis
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
  tasks:
    - include_role:
        name: redis
[root@ansible-server ansible]# ansible-playbook /data/ansible/deploy_redis.yml

PLAY [redis] ****************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************
ok: [192.168.121.172]
ok: [192.168.121.173]
ok: [192.168.121.171]

TASK [include_role : redis]

TASK [Create the redis data directory]
ok: [192.168.121.173]
ok: [192.168.121.172]
ok: [192.168.121.171]

TASK [Render the redis config file]
ok: [192.168.121.172]
ok: [192.168.121.171]
ok: [192.168.121.173]

TASK [redis : Start the Redis container]
ok: [192.168.121.172]
ok: [192.168.121.171]
ok: [192.168.121.173]

PLAY RECAP ******************************************************************************************************************************
192.168.121.171            : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.172            : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.173            : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Fix for the Python-version mismatch error:

# Add the following under "hosts:" in the playbook
vars:
  ansible_python_interpreter: /usr/bin/python3.6

# And install the Python Docker SDK on the target hosts
pip3 install requests docker

9.3 Initialize the Redis cluster

[root@redis1 ~]# docker exec -it redis bash
root@redis1:/data# 
root@redis1:/data# redis-cli -a 123456 --cluster create 192.168.121.171:6379 192.168.121.172:6379 192.168.121.173:6379 --cluster-replicas 0
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: c01f96b413891ed642d6bfe999c9f4d6cfdcd5a3 192.168.121.171:6379slots:[0-5460] (5461 slots) master
M: af5084c94efc219472ea3ecc6b29b446e710af61 192.168.121.172:6379slots:[5461-10922] (5462 slots) master
M: 95bec852ef26753351d1fee5b4b8c84a77b6ccb8 192.168.121.173:6379slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.121.171:6379)
M: c01f96b413891ed642d6bfe999c9f4d6cfdcd5a3 192.168.121.171:6379slots:[0-5460] (5461 slots) master
M: 95bec852ef26753351d1fee5b4b8c84a77b6ccb8 192.168.121.173:6379slots:[10923-16383] (5461 slots) master
M: af5084c94efc219472ea3ecc6b29b446e710af61 192.168.121.172:6379slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Verify the cluster state
root@redis1:/data# redis-cli -a 123456 cluster info
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:17
cluster_stats_messages_pong_sent:18
cluster_stats_messages_sent:35
cluster_stats_messages_ping_received:16
cluster_stats_messages_pong_received:17
cluster_stats_messages_meet_received:2
cluster_stats_messages_received:35
# Verify the cluster nodes
root@redis1:/data# redis-cli -a 123456 cluster nodes
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
c01f96b413891ed642d6bfe999c9f4d6cfdcd5a3 192.168.121.171:6379@16379 myself,master - 0 1756095963000 1 connected 0-5460
95bec852ef26753351d1fee5b4b8c84a77b6ccb8 192.168.121.173:6379@16379 master - 0 1756095965396 3 connected 10923-16383
af5084c94efc219472ea3ecc6b29b446e710af61 192.168.121.172:6379@16379 master - 0 1756095964391 2 connected 5461-10922
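With all 16384 slots covered, a quick smoke test confirms that cluster redirection works: the -c flag makes redis-cli follow MOVED redirects, so a key hashing to any of the three masters can be written from one node. A short sketch, run inside any redis container (docker exec -it redis bash):

# -c enables cluster mode: the client follows MOVED redirects
# to whichever master owns the key's hash slot.
redis-cli -c -a 123456 -h 192.168.121.171 <<'EOF'
set user:1001 "hello"
get user:1001
cluster keyslot user:1001
EOF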

10 Nginx Active/Standby Load Balancing (on nginx1/2)

10.1 Install and start Nginx 1.27

The steps are identical on both Nginx nodes.

[root@nginx1 ]# docker run -d --name nginx -v ng_conf:/etc/nginx -v ng_html:/usr/share/nginx -p 80:80 --restart=always docker.1ms.run/nginx:1.27

[root@nginx2 ]# docker run -d --name nginx -v ng_conf:/etc/nginx -v ng_html:/usr/share/nginx -p 80:80 --restart=always docker.1ms.run/nginx:1.27

From the Windows client, browse to http://192.168.121.70 (nginx1) and http://192.168.121.71 (nginx2); the Nginx welcome page confirms the install.

10.2 Configure Nginx load balancing (pointing at the app servers)

Edit the main nginx.conf (http block) and the default server block, adding the load-balancing rules (identical on both nodes):

[root@nginx1 _data]# vim /var/lib/docker/volumes/ng_conf/_data/nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    # Buffer for ordinary request headers (default 1k); raise to 4k or 8k
    client_header_buffer_size 4k;
    # Buffers for large request headers (default 4 x 8k); 4 x 16k = 64k total
    large_client_header_buffers 4 16k;

    # 1. Define the application server pool (the upstream name is arbitrary)
    upstream app_servers {
        server 192.168.121.80:8080;
        server 192.168.121.81:8080;
    }

    include /etc/nginx/conf.d/*.conf;
}

[root@nginx1 _data]# vim /var/lib/docker/volumes/ng_conf/_data/conf.d/default.conf

server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    # Front-end document root
    root   /usr/share/nginx/html;
    index  index.html;

    # Front-end access logs
    access_log  /var/log/nginx/frontend_access.log  main;
    error_log   /var/log/nginx/frontend_error.log;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    location /api/houses {
        # Forward requests to the application server pool
        proxy_pass http://app_servers/api/houses;
        # Pass the real client IP and headers (optional, for app-side logging)
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

# Restart nginx to apply the configuration
[root@nginx1 _data]# docker restart nginx
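Once both app servers are up (deployed later in the pipeline), round-robin can be verified from the shell: with the default upstream policy, the backend access logs on app-server1 and app-server2 should each record roughly half of the requests. A hedged sketch, assuming the app exposes /api/houses as configured above:

# Fire ten requests at nginx1 and show only the HTTP status codes
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "request $i -> %{http_code}\n" \
        http://192.168.121.70/api/houses
done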

10.3 Install keepalived (VIP and active/standby failover)

Run from the Ansible server. If you keep enable_script_security in the config below, also create the keepalived_script user on both nginx nodes, as in step 7.2.

[root@ansible-server ]# ansible nginx -m shell -a 'yum install -y keepalived'

10.4 Write the nginx health-check scripts

[root@nginx1 keepalived]# cd /etc/keepalived/
[root@nginx1 keepalived]# vim check_nginx.sh

#!/bin/bash
# Check that the HTTP service answers with a 200 status code
curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:80 | grep -q "200"
if [ $? -ne 0 ]; then
    systemctl stop keepalived
fi

[root@nginx1 keepalived]# vim check_keepalived.sh

#!/bin/bash
# Bring keepalived back once nginx answers on port 80 again
if nc -z 127.0.0.1 80; then
    systemctl start keepalived
fi

# Cron job: once nginx is healthy again, restore keepalived
[root@nginx1 keepalived]# crontab -e
* * * * * sh /etc/keepalived/check_keepalived.sh

10.5 Edit keepalived.conf

[root@nginx1 keepalived]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
# Keepalived configuration for the Nginx pair

global_defs {
    router_id nginx1                # unique router ID per node
    script_user keepalived_script
    enable_script_security
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"  # path to the nginx health-check script
    interval 2                               # check every 2 seconds
    weight 2                                 # priority +2 while healthy
}

vrrp_instance VI_1 {
    state MASTER               # MASTER on nginx1, BACKUP on nginx2
    interface ens32            # NIC used for VRRP advertisements
    virtual_router_id 70       # must differ from the MyCat cluster's ID 51 on the same LAN
    priority 100               # MASTER higher than BACKUP (e.g. 100 vs 90)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # VIP used in this lab run; note that the network plan reserves
        # 192.168.121.88 for the Nginx VIP, and .80 is also app-server1's
        # address -- use .88 in a real deployment to avoid the collision.
        192.168.121.80
    }
    track_script {
        check_nginx
    }
}

10.6 Copy the health-check scripts and keepalived config to nginx2

[root@nginx1 keepalived]# scp /etc/keepalived/* nginx2:/etc/keepalived

10.7 Adjust the keepalived config on nginx2

[root@nginx2 keepalived]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
# Keepalived configuration for the Nginx pair (backup node)

global_defs {
    router_id nginx2                # changed: unique router ID per node
    script_user keepalived_script
    enable_script_security
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP               # changed to BACKUP
    interface ens32
    virtual_router_id 70
    priority 80                # changed: lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.121.80         # same VIP as on nginx1 (see the note in 10.5)
    }
    track_script {
        check_nginx
    }
}

10.8 Start keepalived on both nodes and enable it at boot

[root@nginx1 keepalived]# systemctl start keepalived
[root@nginx1 keepalived]# systemctl enable keepalived

[root@nginx2 keepalived]# systemctl start keepalived
[root@nginx2 keepalived]# systemctl enable keepalived

10.9 Check that the master holds the VIP

[root@nginx1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ca:f3:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.70/24 brd 192.168.121.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.121.80/32 scope global ens32       # the VIP is present -- success
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:07:76:f8:f2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:7ff:fe76:f8f2/64 scope link
       valid_lft forever preferred_lft forever
283: veth05c8b6c@if282: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 4a:e0:1c:b5:a7:92 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::48e0:1cff:feb5:a792/64 scope link
       valid_lft forever preferred_lft forever

10.10 Failure-Scenario Testing (HA verification)

(1) Simulate an nginx crash

[root@nginx1 keepalived]# docker stop nginx
nginx
[root@nginx1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000link/ether 00:0c:29:ca:f3:80 brd ff:ff:ff:ff:ff:ffinet 192.168.121.70/24 brd 192.168.121.255 scope global noprefixroute ens32valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:07:76:f8:f2 brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft foreverinet6 fe80::42:7ff:fe76:f8f2/64 scope link valid_lft forever preferred_lft forever
[root@nginx1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since 二 2025-09-02 15:11:40 CST; 10s ago
  Process: 26942 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 26943 (code=exited, status=0/SUCCESS)
    Tasks: 0
   Memory: 156.0K
   CGroup: /system.slice/keepalived.service

9月 02 15:11:38 nginx1 Keepalived_vrrp[26945]: (VI_1): ip address associated with VRID 51 not present in MASTER advert : 192.168.121.80
9月 02 15:11:38 nginx1 Keepalived_vrrp[26945]: bogus VRRP packet received on ens32 !!!
9月 02 15:11:38 nginx1 Keepalived_vrrp[26945]: VRRP_Instance(VI_1) Dropping received VRRP packet...
9月 02 15:11:39 nginx1 Keepalived[26943]: Stopping
9月 02 15:11:39 nginx1 systemd[1]: Stopping LVS and VRRP High Availability Monitor...
9月 02 15:11:39 nginx1 Keepalived_vrrp[26945]: VRRP_Instance(VI_1) sent 0 priority
9月 02 15:11:39 nginx1 Keepalived_vrrp[26945]: VRRP_Instance(VI_1) removing protocol VIPs.
9月 02 15:11:40 nginx1 Keepalived_vrrp[26945]: Stopped
9月 02 15:11:40 nginx1 Keepalived[26943]: Stopped Keepalived v1.3.5 (03/19,2017), git commit v1.3.5-6-g6fa32f2
9月 02 15:11:40 nginx1 systemd[1]: Stopped LVS and VRRP High Availability Monitor.

# Check the network state on nginx2
[root@nginx2 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000link/ether 00:0c:29:4c:1a:5a brd ff:ff:ff:ff:ff:ffinet 192.168.121.71/24 brd 192.168.121.255 scope global noprefixroute ens32valid_lft forever preferred_lft foreverinet 192.168.121.80/32 scope global ens32valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:33:f1:03:e3 brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft foreverinet6 fe80::42:33ff:fef1:3e3/64 scope link valid_lft forever preferred_lft forever
13: veth527268b@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 2a:64:a8:a1:9d:0d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::2864:a8ff:fea1:9d0d/64 scope link
       valid_lft forever preferred_lft forever

# Start nginx on nginx1 again and check whether the VIP moves back
[root@nginx1 keepalived]# docker start nginx
nginx
[root@nginx1 keepalived]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2025-09-02 15:13:01 CST; 1s ago
  Process: 28161 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 28162 (keepalived)
    Tasks: 3
   Memory: 1.5M
   CGroup: /system.slice/keepalived.service
           ├─28162 /usr/sbin/keepalived -D
           ├─28163 /usr/sbin/keepalived -D
           └─28164 /usr/sbin/keepalived -D

9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: VRRP_Instance(VI_1) setting protocol VIPs.
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: Sending gratuitous ARP on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: Sending gratuitous ARP on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: Sending gratuitous ARP on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: Sending gratuitous ARP on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: Sending gratuitous ARP on ens32 for 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: (VI_1): ip address associated with VRID 51 not present in MASTER advert : 192.168.121.80
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: bogus VRRP packet received on ens32 !!!
9月 02 15:13:03 nginx1 Keepalived_vrrp[28164]: VRRP_Instance(VI_1) Dropping received VRRP packet...
[root@nginx1 keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000link/ether 00:0c:29:ca:f3:80 brd ff:ff:ff:ff:ff:ffinet 192.168.121.70/24 brd 192.168.121.255 scope global noprefixroute ens32valid_lft forever preferred_lft foreverinet 192.168.121.80/32 scope global ens32valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 02:42:07:76:f8:f2 brd ff:ff:ff:ff:ff:ffinet 172.17.0.1/16 brd 172.17.255.255 scope global docker0valid_lft forever preferred_lft foreverinet6 fe80::42:7ff:fe76:f8f2/64 scope link valid_lft forever preferred_lft forever
289: veth6e8d184@if288: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default link/ether 7a:73:a9:c7:8a:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0inet6 fe80::7873:a9ff:fec7:8ae1/64 scope link valid_lft forever preferred_lft forever

Test result: the Nginx HA pair works. When the master goes down, the backup takes over the VIP; after nginx recovers on the master, the cron job restarts keepalived and the VIP floats back to the master.

11 Containerized CI/CD Pipeline

11.1 Architecture

设计 "Git代码仓库 → Jenkins 构建 → Harbor 镜像仓库 → Docker 容器部署" 的流水线,适配现有 app-serverMySQLRedis 等组件。

11.1.1 Component selection

Component      | Role                                           | Node (reusing existing)                   | Network (IP / port)
Git repo       | Stores application code (incl. Docker files)   | External (Github.com)                     | -
Jenkins        | Pipeline core (pull code, build, deploy)       | ansible-server (192.168.121.210)          | 192.168.121.210:8080
Harbor         | Private Docker registry (stores app images)    | ansible-server (192.168.121.210)          | 192.168.121.210:80 (HTTP)
Docker         | Container runtime (build / run images)         | ansible-server, app-server1, app-server2  | -
Docker Compose | Container orchestration (multi-container apps) | app-server1, app-server2                  | -

11.2 Prerequisites

11.2.1 Install Docker Compose (ansible-server, app-server1, app-server2)

The steps are identical on all three nodes.

[root@app-server1 ~]#  yum install -y docker-compose
[root@app-server1 ~]#  yum install -y gcc python3-devel rust cargo
[root@app-server1 ~]#  pip3 install --upgrade pip
[root@app-server1 ~]#  pip3 install setuptools-rust
[root@app-server1 ~]#  pip3 install docker-compose
[root@app-server1 ~]# docker-compose -v
/usr/local/lib/python3.6/site-packages/paramiko/transport.py:32: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography will remove support for Python 3.6.
  from cryptography.hazmat.backends import default_backend
docker-compose version 1.29.2, build unknown

11.3 Deploy the Harbor Private Registry (ansible-server)

11.3.1 Download and configure Harbor

# 1. Download the Harbor offline installer (v2.8.2 stable)
[root@ansible-server]# wget https://github.com/goharbor/harbor/releases/download/v2.8.2/harbor-offline-installer-v2.8.2.tgz

# 2. Unpack and enter the directory
[root@ansible-server]# tar -zxvf harbor-offline-installer-v2.8.2.tgz -C /opt/
[root@ansible-server]# cd /opt/harbor

# 3. Copy the config template
[root@ansible-server harbor]# cp harbor.yml.tmpl harbor.yml

# 4. Generate a self-signed SSL/TLS certificate (.pem) and private key (.pem) with OpenSSL
[root@ansible-server harbor]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout key.pem -x509 -days 365 -out cert.pem
[root@ansible-server harbor]# mkdir /certificate
[root@ansible-server harbor]# mkdir /private
[root@ansible-server harbor]# mv cert.pem /certificate/
[root@ansible-server harbor]# mv key.pem /private/

# 5. Edit the configuration
[root@ansible-server harbor]# vim harbor.yml
# Change the following key settings (leave the rest at their defaults):
hostname: 192.168.121.210       # ansible-server's IP
http:
  port: 80                      # HTTP port (testing; use HTTPS in production)
harbor_admin_password: 123456   # administrator password (customize)
data_volume: /opt/harbor/data   # image data storage path

# ------------------------------- full config -------------------------------
hostname: 192.168.121.210
http:
  port: 80
https:
  port: 443
  certificate: /certificate/cert.pem        # path to the self-signed certificate
  private_key: /private/key.pem             # path to the private key
harbor_admin_password: 123456
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
  conn_max_lifetime: 5m
  conn_max_idle_time: 0
data_volume: /opt/harbor/data
trivy:
  ignore_unfixed: false
  skip_update: false
  offline_scan: false
  security_check: vuln
  insecure: false
jobservice:
  max_job_workers: 10
  logger_sweeper_duration: 1 #days
notification:
  webhook_job_max_retry: 3
  webhook_job_http_client_timeout: 3 #seconds
log:
  level: info
  local:
    rotate_count: 50
    rotate_size: 200M
    location: /var/log/harbor
_version: 2.8.0
proxy:
  http_proxy:
  https_proxy:
  no_proxy:
  components:
    - core
    - jobservice
    - trivy
upload_purging:
  enabled: true
  age: 168h
  interval: 24h
  dryrun: false
cache:
  enabled: false
  expire_hours: 24
# ------------------------------------ end ----------------------------------

11.3.2 Install and start Harbor

# 1. Run the installer (pulls dependency images and starts everything)
./install.sh

# 2. Verify Harbor's status (all containers should be Up)
docker-compose -f /opt/harbor/docker-compose.yml ps

# 3. Open the Harbor console
#    Browse to http://192.168.121.210
#    Account: admin, password: 123456

# 4. Create the application image project (e.g. "app-demo")
#    After logging in, click "New Project" -> name: app-demo -> public: optional
#    (public for testing, private for production)
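Before wiring Jenkins in, it is worth pushing one image by hand to prove the registry path works end to end. A sketch reusing the redis image already on the host (project name app-demo as created above; with a self-signed certificate the Docker daemon must either trust the CA or list 192.168.121.210 as an insecure registry in /etc/docker/daemon.json):

# Log in, tag a local image into the app-demo project, and push it
docker login 192.168.121.210 -u admin -p 123456
docker tag docker.1ms.run/redis:6.2.6 192.168.121.210/app-demo/redis:6.2.6
docker push 192.168.121.210/app-demo/redis:6.2.6
# The image should now appear under app-demo in the Harbor UI.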

11.4 Deploy the Jenkins Pipeline Core (ansible-server)

11.4.1 Prepare local packages: Maven and docker-compose

Download the Maven package and the docker-compose binary and upload them to ansible-server.

Shared via netdisk: apache-maven-3.9.11-bin.tar.gz
Link: https://pan.baidu.com/s/1OTfzJlnIv7RRA97BDVLh_A?pwd=miyg  code: miyg

Shared via netdisk: docker-compose-linux-x86_64
Link: https://pan.baidu.com/s/15dmjeQdSTa0pwxSYpqb-Yg?pwd=m5pk  code: m5pk

[root@ansible-server jenkins_home]# mv apache-maven-3.9.11-bin.tar.gz /data/jenkins_home/

11.4.2 Write the Dockerfile (using the local Maven package)

[root@ansible-server jenkins_home]# vim Dockerfile

# Base image: Jenkins on Debian Stretch
FROM docker.1ms.run/jenkins/jenkins:2.150.3

# Switch to root
USER root

# Stretch is EOL: point apt at archive.debian.org and relax validity checks
RUN echo "deb http://archive.debian.org/debian stretch main" > /etc/apt/sources.list && \
    echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/01ignore-valid-until && \
    echo 'APT::Get::AllowUnauthenticated "true";' > /etc/apt/apt.conf.d/02allow-unauth

RUN apt update --allow-insecure-repositories && \
    apt install -y --allow-unauthenticated \
        coreutils \
        bash \
    && rm -rf /var/lib/apt/lists/*

# Install Maven from the local package
COPY apache-maven-3.9.11-bin.tar.gz /opt/
RUN tar -zxvf /opt/apache-maven-3.9.11-bin.tar.gz -C /opt/ && \
    rm -rf /opt/apache-maven-3.9.11-bin.tar.gz && \
    mv /opt/apache-maven-3.9.11 /opt/maven

# Install docker-compose from the local binary
COPY docker-compose-linux-x86_64 /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose

# Environment variables
ENV MAVEN_HOME=/opt/maven
ENV PATH=$MAVEN_HOME/bin:$PATH

# Verify the installs
RUN date --version && \
    bash --version && \
    mvn -v && \
    docker-compose --version

11.4.3 Build the image

[root@ansible-server jenkins_home]# docker build -t jenkins-with-tools:v1 .
[+] Building 7.5s (11/11) FINISHED                                                                    docker:default
 => [internal] load build definition from Dockerfile                                                            0.0s
 => => transferring dockerfile: 1.25kB                                                                          0.0s
 => [internal] load metadata for docker.1ms.run/jenkins/jenkins:2.150.3                                         0.0s
 => [internal] load .dockerignore                                                                               0.0s
 => => transferring context: 2B                                                                                 0.0s
 => CACHED [1/6] FROM docker.1ms.run/jenkins/jenkins:2.150.3                                                    0.0s
 => [internal] load build context                                                                               0.0s
 => => transferring context: 54B                                                                                0.0s
 => [2/6] RUN echo "deb http://archive.debian.org/debian stretch main" > /etc/apt/sources.list && ...           0.2s
 => [3/6] RUN apt update --allow-insecure-repositories && apt install -y --allow-unauthenticated ...            6.5s
 => [4/6] COPY apache-maven-3.9.11-bin.tar.gz /opt/                                                             0.0s
 => [5/6] RUN tar -zxvf /opt/apache-maven-3.9.11-bin.tar.gz -C /opt/ && ...                                     0.3s
 => [6/6] RUN date --version && bash --version && mvn -v ...                                                    0.4s
 => exporting to image                                                                                          0.1s
 => => exporting layers                                                                                         0.1s
 => => writing image sha256:027f72e610772c4378f125cc1cfbd4113b65f0193ae35f98f91e463b0369071e                    0.0s
 => => naming to docker.io/library/jenkins-with-tools:v1

11.4.4 Start the container

# Wrap the run command in a shell script for easy editing and re-runs
[root@ansible-server jenkins_home]# vim jenkins_up.sh

docker run -d --name jenkins \
    -p 8080:8080 \
    -p 50000:50000 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /usr/bin/docker:/usr/bin/docker \
    -v /etc/docker/daemon.json:/etc/docker/daemon.json \
    -v jenkins_home:/var/jenkins_home \
    -e TZ="Asia/Shanghai" \
    --restart=always \
    jenkins-with-tools:v1

echo "jenkins started"

[root@ansible-server jenkins_home]# sh jenkins_up.sh

# Fetch the initial administrator password
[root@ansible-server ~]# docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword

11.4.5 Initialize the Jenkins console

  1. Open Jenkins: browse to http://192.168.121.210:8080, paste the initial password, then click "Continue".

  2. Install plugins: choose "Install suggested plugins" (if some fail, just continue to the next step).

  3. Create the administrator account: enter a username (e.g. admin), password and e-mail to finish initialization.

11.4.6 Install the required pipeline plugins

Shared via netdisk: jenkins_plugins.tar
Link: https://pan.baidu.com/s/1e4QN2oHPJYChCWmG6mQ9LA?pwd=cfds  code: cfds
Download the pre-packaged plugin bundle.

# Enter the plugins directory inside the Docker volume
[root@ansible-server plugins]# cd /var/lib/docker/volumes/jenkins_home/_data/plugins
# Upload jenkins_plugins.tar to this directory, then unpack it
[root@ansible-server plugins]# tar -xvf jenkins_plugins.tar
# Restart the Jenkins container
[root@ansible-server plugins]# docker restart jenkins

The plugin-management page should now show the plugins loaded.

11.4.7 Configure Jenkins credentials (key step)

Three kinds of credentials are needed:

  1. Git repository credential (for pulling code):

    • "Credentials" -> "System" -> "Global credentials" -> "Add Credentials".
    • Kind: "SSH Username with private key".
    • ID: your choice (e.g. git-cred), description: Git repository credential -> save.

Copy the SSH key into the credential.

  2. Harbor credential (for pushing images):

        Add Credentials -> kind: "Username with password".

        Username: the Harbor admin account (admin), password: 123456.

        ID: harbor-cred, description: Harbor registry credential -> save.

  3. App-server SSH credential (for deployment to app-server1/app-server2):

        First generate an SSH key on ansible-server; this was already done during environment initialization, so it is not repeated here.

        Copy the public key to app-server1 and app-server2:

ssh-copy-id app-server1
ssh-copy-id app-server2
  • In Jenkins, add a credential of kind "SSH Username with private key".
  • Username: root (the app-server login user); private key: paste the content of ~/.ssh/id_rsa.
[root@ansible-server jenkins_home]# cat ~/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIJJwIBAAKCAgEA1vin00eN+N45FmyAWlVyCH6vwJK23Vbme4lQJmYMFhxL/cOd
5X7RIy0Sjk+RofjjRtHalXpwaJIs3z1hMHjkCtO6bC7HofEU4OEzeFAdAHz2+kxE
BdCfP92LR9avj1rRPkt1pP/w0z6qNruY2iBp9leQhKCxQIbDeOh/xBa/HofenU5m
eLoXt/M5kJ0OXRtmaDXGSVdDW0Xfh3MokBaFIk6x1o1EHO/q78wqJS+tfL0cwi09
PUgo7tR+opNlCjdBu+llExVz8JLBxA9wH+LA9l/evXgS+JhgAUALtA/gUjBRKtwg
eAq6xcHHTJMKoSvz/U1qVj4NHt4w2Q+ODttE44exhDWQtgc6287STmSlAeq+k3iV
2OthrXDvlOpe/WwHyJU6DIhqqZg9kmybgjSoxQGjoe5ptac32LusxQr7fNRfL6AB
wkZlXPLyoFGmbhsbtyGZQDafZ+vvPWXCVvRAFob6PHMj5staAx3JZBtiaYOZFvQR
6u7sysNo5abCihPYqBMcgFnPeT5B1XMyYssvk4ReJXGrJBt2/E1WLhZEKil+ba2/
Q36dstGSai9YRJaGQlngA7TYWVxY2dcwJNFQ3AG21aamWev+WTY3daOX+NSnxd2Y
1Tu+GG/LDg55jPaHfo/WmlBlo9p7CzvqThthO1HKOWsuKp0pswBz8tsD7jcCAwEA
AQKCAgAyH755/Bg1bBNhkCEJbxzssCVowIzU5TtOmMDQg0DUMvrhC6iYZ056ZjsK
ZbEuVCsiSzItYmQtbc/6qYQs2jNJ9v5j1TCFKQJWQQxQRFXO1FR+HiRKOs+3A4BD
WuKKiYF6hfvDYk4T42uq4WkNiztJzjcLRbCuu/1+BrAr16Xuh323rh0kjzeSk6rb
dlNwEEB7kfZPCYLSGGO7YHWXyzh1vGWpAj1chfCAw1kcXJaWHD5FZGkADgBFV9TD
MZ0AmcvA9fW0Um87K+z0OylItgWKLOZxxTqfLmBMSlOwQ3dpkoyKctM7Sj0seTdw
OmTjGa2FZXSi8Ur75JD9O6xC+IuCGGOFGzgfjRb0RBl2Wvv2ZjxQtWmWNGW051ZB
5pB0W2a8oIQb5qaypian6o/Sx21eiNhR+t607u0N1OuZC6rorw2jOGXAps+XJkK7
xcQ9DjtZyu1ViFPLD1dZRpX8GY0tkAXwcKEexlvo4MpUPVchpq5plFuD9L9EUMZ3
0aRLHnBcw7sRgXTO22bXjlTfX7mTdv3xCA3FE5Zu4uNi90TrOHLEJcXx/5vl38GC
gxF4yiA9OFVYW7nzAgpIdlZzUQhwEnhDi1IJWl7tGCRmnUv+WARkSZudD2qF3i9O
6dMW3bjyfqn/BYhQZg4ev6ofB7cpmLXS3eVuShIXlbBCQRfecQKCAQEA/Ge7+eDU
0S2AHIaR/pCdhN4LirQeH/z+9Wvpgf4m8B9brcVUo5CFV2hVF0a4Z6ncWXRO6a20
s+1DsIJ1vONHvs9K8OWCF9dIdGqQyV9Md1MqQVj4qrFPfxT/OVG34bUrZazcYZYm
4HHMQgKsvAFsdD88EgizXRkyzpMHVkGeM0mHDeCax6Z2x8VNqy63kKR/zVAgEXOj
3uT7odE7jGHY1RLoKYmL23nXaFbxFGCHROEcfXMsB3Ja58WtPO+KWkWuhKmVagAh
YvhYTL+RZZ0+9/xjIj4vqcXEk+/s8wv2P37B2lTDOHGZzP4XVrc6/uHfviqjPqoL
KrG4sFyAOhUFSQKCAQEA2ghwFYl4Jowe8yaNxKThAI+MeUs6iPyOxMWSOzCf30i8
qP0yWCsnPN/K8Pxx5Pu59pctkp6ON2RNRmEb3pJc9OOAiPbw+1ZpO4n0BAwuYXxT
EvW0dmFmDjnGH7hcvS6s7Ge4PvYUXE/xn8Od1H3e25VseAoMqUbaHNP7bdX/DAIE
OIRmKCjnYwIIzwbOe78B8pId3lopDIPBOL25ptoET5QjHjeQYEgAaDiA2isFtvLx
NswWZhOTEeVWpEnsepDdD0gWLXHbYffKFC4EArKccZ+R39Of3604Sd5tkJjqru3L
gh43ZZBCtF8Afz50dvnu1g7dskhP55kamWeT1/bXfwKCAQBhFaWAH7K8Irw8PKa7
O/Tavm3CFDXiJ/YJgFB458Eia21gEZ7UqyoezMquAU280eEnp00TJPV0n7aBliyj
UuitxB4XOrAna287GCJI0pce7qY6LHa5cSoav4DME1qfPohKu4qpHpAllJ/0ZAL8
7a9Bp3D7ns0e6ipYusT/sI2hPI7uD455bNYTURjm1zlUMXHXDxLGo6xMd9cyDsDQ
5nH4wyT5lSZubRcl0ws3w0lEfTHwLvSoiJveunJAFgMpZdQSwwftlc9BujR8kNLk
Ou+Vg0a+TR0YODG8lXSWp+s30RHPYPsWItv9tV5UxHW0xDzDcLMJz24sJd/cNjg8
HwnBAoIBAH3fPy3/1gCTBk8jo9axxT/4n4Vq29k3zQhmczx+nt1d9aStwAHMr/Nh
05x6cRpcBQkKUAIETWBHJKGL8HX3E3lBWfQ4c/j18vyvcNNhYOlgx+j7NnrdUfjG
e83WNpv1NVmpq2GV2T1N3dV5LkX9gMpOInfOfW7Ae60G6HGJiJubEmq6bOukaajs
BL/YUx53sB0lI985N9eEvOkQBvz/gluazwdj1pLvHmUMsb7B9aOf74fOHORDSrWb
LADeuIot1aE74anMwHV3gw9RXXldOhoSoDmSyApuyz9CDQjcbygcGk/9N8gHl6rf
6b6MBNqnAa5MmMqTGqY+6m9Dr8OPOusCggEAdGJsc7SiotEdXYY6qWuic6/pe3dK
trLXZ8qIiAfBtY+tbBFD205P1Khn68ATFJqxq0xscZYGA46GkfU0Q1+3B3sfbprU
125IabNDA3B1GI7F3gcSHJBvffjlQIUVJQ0zuuJmesBGryN82SDylVbTXwnN/JV2
eEvI39o9GRwRshsLeEgPHUDa6qRuYH/GqqyBinGOgVPzeph9tPSYnQAy6HcJkm3U
54t2iM60tH/coLm9uzLhkGkXEHlfteN+YZnweOa52L+owv0HyNnO+kN0eM6/J/yj
ZZyfcg2OWgiS+dvyKVQbe15Irtqsa4iSUpO9oRT6ljetf+xe6H1QYsNsfw==
-----END RSA PRIVATE KEY-----
  • ID: app-server-ssh, description: App server SSH credential -> save. (A quick connectivity check follows below.)
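To confirm that the key pair Jenkins will use actually reaches both app servers, run this on ansible-server (the host whose ~/.ssh/id_rsa was pasted into the credential):

# Verify passwordless SSH and that Docker is usable on both app servers
for h in app-server1 app-server2; do
    ssh -o BatchMode=yes root@$h 'hostname && docker -v' \
        || echo "SSH to $h failed"
done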

11.5 Push the Code to the Repository

11.5.1 Project layout

app-demo/                  # project root (matches the GitHub repo root)
├── pom.xml                # Maven dependency configuration (core)
├── src/
│   ├── main/
│   │   ├── java/
│   │   │   └── com/
│   │   │       └── demo/
│   │   │           ├── AppDemoApplication.java  # application entry point
│   │   │           ├── config/                  # configuration classes (Redis, Swagger, ...)
│   │   │           │   ├── RedisConfig.java     # Redis cache configuration
│   │   │           │   └── SpringDocConfig.java # Swagger API-doc configuration
│   │   │           ├── controller/              # API layer (public endpoints)
│   │   │           │   └── HouseController.java  # house management endpoints
│   │   │           ├── entity/                  # entities (map to MySQL tables)
│   │   │           │   └── House.java            # House entity
│   │   │           ├── repository/              # data access layer (MySQL)
│   │   │           │   └── HouseRepository.java  # House database operations
│   │   │           └── service/                 # business logic layer
│   │   │               ├── HouseService.java     # House service interface
│   │   │               └── impl/
│   │   │                   └── HouseServiceImpl.java # implementation (with Redis caching)
│   │   └── resources/
│   │       ├── application.yml                  # app config (environment-variable placeholders)
│   │       └── db/
│   │           └── init.sql                     # MySQL init script (table creation)
│   └── test/                                      # tests
├── Dockerfile
├── docker-compose.yml
└── Jenkinsfile

11.5.2 Create the GitHub repository

11.5.3 Install git on ansible-server

[root@ansible-server ~]# yum install -y git
[root@ansible-server ~]# git --version
git version 1.8.3.1

11.5.4 Configure the git identity

[root@ansible-server ~]# git config --global user.name 'chenjun'             # name
[root@ansible-server ~]# git config --global user.email '3127103271@qq.com'  # e-mail
[root@ansible-server ~]# git config --global color.ui true
[root@ansible-server ~]# git config --list

# Set up password-less push: print the public key and copy it to the clipboard
[root@ansible-server app_demo]# cat /root/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDW+KfTR4343jkWbIBaVXIIfq/AkrbdVuZ7iVAmZgwWHEv9w53lftEjLRKOT5Gh+ONG0dqVenBokizfPWEweOQK07psLseh8RTg4TN4UB0AfPb6TEQF0J8/3YtH1q+PWtE+S3Wk//DTPqo2u5jaIGn2V5CEoLFAhsN46H/EFr8eh96dTmZ4uhe38zmQnQ5dG2ZoNcZJV0NbRd+HcyiQFoUiTrHWjUQc7+rvzColL618vRzCLT09SCju1H6ik2UKN0G76WUTFXPwksHED3Af4sD2X969eBL4mGABQAu0D+BSMFEq3CB4CrrFwcdMkwqhK/P9TWpWPg0e3jDZD44O20Tjh7GENZC2BzrbztJOZKUB6r6TeJXY62GtcO+U6l79bAfIlToMiGqpmD2SbJuCNKjFAaOh7mm1pzfYu6zFCvt81F8voAHCRmVc8vKgUaZuGxu3IZlANp9n6+89ZcJW9EAWhvo8cyPmy1oDHclkG2Jpg5kW9BHq7uzKw2jlpsKKE9ioExyAWc95PkHVczJiyy+ThF4lcaskG3b8TVYuFkQqKX5trb9Dfp2y0ZJqL1hEloZCWeADtNhZXFjZ1zAk0VDcAbbVpqZZ6/5ZNjd1o5f41KfF3ZjVO74Yb8sODnmM9od+j9aaUGWj2nsLO+pOG2E7Uco5ay4qnSmzAHPy2wPuNw== root@ansible-server

Add the public key in GitHub (Settings → SSH and GPG keys).
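With the key registered, the local project can be pushed. A sketch, where the repository URL is a placeholder to replace with your own GitHub repo:

cd /root/app_demo                # project root (directory as used above)
git init
git add .
git commit -m "init: app-demo with Dockerfile, docker-compose and Jenkinsfile"
# placeholder URL -- replace <your-account> with the real repository owner
git remote add origin git@github.com:<your-account>/app-demo.git
git branch -M main
git push -u origin main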

11.5.5 Application code

1. pom.xml (Maven dependency configuration)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Parent: Spring Boot 2.7.x stable -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.18</version>
        <relativePath/>
    </parent>

    <groupId>com.demo</groupId>
    <artifactId>app-demo</artifactId>
    <version>1.0.0</version>
    <name>app-demo</name>
    <description>Spring Boot application (CI/CD pipeline + MySQL + Redis)</description>

    <properties>
        <java.version>8</java.version>
        <springdoc.version>1.6.15</springdoc.version> <!-- Swagger 3 dependency version -->
    </properties>

    <dependencies>
        <!-- 1. Spring Boot core -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId> <!-- Web (REST endpoints) -->
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId> <!-- health checks, monitoring -->
        </dependency>

        <!-- 2. MySQL access -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId> <!-- JPA simplifies DB access -->
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.33</version>
            <scope>runtime</scope> <!-- runtime-only dependency -->
        </dependency>

        <!-- 3. Redis cache -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId> <!-- Redis connection pool -->
        </dependency>

        <!-- 4. Swagger 3 (API docs, handy for testing) -->
        <dependency>
            <groupId>org.springdoc</groupId>
            <artifactId>springdoc-openapi-ui</artifactId>
            <version>${springdoc.version}</version>
        </dependency>

        <!-- 5. Utilities -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional> <!-- compile-time only, keeps the JAR small -->
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.datatype</groupId>
            <artifactId>jackson-datatype-jsr310</artifactId>
        </dependency>
    </dependencies>

    <!-- Build: package an executable JAR with a name matching the Dockerfile -->
    <build>
        <finalName>app-demo</finalName> <!-- produces app-demo.jar -->
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
2. Application entry point: AppDemoApplication.java

package com.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching; // enable Redis caching

/**
 * Application entry point
 */
@SpringBootApplication
@EnableCaching // enable caching (Redis)
public class AppDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(AppDemoApplication.class, args);
    }
}
3. Configuration classes

(1) RedisConfig.java (Redis cache configuration)

package com.demo.config;

import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;

/**
 * Redis cache configuration (serialization format and default TTL)
 */
@Configuration
@EnableCaching
public class RedisConfig {

    /**
     * Jackson serializer with Java 8 date/time support
     */
    private GenericJackson2JsonRedisSerializer genericJackson2JsonRedisSerializer() {
        ObjectMapper objectMapper = new ObjectMapper();
        // register the Java 8 date/time module
        objectMapper.registerModule(new JavaTimeModule());
        // do not serialize dates as numeric timestamps
        objectMapper.disable(com.fasterxml.jackson.databind.SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        // embed type info so cached values deserialize back to the right classes
        objectMapper.activateDefaultTyping(
                LaissezFaireSubTypeValidator.instance,
                ObjectMapper.DefaultTyping.NON_FINAL,
                JsonTypeInfo.As.PROPERTY);
        return new GenericJackson2JsonRedisSerializer(objectMapper);
    }

    /**
     * Cache manager with custom serialization (avoids garbled cached values)
     */
    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // 1. serialization: String keys, JSON values; default TTL 30 minutes
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30))
                .serializeKeysWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(new StringRedisSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair
                        .fromSerializer(genericJackson2JsonRedisSerializer()))
                .disableCachingNullValues(); // never cache null values
        // 2. build the cache manager
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(config)
                .build();
    }
}
(2) SpringDocConfig.java(Swagger 3 配置)
package com.demo.config;

import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Swagger 3(OpenAPI)配置:访问地址 http://ip:8080/swagger-ui.html
 */
@Configuration
public class SpringDocConfig {

    @Bean
    public OpenAPI customOpenAPI() {
        return new OpenAPI().info(new Info()
                .title("App-Demo 接口文档")
                .version("1.0.0")
                .description("房屋管理 CRUD 接口(适配 MySQL + Redis)"));
    }
}
4. 实体类:House.java(对应 MySQL 表)
package com.demo.entity;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import javax.persistence.*;
import java.io.Serializable;

/**
 * House 实体类(对应 MySQL 中的 house 表)
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
@Entity
@Table(schema = "app_demo", name = "house") // 对应数据库表
public class House implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // 自增主键
    private Long id; // 主键字段(表结构未提及时,实际表通常需要主键)

    @Column(name = "行政区", nullable = false, length = 255)
    private String district; // 行政区

    @Column(name = "所属小区", nullable = false, length = 255)
    private String community; // 所属小区

    @Column(name = "房屋户型", length = 255)
    private String houseType; // 房屋户型

    @Column(name = "房屋朝向", length = 255)
    private String orientation; // 房屋朝向

    @Column(name = "所在楼层")
    private Integer floor; // 所在楼层

    @Column(name = "装修程度", length = 255)
    private String decoration; // 装修程度

    @Column(name = "配套电梯", length = 255)
    private String elevator; // 配套电梯(可存储"有"或"无")

    @Column(name = "建筑面积")
    private Integer area; // 建筑面积(单位:平方米)

    @Column(name = "房屋总价")
    private Integer totalPrice; // 房屋总价(单位:元)

    @Column(name = "建造年代")
    private Integer buildYear; // 建造年代
}
5. 数据访问层:HouseRepository.java(操作 MySQL)
package com.demo.repository;

import com.demo.entity.House;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

/**
 * House 数据访问层(JPA 自动实现 CRUD 方法)
 */
@Repository
public interface HouseRepository extends JpaRepository<House, Long> {
    // 可根据需求添加自定义查询方法,例如:
    // List<House> findByDistrict(String district); // 按行政区查询
}
6. 业务逻辑层
(1)HouseService.java(业务接口)
package com.demo.service;

import com.demo.entity.House;

import java.util.List;
import java.util.Optional;

/**
 * House 业务接口
 */
public interface HouseService {

    // 新增房屋
    House save(House house);

    // 根据 ID 删除房屋
    void deleteById(Long id);

    // 根据 ID 更新房屋
    House update(House house);

    // 根据 ID 查询房屋(缓存)
    Optional<House> getById(Long id);

    // 查询所有房屋(缓存)
    List<House> listAll();
}
(2)HouseServiceImpl.java(业务实现,含 Redis 缓存)
package com.demo.service.impl;

import com.demo.entity.House;
import com.demo.repository.HouseRepository;
import com.demo.service.HouseService;
import lombok.RequiredArgsConstructor;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import java.util.List;
import java.util.Optional;

/**
 * House 业务实现(含 Redis 缓存)
 */
@Service
@RequiredArgsConstructor
public class HouseServiceImpl implements HouseService {

    private final HouseRepository houseRepository;

    @Override
    @Transactional
    @CacheEvict(value = "houseList", allEntries = true) // 新增后清空列表缓存
    public House save(House house) {
        return houseRepository.save(house);
    }

    @Override
    @Transactional
    @CacheEvict(value = {"house", "houseList"}, allEntries = true) // 清空相关缓存
    public void deleteById(Long id) {
        if (!houseRepository.existsById(id)) {
            throw new RuntimeException("房屋不存在:ID=" + id);
        }
        houseRepository.deleteById(id);
    }

    @Override
    @Transactional
    @CachePut(value = "house", key = "#house.id") // 更新缓存
    @CacheEvict(value = "houseList", allEntries = true) // 清空列表缓存
    public House update(House house) {
        if (!houseRepository.existsById(house.getId())) {
            throw new RuntimeException("房屋不存在:ID=" + house.getId());
        }
        return houseRepository.save(house);
    }

    @Override
    @Cacheable(value = "house", key = "#id", unless = "#result == null") // 缓存单个房屋
    public Optional<House> getById(Long id) {
        return houseRepository.findById(id);
    }

    @Override
    @Cacheable(value = "houseList", key = "'all'", unless = "#result.isEmpty()") // 缓存房屋列表
    public List<House> listAll() {
        return houseRepository.findAll();
    }
}
7. 接口层:HouseController.java(对外提供 API)
package com.demo.controller;

import com.demo.entity.House;
import com.demo.service.HouseService;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.tags.Tag;
import lombok.RequiredArgsConstructor;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.Optional;

/**
 * 房屋管理接口
 * 访问地址:http://ip:8080/api/houses
 */
@RestController
@RequestMapping("/api/houses")
@RequiredArgsConstructor
@Tag(name = "房屋管理接口", description = "提供房屋 CRUD 操作")
public class HouseController {

    private final HouseService houseService;

    /**
     * 新增房屋
     */
    @PostMapping
    @Operation(summary = "新增房屋", description = "传入房屋信息,创建新记录")
    public ResponseEntity<House> save(
            @Parameter(description = "房屋信息(id 无需传入)") @RequestBody House house) {
        House savedHouse = houseService.save(house);
        return new ResponseEntity<>(savedHouse, HttpStatus.CREATED);
    }

    /**
     * 根据 ID 删除房屋
     */
    @DeleteMapping("/{id}")
    @Operation(summary = "删除房屋", description = "根据 ID 删除房屋记录")
    public ResponseEntity<Void> deleteById(
            @Parameter(description = "房屋 ID") @PathVariable Long id) {
        houseService.deleteById(id);
        return ResponseEntity.noContent().build();
    }

    /**
     * 更新房屋信息
     */
    @PutMapping
    @Operation(summary = "更新房屋", description = "传入完整房屋信息(含 id)")
    public ResponseEntity<House> update(
            @Parameter(description = "房屋信息(id 必须传入)") @RequestBody House house) {
        House updatedHouse = houseService.update(house);
        return ResponseEntity.ok(updatedHouse);
    }

    /**
     * 根据 ID 查询房屋
     */
    @GetMapping("/{id}")
    @Operation(summary = "查询单个房屋", description = "根据 ID 查询房屋详情")
    public ResponseEntity<House> getById(
            @Parameter(description = "房屋 ID") @PathVariable Long id) {
        Optional<House> house = houseService.getById(id);
        return house.map(ResponseEntity::ok)
                .orElseGet(() -> ResponseEntity.notFound().build());
    }

    /**
     * 查询所有房屋
     */
    @GetMapping
    @Operation(summary = "查询所有房屋", description = "返回所有房屋列表(带缓存)")
    public ResponseEntity<List<House>> listAll() {
        List<House> houses = houseService.listAll();
        return ResponseEntity.ok(houses);
    }
}
8. 应用配置:application.yml(支持环境变量)

关键:所有 MySQL/Redis 配置通过环境变量读取,适配 docker-compose.yml 中传递的 SPRING_DATASOURCE_URL 等参数:

# 服务器配置
server:
  port: 8080 # 应用端口(与 docker-compose 一致)
  servlet:
    context-path: / # 上下文路径(默认根路径)

# Spring 配置
spring:
  # 1. MySQL 配置(环境变量占位符,由 docker-compose 传递)
  datasource:
    url: ${SPRING_DATASOURCE_URL:jdbc:mysql://192.168.121.188:8066/app_demo?useSSL=false&serverTimezone=UTC&allowPublicKeyRetrieval=true}
    username: ${SPRING_DATASOURCE_USERNAME:root}
    password: ${SPRING_DATASOURCE_PASSWORD:123456}
    driver-class-name: com.mysql.cj.jdbc.Driver
    hikari:
      connection-init-sql: USE app_demo;
  # 2. JPA 配置(简化数据库操作)
  jpa:
    hibernate:
      ddl-auto: none # 禁用自动建表(用 init.sql 手动建表,避免冲突)
    show-sql: true # 控制台打印 SQL(开发/测试用,生产可关闭)
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL8Dialect
        default_schema: app_demo # 关键配置:全局强制带 app_demo 库名
        format_sql: true # 可选:格式化 SQL 日志,便于查看
    open-in-view: false # 关闭 OpenSessionInView(避免事务问题)
  # 3. Redis 配置(注意:Redis 是三主集群,这里曾误写成单机连接池导致报错,需改用 cluster 配置)
  redis:
    cluster:
      # 节点列表:用逗号分隔多个 "host:port",或每个节点单独一行(推荐后者,更清晰)
      nodes:
        - 192.168.121.171:6379
        - 192.168.121.172:6379
        - 192.168.121.173:6379
      max-redirects: 3 # 整数类型,无需引号
    timeout: 5000 # 毫秒,无需引号
    password: ${SPRING_REDIS_PASSWORD:123456} # 若 Redis 无密码,留空
    lettuce:
      pool:
        max-active: 8 # 最大连接数
        max-idle: 8 # 最大空闲连接
        min-idle: 2 # 最小空闲连接
        max-wait: 1000ms # 连接等待时间

# 4. Actuator 配置(健康检查,适配 docker-compose healthcheck)
management:
  endpoints:
    web:
      exposure:
        include: health,info # 暴露健康检查和基础信息接口
  endpoint:
    health:
      show-details: always # 显示健康检查详情(测试用,生产可设为 when_authorized)

# 5. 日志配置(简化版:输出到控制台,便于容器日志查看)
logging:
  level:
    root: INFO
    com.demo: DEBUG # 应用包日志级别(DEBUG 便于排查问题)
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{50} - %msg%n"
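
下面给出一个本地启动时用环境变量覆盖默认值的示例(仅为演示占位符的覆盖效果,URL 与密码均为示意,按实际环境替换):

# 环境变量优先于 application.yml 中的默认值
SPRING_DATASOURCE_URL='jdbc:mysql://192.168.121.188:8066/app_demo?useSSL=false&serverTimezone=UTC&allowPublicKeyRetrieval=true' \
SPRING_DATASOURCE_USERNAME=root \
SPRING_DATASOURCE_PASSWORD=123456 \
SPRING_REDIS_PASSWORD=123456 \
java -jar target/app-demo.jar
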
9. MySQL 初始化脚本:init.sql(建库建表)
-- 1. 创建数据库(若不存在)
CREATE DATABASE IF NOT EXISTS app_demo CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- 2. 使用数据库
USE app_demo;

-- 3. 创建房屋表(对应 House 实体)
CREATE TABLE IF NOT EXISTS `house` (
  `id` bigint NOT NULL AUTO_INCREMENT COMMENT '房屋ID(主键)',
  `行政区` varchar(255) NOT NULL COMMENT '行政区',
  `所属小区` varchar(255) NOT NULL COMMENT '所属小区',
  `房屋户型` varchar(255) DEFAULT NULL COMMENT '房屋户型',
  `房屋朝向` varchar(255) DEFAULT NULL COMMENT '房屋朝向',
  `所在楼层` int DEFAULT NULL COMMENT '所在楼层',
  `装修程度` varchar(255) DEFAULT NULL COMMENT '装修程度',
  `配套电梯` varchar(255) DEFAULT NULL COMMENT '配套电梯(有/无)',
  `建筑面积` int DEFAULT NULL COMMENT '建筑面积(平方米)',
  `房屋总价` int DEFAULT NULL COMMENT '房屋总价(元)',
  `建造年代` int DEFAULT NULL COMMENT '建造年代',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='房屋表';

-- 4. 插入测试数据
INSERT IGNORE INTO `house` (行政区, 所属小区, 房屋户型, 房屋朝向, 所在楼层, 装修程度, 配套电梯, 建筑面积, 房屋总价, 建造年代)
VALUES
('朝阳区', '阳光小区', '3室2厅', '南北通透', 15, '精装修', '有', 120, 8000000, 2010),
('海淀区', '学府花园', '2室1厅', '朝南', 8, '简装修', '有', 89, 6500000, 2005);
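
以下是一个导入与验证的执行示例(假设本机已安装 mysql 客户端,192.168.121.188:8066 为前文 MyCat 的 VIP 入口):

# 通过 MyCat VIP 导入初始化脚本并确认建表结果
mysql -h 192.168.121.188 -P 8066 -uroot -p123456 < init.sql
mysql -h 192.168.121.188 -P 8066 -uroot -p123456 -e "SELECT COUNT(*) FROM app_demo.house;"
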
10. Dockerfile(构建应用镜像)
# 基础镜像(Java 8)
FROM openjdk:8-jre-slim

# 维护者信息
LABEL maintainer="陈俊"

# 复制应用 JAR 包(Maven 构建输出到 target/app-demo.jar)
COPY target/app-demo.jar /app/app-demo.jar

# 暴露应用端口(需与 docker-compose 一致)
EXPOSE 8080

# 启动命令
ENTRYPOINT ["java", "-jar", "/app/app-demo.jar"]
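
接入流水线之前,可以先在本地手动构建并验证镜像,一个最小化的自测流程如下(容器名 app-demo-test、标签 local 均为临时示例):

# 本地构建 JAR 并打包镜像
mvn clean package -Dmaven.test.skip=true
docker build -t app-demo:local .

# 临时启动容器并检查健康检查接口
docker run -d --name app-demo-test -p 8080:8080 app-demo:local
sleep 15
curl -f http://localhost:8080/actuator/health

# 验证完成后清理
docker rm -f app-demo-test
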
11. docker-compose.yml(容器编排,app-server部署用)
version: '3.8'

services:
  app-demo:
    # 镜像地址(Harbor 仓库 + 镜像名 + 标签,标签由 Jenkins 动态生成)
    image: 192.168.121.210/app-demo/app-demo:${IMAGE_TAG}
    container_name: app-demo # 容器名
    restart: always # 容器异常自动重启
    ports:
      - "8080:8080" # 宿主机端口:容器端口(与应用端口一致)
    environment:
      # 应用配置(对接现有 MySQL/Redis)
      - SPRING_DATASOURCE_URL=jdbc:mysql://192.168.121.188:8066/app_demo?useSSL=false&serverTimezone=UTC&allowPublicKeyRetrieval=true # 对接 MyCat 的 VIP,保证高可用与读写分离
      - SPRING_DATASOURCE_USERNAME=root # 用户名
      - SPRING_DATASOURCE_PASSWORD=123456 # 密码
      - SPRING_REDIS_HOST=192.168.121.171 # Redis 主机 IP,集群模式是三主,填写一个就好了
      - SPRING_REDIS_PORT=6379 # Redis 端口
    # 健康检查(确保应用启动成功)
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"] # 需应用有健康检查接口
      interval: 30s
      timeout: 10s
      retries: 3
12. Jenkinsfile(流水线脚本,核心!)

定义流水线的 拉取代码 → 构建 JAR → 构建镜像 → 推送镜像 → 部署 全流程:

pipeline {
    agent any

    environment {
        // 基础环境变量(注意:Groovy 的行注释是 //,不能用 shell 风格的 #)
        GIT_URL = "https://github.com/cj3127/app-demo.git"
        GIT_BRANCH = "main" // 分支
        HARBOR_URL = "192.168.121.210"
        HARBOR_PROJECT = "app-demo"
        IMAGE_NAME = "app-demo"
        IMAGE_TAG = "${env.BUILD_NUMBER}-${env.GIT_COMMIT.substring(0,8)}"
        APP_SERVERS = "192.168.121.80,192.168.121.81" // 需要部署的目标服务器
        APP_BASE_DIR = "/opt/app-demo" // app-server 上的 docker-compose 路径,需提前把 docker-compose.yml 放到目标服务器 /opt/app-demo 内
    }

    stages {
        stage("拉取 Git 代码") {
            steps {
                checkout([
                    $class: 'GitSCM',
                    branches: [[name: "*/${GIT_BRANCH}"]],
                    userRemoteConfigs: [[url: "${GIT_URL}", credentialsId: "git-cred"]]
                ])
                echo "✅ 代码拉取完成,分支:${GIT_BRANCH},版本:${env.GIT_COMMIT.substring(0,8)}"
            }
        }

        stage("构建 Java 应用") {
            steps {
                sh "mvn clean package -Dmaven.test.skip=true"
                sh "ls -lh target/${IMAGE_NAME}.jar || exit 1"
                echo "✅ 应用构建完成,JAR路径:target/${IMAGE_NAME}.jar"
            }
        }

        stage("构建 Docker 镜像") {
            steps {
                script {
                    withCredentials([usernamePassword(
                            credentialsId: "harbor-cred",
                            usernameVariable: "HARBOR_USER",
                            passwordVariable: "HARBOR_PWD")]) {
                        sh "docker login ${HARBOR_URL} -u ${HARBOR_USER} -p ${HARBOR_PWD}"
                        sh "docker build -t ${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG} ."
                        echo "✅ 镜像构建完成:${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}"
                    }
                }
            }
        }

        stage("推送镜像到 Harbor") {
            steps {
                sh "docker push ${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}"
                sh "docker logout ${HARBOR_URL}"
                sh "docker image prune -f --filter 'until=720h'"
                echo "✅ 镜像推送完成,Harbor地址:http://${HARBOR_URL}/${HARBOR_PROJECT}"
            }
        }

        stage("部署到 App 服务器") {
            steps {
                script {
                    // 拆分服务器列表
                    def serverList = APP_SERVERS.split(',').collect { it.trim() }
                    if (serverList.isEmpty()) {
                        error("❌ 部署服务器列表为空,请检查 APP_SERVERS 配置")
                    }
                    echo "即将部署到 ${serverList.size()} 台服务器:${serverList.join(', ')}"
                    // 构建并行部署任务
                    def parallelTasks = [:]
                    for (int i = 0; i < serverList.size(); i++) {
                        def server = serverList[i]
                        parallelTasks["部署到 ${server}"] = getDeploymentTask(server)
                    }
                    // 执行并行部署
                    parallel parallelTasks
                    echo "✅ 所有服务器部署完成!"
                }
            }
        }
    }

    // 流水线后置通知
    post {
        success {
            echo "=================================================="
            echo "🎉 CI/CD 流水线执行成功!"
            echo "镜像标签:${IMAGE_TAG}"
            echo "Harbor地址:http://${HARBOR_URL}/${HARBOR_PROJECT}"
            echo "部署服务器:${APP_SERVERS}"
            echo "=================================================="
        }
        failure {
            echo "=================================================="
            echo "❌ CI/CD 流水线执行失败!"
            echo "失败阶段:${currentBuild.currentResult}"
            echo "排查方向:"
            echo "1. 检查凭证(git-cred/harbor-cred/app-server-ssh)是否有效"
            echo "2. 目标服务器是否可通(示例:ssh root@192.168.121.80)"
            echo "3. Harbor镜像是否推送成功(访问:http://${HARBOR_URL})"
            echo "4. 目标服务器是否有 ${APP_BASE_DIR} 目录和docker-compose.yml"
            echo "5. 检查目标服务器上的Docker Compose配置"
            echo "=================================================="
        }
    }
}

// 定义部署任务的方法
def getDeploymentTask(server) {
    return {
        withCredentials([sshUserPrivateKey(
                credentialsId: "app-server-ssh",
                keyFileVariable: "SSH_KEY",
                usernameVariable: "SSH_USER")]) {
            // 使用更可靠的部署脚本
            sh """
                echo "开始部署到服务器: ${server}"
                ssh -i ${SSH_KEY} -o StrictHostKeyChecking=no ${SSH_USER}@${server} '
                    # 拉取镜像
                    echo "[${server}] 拉取镜像:${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}"
                    docker pull ${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG} || exit 1

                    # 检查并停止旧容器
                    CONTAINER_ID=\$(docker ps -q -f name=${IMAGE_NAME})
                    if [ ! -z "\$CONTAINER_ID" ]; then
                        echo "[${server}] 停止旧容器:${IMAGE_NAME} (\$CONTAINER_ID)"
                        docker stop \$CONTAINER_ID && docker rm \$CONTAINER_ID
                        # 等待容器完全停止
                        sleep 2
                    fi

                    # 启动新容器
                    echo "[${server}] 启动新容器:${IMAGE_NAME}:${IMAGE_TAG}"
                    cd ${APP_BASE_DIR} || exit 1
                    IMAGE_TAG=${IMAGE_TAG} HARBOR_URL=${HARBOR_URL} HARBOR_PROJECT=${HARBOR_PROJECT} IMAGE_NAME=${IMAGE_NAME} docker-compose up -d

                    # 等待几秒让容器启动
                    sleep 5

                    # 验证部署结果
                    NEW_CONTAINER_ID=\$(docker ps -q -f name=${IMAGE_NAME})
                    if [ ! -z "\$NEW_CONTAINER_ID" ]; then
                        echo "[${server}] 容器状态:"
                        docker ps -f name=${IMAGE_NAME}
                        echo "[${server}] ✅ 部署成功"
                    else
                        echo "[${server}] ❌ 部署失败,容器未启动"
                        echo "[${server}] 尝试查看日志:"
                        docker logs ${IMAGE_NAME} 2>/dev/null || echo "[${server}] 无法获取日志"
                        exit 1
                    fi
                '
            """
        }
    }
}

11.5.6 创建本地仓库

(1)创建工作目录
[root@ansible-server ~]# mkdir /app_demo
[root@ansible-server ~]# cd /app_demo
[root@ansible-server app_demo]# git init
会产生一个 .git 子目录,除代码数据外的所有相关数据都在此目录中,不要修改它
(它就是仓库,或叫版本库)。
[root@ansible-server app_demo]# ls .git/
branches  COMMIT_EDITMSG  config  description  HEAD  hooks  index  info  logs  objects  refs
(2)克隆github代码仓库到本地

[root@ansible-server ~]# git clone git@github.com:cj3127/app-demo.git
[root@ansible-server app_demo]# ls
app-demo  docker-compose.yml  Dockerfile  Jenkinsfile

# 日常提交三步
git add file                # 添加到暂存区
git commit -m "提交说明"     # 提交文件
git push -u origin main     # 推送更改到 GitHub

11.5.7 创建Jenkins流水线项目并测试

1. 创建Pipeline项目

(1)登录 Jenkins → 左侧 "新建任务" → 输入项目名(如 app-demo-pipeline)→ 选择 "流水线" → 点击 "确定"

(2)配置流水线来源:

  • 下拉 "Definition" → 选择 "Pipeline script from SCM"(从 Git 拉取 Jenkinsfile)。
  • SCM:选择 "Git" → 输入 Git 仓库地址(如 https://github.com/cj3127/app-demo.git)。
  • 凭证:选择之前配置的 git-cred
  • 分支说明:*/main(与 Jenkinsfile 中一致)。
  • 脚本路径:Jenkinsfile(Git 仓库中 Jenkinsfile 的路径,位于仓库根目录时填 Jenkinsfile 即可)。

(3)点击保存

11.5.8 触发流水线构建

(1)进入 app-demo-pipeline 项目 → 点击左侧 "立即构建" → 开始执行流水线。

(2)查看构建日志:点击构建记录(如 #1)→ 点击 "控制台输出",实时查看各阶段执行情况。

(3)成功标志:

  • 日志最后输出 "CI/CD 流水线执行成功!"。
  • Harbor 仓库 app-demo 项目中出现新镜像(标签如 1-abc12345)。
  • app-server1/app-server2 执行 docker ps 能看到 app-demo 容器(状态 Up)。

11.5.9 验证应用访问

在之前的 Nginx 部署过程中,已经配置好应用服务器负载均衡集群,并将请求反向代理到集群各节点的 8080 端口。

  1. 访问 Nginx VIP(http://192.168.121.88/swagger-ui/index.html),Nginx 会自动转发到 app-server1:8080 和 app-server2:8080 的接口文档;再访问 http://192.168.121.88/api/houses(查询所有房屋的接口),可以看到测试数据(也可以用本列表后面的 curl 示例快速回归)。

  2. 验证数据库连接:通过应用操作数据,查看 MySQL 主库(192.168.121.221)中数据是否正常写入。

  3. 验证 Redis 连接:查看应用缓存功能是否正常(如navicat图形化界面远程连接redis)。
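
除了浏览器访问,也可以直接用 curl 快速回归接口,以下命令为验证示例(ID 需取列表中真实存在的记录):

# 通过 Nginx VIP 验证查询接口
curl http://192.168.121.88/api/houses
curl http://192.168.121.88/api/houses/1

# 新增一条记录,验证写链路(App → MyCat → MySQL 主库)
curl -X POST http://192.168.121.88/api/houses \
  -H 'Content-Type: application/json' \
  -d '{"district":"西城区","community":"测试小区","houseType":"2室1厅","area":80,"totalPrice":5000000}'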

各项验证通过,说明通过 CI/CD 流水线部署后端应用服务器的流程已经顺利完成。

12 前端与应用层部署

最终打通完整业务链路:Windows 客户端 → Nginx(VIP)→ App-Server → MyCat → MySQL

12.1 环境说明

| 主机名 | IP 地址 | 操作系统 | 核心角色 |
| --- | --- | --- | --- |
| nginx1 | 192.168.121.70 | CentOS 7.9 | Nginx 主负载节点 |
| nginx2 | 192.168.121.71 | CentOS 7.9 | Nginx 备负载节点 |
| 虚拟 IP(VIP) | 192.168.121.88 | - | 客户端统一访问地址 |

12.2 前端代码

分别在两台 Nginx 服务器上编辑前端页面代码,核心改动是把 API 基础地址改成相对路径,由 Nginx 代理转发:

  // API基础URL - 改为相对路径,将由Nginx代理处理
    const API_BASE_URL = '/api/houses';   
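
前端使用相对路径 /api/houses,意味着 Nginx 必须把 /api 前缀的请求转发到应用服务器的 8080 端口。可以先在 Nginx 节点本机验证代理是否生效(假设前文的反向代理配置已就绪,以下命令仅为抽查示例):

# 在 nginx1 本机验证 /api 代理(返回 JSON 即说明转发正常)
curl -s http://127.0.0.1/api/houses | head -c 200; echo
# 对比直接访问某台后端,确认两条路径返回一致
curl -s http://192.168.121.80:8080/api/houses | head -c 200; echo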

[root@nginx1 _data]# vim /var/lib/docker/volumes/ng_conf/_data/index.html
[root@nginx2 _data]# vim /var/lib/docker/volumes/ng_conf/_data/index.html

<!DOCTYPE html>
<html lang="zh-CN">
<head><meta charset="UTF-8"><meta name="viewport" content="width=device-width, initial-scale=1.0"><title>房屋信息管理系统</title><!-- 生产环境应替换为本地安装的Tailwind CSS --><script src="https://cdn.tailwindcss.com"></script><link href="https://cdn.jsdelivr.net/npm/font-awesome@4.7.0/css/font-awesome.min.css" rel="stylesheet"><script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script><!-- 配置Tailwind主题 --><script>tailwind.config = {theme: {extend: {colors: {primary: '#165DFF',secondary: '#36CFC9',accent: '#FF7D00',success: '#52C41A',warning: '#FAAD14',danger: '#FF4D4F',dark: '#1D2129','dark-2': '#4E5969','light-1': '#F2F3F5','light-2': '#E5E6EB','light-3': '#C9CDD4'},fontFamily: {inter: ['Inter', 'system-ui', 'sans-serif'],},},}}</script><style type="text/tailwindcss">@layer utilities {.content-auto {content-visibility: auto;}.card-shadow {box-shadow: 0 10px 30px -5px rgba(0, 0, 0, 0.1);}.card-hover {transition: all 0.3s ease;}.card-hover:hover {transform: translateY(-5px);}.gradient-bg {background: linear-gradient(135deg, #165DFF 0%, #0E42D2 100%);}.text-gradient {background-clip: text;-webkit-background-clip: text;color: transparent;background-image: linear-gradient(135deg, #165DFF 0%, #0E42D2 100%);}.scrollbar-hide::-webkit-scrollbar {display: none;}.scrollbar-hide {-ms-overflow-style: none;scrollbar-width: none;}}</style>
</head><body class="font-inter bg-gray-50 text-dark overflow-x-hidden"><!-- 顶部导航栏 --><header class="fixed top-0 left-0 right-0 bg-white shadow-md z-50 transition-all duration-300" id="navbar"><div class="container mx-auto px-4 sm:px-6 lg:px-8"><div class="flex justify-between items-center h-16"><div class="flex items-center"><i class="fa fa-home text-primary text-2xl mr-2"></i><h1 class="text-xl font-bold text-gradient">房屋信息管理系统</h1></div><div class="hidden md:flex items-center space-x-6"><!-- <button id="theme-toggle" class="p-2 rounded-full hover:bg-light-1 transition-colors"><i class="fa fa-moon-o text-dark-2"></i></button> --><button id="refresh-btn" class="p-2 rounded-full hover:bg-light-1 transition-colors"><i class="fa fa-refresh text-dark-2"></i></button></div><div class="md:hidden"><button id="mobile-menu-button" class="p-2 rounded-md text-dark-2 hover:bg-light-1"><i class="fa fa-bars"></i></button></div></div></div><!-- 移动端菜单 --><div id="mobile-menu" class="hidden md:hidden bg-white border-t"><div class="container mx-auto px-4 py-3 space-y-2"><button id="mobile-theme-toggle" class="w-full text-left p-2 rounded-md hover:bg-light-1 transition-colors"><i class="fa fa-moon-o mr-2 text-dark-2"></i> 切换主题</button><button id="mobile-refresh-btn" class="w-full text-left p-2 rounded-md hover:bg-light-1 transition-colors"><i class="fa fa-refresh mr-2 text-dark-2"></i> 刷新数据</button></div></div></header><!-- 主内容区 --><main class="container mx-auto px-4 sm:px-6 lg:px-8 pt-24 pb-16"><!-- 页面标题和操作区 --><div class="mb-8 flex flex-col md:flex-row md:items-center md:justify-between"><div><h2 class="text-[clamp(1.5rem,3vw,2.5rem)] font-bold text-dark">房屋信息管理</h2><p class="text-dark-2 mt-1">管理、查询和维护所有房屋信息</p></div><button id="add-house-btn" class="mt-4 md:mt-0 gradient-bg text-white px-6 py-3 rounded-lg shadow-lg hover:shadow-xl transition-all flex items-center"><i class="fa fa-plus mr-2"></i> 新增房屋</button></div><!-- 搜索和筛选区 --><div class="bg-white rounded-xl shadow-md p-4 mb-8 transform transition-all"><div class="flex flex-col md:flex-row gap-4"><div class="flex-1"><div class="relative"><i class="fa fa-search absolute left-3 top-1/2 -translate-y-1/2 text-light-3"></i><input type="text" id="search-input" placeholder="搜索行政区、小区或户型..." 
class="w-full pl-10 pr-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div></div><div class="flex gap-4"><select id="district-filter" class="border border-light-2 rounded-lg px-4 py-2 focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"><option value="">所有行政区</option><option value="朝阳区">朝阳区</option><option value="海淀区">海淀区</option><!-- 其他行政区会根据数据动态添加 --></select><select id="elevator-filter" class="border border-light-2 rounded-lg px-4 py-2 focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"><option value="">电梯配置</option><option value="有">有电梯</option><option value="无">无电梯</option></select></div></div></div><!-- 统计卡片 --><div class="grid grid-cols-1 sm:grid-cols-2 lg:grid-cols-4 gap-6 mb-8"><div class="bg-white rounded-xl p-6 shadow-md card-shadow card-hover"><div class="flex items-center justify-between"><div><p class="text-dark-2 text-sm">总房屋数</p><h3 class="text-3xl font-bold mt-1" id="total-houses">0</h3></div><div class="w-12 h-12 rounded-full bg-primary/10 flex items-center justify-center"><i class="fa fa-building text-primary text-xl"></i></div></div><div class="mt-4 text-sm text-success flex items-center"><i class="fa fa-arrow-up mr-1"></i><span>较上月增长 12%</span></div></div><div class="bg-white rounded-xl p-6 shadow-md card-shadow card-hover"><div class="flex items-center justify-between"><div><p class="text-dark-2 text-sm">平均面积</p><h3 class="text-3xl font-bold mt-1" id="avg-area">0</h3><p class="text-dark-2 text-sm">平方米</p></div><div class="w-12 h-12 rounded-full bg-secondary/10 flex items-center justify-center"><i class="fa fa-arrows-alt text-secondary text-xl"></i></div></div><div class="mt-4 text-sm text-success flex items-center"><i class="fa fa-arrow-up mr-1"></i><span>较上月增长 5%</span></div></div><div class="bg-white rounded-xl p-6 shadow-md card-shadow card-hover"><div class="flex items-center justify-between"><div><p class="text-dark-2 text-sm">平均价格</p><h3 class="text-3xl font-bold mt-1" id="avg-price">0</h3><p class="text-dark-2 text-sm">万元</p></div><div class="w-12 h-12 rounded-full bg-accent/10 flex items-center justify-center"><i class="fa fa-rmb text-accent text-xl"></i></div></div><div class="mt-4 text-sm text-danger flex items-center"><i class="fa fa-arrow-down mr-1"></i><span>较上月下降 3%</span></div></div><div class="bg-white rounded-xl p-6 shadow-md card-shadow card-hover"><div class="flex items-center justify-between"><div><p class="text-dark-2 text-sm">有电梯比例</p><h3 class="text-3xl font-bold mt-1" id="elevator-ratio">0%</h3></div><div class="w-12 h-12 rounded-full bg-success/10 flex items-center justify-center"><i class="fa fa-area-chart text-success text-xl"></i></div></div><div class="mt-4 text-sm text-success flex items-center"><i class="fa fa-arrow-up mr-1"></i><span>较上月增长 8%</span></div></div></div><!-- 房屋列表 --><div class="bg-white rounded-xl shadow-md overflow-hidden"><div class="overflow-x-auto"><table class="min-w-full divide-y divide-light-2"><thead class="bg-light-1"><tr><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">ID</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">行政区</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">小区</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">户型</th><th 
scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">面积(㎡)</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">总价(万)</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">电梯</th><th scope="col" class="px-6 py-4 text-left text-xs font-medium text-dark-2 uppercase tracking-wider">操作</th></tr></thead><tbody id="house-list" class="bg-white divide-y divide-light-2"><!-- 房屋数据将通过JS动态填充 --><tr class="text-center"><td colspan="8" class="px-6 py-20 text-dark-2"><div class="flex flex-col items-center"><i class="fa fa-home text-4xl text-light-3 mb-4"></i><p>暂无房屋数据</p></div></td></tr></tbody></table></div><!-- 分页控件 --><div class="px-6 py-4 bg-white border-t border-light-2 flex items-center justify-between"><div class="flex-1 flex justify-between sm:hidden"><button id="prev-page-mobile" class="relative inline-flex items-center px-4 py-2 border border-light-2 text-sm font-medium rounded-md text-dark-2 bg-white hover:bg-light-1">上一页</button><button id="next-page-mobile" class="ml-3 relative inline-flex items-center px-4 py-2 border border-light-2 text-sm font-medium rounded-md text-dark-2 bg-white hover:bg-light-1">下一页</button></div><div class="hidden sm:flex-1 sm:flex sm:items-center sm:justify-between"><div><p class="text-sm text-dark-2">显示第 <span id="current-range">1 到 10</span> 条,共 <span id="total-count">0</span> 条记录</p></div><div><nav class="relative z-0 inline-flex rounded-md shadow-sm -space-x-px" aria-label="Pagination"><button id="first-page" class="relative inline-flex items-center px-2 py-2 rounded-l-md border border-light-2 bg-white text-sm font-medium text-dark-2 hover:bg-light-1"><span class="sr-only">首页</span><i class="fa fa-angle-double-left"></i></button><button id="prev-page" class="relative inline-flex items-center px-2 py-2 border border-light-2 bg-white text-sm font-medium text-dark-2 hover:bg-light-1"><span class="sr-only">上一页</span><i class="fa fa-angle-left"></i></button><span id="page-indicator" class="relative inline-flex items-center px-4 py-2 border border-light-2 bg-primary text-sm font-medium text-white">1</span><button id="next-page" class="relative inline-flex items-center px-2 py-2 border border-light-2 bg-white text-sm font-medium text-dark-2 hover:bg-light-1"><span class="sr-only">下一页</span><i class="fa fa-angle-right"></i></button><button id="last-page" class="relative inline-flex items-center px-2 py-2 rounded-r-md border border-light-2 bg-white text-sm font-medium text-dark-2 hover:bg-light-1"><span class="sr-only">末页</span><i class="fa fa-angle-double-right"></i></button></nav></div></div></div></div></main><!-- 页脚 --><footer class="bg-white border-t border-light-2 py-6"><div class="container mx-auto px-4 sm:px-6 lg:px-8"><div class="flex flex-col md:flex-row justify-between items-center"><p class="text-dark-2 text-sm">© 2025 房屋信息管理系统 chenjun. 
保留所有权利.</p><div class="flex space-x-6 mt-4 md:mt-0"><a href="#" class="text-dark-2 hover:text-primary transition-colors"><i class="fa fa-question-circle"></i> 帮助中心</a><a href="#" class="text-dark-2 hover:text-primary transition-colors"><i class="fa fa-file-text-o"></i> 使用文档</a><a href="#" class="text-dark-2 hover:text-primary transition-colors"><i class="fa fa-envelope-o"></i> 联系我们</a></div></div></div></footer><!-- 新增/编辑房屋模态框 --><div id="house-modal" class="fixed inset-0 bg-black bg-opacity-50 z-50 hidden flex items-center justify-center p-4"><div class="bg-white rounded-xl shadow-2xl w-full max-w-2xl max-h-[90vh] overflow-y-auto transform transition-all"><div class="p-6 border-b border-light-2"><div class="flex justify-between items-center"><h3 id="modal-title" class="text-xl font-bold text-dark">新增房屋</h3><button id="close-modal" class="text-dark-2 hover:text-dark transition-colors"><i class="fa fa-times text-xl"></i></button></div></div><form id="house-form" class="p-6 space-y-5"><input type="hidden" id="house-id"><div class="grid grid-cols-1 md:grid-cols-2 gap-5"><div><label for="district" class="block text-sm font-medium text-dark-2 mb-1">行政区 <span class="text-danger">*</span></label><input type="text" id="district" name="district" requiredclass="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="community" class="block text-sm font-medium text-dark-2 mb-1">所属小区 <span class="text-danger">*</span></label><input type="text" id="community" name="community" requiredclass="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="houseType" class="block text-sm font-medium text-dark-2 mb-1">房屋户型</label><input type="text" id="houseType" name="houseType"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="orientation" class="block text-sm font-medium text-dark-2 mb-1">房屋朝向</label><input type="text" id="orientation" name="orientation"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="floor" class="block text-sm font-medium text-dark-2 mb-1">所在楼层</label><input type="number" id="floor" name="floor"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="decoration" class="block text-sm font-medium text-dark-2 mb-1">装修程度</label><input type="text" id="decoration" name="decoration"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="elevator" class="block text-sm font-medium text-dark-2 mb-1">配套电梯</label><select id="elevator" name="elevator"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"><option value="">请选择</option><option value="有">有</option><option value="无">无</option></select></div><div><label for="area" class="block text-sm font-medium text-dark-2 mb-1">建筑面积(平方米)</label><input type="number" id="area" name="area"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 
focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="totalPrice" class="block text-sm font-medium text-dark-2 mb-1">房屋总价(元)</label><input type="number" id="totalPrice" name="totalPrice"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div><div><label for="buildYear" class="block text-sm font-medium text-dark-2 mb-1">建造年代</label><input type="number" id="buildYear" name="buildYear"class="w-full px-4 py-2 border border-light-2 rounded-lg focus:outline-none focus:ring-2 focus:ring-primary/30 focus:border-primary transition-all"></div></div><div class="pt-4 flex justify-end space-x-3 border-t border-light-2"><button type="button" id="cancel-form" class="px-6 py-2 border border-light-2 rounded-lg text-dark-2 hover:bg-light-1 transition-colors">取消</button><button type="submit" class="px-6 py-2 gradient-bg text-white rounded-lg shadow hover:shadow-md transition-all">保存</button></div></form></div></div><!-- 确认删除模态框 --><div id="delete-modal" class="fixed inset-0 bg-black bg-opacity-50 z-50 hidden flex items-center justify-center p-4"><div class="bg-white rounded-xl shadow-2xl w-full max-w-md transform transition-all"><div class="p-6 text-center"><div class="w-16 h-16 bg-danger/10 rounded-full flex items-center justify-center mx-auto mb-4"><i class="fa fa-exclamation-triangle text-danger text-2xl"></i></div><h3 class="text-xl font-bold text-dark mb-2">确认删除</h3><p class="text-dark-2 mb-6">您确定要删除这条房屋信息吗?此操作不可撤销。</p><input type="hidden" id="delete-id"><div class="flex justify-center space-x-3"><button id="cancel-delete" class="px-6 py-2 border border-light-2 rounded-lg text-dark-2 hover:bg-light-1 transition-colors">取消</button><button id="confirm-delete" class="px-6 py-2 bg-danger text-white rounded-lg shadow hover:bg-danger/90 transition-all">确认删除</button></div></div></div></div><!-- 通知提示 --><div id="toast" class="fixed top-20 right-4 z-50 hidden transform transition-all duration-300 translate-x-full"><div class="bg-white rounded-lg shadow-lg p-4 max-w-sm flex items-start"><i id="toast-icon" class="fa fa-check-circle text-success text-xl mt-0.5 mr-3"></i><div><h4 id="toast-title" class="font-medium text-dark">操作成功</h4><p id="toast-message" class="text-sm text-dark-2 mt-1">房屋信息已保存</p></div><button id="close-toast" class="ml-4 text-light-3 hover:text-dark-2"><i class="fa fa-times"></i></button></div></div><script>// 全局变量let houses = [];let currentPage = 1;const pageSize = 10;let totalPages = 1;let filteredHouses = [];// DOM元素const houseListEl = document.getElementById('house-list');const addHouseBtn = document.getElementById('add-house-btn');const houseModal = document.getElementById('house-modal');const closeModalBtn = document.getElementById('close-modal');const cancelFormBtn = document.getElementById('cancel-form');const houseForm = document.getElementById('house-form');const deleteModal = document.getElementById('delete-modal');const cancelDeleteBtn = document.getElementById('cancel-delete');const confirmDeleteBtn = document.getElementById('confirm-delete');const toast = document.getElementById('toast');const closeToastBtn = document.getElementById('close-toast');const refreshBtn = document.getElementById('refresh-btn');const mobileRefreshBtn = document.getElementById('mobile-refresh-btn');const mobileMenuButton = document.getElementById('mobile-menu-button');const mobileMenu = document.getElementById('mobile-menu');const themeToggle = 
document.getElementById('theme-toggle');const mobileThemeToggle = document.getElementById('mobile-theme-toggle');const searchInput = document.getElementById('search-input');const districtFilter = document.getElementById('district-filter');const elevatorFilter = document.getElementById('elevator-filter');// 分页元素const prevPageBtn = document.getElementById('prev-page');const nextPageBtn = document.getElementById('next-page');const firstPageBtn = document.getElementById('first-page');const lastPageBtn = document.getElementById('last-page');const prevPageMobileBtn = document.getElementById('prev-page-mobile');const nextPageMobileBtn = document.getElementById('next-page-mobile');const pageIndicator = document.getElementById('page-indicator');const currentRangeEl = document.getElementById('current-range');const totalCountEl = document.getElementById('total-count');// 统计元素const totalHousesEl = document.getElementById('total-houses');const avgAreaEl = document.getElementById('avg-area');const avgPriceEl = document.getElementById('avg-price');const elevatorRatioEl = document.getElementById('elevator-ratio');// API基础URL - 改为相对路径,将由Nginx代理处理const API_BASE_URL = '/api/houses';// 页面加载完成后初始化document.addEventListener('DOMContentLoaded', () => {fetchHouses();setupEventListeners();});// 设置所有事件监听器function setupEventListeners() {// 导航栏滚动效果window.addEventListener('scroll', () => {const navbar = document.getElementById('navbar');if (window.scrollY > 10) {navbar.classList.add('shadow-lg');navbar.classList.remove('shadow-md');} else {navbar.classList.remove('shadow-lg');navbar.classList.add('shadow-md');}});// 移动端菜单切换mobileMenuButton.addEventListener('click', () => {mobileMenu.classList.toggle('hidden');});// 新增房屋按钮addHouseBtn.addEventListener('click', () => {openModal();});// 关闭模态框closeModalBtn.addEventListener('click', closeModal);cancelFormBtn.addEventListener('click', closeModal);// 表单提交houseForm.addEventListener('submit', handleFormSubmit);// 关闭删除模态框cancelDeleteBtn.addEventListener('click', () => {deleteModal.classList.add('hidden');});// 确认删除confirmDeleteBtn.addEventListener('click', handleDeleteConfirm);// 关闭通知closeToastBtn.addEventListener('click', hideToast);// 刷新数据refreshBtn.addEventListener('click', refreshData);mobileRefreshBtn.addEventListener('click', () => {mobileMenu.classList.add('hidden');refreshData();});// 主题切换//themeToggle.addEventListener('click', toggleTheme);//mobileThemeToggle.addEventListener('click', () => {//  mobileMenu.classList.add('hidden');//  toggleTheme();//});// 搜索和筛选searchInput.addEventListener('input', applyFilters);districtFilter.addEventListener('change', applyFilters);elevatorFilter.addEventListener('change', applyFilters);// 分页按钮prevPageBtn.addEventListener('click', () => goToPage(currentPage - 1));nextPageBtn.addEventListener('click', () => goToPage(currentPage + 1));firstPageBtn.addEventListener('click', () => goToPage(1));lastPageBtn.addEventListener('click', () => goToPage(totalPages));prevPageMobileBtn.addEventListener('click', () => goToPage(currentPage - 1));nextPageMobileBtn.addEventListener('click', () => goToPage(currentPage + 1));}// 获取所有房屋数据function fetchHouses() {showLoading();axios.get(API_BASE_URL).then(response => {houses = response.data;applyFilters();updateStatistics();hideLoading();}).catch(error => {console.error('获取房屋数据失败:', error);showToast('错误', '获取房屋数据失败,请重试', 'error');hideLoading();});}// 应用筛选条件function applyFilters() {const searchTerm = searchInput.value.toLowerCase().trim();const districtValue = districtFilter.value;const elevatorValue = 
elevatorFilter.value;filteredHouses = houses.filter(house => {// 搜索词筛选const matchesSearch = searchTerm === '' || (house.district && house.district.toLowerCase().includes(searchTerm)) ||(house.community && house.community.toLowerCase().includes(searchTerm)) ||(house.houseType && house.houseType.toLowerCase().includes(searchTerm));// 行政区筛选const matchesDistrict = districtValue === '' || house.district === districtValue;// 电梯筛选const matchesElevator = elevatorValue === '' || house.elevator === elevatorValue;return matchesSearch && matchesDistrict && matchesElevator;});// 重置到第一页currentPage = 1;renderHouseList();updatePagination();}// 渲染房屋列表function renderHouseList() {if (filteredHouses.length === 0) {houseListEl.innerHTML = `<tr class="text-center"><td colspan="8" class="px-6 py-20 text-dark-2"><div class="flex flex-col items-center"><i class="fa fa-search text-4xl text-light-3 mb-4"></i><p>没有找到匹配的房屋数据</p></div></td></tr>`;return;}// 计算分页const startIndex = (currentPage - 1) * pageSize;const endIndex = Math.min(startIndex + pageSize, filteredHouses.length);const currentHouses = filteredHouses.slice(startIndex, endIndex);// 更新当前范围显示currentRangeEl.textContent = `${startIndex + 1} 到 ${endIndex}`;totalCountEl.textContent = filteredHouses.length;// 生成表格内容let html = '';currentHouses.forEach(house => {html += `<tr class="hover:bg-light-1/50 transition-colors"><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.id}</td><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.district || '-'}</td><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.community || '-'}</td><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.houseType || '-'}</td><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.area || '-'}</td><td class="px-6 py-4 whitespace-nowrap text-sm text-dark">${house.totalPrice ? (house.totalPrice / 10000).toFixed(1) : '-'}</td><td class="px-6 py-4 whitespace-nowrap text-sm"><span class="px-2 inline-flex text-xs leading-5 font-semibold rounded-full ${house.elevator === '有' ? 
'bg-success/10 text-success' : 'bg-light-2 text-dark-2'}">${house.elevator || '-'}</span></td><td class="px-6 py-4 whitespace-nowrap text-sm font-medium"><button onclick="editHouse(${house.id})" class="text-primary hover:text-primary/80 mr-3 transition-colors"><i class="fa fa-pencil"></i> 编辑</button><button onclick="deleteHouse(${house.id})" class="text-danger hover:text-danger/80 transition-colors"><i class="fa fa-trash"></i> 删除</button></td></tr>`;});houseListEl.innerHTML = html;}// 更新分页控件function updatePagination() {totalPages = Math.ceil(filteredHouses.length / pageSize);pageIndicator.textContent = currentPage;// 禁用/启用分页按钮prevPageBtn.disabled = currentPage === 1;firstPageBtn.disabled = currentPage === 1;prevPageMobileBtn.disabled = currentPage === 1;nextPageBtn.disabled = currentPage === totalPages || totalPages === 0;lastPageBtn.disabled = currentPage === totalPages || totalPages === 0;nextPageMobileBtn.disabled = currentPage === totalPages || totalPages === 0;if (prevPageBtn.disabled) {prevPageBtn.classList.add('opacity-50', 'cursor-not-allowed');firstPageBtn.classList.add('opacity-50', 'cursor-not-allowed');prevPageMobileBtn.classList.add('opacity-50', 'cursor-not-allowed');} else {prevPageBtn.classList.remove('opacity-50', 'cursor-not-allowed');firstPageBtn.classList.remove('opacity-50', 'cursor-not-allowed');prevPageMobileBtn.classList.remove('opacity-50', 'cursor-not-allowed');}if (nextPageBtn.disabled) {nextPageBtn.classList.add('opacity-50', 'cursor-not-allowed');lastPageBtn.classList.add('opacity-50', 'cursor-not-allowed');nextPageMobileBtn.classList.add('opacity-50', 'cursor-not-allowed');} else {nextPageBtn.classList.remove('opacity-50', 'cursor-not-allowed');lastPageBtn.classList.remove('opacity-50', 'cursor-not-allowed');nextPageMobileBtn.classList.remove('opacity-50', 'cursor-not-allowed');}}// 跳转到指定页function goToPage(page) {if (page < 1 || page > totalPages) return;currentPage = page;renderHouseList();updatePagination();// 滚动到列表顶部houseListEl.scrollIntoView({ behavior: 'smooth', block: 'start' });}// 更新统计数据function updateStatistics() {if (houses.length === 0) {totalHousesEl.textContent = '0';avgAreaEl.textContent = '0';avgPriceEl.textContent = '0';elevatorRatioEl.textContent = '0%';return;}// 总房屋数totalHousesEl.textContent = houses.length;// 平均面积const totalArea = houses.reduce((sum, house) => sum + (house.area || 0), 0);avgAreaEl.textContent = (totalArea / houses.length).toFixed(1);// 平均价格(万元)const totalPrice = houses.reduce((sum, house) => sum + (house.totalPrice || 0), 0);avgPriceEl.textContent = ((totalPrice / houses.length) / 10000).toFixed(1);// 有电梯比例const elevatorCount = houses.filter(house => house.elevator === '有').length;const elevatorRatio = (elevatorCount / houses.length) * 100;elevatorRatioEl.textContent = elevatorRatio.toFixed(0) + '%';// 动态添加行政区选项const districts = [...new Set(houses.map(house => house.district).filter(Boolean))];districtFilter.innerHTML = '<option value="">所有行政区</option>';districts.forEach(district => {const option = document.createElement('option');option.value = district;option.textContent = district;districtFilter.appendChild(option);});}// 打开模态框function openModal(house = null) {// 重置表单houseForm.reset();document.getElementById('house-id').value = '';// 如果是编辑模式,填充表单if (house) {document.getElementById('modal-title').textContent = '编辑房屋';document.getElementById('house-id').value = house.id;document.getElementById('district').value = house.district || '';document.getElementById('community').value = house.community || 
'';document.getElementById('houseType').value = house.houseType || '';document.getElementById('orientation').value = house.orientation || '';document.getElementById('floor').value = house.floor || '';document.getElementById('decoration').value = house.decoration || '';document.getElementById('elevator').value = house.elevator || '';document.getElementById('area').value = house.area || '';document.getElementById('totalPrice').value = house.totalPrice || '';document.getElementById('buildYear').value = house.buildYear || '';} else {document.getElementById('modal-title').textContent = '新增房屋';}// 显示模态框houseModal.classList.remove('hidden');// 添加动画效果setTimeout(() => {const modalContent = houseModal.querySelector('div[class*="rounded-xl"]');modalContent.classList.add('scale-100');modalContent.classList.remove('scale-95', 'opacity-0');}, 10);}// 关闭模态框function closeModal() {const modalContent = houseModal.querySelector('div[class*="rounded-xl"]');modalContent.classList.add('scale-95', 'opacity-0');modalContent.classList.remove('scale-100');setTimeout(() => {houseModal.classList.add('hidden');}, 300);}// 处理表单提交function handleFormSubmit(e) {e.preventDefault();const houseId = document.getElementById('house-id').value;const houseData = {district: document.getElementById('district').value,community: document.getElementById('community').value,houseType: document.getElementById('houseType').value,orientation: document.getElementById('orientation').value,floor: document.getElementById('floor').value ? parseInt(document.getElementById('floor').value) : null,decoration: document.getElementById('decoration').value,elevator: document.getElementById('elevator').value,area: document.getElementById('area').value ? parseInt(document.getElementById('area').value) : null,totalPrice: document.getElementById('totalPrice').value ? parseInt(document.getElementById('totalPrice').value) : null,buildYear: document.getElementById('buildYear').value ? parseInt(document.getElementById('buildYear').value) : null};// 如果有ID,则是更新操作,否则是新增操作const isEdit = !!houseId;const url = isEdit ? `${API_BASE_URL}/${houseId}` : API_BASE_URL;const method = isEdit ? 'put' : 'post';showLoading();axios[method](url, houseData).then(response => {closeModal();fetchHouses();showToast(isEdit ? '更新成功' : '新增成功',isEdit ? '房屋信息已更新' : '房屋信息已添加','success');hideLoading();}).catch(error => {console.error(isEdit ? '更新房屋失败:' : '新增房屋失败:', error);showToast('操作失败',isEdit ? 
'更新房屋信息失败,请重试' : '新增房屋信息失败,请重试','error');hideLoading();});}// 编辑房屋window.editHouse = function(id) {const house = houses.find(h => h.id === id);if (house) {openModal(house);}};// 删除房屋window.deleteHouse = function(id) {document.getElementById('delete-id').value = id;deleteModal.classList.remove('hidden');};// 确认删除function handleDeleteConfirm() {const id = document.getElementById('delete-id').value;showLoading();axios.delete(`${API_BASE_URL}/${id}`).then(response => {deleteModal.classList.add('hidden');fetchHouses();showToast('删除成功', '房屋信息已删除', 'success');hideLoading();}).catch(error => {console.error('删除房屋失败:', error);showToast('删除失败', '删除房屋信息失败,请重试', 'error');hideLoading();});}// 显示通知function showToast(title, message, type = 'success') {document.getElementById('toast-title').textContent = title;document.getElementById('toast-message').textContent = message;const iconEl = document.getElementById('toast-icon');iconEl.className = '';if (type === 'success') {iconEl.classList.add('fa', 'fa-check-circle', 'text-success', 'text-xl', 'mt-0.5', 'mr-3');} else if (type === 'error') {iconEl.classList.add('fa', 'fa-exclamation-circle', 'text-danger', 'text-xl', 'mt-0.5', 'mr-3');} else if (type === 'warning') {iconEl.classList.add('fa', 'fa-exclamation-triangle', 'text-warning', 'text-xl', 'mt-0.5', 'mr-3');}toast.classList.remove('hidden', 'translate-x-full');toast.classList.add('translate-x-0');// 3秒后自动关闭setTimeout(hideToast, 3000);}// 隐藏通知function hideToast() {toast.classList.add('translate-x-full');setTimeout(() => {toast.classList.add('hidden');}, 300);}// 刷新数据 - 修复了then()错误function refreshData() {refreshBtn.classList.add('animate-spin');mobileRefreshBtn.classList.add('animate-spin');// 修复:fetchHouses不返回Promise,所以直接调用并在内部处理完成后移除动画fetchHouses();}// 切换主题function toggleTheme() {document.body.classList.toggle('dark-mode');const iconEl = themeToggle.querySelector('i');const mobileIconEl = mobileThemeToggle.querySelector('i');if (document.body.classList.contains('dark-mode')) {iconEl.classList.remove('fa-moon-o');iconEl.classList.add('fa-sun-o');mobileIconEl.classList.remove('fa-moon-o');mobileIconEl.classList.add('fa-sun-o');// 这里可以添加暗色主题的CSS变量设置} else {iconEl.classList.remove('fa-sun-o');iconEl.classList.add('fa-moon-o');mobileIconEl.classList.remove('fa-sun-o');mobileIconEl.classList.add('fa-moon-o');// 恢复亮色主题}}// 显示加载状态function showLoading() {document.body.classList.add('cursor-wait');}// 隐藏加载状态function hideLoading() {document.body.classList.remove('cursor-wait');refreshBtn.classList.remove('animate-spin');mobileRefreshBtn.classList.remove('animate-spin');}</script>
</body>
</html>

12.3 访问Nginx VIP进行验证

浏览器访问 http://192.168.121.88(Nginx VIP)

页面能加载出数据,说明前后端已经打通;接下来测试增删改查功能。

测试删除功能:删除前记录总数为 9994。
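
删除操作同样可以用 curl 直接验证,示例如下(ID 9994 仅为占位,需替换为列表中真实存在的记录 ID):

# 删除一条记录,再粗略统计剩余总数(统计 JSON 中 "id" 字段出现次数)
curl -X DELETE http://192.168.121.88/api/houses/9994
curl -s http://192.168.121.88/api/houses | grep -o '"id":' | wc -l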

13 监控系统部署

13.1 部署 Prometheus 和 Grafana

在 monitor-server (192.168.121.125) 上操作:

# 创建数据目录
[root@monitor-server monitor]# mkdir -p /data/prometheus/data
[root@monitor-server monitor]# mkdir -p /data/grafana/data
[root@monitor-server monitor]# mkdir -p /data/monitor/config

# 创建Prometheus配置
[root@monitor-server monitor]# vim /data/monitor/config/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'mysql'
    static_configs:
      - targets: ['192.168.121.221:9104', '192.168.121.222:9104', '192.168.121.223:9104']
        labels:
          group: 'mysql-cluster'

  - job_name: 'mycat'
    static_configs:
      - targets: ['192.168.121.180:9100', '192.168.121.190:9100']
        labels:
          group: 'mycat-cluster'

  - job_name: 'redis'
    static_configs:
      - targets: ['192.168.121.171:9121', '192.168.121.172:9121', '192.168.121.173:9121']
        labels:
          group: 'redis-cluster'

  - job_name: 'node'
    static_configs:
      - targets: ['192.168.121.180:9100', '192.168.121.190:9100', '192.168.121.220:9100', '192.168.121.221:9100', '192.168.121.222:9100', '192.168.121.223:9100', '192.168.121.210:9100', '192.168.121.66:9100', '192.168.121.125:9100', '192.168.121.171:9100', '192.168.121.172:9100', '192.168.121.173:9100', '192.168.121.70:9100', '192.168.121.71:9100', '192.168.121.80:9100', '192.168.121.81:9100']
        labels:
          group: 'nodes'
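
启动前可以先校验配置文件语法,借助镜像里自带的 promtool,避免容器起不来后再排查(示例命令,镜像地址与前文一致):

# 用 promtool 校验 prometheus.yml 语法
docker run --rm \
  -v /data/monitor/config/prometheus.yml:/prometheus.yml \
  --entrypoint promtool \
  docker.1ms.run/prom/prometheus:v2.33.5 check config /prometheus.yml
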
# 创建Docker Compose配置
[root@monitor-server monitor]# vim /data/monitor/docker-compose.yml
version: '3'

services:
  prometheus:
    image: docker.1ms.run/prom/prometheus:v2.33.5
    container_name: prometheus
    restart: always
    volumes:
      - /data/prometheus/data:/prometheus
      - /data/monitor/config/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--web.enable-lifecycle'
    network_mode: host

  grafana:
    image: docker.1ms.run/grafana/grafana:8.4.5
    container_name: grafana
    restart: always
    volumes:
      - /data/grafana/data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=123456
      - GF_USERS_ALLOW_SIGN_UP=false
    network_mode: host
    depends_on:
      - prometheus

  node-exporter:
    image: docker.1ms.run/prom/node-exporter:v1.3.1
    container_name: node-exporter
    restart: always
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
    network_mode: host

  mysql-exporter:
    image: docker.1ms.run/prom/mysqld-exporter:v0.13.0
    container_name: mysql-exporter
    restart: always
    environment:
      - DATA_SOURCE_NAME=root:123456@(localhost:3306)/
    network_mode: host

  redis-exporter:
    image: docker.1ms.run/oliver006/redis_exporter:v1.33.0
    container_name: redis-exporter
    restart: always
    environment:
      - REDIS_ADDR=redis://localhost:6379
      - REDIS_PASSWORD=123456
    network_mode: host

# 启动服务
[root@monitor-server monitor]# cd /data/monitor
[root@monitor-server monitor]# docker-compose up -d
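
启动后建议做一轮基础验证,以下 curl 均在能访问 monitor-server 的机器上执行(端口为 Prometheus/Grafana 默认端口):

# Prometheus 存活检查与抓取目标健康状态统计
curl http://192.168.121.125:9090/-/healthy
curl -s http://192.168.121.125:9090/api/v1/targets | grep -o '"health":"[a-z]*"' | sort | uniq -c

# Grafana 登录页(默认端口 3000,密码见 compose 中的 GF_SECURITY_ADMIN_PASSWORD)
curl -I http://192.168.121.125:3000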

13.2 在其他节点部署 Exporter

# 创建Ansible Playbook
[root@ansible-server ansible]# mkdir -p /data/ansible/roles/exporter/tasks
[root@ansible-server ansible]# cd /data/ansible/roles/exporter/tasks
[root@ansible-server tasks]# vim main.yml
- name: 启动node-exporter容器
  docker_container:
    name: node-exporter
    image: docker.1ms.run/prom/node-exporter:v1.3.1
    state: started
    restart_policy: always
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      # 注意:这里直接写 $,不需要 docker-compose 里的 $$ 转义
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($|/)'
    network_mode: host

# 创建主Playbook
[root@ansible-server ansible]# cd /data/ansible
[root@ansible-server ansible]# vim deploy_exporter.yml
- hosts: all
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
  tasks:
    - include_role:
        name: exporter

# 执行部署
[root@ansible-server ansible]# ansible-playbook deploy_exporter.yml

PLAY [all] ******************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************
ok: [192.168.121.172]
ok: [192.168.121.220]
ok: [192.168.121.125]
ok: [192.168.121.173]
ok: [192.168.121.171]
ok: [192.168.121.223]
ok: [192.168.121.221]
ok: [192.168.121.180]
ok: [192.168.121.222]
ok: [192.168.121.190]
ok: [192.168.121.66]
ok: [192.168.121.210]

TASK [include_role : exporter] **********************************************************************************************************

TASK [启动node-exporter容器] ****************************************************************************************************************
ok: [192.168.121.172]
ok: [192.168.121.171]
ok: [192.168.121.173]
ok: [192.168.121.220]
ok: [192.168.121.125]
ok: [192.168.121.180]
ok: [192.168.121.222]
ok: [192.168.121.190]
ok: [192.168.121.223]
ok: [192.168.121.221]
ok: [192.168.121.66]
ok: [192.168.121.210]
ok: [192.168.121.70]
ok: [192.168.121.71]
ok: [192.168.121.80]
ok: [192.168.121.81]
PLAY RECAP ******************************************************************************************************************************
192.168.121.125            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.171            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.172            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.173            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.180            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.190            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.210            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.220            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.221            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.222            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.223            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.66             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.70             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.71             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.80             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.81             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

如遇到以下错误,是因为目标服务器没有安装 python3:

TASK [Gathering Facts] *******************************************************************
fatal: [192.168.121.70]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"setup": {"failed": true, "module_stderr": "Shared connection to 192.168.121.70 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3.6: 没有那个文件或目录\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "The following modules failed to execute: setup\n"}
fatal: [192.168.121.71]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"setup": {"failed": true, "module_stderr": "Shared connection to 192.168.121.71 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3.6: 没有那个文件或目录\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}}, "msg": "The following modules failed to execute: setup\n"}

执行以下命令一键安装

ansible 主机组 -m shell -a 'yum install python3 -y'

如果遇到以下错误,是缺少 Docker SDK for Python 和 requests 模块:

fatal: [192.168.121.70]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker (Python >= 2.7) or docker-py (Python 2.6)) on nginx1's Python /usr/bin/python3.6. Please read module documentation and install in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` or `pip install docker-py` (Python 2.6). The error was: No module named 'requests'"}

ansible 主机组 -m shell -a 'pip3 install docker requests'
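
环境问题排查完、playbook 跑通之后,可任选一台节点抽查 node-exporter 的指标输出(默认监听 9100,以下以 nginx1 为例):

# 指标接口能返回 node_load1 等指标即说明采集正常
curl -s http://192.168.121.70:9100/metrics | grep '^node_load1'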

在 MySQL 节点部署 mysql-exporter:

# 创建Ansible Playbook
[root@ansible-server ansible]# cd /data/ansible
[root@ansible-server ansible]# vim deploy_mysql_exporter.yml
- hosts: mysqlvars:ansible_python_interpreter: /usr/bin/python3.6tasks:- name: 启动mysql-exporter容器docker_container:name: mysql-exporterimage: docker.1ms.run/prom/mysqld-exporter:v0.13.0state: startedrestart_policy: alwaysenv:DATA_SOURCE_NAME: "root:123456@(localhost:3306)/"network_mode: host# 执行部署
[root@ansible-server ansible]# ansible-playbook deploy_mysql_exporter.yml

PLAY [mysql] *************************************************************************

TASK [Gathering Facts] ***************************************************************
ok: [192.168.121.222]
ok: [192.168.121.223]
ok: [192.168.121.221]

TASK [Start the mysql-exporter container] ********************************************
ok: [192.168.121.221]
ok: [192.168.121.223]
ok: [192.168.121.222]

PLAY RECAP ***************************************************************************
192.168.121.221            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.222            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.223            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Deploy redis-exporter on the Redis nodes:

[root@ansible-server ansible]# vim deploy_redis_exporter.yml
- hosts: redis
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
  tasks:
    - name: Start the redis-exporter container
      docker_container:
        name: redis-exporter
        image: docker.1ms.run/oliver006/redis_exporter:v1.33.0
        state: started
        restart_policy: always
        env:
          REDIS_ADDR: "redis://localhost:6379"
          REDIS_PASSWORD: "123456"
        network_mode: host

[root@ansible-server ansible]# ansible-playbook deploy_redis_exporter.yml

PLAY [redis] *************************************************************************

TASK [Gathering Facts] ***************************************************************
ok: [192.168.121.172]
ok: [192.168.121.173]
ok: [192.168.121.171]

TASK [Start the redis-exporter container] ********************************************
ok: [192.168.121.172]
ok: [192.168.121.173]
ok: [192.168.121.171]

PLAY RECAP ***************************************************************************
192.168.121.171            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.172            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.121.173            : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
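
Before wiring the exporters into Prometheus, it is worth confirming that each endpoint actually serves metrics; mysqld-exporter listens on 9104 and redis_exporter on 9121 by default. A quick spot check against one node of each type:

# Spot-check the exporter endpoints
curl -s http://192.168.121.221:9104/metrics | head -n 5
curl -s http://192.168.121.171:9121/metrics | head -n 5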

13.3 Configure the Grafana Dashboards

  1. Open Grafana at http://192.168.121.125:3000 (username admin, password 123456)

  2. Add the Prometheus data source:

    • Click Configuration > Data Sources > Add data source

    • Select Prometheus

    • Set URL to http://localhost:9090

    • Click Save & Test

  3. Import the dashboards:

    • Click Create > Import

    • Import the MySQL dashboard (ID: 7362)

    • Import the Redis dashboard (ID: 763)

    • Import the server/node dashboard (ID: 1860)
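
As an alternative to clicking through the UI, the data source can also be declared in a Grafana provisioning file. A minimal sketch, assuming the file is mounted into the container at Grafana's standard provisioning path:

# /etc/grafana/provisioning/datasources/prometheus.yml (inside the Grafana container)
apiVersion: 1
datasources:
  - name: Prometheus          # name shown in the data source list
    type: prometheus
    access: proxy             # Grafana proxies queries to the backend
    url: http://localhost:9090
    isDefault: true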

13.4 Configure Alertmanager Email Notifications with QQ Mail

We deploy Alertmanager on monitor-server (192.168.121.125), where it joins the existing Prometheus + Grafana + ELK stack to form a complete monitoring and alerting system.

13.4.1 Deployment Plan

Component      Location         IP address        Port   Function
Alertmanager   monitor-server   192.168.121.125   9093   receives alerts from Prometheus and sends email notifications

13.4.2 Deployment Steps

1. Enable the QQ Mail SMTP service
  1. Log in to QQ Mail (mail.qq.com)

  2. Click Account & Security at the top

  3. Click Security Settings

  4. Enable the SMTP service (tick "Enable" and follow the prompts; SMS verification may be required)

2. Obtain the SMTP authorization code
  • After enabling SMTP, click Generate authorization code
  • Send the SMS from the bound phone as prompted; after verification you receive a 16-character string (e.g. abcdefghijklmnop) or a phone verification code
  • Save the authorization code (it is shown only once and is needed in the configuration below)
3. Prepare the working directory on monitor-server
# Create the Alertmanager directories
mkdir -p /data/monitor/alertmanager/{conf,data,templates}
cd /data/monitor/alertmanager
4. Write the Alertmanager configuration file

Create /data/monitor/alertmanager/conf/alertmanager.yml

[root@monitor-server conf]# vim alertmanager.yml
# Global settings: base parameters shared by all alert handling
global:
  resolve_timeout: 5m  # an alert flips from "firing" to "resolved" if not updated for 5 minutes
  # Mail settings (QQ Mail)
  smtp_from: '123456@qq.com'  # source address for alert mails
  smtp_smarthost: 'smtp.qq.com:587'  # SMTP server and port (QQ Mail SMTP uses 587)
  smtp_auth_username: '123456@qq.com'  # SMTP user (usually the same as the sender)
  smtp_auth_password: '**********'  # SMTP password (use the QQ Mail SMTP authorization code, not the login password)
  smtp_require_tls: true  # force TLS to the SMTP server (required by QQ Mail)
  smtp_hello: 'qq.com'  # domain used in the SMTP handshake

# Routing: how alerts are grouped and dispatched
route:
  group_by: ['alertname', 'cluster', 'service']  # alerts sharing these labels are merged into one notification
  group_wait: 30s  # wait 30 s after the first alert to batch others in the same group
  group_interval: 5m  # minimum interval between sends for the same group
  repeat_interval: 30m  # re-send interval for an alert that stays unresolved
  receiver: 'operations-team'  # default receiver when no sub-route matches
  # Sub-routes (matched in order; earlier rules win)
  routes:
  - match:  # alerts labeled severity=critical
      severity: critical
    receiver: 'operations-team'
    continue: true  # keep evaluating later routes (one alert may reach several receivers)
  - match:  # alerts labeled severity=warning
      severity: warning
    receiver: 'dev-team'

# Receivers: how notifications are delivered (email here)
receivers:
- name: 'operations-team'  # name must match the references in the route section
  email_configs:
  - to: '123456@qq.com'  # target mailbox
    send_resolved: true  # also notify when an alert goes from "firing" to "resolved"
    headers: { subject: '[CRITICAL] {{ .CommonLabels.alertname }}' }  # subject built from template variables
    html: '{{ template "email.html" . }}'  # body rendered from the custom "email.html" template
- name: 'dev-team'  # second receiver, for lower-severity alerts
  email_configs:
  - to: '123456@qq.com'  # currently the same mailbox as operations-team; use a different one in practice
    send_resolved: true
    headers: { subject: '[WARNING] {{ .CommonLabels.alertname }}' }
    html: '{{ template "email.html" . }}'  # shares the same mail template

# Template files: where the notification templates live
templates:
- '/etc/alertmanager/templates/email.tmpl'  # container-internal path (must exist and be readable by Alertmanager)

# Inhibition rules: cut redundant notifications
inhibit_rules:
- source_match:  # while a critical alert is firing...
    severity: 'critical'
  target_match:  # ...suppress warning alerts
    severity: 'warning'
  equal: ['alertname', 'dev', 'instance']  # only when these label values match on both alerts
# Effect: once a severe alert fires for an object (e.g. the same instance), matching minor alerts are muted to reduce noise
5. Create the email template

Create /data/monitor/alertmanager/templates/email.tmpl

[root@monitor-server templates]# vim /data/monitor/alertmanager/templates/email.tmpl 
{{ define "email.html" }}
<html>
<head>
  <meta charset="UTF-8">
  <title>{{ .Status | toUpper }}: {{ .CommonLabels.alertname }}</title>
</head>
<body>
  <h2 style="color: {{ if eq .Status "firing" }}red{{ else }}green{{ end }};">{{ .Status | toUpper }}: {{ .CommonLabels.alertname }}</h2>
  <table border="1">
    <tr><th>Label</th><th>Value</th></tr>
    {{ range .CommonLabels.SortedPairs }}
    <tr><td>{{ .Name }}</td><td>{{ .Value }}</td></tr>
    {{ end }}
  </table>
  <p><strong>Description:</strong> {{ .CommonAnnotations.description }}</p>
  {{/* Iterate over the alert list for per-alert timing info */}}
  {{ range .Alerts }}
  <p><strong>Started at:</strong> {{ .StartsAt.Format "2006-01-02 15:04:05" }}</p>
  {{ if eq .Status "resolved" }}
  <p><strong>Resolved at:</strong> {{ .EndsAt.Format "2006-01-02 15:04:05" }}</p>
  {{ end }}
  {{ end }}
  <p>Details: <a href="http://192.168.121.125:3000">Grafana dashboard</a></p>
</body>
</html>
{{ end }}
6. Write the Docker Compose configuration

Create /data/monitor/alertmanager/docker-compose.yml

[root@monitor-server alertmanager]# vim /data/monitor/alertmanager/docker-compose.yml 
version: '3.8'

services:
  alertmanager:
    image: docker.1ms.run/prom/alertmanager:v0.26.0
    container_name: alertmanager
    restart: always
    ports:
      - "9093:9093"
    volumes:
      - /data/monitor/alertmanager/conf/alertmanager.yml:/etc/alertmanager/alertmanager.yml
      - /data/monitor/alertmanager/data:/alertmanager
      - /data/monitor/alertmanager/templates:/etc/alertmanager/templates
    environment:
      - TZ=Asia/Shanghai
    networks:
      - monitor-network
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
      - '--storage.path=/alertmanager'
      - '--web.listen-address=:9093'

networks:
  monitor-network:
    driver: bridge
7. Start the Alertmanager container
# Start the container
[root@monitor-server alertmanager]# docker-compose up -d
# Check the status
[root@monitor-server alertmanager]# docker-compose ps
# Tail the logs
[root@monitor-server alertmanager]# docker-compose logs -f alertmanager
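
Before relying on the mail path, you can validate the configuration and templates with amtool, which is bundled in the official alertmanager image (a quick sanity check; adjust the container name if yours differs):

[root@monitor-server alertmanager]# docker exec alertmanager amtool check-config /etc/alertmanager/alertmanager.yml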

13.4.3 Integrate Prometheus with Alertmanager

On monitor-server, edit the Prometheus configuration and add the Alertmanager address:

1. Edit the Prometheus configuration file
[root@monitor-server config]# vim /data/monitor/config/prometheus.yml 
global:
  scrape_interval: 15s
  evaluation_interval: 15s
-------------------------- content to add --------------------------
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - 192.168.121.125:9093

rule_files:
  - "/prometheus/rules/*.yml"
------------------------------- end --------------------------------
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'mysql'
    static_configs:
      - targets: ['192.168.121.221:9104', '192.168.121.222:9104', '192.168.121.223:9104']
        labels:
          group: 'mysql-cluster'
  - job_name: 'mycat'
    static_configs:
      - targets: ['192.168.121.180:9100', '192.168.121.190:9100']
        labels:
          group: 'mycat-cluster'
  - job_name: 'redis'
    static_configs:
      - targets: ['192.168.121.171:9121', '192.168.121.172:9121', '192.168.121.173:9121']
        labels:
          group: 'redis-cluster'
  - job_name: 'node'
    static_configs:
      - targets: ['192.168.121.180:9100', '192.168.121.190:9100', '192.168.121.220:9100', '192.168.121.221:9100', '192.168.121.222:9100', '192.168.121.223:9100', '192.168.121.210:9100', '192.168.121.66:9100', '192.168.121.125:9100', '192.168.121.171:9100', '192.168.121.172:9100', '192.168.121.173:9100', '192.168.121.70:9100', '192.168.121.71:9100', '192.168.121.80:9100', '192.168.121.81:9100']
        labels:
          group: 'nodes'

2. Create the key alert rules (critical_alerts.yml under the rules directory):
[root@monitor-server data]# cd /data/prometheus/data
[root@monitor-server data]# mkdir rules
[root@monitor-server data]# vim rules/critical_alerts.yml
groups:
- name: critical_alerts
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 20s  # extended duration to avoid false positives from network jitter
    labels:
      severity: critical
      job: node  # explicitly tied to the node monitoring job
    annotations:
      summary: "Node {{ $labels.node_name }} ({{ $labels.node_ip }}) is down"
      description: "Node IP {{ $labels.node_ip }} has been down for more than 20 seconds, handle immediately!\nInstance: {{ $labels.instance }}"
  - alert: HighCPUUsage
    expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} CPU usage is high"
      description: "CPU usage has been above 85% for 10 minutes, current value: {{ $value | humanizePercentage }}"
  - alert: HighMemoryUsage
    expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100 > 90
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} memory usage is high"
      description: "Memory usage has been above 90% for 10 minutes, current value: {{ $value | humanizePercentage }}"
  - alert: MySQLDown
    expr: mysql_up == 0
    for: 20s
    labels:
      severity: critical
    annotations:
      summary: "MySQL instance {{ $labels.instance }} is down"
      description: "The MySQL database is down, handle immediately!"
  - alert: RedisDown
    expr: redis_up == 0
    for: 20s
    labels:
      severity: critical
    annotations:
      summary: "Redis instance {{ $labels.instance }} is down"
      description: "The Redis service is down, handle immediately!"
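
Optionally, the rule file can be validated with promtool (bundled in the Prometheus image) before restarting. The path below assumes the rules directory is mounted into the container at /prometheus/rules, matching the rule_files entry in prometheus.yml:

[root@monitor-server data]# docker exec prometheus promtool check rules /prometheus/rules/critical_alerts.yml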
3. Restart Prometheus to apply the configuration:
[root@monitor-server data]# docker restart prometheus

13.4.4 Verify the Deployment

1. Open the Alertmanager web UI:
http://192.168.121.125:9093

2. Simulate a crash of redis1 and watch for the alert email
[root@redis1 ~]# docker stop redis
redis
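
If you would rather not stop a real service, a synthetic alert can be pushed straight to Alertmanager's v2 API to exercise the mail path (the alert name and labels below are made up purely for testing):

curl -X POST http://192.168.121.125:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[{"labels":{"alertname":"TestAlert","severity":"critical"},"annotations":{"description":"synthetic test alert, please ignore"}}]'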

14 ELK Log System Deployment

14.1 ELK Components and Version Plan

Component       Role                                                Version   Node
Elasticsearch   distributed log storage and search engine           7.17.0    monitor-server (192.168.121.125)
Logstash        log collection, filtering and forwarding pipeline   7.17.0    monitor-server (192.168.121.125)
Kibana          log visualization and analysis platform             7.17.0    monitor-server (192.168.121.125)
Filebeat        lightweight log shipper (runs on each node)         7.17.0    all business nodes (MySQL/MyCat/Redis, etc.)

14.2 ELK Deployment (Docker Compose)

14.2.1 Environment Preparation (monitor-server only)

(1) Tune kernel parameters (required by Elasticsearch)
# Kernel parameters needed by Elasticsearch
[root@monitor-server conf]# cat >> /etc/sysctl.conf << EOF
# Elasticsearch kernel settings
vm.max_map_count=262144
fs.file-max=655360
EOF

# Apply the settings
[root@monitor-server conf]# sysctl -p

# Configure file descriptor limits
[root@monitor-server conf]# cat >> /etc/security/limits.conf << EOF
# Elasticsearch file descriptor limits
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
elasticsearch soft nproc 4096
elasticsearch hard nproc 4096
EOF
(2) Create the ELK data and config directories
# Data directories (persist Elasticsearch indices and logs)
[root@monitor-server conf]# mkdir -p /data/elk/elasticsearch/data
[root@monitor-server conf]# mkdir -p /data/elk/elasticsearch/logs
[root@monitor-server conf]# mkdir -p /data/elk/logstash/conf
[root@monitor-server conf]# mkdir -p /data/elk/logstash/pipeline
[root@monitor-server conf]# mkdir -p /data/elk/kibana/data
[root@monitor-server conf]# mkdir -p /data/elk/filebeat/conf

14.2.2 Write the ELK Docker Compose Configuration

In /data/monitor/, extend the existing docker-compose.yml with the ELK services (they coexist with Prometheus and Grafana):

cd /data/monitor
vi docker-compose.yml

# Add the following under the services key
services:
  # The existing Prometheus/Grafana/exporter services stay unchanged; add the ELK services below:
  ###########################################################################
  # Elasticsearch: log storage and search
  ###########################################################################
  elasticsearch:
    image: docker.1ms.run/elasticsearch:7.17.0
    container_name: elk-elasticsearch
    restart: always
    environment:
      - discovery.type=single-node  # single-node mode (use a cluster in production)
      - ES_JAVA_OPTS=-Xms2g -Xmx2g  # JVM heap (adjust to the node's memory)
      - bootstrap.memory_lock=true  # lock memory (avoid swapping)
      - xpack.security.enabled=false  # security disabled (test environment; enable in production)
    volumes:
      - /data/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
    network_mode: host
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9200/_cluster/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  ###########################################################################
  # Logstash: log filtering and forwarding
  ###########################################################################
  logstash:
    image: docker.1ms.run/logstash:7.17.0
    container_name: elk-logstash
    restart: always
    environment:
      - LS_JAVA_OPTS=-Xms1g -Xmx1g  # JVM heap
    volumes:
      - /data/elk/logstash/conf/logstash.yml:/usr/share/logstash/config/logstash.yml
      - /data/elk/logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5044:5044"  # Filebeat input port
      - "9600:9600"
    depends_on:
      elasticsearch:
        condition: service_healthy  # start only after Elasticsearch is healthy

  ###########################################################################
  # Kibana: log visualization
  ###########################################################################
  kibana:
    image: docker.1ms.run/kibana:7.17.0
    container_name: elk-kibana
    restart: always
    environment:
      - ELASTICSEARCH_HOSTS=http://192.168.121.125:9200  # Elasticsearch endpoint
      - I18N_LOCALE=zh-CN  # Chinese UI
    volumes:
      - /data/elk/kibana/data:/usr/share/kibana/data
    network_mode: host
    depends_on:
      elasticsearch:
        condition: service_healthy
    user: "root:root"
    command: ["kibana", "--allow-root"]

14.2.3 Configure Logstash (log processing rules)

(1) Write the main Logstash configuration file
[root@monitor-server monitor]# vim /data/elk/logstash/conf/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: ["http://192.168.121.125:9200"]
(2) Write the log processing pipeline (consumes Filebeat logs)

Create /data/elk/logstash/pipeline/logstash.conf and define the filter rules (matching the MySQL, MyCat and Redis log formats):

[root@monitor-server monitor]# cd /data/elk/logstash/pipeline/
[root@monitor-server pipeline]# ls
logstash.conf
[root@monitor-server pipeline]# vim logstash.conf 
# Input: receive logs shipped by Filebeat
input {
  beats {
    port => 5044
    codec => "json"  # Filebeat sends JSON-formatted logs
  }
}

# Filter: parse logs by component type (MySQL/MyCat/Redis)
filter {
  # 1. MySQL logs (error log vs. slow query log)
  if [fields][log_type] == "mysql-error" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:log_level} \[%{DATA:thread}\] %{GREEDYDATA:log_content}" }
      add_field => { "service" => "mysql" }
    }
  }
  else if [fields][log_type] == "mysql-slow" {
    grok {
      match => { "message" => "# Time: %{TIMESTAMP_ISO8601:log_time}\n# User@Host: %{DATA:user}\[%{DATA:user}\] @ %{DATA:host} \[%{IP:ip}\]\n# Query_time: %{NUMBER:query_time:float}  Lock_time: %{NUMBER:lock_time:float}  Rows_sent: %{NUMBER:rows_sent:int}  Rows_examined: %{NUMBER:rows_examined:int}\n%{GREEDYDATA:sql}" }
      add_field => { "service" => "mysql" }
    }
  }
  # 2. MyCat logs
  else if [fields][log_type] == "mycat" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:log_level} \[%{DATA:thread}\] %{GREEDYDATA:log_content}" }
      add_field => { "service" => "mycat" }
    }
  }
  # 3. Redis logs
  else if [fields][log_type] == "redis" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{LOGLEVEL:log_level} %{GREEDYDATA:log_content}" }
      add_field => { "service" => "redis" }
    }
  }
  # 4. System logs (e.g. /var/log/messages)
  else if [fields][log_type] == "system" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:log_time} %{HOSTNAME:host} %{DATA:process}: %{GREEDYDATA:log_content}" }
      add_field => { "service" => "system" }
    }
  }
  # Normalize the timestamp into the Elasticsearch time field
  date {
    match => ["log_time", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd'T'HH:mm:ss.SSSZ"]
    target => "@timestamp"
    remove_field => ["log_time"]  # drop the raw time field
  }
}

# Output: write processed logs to Elasticsearch
output {
  elasticsearch {
    hosts => ["http://192.168.121.125:9200"]
    index => "elk-log-%{[fields][log_type]}-%{+YYYY.MM.dd}"  # one index per log type per day
    document_type => "_doc"
  }
  # Console output (debugging; comment out in production)
  stdout {
    codec => rubydebug
  }
}
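
Once the stack is running, Logstash's monitoring API on port 9600 shows whether the pipeline loaded and how many events it has processed (a quick sanity check, not part of the original procedure):

curl -s http://192.168.121.125:9600/_node/stats/pipelines?pretty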

14.2.4 Deploy Filebeat (log collection on each node)

Filebeat must run on every business node (MySQL/MyCat/Redis, etc.); use Ansible to deploy it in bulk:

(1) Write the Filebeat config template (on ansible-server)
[root@ansible-server]# mkdir -p /data/ansible/roles/filebeat/files
[root@ansible-server]# vim /data/ansible/roles/filebeat/files/filebeat.yml
filebeat.inputs:
- type: filestream
  enabled: true
  paths: {{ log_paths }}  # log paths substituted per node type (Jinja2 renders the list inline)
  fields:
    log_type: {{ log_type }}  # log type (e.g. mysql-error, mycat)
  fields_under_root: true  # promote the fields to the root of the event

# Ship to Logstash (not straight to Elasticsearch, so logs can be filtered first)
output.logstash:
  hosts: ["192.168.121.125:5044"]  # Logstash address

# Disable the Elasticsearch template/ILM setup (Logstash only)
setup.ilm.enabled: false
setup.template.enabled: false
(2) Create an Ansible role to deploy Filebeat in bulk
# Create the role's task file
[root@ansible-server]# mkdir -p /data/ansible/roles/filebeat/tasks
[root@ansible-server]# vim /data/ansible/roles/filebeat/tasks/main.yml
- name: Create the Filebeat config directory
  file:
    path: /data/filebeat/conf
    state: directory
    mode: '0755'

- name: Copy the Filebeat config (rendered per node type)
  template:
    src: /data/ansible/roles/filebeat/files/filebeat.yml
    dest: /data/filebeat/conf/filebeat.yml
    mode: '0644'

- name: Start the Filebeat container
  docker_container:
    name: filebeat
    image: docker.1ms.run/elastic/filebeat:7.17.0
    state: started
    restart_policy: always
    volumes: "{{ ['/data/filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro', '/data/filebeat/data:/usr/share/filebeat/data'] + log_volume_mounts }}"
    network_mode: host
    user: root
    command: ["filebeat", "-e", "-c", "/usr/share/filebeat/filebeat.yml"]
(3) Write a deployment playbook per node type
① Filebeat on the MySQL nodes (collects the error log and the slow query log)
- hosts: mysql
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
    log_type: "mysql-error"
    log_paths:
      - /data/mysql/logs/error.log
      - /data/mysql/logs/slow.log
    log_volume_mounts:
      - /data/mysql/logs:/data/mysql/logs:ro
  tasks:
    - include_role:
        name: filebeat

② Filebeat on the MyCat nodes (collects the MyCat log)
- hosts: mycat
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
    log_type: "mycat"
    log_paths:
      - /usr/local/mycat/logs/wrapper.log  # MyCat core log
    log_volume_mounts:
      - /usr/local/mycat/logs:/usr/local/mycat/logs:ro
  tasks:
    - include_role:
        name: filebeat

③ Filebeat on the Redis nodes (collects the Redis log)
- hosts: redis
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
    log_type: "redis"
    log_paths:
      - /data/redis/data/redis-server.log  # Redis log path (must be set in redis.conf)
    log_volume_mounts:
      - /data/redis/data:/data/redis/data:ro
  tasks:
    - include_role:
        name: filebeat
④ Run the bulk deployments
# Deploy to the MySQL nodes
[root@ansible-server]# ansible-playbook /data/ansible/deploy_filebeat_mysql.yml
# Deploy to the MyCat nodes
[root@ansible-server]# ansible-playbook /data/ansible/deploy_filebeat_mycat.yml
# Deploy to the Redis nodes
[root@ansible-server]# ansible-playbook /data/ansible/deploy_filebeat_redis.yml

14.2.5 Start the ELK Stack

# Go to the Docker Compose directory on monitor-server
cd /data/monitor
# Start all services (existing Prometheus/Grafana plus the new ELK services)
docker-compose up -d

# Verify the ELK status
# 1. Check Elasticsearch: "yellow" or "green" means healthy
curl http://192.168.121.125:9200/_cluster/health
# 2. Check Kibana: open http://192.168.121.125:5601 in a browser
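
Once Filebeat on the business nodes starts shipping, the daily indices should appear in Elasticsearch, following the elk-log-<type>-<date> naming set in the Logstash output:

curl "http://192.168.121.125:9200/_cat/indices/elk-log-*?v"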

14.3 ELK Log Visualization (Kibana)

14.3.1 Create an Index Pattern

  1. From the Kibana home page (192.168.121.125:5601), open Management > Stack Management > Kibana > Index Patterns
  2. Click Create index pattern, enter an index matching rule (e.g. elk-log-* to match all ELK log indices), then click Next step
  3. Select @timestamp as the time field (the normalized time field produced by Logstash) and click Create index pattern

14.3.2 Log Search and Analysis (Discover)

  1. Open Analytics > Discover and pick the elk-log-* index pattern from the dropdown at the top
  2. Logs can then be filtered in several ways:
    • By hostname: type a hostname into the search box, e.g. mysql-master (only the MySQL master's logs) or mycat1 (only MyCat node 1's logs).
    • By time range: use the time picker at the top ("Last 1 hour", "Last 24 hours", etc.) to focus on a specific window.

14.3.3 Build Log Dashboards

  1. Open Analytics > Visualize Library, click Create visualization, and choose a chart type (bar chart, table, line chart, etc.).
  2. Example: a bar chart of log counts per node
    • Drag in host.name.keyword; the chart is generated automatically.
    • Raise the number of horizontal-axis buckets from the default of 5: logs are collected from eight nodes (three MySQL, two MyCat, three Redis).
    • Click Save in the top-right corner to keep the chart.
  3. Open Analytics > Dashboards, click Create dashboard, and add the saved charts to build a complete log-monitoring view (e.g. a "log volume dashboard" or a "MySQL log dashboard").

15 Performance Testing and Optimization

15.1 Install Sysbench

On sysbench-server (192.168.121.66):

# Install the build dependencies
yum install -y make automake libtool pkgconfig libaio-devel mysql-devel

# Build and install sysbench
wget https://github.com/akopytov/sysbench/archive/1.0.20.tar.gz -O sysbench-1.0.20.tar.gz
tar -zxvf sysbench-1.0.20.tar.gz
cd sysbench-1.0.20
./autogen.sh
./configure --prefix=/usr/local/sysbench
make -j
make install

# Create a symlink
ln -s /usr/local/sysbench/bin/sysbench /usr/bin/sysbench
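
A quick smoke test confirms the binary is usable before running the real benchmark (optional):

sysbench --version
sysbench cpu --threads=4 --time=10 run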

15.2 Prepare the Test Scripts

[root@sysbench-server scripts]# mkdir -p /data/sysbench/scripts
[root@sysbench-server scripts]# cd /data/sysbench/scripts

# Data-preparation script
[root@sysbench-server scripts]# vim prepare.sh
#!/bin/bash
# Prepare the test data
sysbench --db-driver=mysql \
  --mysql-host=192.168.121.188 \
  --mysql-port=8066 \
  --mysql-user=root \
  --mysql-password=123456 \
  --mysql-db=testdb \
  --table_size=100000 \
  --tables=10 \
  --threads=8 \
  oltp_read_write prepare

# Test-execution script
[root@sysbench-server scripts]# vim run.sh
#!/bin/bash
# Run the performance test
DATE=$(date +%Y%m%d_%H%M%S)
LOG_DIR=/data/sysbench/logs
mkdir -p $LOG_DIR

sysbench --db-driver=mysql \
  --mysql-host=192.168.121.188 \
  --mysql-port=8066 \
  --mysql-user=root \
  --mysql-password=123456 \
  --mysql-db=testdb \
  --table_size=100000 \
  --tables=10 \
  --threads=32 \
  --time=300 \
  --report-interval=10 \
  oltp_read_write run > $LOG_DIR/sysbench_result_$DATE.log

# Cleanup script
[root@sysbench-server scripts]# vim cleanup.sh
#!/bin/bash
# Clean up the test data
sysbench --db-driver=mysql \
  --mysql-host=192.168.121.188 \
  --mysql-port=8066 \
  --mysql-user=root \
  --mysql-password=123456 \
  --mysql-db=testdb \
  oltp_read_write cleanup

[root@sysbench-server scripts]# chmod +x *.sh

15.3 Run the Performance Test

# Prepare the test data
[root@sysbench-server scripts]# cd /data/sysbench/scripts
[root@sysbench-server scripts]# ./prepare.sh

# Run the test
# First rewrite BEGIN to START TRANSACTION in the bundled Lua script, since the test runs
# through MyCat, which may not accept a bare BEGIN statement
[root@sysbench-server scripts]# sed -i 's/BEGIN/START TRANSACTION/g' /usr/local/sysbench/share/sysbench/oltp_common.lua
[root@sysbench-server scripts]# ./run.sh

# Clean up after the test
[root@sysbench-server scripts]# ./cleanup.sh
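
When comparing multiple runs, the headline figures can be pulled straight out of the result logs (a small helper, not part of the original scripts):

grep -E "transactions:|queries:|95th percentile:" /data/sysbench/logs/sysbench_result_*.log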

15.4 MySQL Performance Test Results

[root@sysbench-server logs]# pwd
/data/sysbench/logs
[root@sysbench-server logs]# ls
sysbench_result_20250826_124053.log  sysbench_result_20250826_124307.log
[root@sysbench-server logs]# cat sysbench_result_20250826_124307.log 
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 32
Report intermediate results every 10 second(s)
Initializing random number generator from current time

Initializing worker threads...

Threads started!

[ 10s ] thds: 32 tps: 66.99 qps: 1373.50 (r/w/o: 966.96/403.34/3.20) lat (ms,95%): 707.07 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 32 tps: 85.30 qps: 1699.74 (r/w/o: 1188.56/511.18/0.00) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 32 tps: 87.30 qps: 1748.73 (r/w/o: 1224.32/524.41/0.00) lat (ms,95%): 427.07 err/s: 0.00 reconn/s: 0.00
[ 40s ] thds: 32 tps: 84.00 qps: 1675.14 (r/w/o: 1172.63/502.51/0.00) lat (ms,95%): 484.44 err/s: 0.00 reconn/s: 0.00
[ 50s ] thds: 32 tps: 78.70 qps: 1579.86 (r/w/o: 1106.57/473.29/0.00) lat (ms,95%): 549.52 err/s: 0.00 reconn/s: 0.00
[ 60s ] thds: 32 tps: 87.50 qps: 1750.38 (r/w/o: 1225.15/525.22/0.00) lat (ms,95%): 467.30 err/s: 0.00 reconn/s: 0.00
[ 70s ] thds: 32 tps: 84.80 qps: 1696.16 (r/w/o: 1186.67/509.49/0.00) lat (ms,95%): 511.33 err/s: 0.00 reconn/s: 0.00
[ 80s ] thds: 32 tps: 97.50 qps: 1948.56 (r/w/o: 1364.34/584.22/0.00) lat (ms,95%): 404.61 err/s: 0.00 reconn/s: 0.00
[ 90s ] thds: 32 tps: 103.20 qps: 2064.64 (r/w/o: 1445.86/618.78/0.00) lat (ms,95%): 369.77 err/s: 0.00 reconn/s: 0.00
[ 100s ] thds: 32 tps: 99.99 qps: 1999.02 (r/w/o: 1397.98/601.05/0.00) lat (ms,95%): 397.39 err/s: 0.00 reconn/s: 0.00
[ 110s ] thds: 32 tps: 97.50 qps: 1950.16 (r/w/o: 1365.97/584.19/0.00) lat (ms,95%): 411.96 err/s: 0.00 reconn/s: 0.00
[ 120s ] thds: 32 tps: 79.00 qps: 1577.56 (r/w/o: 1103.55/474.02/0.00) lat (ms,95%): 539.71 err/s: 0.00 reconn/s: 0.00
[ 130s ] thds: 32 tps: 87.19 qps: 1749.48 (r/w/o: 1224.72/524.76/0.00) lat (ms,95%): 484.44 err/s: 0.00 reconn/s: 0.00
[ 140s ] thds: 32 tps: 108.72 qps: 2171.51 (r/w/o: 1521.02/650.49/0.00) lat (ms,95%): 363.18 err/s: 0.00 reconn/s: 0.00
[ 150s ] thds: 32 tps: 95.99 qps: 1918.29 (r/w/o: 1340.93/577.37/0.00) lat (ms,95%): 467.30 err/s: 0.00 reconn/s: 0.00
[ 160s ] thds: 32 tps: 80.57 qps: 1612.22 (r/w/o: 1129.79/482.43/0.00) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 170s ] thds: 32 tps: 89.44 qps: 1789.50 (r/w/o: 1253.16/536.34/0.00) lat (ms,95%): 601.29 err/s: 0.00 reconn/s: 0.00
[ 180s ] thds: 32 tps: 105.81 qps: 2114.83 (r/w/o: 1478.99/635.84/0.00) lat (ms,95%): 419.45 err/s: 0.00 reconn/s: 0.00
[ 190s ] thds: 32 tps: 116.70 qps: 2335.76 (r/w/o: 1635.34/700.42/0.00) lat (ms,95%): 344.08 err/s: 0.00 reconn/s: 0.00
[ 200s ] thds: 32 tps: 96.40 qps: 1927.00 (r/w/o: 1348.70/578.30/0.00) lat (ms,95%): 475.79 err/s: 0.00 reconn/s: 0.00
[ 210s ] thds: 32 tps: 108.19 qps: 2162.26 (r/w/o: 1513.03/649.23/0.00) lat (ms,95%): 404.61 err/s: 0.00 reconn/s: 0.00
[ 220s ] thds: 32 tps: 110.91 qps: 2224.33 (r/w/o: 1558.76/665.57/0.00) lat (ms,95%): 369.77 err/s: 0.00 reconn/s: 0.00
[ 230s ] thds: 32 tps: 117.80 qps: 2354.48 (r/w/o: 1647.89/706.59/0.00) lat (ms,95%): 331.91 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 32 tps: 112.89 qps: 2259.31 (r/w/o: 1580.77/678.54/0.00) lat (ms,95%): 376.49 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 32 tps: 118.21 qps: 2364.50 (r/w/o: 1655.84/708.66/0.00) lat (ms,95%): 331.91 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 32 tps: 115.70 qps: 2317.96 (r/w/o: 1622.97/694.99/0.00) lat (ms,95%): 337.94 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 32 tps: 107.60 qps: 2139.15 (r/w/o: 1496.37/642.79/0.00) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
[ 280s ] thds: 32 tps: 113.21 qps: 2277.23 (r/w/o: 1594.09/683.14/0.00) lat (ms,95%): 390.30 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 32 tps: 105.70 qps: 2104.63 (r/w/o: 1473.05/631.58/0.00) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
[ 300s ] thds: 32 tps: 92.80 qps: 1862.62 (r/w/o: 1304.55/558.08/0.00) lat (ms,95%): 467.30 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            411432
        write:                           176296
        other:                           32
        total:                           587760
    transactions:                        29388  (97.86 per sec.)
    queries:                             587760 (1957.18 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          300.3086s
    total number of events:              29388

Latency (ms):
         min:                                  154.58
         avg:                                  326.82
         max:                                 1625.04
         95th percentile:                      458.96
         sum:                              9604720.99

Threads fairness:
    events (avg/stddev):           918.3750/6.55
    execution time (avg/stddev):   300.1475/0.09

15.4.1 Test Basics (confirm the configuration)

First confirm that the test parameters match expectations; this is the premise for reading the performance figures:

Item                 Value in the log                                  Notes
Concurrent threads   Number of threads: 32                             simulates 32 concurrent users (matches --threads=32)
Test duration        total time: 300.3086s                             about 300 s, i.e. 5 minutes (matches --time=300)
Report interval      Report intermediate results every 10 second(s)    intermediate results every 10 s (matches --report-interval=10)
Test type            oltp_read_write                                   mixed read/write test (SELECT/INSERT/UPDATE/DELETE transactions)

15.4.2 Core Performance Metrics

The log consists of intermediate results and a final summary; the core metrics sit in the summary and fall into five groups:

1. Transaction throughput (TPS)

TPS (Transactions Per Second) is the core metric for an OLTP workload and directly reflects how many concurrent transactions the database can process:

    Log line: transactions: 29388 (97.86 per sec.)

    Average TPS: 97.86 transactions/s (roughly 98 complete read/write transactions per second).

    Total transactions: 29388 completed during the test.

Assessment:
At 32 threads, a TPS close to 100 is solid entry-level performance, adequate for small and medium workloads (e.g. applications with around ten million daily requests). To support heavier concurrency such as e-commerce flash sales, TPS would need to reach 200+ or more.

2. Query throughput (QPS)

QPS (Queries Per Second) reflects the overall rate at which the database executes SQL statements (all reads, writes and other operations inside transactions):

    Log line: queries: 587760 (1957.18 per sec.)

    Total queries: about 587,000 SQL statements executed during the test.

    Average QPS: 1957.18 queries/s (about 1957 SQL statements per second).

Cross-check:
The QPS-to-TPS ratio is about 20:1 (1957 / 97.86 ≈ 20), a normal ratio for a mixed read/write OLTP workload: each transaction contains multiple SQL statements (e.g. one transaction might be "1 INSERT + 3 SELECTs + 1 UPDATE"), so the transaction logic is sound.

3. Read/write distribution (validate the workload)

The queries performed section of the log breaks queries down into read, write and other operations, which verifies that the test matched the intended read/write mix:

Operation        Count    Share   Notes
read             411432   ~70%    mostly SELECT statements (data queries)
write            176296   ~30%    INSERT/UPDATE/DELETE statements (data changes)
other            32       ~0%     transaction control statements (COMMIT/ROLLBACK)

Conclusion: the read/write ratio is roughly 7:3, matching the read-heavy profile of most business workloads, so the results reflect realistic performance.

4. Latency: the user-experience metric

Latency reflects how long transactions take and directly affects user experience (the longer the latency, the longer the wait); the log provides five statistics:

Metric            Value           Notes
min               154.58 ms       fastest transaction (~0.15 s)
avg               326.82 ms       average transaction time (~0.33 s)
max               1625.04 ms      slowest transaction (~1.63 s; check whether this is an extreme outlier)
95th percentile   458.96 ms       the key latency figure: 95% of transactions finish within 0.46 s (only 5% exceed it, so extreme values can be ignored)
sum               9604720.99 ms   total of all transaction times (no business meaning; only used to compute the average)

Assessment:

An average of 0.33 s and a 95th percentile of 0.46 s fall within a good user-experience range (users generally tolerate up to about 1 second).

The 1.63 s maximum deserves attention: it may come from a transient resource bottleneck during the test (CPU contention, a disk I/O spike) and can be investigated together with the server monitoring data.

5. Thread fairness: sanity of concurrent scheduling

Thread fairness shows whether the load was spread evenly across the 32 concurrent threads, i.e. no threads were starved while others were saturated:

    events (avg/stddev): 918.3750/6.55 - each thread handled 918.38 transactions on average, with a standard deviation of only 6.55 (the smaller the deviation, the more even the load);

    execution time (avg/stddev): 300.1475/0.09 - each thread ran for about 300.15 s, with a standard deviation of only 0.09 (essentially every thread ran at full load).

Conclusion: thread fairness is excellent. MySQL's concurrency scheduling (thread handling, locking) behaved normally, the load was spread evenly across all 32 threads, and there was no wasted capacity or thread starvation.

15.4.3 Intermediate Results (performance stability)

The log prints intermediate results every 10 seconds (e.g. [ 10s ] thds: 32 tps: 66.99 ... [ 230s ] thds: 32 tps: 117.80), which can be used to judge stability:

TPS range: climbed from 66.99 TPS at the start to 117.80 TPS mid-run, then settled in the 90-110 TPS band;

Stability: the low initial TPS is a warm-up effect (cache loading, connection setup); after that, performance held steady with no sharp swings (e.g. no sudden drops to 0 TPS), indicating good database stability.

15.4.4 Summary

Overall, the test results are good:

Core metrics are healthy: TPS ≈ 98, QPS ≈ 1957, 95th-percentile latency under 0.46 s, zero errors;

Stability is excellent: no large swings in the intermediate results, and the thread load was even;

Suitable workloads: OLTP loads of small and medium systems (internal enterprise systems, internet applications with around a million daily requests).

To support higher concurrency, tune step by step along the optimization directions above and rerun the test after each change, comparing the metrics (a TPS gain of 20%+ indicates an effective optimization).

16 Project Acceptance and Maintenance

16.1 Functional Verification Checklist

  • MySQL master-slave replication works

  • MHA failover works

  • MyCat read/write splitting works

  • MyCat high-availability switchover works

  • Redis cluster functions normally

  • app-server cluster works

  • Nginx load balancing and reverse proxying work

  • Monitoring data collection works

  • ELK log collection works

  • Backup system works

  • Performance tests meet the targets

16.2 Routine Maintenance Script

Create a cluster status check script:

[root@ansible-server logs]# yum install -y mariadb redis
vim /data/scripts/check_cluster.sh
#!/bin/bash
# Check the cluster status
DATE=$(date +%Y%m%d_%H%M%S)
LOG_FILE=/data/logs/cluster_check_$DATE.log
echo "Cluster status check: $DATE" >> $LOG_FILE

# Check MySQL
echo "=== MySQL status ===" >> $LOG_FILE
for ip in 192.168.121.221 192.168.121.222 192.168.121.223; do
  echo "MySQL node: $ip" >> $LOG_FILE
  mysql -uroot -p123456 -h $ip -e "SELECT VERSION();" >> $LOG_FILE 2>&1
  mysql -uroot -p123456 -h $ip -e "SHOW SLAVE STATUS\G" >> $LOG_FILE 2>&1
done

# Check MyCat
echo "=== MyCat status ===" >> $LOG_FILE
for ip in 192.168.121.180 192.168.121.190; do
  echo "MyCat node: $ip" >> $LOG_FILE
  mysql -uroot -p123456 -h $ip -P8066 -e "SELECT 1;" >> $LOG_FILE 2>&1
done

# Check MHA
echo "=== MHA status ===" >> $LOG_FILE
ssh root@192.168.121.220 "masterha_check_status --conf=/etc/mha/mysql_cluster.cnf" >> $LOG_FILE 2>&1

# Check the Redis cluster
echo "=== Redis cluster status ===" >> $LOG_FILE
redis-cli -a 123456 -h 192.168.121.171 cluster info >> $LOG_FILE 2>&1

# Check the monitoring stack
echo "=== Monitoring status ===" >> $LOG_FILE
curl -s http://192.168.121.125:9090/-/healthy >> $LOG_FILE 2>&1
curl -s http://192.168.121.125:3000/health >> $LOG_FILE 2>&1
echo "Check finished: $(date)" >> $LOG_FILE

# Run the script
./check_cluster.sh
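
To run the check on a schedule, register it with cron (an illustrative hourly schedule; adjust the frequency and add log rotation to taste):

# Run the cluster check at the top of every hour
(crontab -l 2>/dev/null; echo "0 * * * * /data/scripts/check_cluster.sh") | crontab -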


16.3 Review the Maintenance Log

[root@ansible-server logs]# cat cluster_check_20250826_131504.log 
Cluster status check: 20250826_131504
=== MySQL status ===
MySQL node: 192.168.121.221
VERSION()
8.0.28
*************************** 1. row ***************************
    Slave_IO_State: Waiting for source to send event
    Master_Host: 192.168.121.222
    Master_User: mha
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000011
    Read_Master_Log_Pos: 456422543
    Relay_Log_File: mysql-master-relay-bin.000011
    Relay_Log_Pos: 8366728
    Relay_Master_Log_File: mysql-bin.000011
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB: 
    Replicate_Ignore_DB: 
    Replicate_Do_Table: 
    Replicate_Ignore_Table: 
    Replicate_Wild_Do_Table: 
    Replicate_Wild_Ignore_Table: 
    Last_Errno: 0
    Last_Error: 
    Skip_Counter: 0
    Exec_Master_Log_Pos: 8366632
    Relay_Log_Space: 456425142
    Until_Condition: None
    Until_Log_File: 
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File: 
    Master_SSL_CA_Path: 
    Master_SSL_Cert: 
    Master_SSL_Cipher: 
    Master_SSL_Key: 
    Seconds_Behind_Master: 2158
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error: 
    Last_SQL_Errno: 0
    Last_SQL_Error: 
    Replicate_Ignore_Server_Ids: 
    Master_Server_Id: 222
    Master_UUID: e6b13ba9-7d6c-11f0-8a0b-000c29236169
    Master_Info_File: mysql.slave_master_info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State: Waiting for dependent transaction to commit
    Master_Retry_Count: 86400
    Master_Bind: 
    Last_IO_Error_Timestamp: 
    Last_SQL_Error_Timestamp: 
    Master_SSL_Crl: 
    Master_SSL_Crlpath: 
    Retrieved_Gtid_Set: e6b13ba9-7d6c-11f0-8a0b-000c29236169:10-29830
    Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-10,
e6b13ba9-7d6c-11f0-8a0b-000c29236169:1-25,
ebd87b10-7d6c-11f0-965d-000c29111b7d:1-57
    Auto_Position: 1
    Replicate_Rewrite_DB: 
    Channel_Name: 
    Master_TLS_Version: 
    Master_public_key_path: 
    Get_master_public_key: 0
    Network_Namespace: 
MySQL node: 192.168.121.222
VERSION()
8.0.28
MySQL node: 192.168.121.223
VERSION()
8.0.28
*************************** 1. row ***************************
    Slave_IO_State: Waiting for source to send event
    Master_Host: 192.168.121.222
    Master_User: mha
    Master_Port: 3306
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000011
    Read_Master_Log_Pos: 456422543
    Relay_Log_File: mysql-slave2-relay-bin.000011
    Relay_Log_Pos: 456422639
    Relay_Master_Log_File: mysql-bin.000011
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB: 
    Replicate_Ignore_DB: 
    Replicate_Do_Table: 
    Replicate_Ignore_Table: 
    Replicate_Wild_Do_Table: 
    Replicate_Wild_Ignore_Table: 
    Last_Errno: 0
    Last_Error: 
    Skip_Counter: 0
    Exec_Master_Log_Pos: 456422543
    Relay_Log_Space: 456422856
    Until_Condition: None
    Until_Log_File: 
    Until_Log_Pos: 0
    Master_SSL_Allowed: No
    Master_SSL_CA_File: 
    Master_SSL_CA_Path: 
    Master_SSL_Cert: 
    Master_SSL_Cipher: 
    Master_SSL_Key: 
    Seconds_Behind_Master: 0
    Master_SSL_Verify_Server_Cert: No
    Last_IO_Errno: 0
    Last_IO_Error: 
    Last_SQL_Errno: 0
    Last_SQL_Error: 
    Replicate_Ignore_Server_Ids: 
    Master_Server_Id: 222
    Master_UUID: e6b13ba9-7d6c-11f0-8a0b-000c29236169
    Master_Info_File: mysql.slave_master_info
    SQL_Delay: 0
    SQL_Remaining_Delay: NULL
    Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
    Master_Retry_Count: 86400
    Master_Bind: 
    Last_IO_Error_Timestamp: 
    Last_SQL_Error_Timestamp: 
    Master_SSL_Crl: 
    Master_SSL_Crlpath: 
    Retrieved_Gtid_Set: e6b13ba9-7d6c-11f0-8a0b-000c29236169:10-29830
    Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-10,
e6b13ba9-7d6c-11f0-8a0b-000c29236169:1-29830,
e6b354b7-7d6c-11f0-8943-000c290f45a7:1-4,
ebd87b10-7d6c-11f0-965d-000c29111b7d:1-56
    Auto_Position: 1
    Replicate_Rewrite_DB: 
    Channel_Name: 
    Master_TLS_Version: 
    Master_public_key_path: 
    Get_master_public_key: 0
    Network_Namespace: 
=== MyCat status ===
MyCat node: 192.168.121.180
1
1
MyCat node: 192.168.121.190
1
1
=== MHA status ===
mysql_cluster (pid:890) is running(0:PING_OK), master:192.168.121.222
=== Redis cluster status ===
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:3
cluster_my_epoch:1
cluster_stats_messages_ping_sent:16505
cluster_stats_messages_pong_sent:15517
cluster_stats_messages_fail_sent:2
cluster_stats_messages_sent:32024
cluster_stats_messages_ping_received:15517
cluster_stats_messages_pong_received:16503
cluster_stats_messages_fail_received:1
cluster_stats_messages_received:32021
=== Monitoring status ===
Prometheus is Healthy.
<a href="/login">Found</a>.
Check finished: Tue Aug 26 13:15:04 CST 2025

Overall the cluster looks healthy. One item worth following up: node 192.168.121.221 reports Seconds_Behind_Master: 2158 (about 36 minutes of replication lag) even though both replication threads are running, so the lag should be investigated.

Summary

     This project walked through building a highly available, high-performance MySQL cluster. Containerized deployment, multi-level caching, and comprehensive monitoring and backup strategies keep the database service continuous and the data safe. Once in place, the system can handle large-scale concurrent business traffic and fails over automatically, which greatly improves reliability and maintainability.

     In a real production environment, keep optimizing and tuning for the workload and performance requirements, and run regular stress tests and failure drills to ensure the system stays stable under all conditions.


