
Rocky Linux 9.4: Building a k8s 1.28.0 + Docker One-Master, Multi-Worker Test Cluster

Kubernetes cluster planning

Resource planning for each cluster node

Hostname   IP address        Role                 OS                Hardware
master     192.168.100.10    control-plane node   Rocky Linux 9.4   2 cores / 4 GB RAM / 50 GB disk
node1      192.168.100.20    worker node          Rocky Linux 9.4   2 cores / 4 GB RAM / 50 GB disk
node2      192.168.100.30    worker node          Rocky Linux 9.4   2 cores / 4 GB RAM / 50 GB disk

Operating system preparation

Configure hostnames

Run on the master node:

[root@node1 ~]# hostnamectl hostname master.example.com
[root@node1 ~]# bash

Run on node1:

[root@node2 ~]# hostnamectl hostname node1.exmaple.com
[root@node2 ~]# bash

Run on node2:

[root@node3 ~]# hostnamectl hostname node2.example.com
[root@node3 ~]# bash
Configure the system hosts file (all nodes)

Run on each of the nodes:

[root@master ~]# vim /etc/hosts
192.168.100.10  master
192.168.100.20  node1
192.168.100.30  node2
~  
[root@node1 ~]# vim /etc/hosts
192.168.100.10  master
192.168.100.20  node1
192.168.100.30  node2
~   
[root@node2 ~]# vim /etc/hosts
192.168.100.10  master
192.168.100.20  node1
192.168.100.30  node2
~  
Disable the firewall and SELinux (all nodes)
[root@master ~]# systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)
[root@master ~]# getenforce 
Disabled
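On this environment both were already disabled. If firewalld is still running or SELinux is still enforcing on any node, a minimal sketch for disabling them permanently is:

[root@master ~]# systemctl disable --now firewalld
[root@master ~]# setenforce 0
[root@master ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # takes full effect after a reboot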
Configure domestic (China) mirror repositories and install the EPEL repository

Configure the EPEL repository (all nodes)

The file content below is taken from my Huawei Cloud host:

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# ls
backup                    epel-testing.repo      rocky-devel.repo      rocky-extras.repo.bak
epel-cisco-openh264.repo  rocky-addons.repo      rocky-devel.repo.bak  rocky.repo
epel.repo                 rocky-addons.repo.bak  rocky-extras.repo     rocky.repo.bak
[root@master yum.repos.d]# vim epel.repo
[epel]
name=Extra Packages for Enterprise Linux $releasever - $basearch
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
baseurl=https://repo.huaweicloud.com/epel/$releasever/Everything/$basearch/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=1
gpgcheck=1
countme=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever

[epel-debuginfo]
name=Extra Packages for Enterprise Linux $releasever - $basearch - Debug
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
baseurl=https://repo.huaweicloud.com/epel/$releasever/Everything/$basearch/debug/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux $releasever - $basearch - Source
# It is much more secure to use the metalink, but if you wish to use a local mirror
# place its address here.
baseurl=https://repo.huaweicloud.com/epel/$releasever/Everything/source/tree/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-$releasever&arch=$basearch&infra=$infra&content=$contentdir
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever
gpgcheck=1
~           
[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# scp epel.repo root@node1:/etc/yum.repos.d/
The authenticity of host 'node1 (192.168.100.20)' can't be established.
ED25519 key fingerprint is SHA256:y4HKCsxdyGjRT5ATUzJg3sM/iq8qqVN5w8oELVFn35c.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'node1' (ED25519) to the list of known hosts.
root@node1's password: 
epel.repo                                                                 100% 1453     2.2MB/s   00:00
[root@master yum.repos.d]# scp epel.repo root@node2:/etc/yum.repos.d/
The authenticity of host 'node2 (192.168.100.30)' can't be established.
ED25519 key fingerprint is SHA256:y4HKCsxdyGjRT5ATUzJg3sM/iq8qqVN5w8oELVFn35c.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: node1
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'node2' (ED25519) to the list of known hosts.
root@node2's password: 
epel.repo                                                                 100% 1453     2.6MB/s   00:00    

Configure the Aliyun system repositories (all nodes). Note that the file name shown on the Aliyun mirror site is Rocky.repo, while on the system the files are named rocky-*.repo.

Adjust accordingly and check the result:

[root@master ~]# sed -e 's|^mirrorlist=|#mirrorlist=|g' \
> -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
> -i.bak \
> /etc/yum.repos.d/rocky-*.repo
[root@node1 ~]# sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' -i.bak /etc/yum.repos.d/rocky-*.repo
[root@node2 ~]# sed -e 's|^mirrorlist=|#mirrorlist=|g' -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' -i.bak /etc/yum.repos.d/rocky-*.repo
Time zone and time (all nodes)

Check the time zone and system time. Since these were already configured during installation, no changes are needed here:

[root@master ~]# timedatectl
               Local time: Tue 2025-10-28 15:47:49 CST
           Universal time: Tue 2025-10-28 07:47:49 UTC
                 RTC time: Tue 2025-10-28 07:47:49
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@master ~]# date
Tue Oct 28 03:48:33 PM CST 2025
[root@node1 ~]# timedatectl
               Local time: Tue 2025-10-28 15:48:57 CST
           Universal time: Tue 2025-10-28 07:48:57 UTC
                 RTC time: Tue 2025-10-28 07:48:57
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@node1 ~]# date
Tue Oct 28 03:48:59 PM CST 2025
[root@node2 ~]# timedatectl
               Local time: Tue 2025-10-28 15:49:18 CST
           Universal time: Tue 2025-10-28 07:49:18 UTC
                 RTC time: Tue 2025-10-28 07:49:18
                Time zone: Asia/Shanghai (CST, +0800)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@node2 ~]# date
Tue Oct 28 03:49:20 PM CST 2025

If the time zone or time is wrong, set the time zone manually and configure time synchronization.

This setup uses chrony, the time-synchronization tool that ships with the system.
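If adjustments are needed, a minimal sketch (run on each affected node) is:

[root@master ~]# timedatectl set-timezone Asia/Shanghai
[root@master ~]# yum install -y chrony
[root@master ~]# systemctl enable --now chronyd
[root@master ~]# chronyc sources      # verify that at least one time source is reachable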

Increase the system's maximum number of open files (all nodes)

Edit the file and append the following two lines at the end:

[root@master ~]# vim /etc/security/limits.conf 
.....
* soft nofile 65535
* hard nofile 65535
# End of file
~    
[root@node1 ~]# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
# End of file
~  
[root@node2 ~]# vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
# End of file
~  
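These limits apply to new login sessions. A quick check (assuming the lines above were saved):

[root@master ~]# ulimit -n      # should print 65535 in a fresh login shell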
Tune kernel parameters (all nodes)

Open the file, append the following lines at the end, then run the command below so the settings in sysctl.conf take effect:

[root@master ~]# vim /etc/sysctl.conf 
.....
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
~   
[root@master ~]# sysctl -p
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
[root@node1 ~]# vim /etc/sysctl.conf 
......
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
~ 
[root@node1 ~]# sysctl -p
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
[root@node2 ~]# vim /etc/sysctl.conf 
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
~  
[root@node2 ~]# sysctl -p
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_tw_buckets = 20480
net.ipv4.tcp_max_syn_backlog = 20480
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_fin_timeout = 20
Install system performance-analysis tools and other utilities (all nodes)
[root@master ~]# yum install -y gcc autoconf sysstat
[root@node1 ~]# yum install -y gcc autoconf sysstat
[root@node2 ~]# yum install -y gcc autoconf sysstat
Enable bridge netfilter (all nodes)

Edit the file and write the following lines:

[root@master ~]# vim /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
~  
[root@node1 ~]# vim /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
~ 
[root@node2 ~]# vim /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
~ 

Load the br_netfilter module and verify:

[root@master ~]# modprobe br_netfilter
[root@master ~]# lsmod | grep br_netfilter
br_netfilter           36864  0
bridge                409600  1 br_netfilter
[root@node1 ~]# modprobe br_netfilter
[root@node1 ~]# lsmod | grep br_netfilter
br_netfilter           36864  0
bridge                409600  1 br_netfilter
[root@node2 ~]# modprobe br_netfilter
[root@node2 ~]# lsmod | grep br_netfilter
br_netfilter           36864  0
bridge                409600  1 br_netfilter

Apply the configuration file so the settings take effect:

[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@node2 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
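Note that modprobe only loads br_netfilter for the current boot. As an optional addition (not part of the original steps), the module can be loaded automatically on every boot via systemd-modules-load:

[root@master ~]# cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF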

Docker environment preparation

Configure the Aliyun Docker repository (all nodes)
[root@master ~]# yum install -y yum-utils
....
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node1 ~]# yum install -y yum-utils
....
[root@node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node2 ~]# yum install -y yum-utils
....
[root@node2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
List available Docker versions (all nodes)
[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
....
Install Docker, pinning version 25.0.5-1.el9 (all nodes)
[root@master ~]# yum -y install docker-ce-25.0.5-1.el9
....
[root@node1 ~]# yum -y install docker-ce-25.0.5-1.el9
....
[root@node2 ~]# yum -y install docker-ce-25.0.5-1.el9
....
Configure the Docker cgroup driver (all nodes)

Edit the file and write the following content,

then start the Docker service and enable it to start at boot (all nodes):

[root@master ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://dockerproxy.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://docker.nju.edu.cn"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
~   
[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node1 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://dockerproxy.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://docker.nju.edu.cn"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
~ 
[root@node1 ~]# systemctl daemon-reload 
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@node2 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": [
        "https://docker.m.daocloud.io",
        "https://dockerproxy.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://docker.nju.edu.cn"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
~  
[root@node2 ~]# systemctl daemon-reload 
[root@node2 ~]# systemctl restart docker
[root@node2 ~]# systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
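To confirm that Docker picked up the systemd cgroup driver from daemon.json, a quick check is:

[root@master ~]# docker info | grep -i 'cgroup driver'      # should report: Cgroup Driver: systemd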

Configure cri-dockerd

Download cri-dockerd:
[root@master ~]# yum -y install lrzsz
....
[root@master ~]# rz -E
rz waiting to receive.
[root@master ~]# ls
anaconda-ks.cfg  cri-dockerd-0.3.9.amd64.tgz
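The tarball above was uploaded with rz. If the host has direct internet access, it can instead be fetched from the cri-dockerd GitHub releases (a sketch, assuming the standard v0.3.9 release asset name):

[root@master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz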
Extract cri-dockerd:
[root@master ~]# tar -xvf cri-dockerd-0.3.9.amd64.tgz --strip-components=1 -C /usr/local/bin/
cri-dockerd/cri-dockerd
Download the cri-dockerd systemd unit files:
[root@master ~]# yum -y install wget
....
[root@master ~]# wget -O /etc/systemd/system/cri-docker.service https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
--2025-10-28 16:41:51--  https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1319 (1.3K) [text/plain]
Saving to: ‘/etc/systemd/system/cri-docker.service’

/etc/systemd/system/cri-dock 100%[============================================>]   1.29K  --.-KB/s    in 0s

2025-10-28 16:41:54 (125 MB/s) - ‘/etc/systemd/system/cri-docker.service’ saved [1319/1319]

[root@master ~]# wget -O /etc/systemd/system/cri-docker.socket https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
--2025-10-28 16:58:05--  https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 204 [text/plain]
Saving to: ‘/etc/systemd/system/cri-docker.socket’

/etc/systemd/system/cri-dock 100%[============================================>]     204  --.-KB/s    in 0s

2025-10-28 16:58:51 (17.8 MB/s) - ‘/etc/systemd/system/cri-docker.socket’ saved [204/204]
Edit cri-docker.service:

Change the ExecStart line to:

[root@master ~]# vim /etc/systemd/system/cri-docker.service 
.....
[Service]
Type=notify
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/local/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --cri-dockerd-root-directory=/var/lib/dockershim --cri-dockerd-root-directory=/var/lib/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
.....
Edit cri-docker.socket:

Change the ListenStream line to:

[root@master ~]# vim /etc/systemd/system/cri-docker.socket 
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
#ListenStream=%t/cri-dockerd.sock
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
~   
Copy cri-dockerd-0.3.9.amd64.tgz to the other nodes:
[root@master ~]# scp cri-dockerd-0.3.9.amd64.tgz root@node1:/root/
cri-dockerd-0.3.9.amd64.tgz                                                     100%   14MB 273.8MB/s   00:00    
[root@master ~]# scp cri-dockerd-0.3.9.amd64.tgz root@node2:/root/
cri-dockerd-0.3.9.amd64.tgz      
Extract cri-dockerd on the worker nodes:
[root@node1 ~]# tar -xvf cri-dockerd-0.3.9.amd64.tgz --strip-components=1 -C /usr/local/bin/
cri-dockerd/cri-dockerd
[root@node2 ~]# tar -xvf cri-dockerd-0.3.9.amd64.tgz --strip-components=1 -C /usr/local/bin/
cri-dockerd/cri-dockerd
Copy the modified unit files to the other nodes:
[root@master ~]# scp /etc/systemd/system/cri-docker.s* root@node1:/etc/systemd/system/
cri-docker.service                                                              100% 1591     2.2MB/s   00:00    
cri-docker.socket                                                               100%  244   681.5KB/s   00:00    
[root@master ~]# scp /etc/systemd/system/cri-docker.s* root@node2:/etc/systemd/system/
cri-docker.service                                                              100% 1591     2.9MB/s   00:00    
cri-docker.socket                                                               100%  244   743.6KB/s   00:00   
Start cri-docker and enable it at boot (all nodes):
[root@master ~]# systemctl daemon-reload 
[root@master ~]# systemctl restart cri-docker
[root@master ~]# systemctl enable cri-docker
Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /etc/systemd/system/cri-docker.service.
[root@node1 ~]# systemctl daemon-reload 
[root@node1 ~]# systemctl restart cri-docker
[root@node1 ~]# systemctl enable cri-docker
Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /etc/systemd/system/cri-docker.service.
[root@node2 ~]# systemctl daemon-reload 
[root@node2 ~]# systemctl restart cri-docker
[root@node2 ~]# systemctl enable cri-docker
Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /etc/systemd/system/cri-docker.service.
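A quick way to confirm cri-dockerd is running and listening on the socket kubeadm will use (same check on every node):

[root@master ~]# systemctl is-active cri-docker       # should print "active"
[root@master ~]# ls -l /var/run/cri-dockerd.sock      # the CRI socket configured above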

Deploying the Kubernetes cluster with kubeadm

Configure the Aliyun Kubernetes repository (all nodes)

Edit the file /etc/yum.repos.d/k8s.repo:

[root@master ~]# vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
~ 
[root@master ~]# scp /etc/yum.repos.d/k8s.repo root@node1:/etc/yum.repos.d/
k8s.repo                                                                        100%  218   364.2KB/s   00:00    
[root@master ~]# scp /etc/yum.repos.d/k8s.repo root@node2:/etc/yum.repos.d/
k8s.repo                                                                        100%  218   417.2KB/s   00:00    
Install the packages required by the cluster: kubelet, kubeadm, and kubectl (all nodes)
[root@master ~]# yum install -y kubelet kubeadm kubectl
[root@node1 ~]# yum install -y kubelet kubeadm kubectl
[root@node2 ~]# yum install -y kubelet kubeadm kubectl
Configure the kubelet cgroup driver (all nodes):
[root@master ~]# vim /etc/sysconfig/kubelet 
[root@node1 ~]# vim /etc/sysconfig/kubelet 
[root@node2 ~]# vim /etc/sysconfig/kubelet 

Open the file and write the following line:

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
~  
Enable kubelet to start automatically at boot (all nodes):
[root@master ~]# systemctl enable kubelet.service 
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@node1 ~]# systemctl enable kubelet.service 
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@node2 ~]# systemctl enable kubelet.service 
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
Initialize the cluster (run on the master node)

List the images required by the master node (run on the master):

[root@master ~]# kubeadm config images list
I1028 18:44:00.192385   13968 version.go:256] remote version is much newer: v1.34.1; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.15
registry.k8s.io/kube-controller-manager:v1.28.15
registry.k8s.io/kube-scheduler:v1.28.15
registry.k8s.io/kube-proxy:v1.28.15
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.10.1
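The images can optionally be pulled in advance through the Aliyun mirror that the init config will use (a sketch; the repository and CRI socket must match what kubeadm-config.yaml specifies below):

[root@master ~]# kubeadm config images pull \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --cri-socket unix:///var/run/cri-dockerd.sock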

Generate the default cluster-initialization configuration file (run on the master):

[root@master ~]# kubeadm config print init-defaults > kubeadm-config.yaml

Modify the parameters (master node):

[root@master ~]# vim kubeadm-config.yaml 
....
localAPIEndpoint:
  advertiseAddress: 192.168.100.10              # change: IP of the master node that initializes the cluster
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock   # change: use Docker via cri-dockerd
  imagePullPolicy: IfNotPresent
  name: master                                  # change: node name
  taints: null
....
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers     # change: use the Aliyun image registry
....

Note: only the positions marked above need to be modified.
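Before the real initialization, the configuration can optionally be sanity-checked with a dry run, which renders the manifests without changing the node (a sketch):

[root@master ~]# kubeadm init --config kubeadm-config.yaml --dry-run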

Initialize the cluster using the configuration file (run on the master node)

[root@master ~]# kubeadm init --config kubeadm-config.yaml --upload-certs
....

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:acf15bf6a03a7dcbd982e3ec4b1b2029256da0088f8046193361639597061bc2

The --upload-certs option uploads the control-plane certificates to the cluster (stored as a Secret) so that additional control-plane nodes can retrieve them when joining.

Configure environment variables (run on the master node)

Following the hints printed at the end of initialization, run:

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
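The export only lasts for the current shell session. As an optional addition, it can be made persistent for root:

[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc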

Join the worker nodes to the cluster (run on the worker nodes)

Following the join command printed at the end of initialization, add each worker node to the cluster:

[root@node1 ~]# kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:acf15bf6a03a7dcbd982e3ec4b1b2029256da0088f8046193361639597061bc2 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node1.exmaple.com" could not be reached
        [WARNING Hostname]: hostname "node1.exmaple.com": lookup node1.exmaple.com on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node2 ~]# kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:acf15bf6a03a7dcbd982e3ec4b1b2029256da0088f8046193361639597061bc2 --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "node2.example.com" could not be reached
        [WARNING Hostname]: hostname "node2.example.com": lookup node2.example.com on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Note: the --cri-socket=unix:///var/run/cri-dockerd.sock option tells kubeadm to use Docker (via cri-dockerd) as the container runtime.
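If the bootstrap token from the original kubeadm init has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master; remember to append the --cri-socket flag when running it on a worker:

[root@master ~]# kubeadm token create --print-join-command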

[root@master ~]# systemctl restart containerd
[root@master ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@node1 ~]# systemctl restart containerd
[root@node1 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
[root@node2 ~]# systemctl restart containerd
[root@node2 ~]# systemctl enable containerd
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.
Download the Calico manifest (run on the master node)

Calico provides networking for the Pods in the cluster:

[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
--2025-10-28 19:09:39--  https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 234906 (229K) [text/plain]
Saving to: ‘calico.yaml’

calico.yaml                    100%[=================================================>] 229.40K  17.3KB/s    in 2m 30s

2025-10-28 19:12:12 (1.53 KB/s) - ‘calico.yaml’ saved [234906/234906]
Create the Calico network (run on the master node):
[root@master ~]# kubectl apply -f calico.yaml
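The Calico images can take several minutes to pull; the rollout can be watched with:

[root@master ~]# kubectl get pods -n kube-system -w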
Check the status of each cluster node:
[root@master ~]# kubectl get nodes
NAME                STATUS     ROLES           AGE   VERSION
master              Ready      control-plane   29m   v1.28.15
node1.exmaple.com   NotReady   <none>          97s   v1.28.15
node2.example.com   NotReady   <none>          48s   v1.28.15
Check the Kubernetes cluster components:
[root@master ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d57d8f49-xp62k   1/1     Running   0          2m48s
calico-node-nkl7z                         1/1     Running   0          2m48s
coredns-6554b8b87f-h6265                  1/1     Running   0          24m
coredns-6554b8b87f-p4vrt                  1/1     Running   0          24m
etcd-master                               1/1     Running   0          24m
kube-apiserver-master                     1/1     Running   0          24m
kube-controller-manager-master            1/1     Running   0          24m
kube-proxy-62dkk                          1/1     Running   0          24m
kube-scheduler-master                     1/1     Running   0          24m
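Optionally, kubectl can also be used on the worker nodes by copying admin.conf over from the master: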
[root@node1 ~]# cd /etc/kubernetes/
[root@node1 kubernetes]# ls
kubelet.conf  manifests  pki
[root@node1 kubernetes]# scp root@master:/etc/kubernetes/admin.conf .
The authenticity of host 'master (192.168.100.10)' can't be established.
ED25519 key fingerprint is SHA256:y4HKCsxdyGjRT5ATUzJg3sM/iq8qqVN5w8oELVFn35c.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'master' (ED25519) to the list of known hosts.
root@master's password: 
admin.conf                                                                             100% 5650     6.2MB/s   00:00    
[root@node1 ~]# mkdir -p $HOME/.kube
[root@node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@node1 ~]# kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
master              Ready    control-plane   85m   v1.28.15
node1.exmaple.com   Ready    <none>          57m   v1.28.15
node2.example.com   Ready    <none>          56m   v1.28.15
[root@node1 ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d57d8f49-xp62k   1/1     Running   0          64m
calico-node-5b5dz                         1/1     Running   0          58m
calico-node-c5nkj                         1/1     Running   0          57m
calico-node-nkl7z                         1/1     Running   0          64m
coredns-6554b8b87f-h6265                  1/1     Running   0          86m
coredns-6554b8b87f-p4vrt                  1/1     Running   0          86m
etcd-master                               1/1     Running   0          86m
kube-apiserver-master                     1/1     Running   0          86m
kube-controller-manager-master            1/1     Running   0          86m
kube-proxy-62dkk                          1/1     Running   0          86m
kube-proxy-tjfkz                          1/1     Running   0          57m
kube-proxy-wddnk                          1/1     Running   0          58m
kube-scheduler-master                     1/1     Running   0          86m
[root@node2 ~]# cd /etc/kubernetes/
[root@node2 kubernetes]# scp root@master:/etc/kubernetes/admin.conf .
The authenticity of host 'master (192.168.100.10)' can't be established.
ED25519 key fingerprint is SHA256:y4HKCsxdyGjRT5ATUzJg3sM/iq8qqVN5w8oELVFn35c.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'master' (ED25519) to the list of known hosts.
root@master's password: 
admin.conf                                                                             100% 5650     7.1MB/s   00:00    
[root@node2 ~]# mkdir -p $HOME/.kube
[root@node2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@node2 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@node2 ~]# kubectl get nodes
NAME                STATUS   ROLES           AGE   VERSION
master              Ready    control-plane   86m   v1.28.15
node1.exmaple.com   Ready    <none>          58m   v1.28.15
node2.example.com   Ready    <none>          57m   v1.28.15
[root@node2 ~]# kubectl get pod -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d57d8f49-xp62k   1/1     Running   0          65m
calico-node-5b5dz                         1/1     Running   0          59m
calico-node-c5nkj                         1/1     Running   0          58m
calico-node-nkl7z                         1/1     Running   0          65m
coredns-6554b8b87f-h6265                  1/1     Running   0          87m
coredns-6554b8b87f-p4vrt                  1/1     Running   0          87m
etcd-master                               1/1     Running   0          87m
kube-apiserver-master                     1/1     Running   0          87m
kube-controller-manager-master            1/1     Running   0          87m
kube-proxy-62dkk                          1/1     Running   0          87m
kube-proxy-tjfkz                          1/1     Running   0          58m
kube-proxy-wddnk                          1/1     Running   0          59m
kube-scheduler-master                     1/1     Running   0          87m