[Cloud Ops] Installing Kubernetes (with Docker + Calico)
Preface
This article records the full process of building a Kubernetes (k8s) v1.28 cluster on three CentOS hosts: environment preparation, Docker deployment, CRI adaptation, installation of the k8s components, Calico network configuration, and an application test. Domestic (Chinese) mirror sources are used throughout to speed up downloads and avoid the usual network-timeout and version-compatibility problems. It is intended as a practical reference for operations engineers.
I. Cluster Environment Planning
1. Host Configuration
| Role | CPU | Memory | Disk | IP Address | Hostname | Pre-installed |
|---|---|---|---|---|---|---|
| Control node | 4 cores | 4 GB | 200 GB | 192.168.100.128 | master | Docker |
| Worker node | 4 cores | 4 GB | 200 GB | 192.168.100.129 | node1 | Docker |
| Worker node | 4 cores | 4 GB | 200 GB | 192.168.100.130 | node2 | Docker |
2. Core Software Versions
- Kubernetes: v1.28.0 (kubeadm, kubelet, and kubectl kept at the same version)
- Docker: latest stable release (installed from the Aliyun mirror)
- CRI shim: cri-dockerd v0.3.4 (k8s 1.24+ no longer supports Docker natively)
- Network plugin: Calico v3.25 (compatible with k8s 1.23-1.28)
- Kernel requirement: ≥ 4.19 (upgrade beforehand)
3. Network Planning
- Pod CIDR: 10.244.0.0/16 (must match the Calico configuration)
- Service CIDR: 10.96.0.0/12 (the cluster-internal virtual network)
II. Prerequisite Environment Setup (run on all nodes)
1. Basic Configuration
(1) Set hostnames
# On the master node
[root@master ~]# hostnamectl set-hostname master
# On node1
[root@node1 ~]# hostnamectl set-hostname node1
# On node2
[root@node2 ~]# hostnamectl set-hostname node2
(2) Configure hostname resolution
Edit /etc/hosts and add the cluster node mappings:
[root@all ~]# vim /etc/hosts
Add the following:
192.168.100.128 master
192.168.100.129 node1
192.168.100.130 node2
Test connectivity with ping master -c 2 and ping node1 -c 2, and make sure all nodes can reach each other.
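If you prefer not to edit the file interactively, a minimal equivalent that appends the same entries on each node (assuming they are not already present):
[root@all ~]# cat >> /etc/hosts <<EOF
192.168.100.128 master
192.168.100.129 node1
192.168.100.130 node2
EOF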
(3) Install dependency packages
[root@all ~]# yum -y install vim lrzsz unzip wget net-tools tree bash-completion conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp git psmisc telnet unzip gcc gcc-c++ make
What some of these packages are for:
| Category | Packages | Purpose |
|---|---|---|
| File handling | vim, lrzsz, unzip | Editing / upload & download / extracting ZIP archives |
| Networking | net-tools, ipvsadm | Network configuration / IPVS load balancing |
| Monitoring | sysstat, psmisc | Performance monitoring / process management |
| Build tools | gcc, make | Compiling code / build automation |
| Security | iptables, libseccomp | Firewalling / container syscall restriction |
2. System Tuning
(1) Disable the firewall and SELinux
# Disable the firewall (permanently)
[root@all ~]# systemctl disable firewalld --now
# Disable SELinux (immediately and persistently)
[root@all ~]# setenforce 0
[root@all ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
(2) Disable the swap partition
Kubernetes requires swap to be disabled to avoid the performance penalty:
# Disable immediately
[root@all ~]# swapoff -a
# Disable permanently (comment out the swap entry)
[root@all ~]# vim /etc/fstab
# Find the line containing the swap entry and comment it out with '#', as in the example below
# /dev/mapper/centos-swap swap swap defaults 0 0
(3) Disable NetworkManager (optional; not needed for now)
[root@all ~]# systemctl stop NetworkManager
[root@all ~]# systemctl disable NetworkManager
(4) Upgrade the kernel and configure kernel parameters (required!)
Kubernetes is demanding on the kernel; upgrade to ≥ 4.19:
# Upgrade the kernel
[root@all ~]# yum update -y kernel   # remember to reboot after the upgrade
[root@all ~]# reboot
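Note that the stock CentOS 7 repositories ship the 3.10 kernel, so yum update -y kernel by itself will not reach 4.19. One common route is the third-party ELRepo repository; a hedged sketch (the URLs and the kernel-lt package name follow ELRepo's published instructions, so verify the current release before running this):
# Import the ELRepo signing key and enable the repository
[root@all ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@all ~]# yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
# Install the long-term-support kernel from the elrepo-kernel repo
[root@all ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
# Make the newly installed kernel the default boot entry, then reboot
[root@all ~]# grub2-set-default 0
[root@all ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@all ~]# reboot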
After the reboot, configure the kernel parameters by creating /etc/sysctl.d/kubernetes.conf:
[root@all ~]# vim /etc/sysctl.d/kubernetes.conf
# Enable bridged traffic to pass through iptables filtering
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Enable IPv4 forwarding
net.ipv4.ip_forward=1
# Keep swapping to a minimum
vm.swappiness=0
# Do not check whether enough physical memory is available (always allow overcommit)
vm.overcommit_memory=1
# Apply the settings immediately
[root@all ~]# sysctl --system
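A quick check that the settings took effect (the bridge parameters only exist once the br_netfilter module is loaded; the ipvs.conf step below makes that persistent):
[root@all ~]# modprobe br_netfilter
[root@all ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1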
(5) Adjust Linux resource limits (optional)
# Temporarily raise the maximum number of open file handles
[root@all ~]# ulimit -SHn 65535
# Persistent configuration (append to /etc/security/limits.conf)
[root@all ~]# cat >> /etc/security/limits.conf <<EOF
# Soft open-file limit for all users
* soft nofile 655360
# Hard open-file limit for all users (must be at least the soft limit)
* hard nofile 655360
# Soft process-count limit for all users
* soft nproc 655350
# Hard process-count limit for all users
* hard nproc 655350
# Soft memory-lock limit for all users: unlimited
* soft memlock unlimited
# Hard memory-lock limit for all users: unlimited
* hard memlock unlimited
EOF
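These limits only apply to new login sessions; after logging in again, a quick sanity check:
[root@all ~]# ulimit -Sn    # soft open-file limit
[root@all ~]# ulimit -Hn    # hard open-file limit
[root@all ~]# ulimit -u     # max user processes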
3. Time Synchronization
Keep the clocks on all nodes in sync to avoid problems such as certificate validity errors:
[root@all ~]# yum -y install chrony
[root@all ~]# systemctl restart chronyd
[root@all ~]# systemctl enable chronyd
# List all time sources (NTP servers) chronyd is configured to use and their synchronization state
[root@all ~]# chronyc sources -v
# Set the system clock from the hardware clock
[root@all ~]# hwclock -s
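To confirm the node is actually synchronized rather than just running chronyd, for example:
[root@all ~]# chronyc tracking    # shows the reference server, current offset and leap status
[root@all ~]# timedatectl         # the NTP synchronized field should read "yes"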
4. Load the IPVS Kernel Modules
IPVS is used by kube-proxy for load balancing and performs better than the default iptables mode:
# Create the module configuration file
[root@all ~]# cat >>/etc/modules-load.d/ipvs.conf<<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
EOF
# Restart the module-loading service
[root@all ~]# systemctl restart systemd-modules-load
# Verify that the modules are loaded
[root@all ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 9
ip_vs 145458 15 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 19149 10
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 143411 10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
What the IPVS kernel modules do:
- IPVS core modules
  - ip_vs: base IPVS load-balancing module
  - ip_vs_rr: Round Robin scheduling algorithm
  - ip_vs_wrr: Weighted Round Robin scheduling algorithm
  - ip_vs_sh: Source Hashing scheduling algorithm
- Connection tracking and filtering
  - nf_conntrack_ipv4: IPv4 connection tracking (needed by NAT / firewalling; merged into nf_conntrack on kernels ≥ 4.19)
  - ip_tables: iptables base framework
  - ipt_REJECT: implements the REJECT packet action
- IP set management
  - ip_set: IP address set management
  - xt_set & ipt_set: iptables extensions for matching against IP sets
- Tunneling and bridging
  - ipip: IP-over-IP tunneling protocol
  - overlay: overlay network support (e.g. Docker multi-host networking)
  - br_netfilter: filtering of bridged traffic (works together with net.bridge.bridge-nf-call-iptables=1)
- Reverse-path filtering
  - ipt_rpfilter: reverse-path validation (guards against IP spoofing)
Typical use cases:
- Kubernetes node initialization: kube-proxy in IPVS mode depends on these modules
- Load-balancing servers: enables the IPVS scheduling algorithms
- Container networking: overlay and bridging support
III. Docker CE Deployment (run on all nodes)
1. Install Docker Dependencies
[root@all ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Notes:
- yum-utils provides yum-config-manager
- the devicemapper storage driver requires device-mapper-persistent-data and lvm2
- Device Mapper is the generic device-mapping mechanism for logical volume management introduced in the Linux 2.6 kernel; it provides a highly modular kernel framework for the block-device drivers used in storage management
2. Configure the Aliyun Docker Repository
[root@all ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
3. Install and Start docker-ce
# Install docker-ce
[root@all ~]# yum install -y docker-ce
# Enable at boot and start now
[root@all ~]# systemctl enable docker --now
# Check the Docker version and status
[root@all ~]# docker --version
Docker version 26.1.4, build 5650f9b
[root@all ~]# systemctl status docker
4. Configure Docker Registry Mirrors and the Cgroup Driver
Edit /etc/docker/daemon.json to add domestic registry mirrors and set the cgroup driver to systemd (matching Kubernetes). Mind the comma separating the two keys, and keep the file strictly JSON (JSON does not allow comments):
[root@all ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://09def58152000fc00ff0c00057bad7e0.mirror.swr.myhuaweicloud.com",
    "https://do.nark.eu.org",
    "https://dc.j8.work",
    "https://docker.m.daocloud.io",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn",
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://mirror.baidubce.com",
    "https://docker.1panel.live"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker to apply the changes
[root@all ~]# systemctl daemon-reload
[root@all ~]# systemctl restart docker
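Before moving on, it is worth confirming that Docker really picked up the systemd cgroup driver; expect output along these lines:
[root@all ~]# docker info | grep -i cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1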
IV. Install cri-dockerd (run on all nodes)
1. Why It Is Needed
Kubernetes 1.24+ removed the built-in Docker support (dockershim). cri-dockerd exposes the standard CRI interface so that Docker can continue to serve as the container runtime for Kubernetes.
2. Download and Install
# Download the cri-dockerd package (built for CentOS 7)
[root@all ~]# wget https://github.com/mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el7.x86_64.rpm
# Alternatively, browse https://github.com/mirantis/cri-dockerd/ and download the matching release manually
# Install
[root@all ~]# rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm
3. Configure and Start the Service
Edit /usr/lib/systemd/system/cri-docker.service and, on line 10, add the pause-image address:
[root@all ~]# vim /usr/lib/systemd/system/cri-docker.service
Change it to:
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
Start the cri-dockerd service:
[root@all ~]# systemctl daemon-reload
[root@all ~]# systemctl start cri-docker.service
[root@all ~]# systemctl enable cri-docker.service
# Verify: the setup is working if /run/cri-dockerd.sock exists
[root@all ~]# ls /run/cri-*
/run/cri-dockerd.sock
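Optionally, confirm both the service and its socket unit are healthy (the rpm also ships a cri-docker.socket unit):
[root@all ~]# systemctl is-active cri-docker.service cri-docker.socket
active
active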
V. Kubernetes Cluster Deployment
1. Configure the Kubernetes YUM Repository (run on all nodes)
Use the Aliyun mirror to speed up downloading the Kubernetes components:
[root@all ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@all ~]# yum clean all
[root@all ~]# yum list
2. Install the Kubernetes Components (run on all nodes)
Install the pinned version (v1.28.0) of kubeadm, kubelet, and kubectl:
# List the versions available for installation
[root@all ~]# yum list kubeadm.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror, langpacks
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
kubeadm.x86_64 1.9.9-0 kubernetes
kubeadm.x86_64 1.9.8-0 kubernetes
kubeadm.x86_64 1.9.7-0 kubernetes
kubeadm.x86_64 1.9.6-0 kubernetes
kubeadm.x86_64 1.9.5-0 kubernetes
kubeadm.x86_64 1.9.4-0 kubernetes
kubeadm.x86_64 1.9.3-0 kubernetes
kubeadm.x86_64 1.9.2-0 kubernetes
kubeadm.x86_64 1.9.11-0 kubernetes...... ...... ......
kubeadm.x86_64 1.28.2-0 kubernetes
kubeadm.x86_64 1.28.1-0 kubernetes
kubeadm.x86_64 1.28.0-0 kubernetes
# Install version 1.28.0-0
[root@all ~]# yum install -y kubeadm-1.28.0-0 kubelet-1.28.0-0 kubectl-1.28.0-0
kubelet configuration
- Force kubelet to use systemd as its cgroup driver, keeping it consistent with Docker (or whichever container runtime is in use)
- Set the kube-proxy mode to ipvs instead of the default iptables, improving network performance in large clusters
# Configure the kubelet cgroup driver and kube-proxy mode
[root@all ~]# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# The cluster has not been initialized yet, so the kubelet config file does not exist; only enable the service for now
[root@all ~]# systemctl daemon-reload
[root@all ~]# systemctl enable kubelet.service   # Note: this only enables it at boot; it does not start it now
3. Initialize the Control Plane (master only)
(1) List and pull the Kubernetes images
# List the images the cluster needs
[root@master ~]# kubeadm config images list --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
# Pull the required images from the Aliyun registry
[root@master ~]# kubeadm config images pull --cri-socket=unix:///var/run/cri-dockerd.sock --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1# 验证镜像
[root@master ~]# docker images
(2) Create an init configuration file (recommended)
Generate the default configuration and edit the key parameters:
[root@master ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@master ~]# vim kubeadm-init.yaml
Edit the key settings (the line numbers in the comments refer to positions in the generated file):
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.128             # line 12: the master's IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock   # line 15: point at the cri-dockerd runtime
  imagePullPolicy: IfNotPresent
  name: master                                  # line 17: the master's hostname
  taints:                                       # line 18: replace the null value
  - effect: NoSchedule                          # line 19: add the taint
    key: node-role.kubernetes.io/control-plane  # line 20: add
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # line 32: use the Aliyun image repository
kind: ClusterConfiguration
kubernetesVersion: 1.28.0                       # line 34: Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12                   # Service CIDR
  podSubnet: 10.244.0.0/16                      # line 38: add the Pod CIDR (must match Calico)
scheduler: {}
---
# Appended: switch kube-proxy from the default iptables mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
# Appended: set the kubelet cgroup driver to systemd
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
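Before the real initialization, the file can be sanity-checked with kubeadm's dry-run mode, which renders the manifests without changing the node:
[root@master ~]# kubeadm init --config=kubeadm-init.yaml --dry-run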
(3) Initialize the cluster
[root@master ~]# kubeadm init --config=kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.128:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:ea5cbb5b077b7432417db8bde33c471654801dc1324425b772e7a36187d09312
When initialization succeeds, the output includes the command worker nodes use to join the cluster; save it for later.
(4) Configure kubectl
# Regular-user setup (recommended)
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
# root-user setup (takes effect permanently)
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master ~]# source ~/.bash_profile
# Check the health of the core control-plane components
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
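ComponentStatus is deprecated, so the same health information can also be read from the API server directly, for example:
[root@master ~]# kubectl cluster-info
[root@master ~]# kubectl get --raw='/readyz?verbose'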
4. Join the Worker Nodes (node1 and node2 only)
Use the join command printed by the master after initialization, adding the --cri-socket flag:
[root@node1,2 ~]# kubeadm join 192.168.100.128:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:ea5cbb5b077b7432417db8bde33c471654801dc1324425b772e7a36187d09312 --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Note: the token is valid for 24 hours. Once it expires, run
kubeadm token create --print-join-command on the master to generate a new join command.
5. Deploy the CNI Network Plugin (master only)
(1) Check the cluster state
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 40m v1.28.0
node1 NotReady <none> 54s v1.28.0
node2    NotReady   <none>          4s    v1.28.0
# The coredns pods still have no IP address because the network plugin is missing
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66f779496c-6nvcj 0/1 Pending 0 48m <none> <none> <none> <none>
coredns-66f779496c-7dbm9 0/1 Pending 0 48m <none> <none> <none> <none>
etcd-master 1/1 Running 0 48m 192.168.100.128 master <none> <none>
kube-apiserver-master 1/1 Running 0 48m 192.168.100.128 master <none> <none>
kube-controller-manager-master 1/1 Running 0 48m 192.168.100.128 master <none> <none>
kube-proxy-bz9dd 1/1 Running 0 8m8s 192.168.100.130 node2 <none> <none>
kube-proxy-c2d89 1/1 Running 0 8m58s 192.168.100.129 node1 <none> <none>
kube-proxy-xdmrn 1/1 Running 0 48m 192.168.100.128 master <none> <none>
kube-scheduler-master 1/1 Running 0 48m 192.168.100.128 master <none> <none>
Kubernetes networking is fairly complex and is not implemented by the cluster itself. To make the cluster usable, a third-party CNI (Container Network Interface) plugin is deployed. CNI is the container network interface; its job is to provide cross-host networking for containers. The pod IP address range is also referred to as the CIDR.
Calico is a pure layer-3 networking solution that provides connectivity between pods across nodes. Calico treats every node as a router and every pod as an endpoint behind that virtual router; nodes use BGP (Border Gateway Protocol) to learn routes and install routing rules, connecting pods on different nodes. It is currently a mainstream Kubernetes networking choice.
So, just as with a containerd-based setup, we use the Calico network plugin.
(2) Download the Calico manifest
[root@master ~]# wget --no-check-certificate https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
--2025-11-13 15:08:15-- https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
Resolving docs.tigera.io (docs.tigera.io)... 52.74.6.109, 13.215.239.219, 2406:da18:b3d:e201::259, ...
Connecting to docs.tigera.io (docs.tigera.io)|52.74.6.109|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 238089 (233K) [text/yaml]
Saving to: ‘calico.yaml’

100%[========================================================================>] 238,089  223KB/s  in 1.0s

2025-11-13 15:08:17 (223 KB/s) - ‘calico.yaml’ saved [238089/238089]
(3) Adjust the Calico configuration
Make sure the Pod CIDR matches the one in the init configuration:
[root@master ~]# vim calico.yaml
# Around line 4601: uncomment and change the value
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
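If you would rather script the edit than hunt through a several-thousand-line manifest, a hedged sed sketch (it assumes the CALICO_IPV4POOL_CIDR block is commented out exactly as in the v3.25 manifest and that the default 192.168.0.0/16 value appears only there; review the result before applying):
# Uncomment the CIDR env entry (the matching line plus the one after it)
[root@master ~]# sed -i '/CALICO_IPV4POOL_CIDR/,+1 s/# //' calico.yaml
# Replace the default pool with the cluster's Pod CIDR
[root@master ~]# sed -i 's|"192.168.0.0/16"|"10.244.0.0/16"|' calico.yaml
# Double-check the change
[root@master ~]# grep -A1 CALICO_IPV4POOL_CIDR calico.yaml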
(4) Deploy Calico
Note: downloading the Calico components can take quite a while, depending on your network.
[root@master ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
# Watch the rollout (wait until every pod is Running; be patient while the images download)
[root@master ~]# watch kubectl get pods -n kube-system
If pods stay stuck in Pending for a long time, the kernel may be too old; run
yum update -y kernel && reboot to upgrade the kernel and try again.
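Other useful first steps when a pod stays stuck (generic kubectl commands; substitute the real pod name):
# Show scheduling and image-pull events for a stuck pod
[root@master ~]# kubectl describe pod <pod-name> -n kube-system
# Show the container logs, e.g. for a calico-node pod
[root@master ~]# kubectl logs <pod-name> -n kube-system
# Recent events across the namespace
[root@master ~]# kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp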
What the watch output looks like:

6. Verify the Cluster After Deployment (master only)
# Check node status (all nodes should be Ready)
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 69m v1.28.0
node1 Ready <none> 29m v1.28.0
node2    Ready    <none>          28m   v1.28.0
# List all pods in the kube-system namespace
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-jsvcp 1/1 Running 0 18m
calico-node-d2mnh 1/1 Running 0 18m
calico-node-tk5x5 1/1 Running 0 18m
calico-node-x8dv2 1/1 Running 0 18m
coredns-66f779496c-6nvcj 1/1 Running 0 69m
coredns-66f779496c-7dbm9 1/1 Running 0 69m
etcd-master 1/1 Running 0 69m
kube-apiserver-master 1/1 Running 0 69m
kube-controller-manager-master 1/1 Running 0 69m
kube-proxy-bz9dd 1/1 Running 0 28m
kube-proxy-c2d89 1/1 Running 0 29m
kube-proxy-xdmrn 1/1 Running 0 69m
kube-scheduler-master                      1/1     Running   0          69m
# The coredns pods now have IP addresses
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-658d97c59c-jsvcp 1/1 Running 0 18m 10.244.104.2 node2 <none> <none>
calico-node-d2mnh 1/1 Running 0 18m 192.168.100.128 master <none> <none>
calico-node-tk5x5 1/1 Running 0 18m 192.168.100.130 node2 <none> <none>
calico-node-x8dv2 1/1 Running 0 18m 192.168.100.129 node1 <none> <none>
coredns-66f779496c-6nvcj 1/1 Running 0 69m 10.244.104.3 node2 <none> <none>
coredns-66f779496c-7dbm9 1/1 Running 0 69m 10.244.104.1 node2 <none> <none>
etcd-master 1/1 Running 0 69m 192.168.100.128 master <none> <none>
kube-apiserver-master 1/1 Running 0 69m 192.168.100.128 master <none> <none>
kube-controller-manager-master 1/1 Running 0 69m 192.168.100.128 master <none> <none>
kube-proxy-bz9dd 1/1 Running 0 28m 192.168.100.130 node2 <none> <none>
kube-proxy-c2d89 1/1 Running 0 29m 192.168.100.129 node1 <none> <none>
kube-proxy-xdmrn 1/1 Running 0 69m 192.168.100.128 master <none> <none>
kube-scheduler-master 1/1 Running 0 69m 192.168.100.128 master <none> <none>
VI. Cluster Tweaks and Testing
1. kubectl Command Completion
[root@master ~]# yum install bash-completion -y
[root@master ~]# source /usr/share/bash-completion/bash_completion
# Enable kubectl completion permanently for the current user's bash
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master ~]# source ~/.bashrc
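Many operators also shorten kubectl to k; if you do, completion has to be wired up for the alias as well (optional tweak):
[root@master ~]# echo "alias k=kubectl" >> ~/.bashrc
[root@master ~]# echo "complete -o default -F __start_kubectl k" >> ~/.bashrc
[root@master ~]# source ~/.bashrc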
2. Deploy Nginx to Test the Cluster
# Create an Nginx deployment (3 replicas)
[root@master ~]# kubectl create deployment nginx --image=nginx --replicas=3
deployment.apps/nginx created
# Check whether the nginx pods have been created
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-9qzvn 1/1 Running 0 3m10s 10.244.166.130 node1 <none> <none>
nginx-7854ff8877-jpmnl 1/1 Running 0 3m10s 10.244.166.129 node1 <none> <none>
nginx-7854ff8877-jtrn4   1/1     Running   0          3m10s   10.244.104.4     node2   <none>           <none>
# View Pod and Service information
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-7854ff8877-9qzvn 1/1 Running 0 3m56s
pod/nginx-7854ff8877-jpmnl 1/1 Running 0 3m56s
pod/nginx-7854ff8877-jtrn4   1/1     Running   0          3m56s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   77m
# Expose the Nginx deployment as a NodePort service
[root@master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
# Check again
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-7854ff8877-9qzvn 1/1 Running 0 4m42s
pod/nginx-7854ff8877-jpmnl 1/1 Running 0 4m42s
pod/nginx-7854ff8877-jtrn4   1/1     Running   0          4m42s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 78m
service/nginx NodePort 10.109.130.211 <none> 80:32702/TCP 6s
3. Access Test
(1) From the command line
[root@master ~]# curl http://10.109.130.211
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Access successful!
(2) From a browser, visit node1's and node2's IPs
- node1: http://192.168.100.129:32702

- node2: http://192.168.100.130:32702

