
Deploying Kubernetes on Rocky Linux 9.x with kubeadm

This guide builds the cluster with kubeadm, using Docker as the container runtime to pull the Kubernetes components, in a one-master, two-worker topology.


Hostname       IP address
k8s-master01   192.168.1.11
k8s-node01     192.168.1.12
k8s-node02     192.168.1.13

Part 1: Preparation

Create three Rocky Linux 9 virtual machines in VMware Workstation Pro (a minimal install is recommended).

If your version of VMware Workstation Pro (for example, one older than 16.x) has no Rocky Linux 9 preset in the new-VM wizard, use Red Hat Enterprise Linux 9 or a close version as the template instead.

Host hardware configuration

Role           IP address     OS                      Resources                       Key components
k8s-master01   192.168.1.11   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kube-apiserver, etcd, etc.
k8s-node01     192.168.1.12   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kubelet, kube-proxy
k8s-node02     192.168.1.13   Rocky Linux release 9   2 CPUs, 4 GB RAM, 100 GB disk   kubelet, kube-proxy

Setting up the yum repositories

1. Do a minimal system install.
2. Replace the default repositories with the Aliyun mirror:
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i.bak \
    /etc/yum.repos.d/rocky*.repo

dnf makecache
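The same substitution can be previewed on a scratch copy before touching /etc/yum.repos.d. In this sketch, rocky-demo.repo is a made-up sample file, not a real system file:

```shell
# Sketch: apply the same sed expressions to a scratch repo file and
# inspect the result. The heredoc is quoted so $basearch etc. stay literal.
cat > ./rocky-demo.repo << 'EOF'
mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever
#baseurl=http://dl.rockylinux.org/$contentdir/$releasever/BaseOS/$basearch/os/
EOF

# Comment out mirrorlist=, uncomment baseurl= and point it at the Aliyun mirror.
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
    -e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
    -i.bak ./rocky-demo.repo

cat ./rocky-demo.repo
```

The real command applies exactly these two expressions across every /etc/yum.repos.d/rocky*.repo file, keeping a .bak copy of each.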

3. Install the EPEL repository and switch it to a domestic mirror
1) Enable the CRB repository and install the EPEL repo on Rocky Linux 9.
# Rocky Linux 9
dnf config-manager --set-enabled crb
dnf install epel-release

2) Back up the repo files (in case other EPEL repos are configured) before replacing them with a domestic mirror.
Note: the last file, epel-cisco-openh264.repo, has no Aliyun mirror; do not modify it. If it is changed by mistake, simply restore the original file.

cp /etc/yum.repos.d/epel.repo  /etc/yum.repos.d/epel.repo.backup 
cp /etc/yum.repos.d/epel-testing.repo  /etc/yum.repos.d/epel-testing.repo.backup
cp /etc/yum.repos.d/epel-cisco-openh264.repo  /etc/yum.repos.d/epel-cisco-openh264.repo.backup

3) Replace the addresses in the repo files with the Aliyun mirror.

The command below rewrites the URLs in epel.repo and epel-testing.repo; it does not touch epel-cisco-openh264.repo, which keeps working as-is.

sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!https\?://download\.fedoraproject\.org/pub/epel!https://mirrors.aliyun.com/epel!g' \
    -e 's!https\?://download\.example/pub/epel!https://mirrors.aliyun.com/epel!g' \
    -i /etc/yum.repos.d/epel{,-testing}.repo
With the EPEL repositories in place, refresh the cache:

dnf clean all 
dnf makecache

Configure hostnames and IP addresses

# Run the matching command on each node:
[root@localhost ~]#hostnamectl set-hostname k8s-master01
[root@localhost ~]#hostnamectl set-hostname k8s-node01
[root@localhost ~]#hostnamectl set-hostname k8s-node02

# Configure the IP address on master01

[root@k8s-master01 ~]#vi /etc/NetworkManager/system-connections/ens160.nmconnection


[connection]
id=ens160
uuid=ff8b8a02-ec88-301d-8e64-4f88b4551949
type=ethernet
autoconnect-priority=-999
interface-name=ens160
timestamp=1744709836

[ethernet]

[ipv4]
method=manual
address1=192.168.1.11/24,192.168.1.2
dns=114.114.114.114;

[ipv6]
addr-gen-mode=eui64
method=auto

[proxy]


[root@k8s-master01 network-scripts]# nmcli connection reload
[root@k8s-master01 network-scripts]# nmcli connection up ens160

# Configure the IP addresses on node01 and node02 the same way

[root@k8s-node01 ~]# vi /etc/NetworkManager/system-connections/ens160.nmconnection
# set the IPv4 address to 192.168.1.12/24 with gateway 192.168.1.2, as on the master

[root@k8s-node01 ~]# nmcli connection reload

[root@k8s-node01 ~]# nmcli connection up ens160

[root@k8s-node02 ~]# vi /etc/NetworkManager/system-connections/ens160.nmconnection
# set the IPv4 address to 192.168.1.13/24 with gateway 192.168.1.2, as on the master

[root@k8s-node02 ~]# nmcli connection reload

[root@k8s-node02 ~]# nmcli connection up ens160

Configure hosts resolution

[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 k8s-master01
192.168.1.12 k8s-node01
192.168.1.13 k8s-node02


[root@k8s-node01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 k8s-master01
192.168.1.12 k8s-node01
192.168.1.13 k8s-node02

[root@k8s-node02 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.11 k8s-master01
192.168.1.12 k8s-node01
192.168.1.13 k8s-node02
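Rather than editing /etc/hosts by hand on each node, the three entries can be appended in one shot with a heredoc. The sketch below writes to a scratch file ./hosts.demo instead of /etc/hosts so the effect is easy to inspect:

```shell
# Sketch: append the cluster name-resolution entries in one step.
# On a real node, point HOSTS_FILE at /etc/hosts instead of the demo path.
HOSTS_FILE=./hosts.demo
cat >> "$HOSTS_FILE" << 'EOF'
192.168.1.11 k8s-master01
192.168.1.12 k8s-node01
192.168.1.13 k8s-node02
EOF

# Verify that all three entries landed.
grep -c 'k8s-' "$HOSTS_FILE"
```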

# Set up passwordless SSH; run on k8s-master01 only
[root@k8s-master01 ~]# ssh-keygen -f ~/.ssh/id_rsa -N '' -q

# Copy the public key to the other 2 nodes
[root@k8s-master01 ~]# ssh-copy-id k8s-node01
[root@k8s-master01 ~]# ssh-copy-id k8s-node02

Disable the firewall and SELinux and configure time synchronization on all nodes

[root@k8s-master01 ~]#systemctl disable --now firewalld            
[root@k8s-master01 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config
[root@k8s-master01 ~]#setenforce 0

[root@k8s-node01 ~]#systemctl disable --now firewalld

[root@k8s-node01 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config

[root@k8s-node01 ~]#setenforce 0

[root@k8s-node02 ~]#systemctl disable --now firewalld

[root@k8s-node02 ~]#sed -i '/^SELINUX=/ c SELINUX=disabled' /etc/selinux/config

[root@k8s-node02 ~]#setenforce 0

[root@k8s-master01 ~]#dnf install -y chrony
# Point chrony at a domestic NTP server
[root@k8s-master01 ~]#sed -i '/^pool/ c pool ntp1.aliyun.com  iburst' /etc/chrony.conf
[root@k8s-master01 ~]#systemctl restart chronyd
[root@k8s-master01 ~]#systemctl enable chronyd

[root@k8s-master01 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 47.96.149.233                 2   6    17     2  -2147us[-2300us] +/-   46ms

# The other hosts need the same chrony installation and identical configuration.

Enable IPVS

[root@k8s-master01 ~]#cat >> /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_conntrack
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

[root@k8s-master01 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y

[root@k8s-master01 ~]#systemctl restart systemd-modules-load.service

[root@k8s-node01 ~]#cat >> /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_conntrack
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

[root@k8s-node01 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y

[root@k8s-node01 ~]#systemctl restart systemd-modules-load.service

[root@k8s-node02 ~]#cat >> /etc/modules-load.d/ipvs.conf << EOF
br_netfilter
ip_conntrack
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

[root@k8s-node02 ~]#dnf install ipvsadm ipset sysstat conntrack libseccomp -y

[root@k8s-node02 ~]#systemctl restart systemd-modules-load.service

Raise the open-file and process limits

[root@k8s-master01 ~]#ulimit -SHn 65535
cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
Check the result:
[root@k8s-master01 ~]#ulimit -a

System tuning for Kubernetes

[root@k8s-master01 ~]#cat > /etc/sysctl.d/k8s_better.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

[root@k8s-master01 ~]#modprobe br_netfilter
[root@k8s-master01 ~]#lsmod |grep conntrack
[root@k8s-master01 ~]#modprobe ip_conntrack
[root@k8s-master01 ~]#sysctl -p /etc/sysctl.d/k8s_better.conf

# Apply the same sysctl tuning on k8s-node01 and k8s-node02 as well.

Part 2: Install and run the container runtime

# Install dependencies
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repository
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/rhel/docker-ce.repo

# Install Docker CE
[root@k8s-master01 ~]# yum makecache fast
[root@k8s-master01 ~]# yum -y install docker-ce

[root@k8s-master01 ~]# docker -v
Docker version 28.0.4, build b8034c0
 


# Configure domestic registry mirrors
[root@k8s-master01 ~]# mkdir -p /etc/docker/
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://p3kgr6db.mirror.aliyuncs.com",
    "https://docker.m.daocloud.io",
    "https://your_id.mirror.aliyuncs.com",
    "https://docker.nju.edu.cn/",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://cr.console.aliyun.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
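A syntax error in daemon.json stops the Docker daemon from starting at all, so it is worth validating the file before restarting. A minimal sketch, assuming python3 is available and using a scratch path ./daemon.json rather than the real /etc/docker/daemon.json:

```shell
# Sketch: write a minimal daemon.json to a scratch path and check that it
# parses as valid JSON before it would ever reach /etc/docker/daemon.json.
cat > ./daemon.json << 'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on malformed JSON, so this catches typos early.
python3 -m json.tool ./daemon.json > /dev/null && echo "daemon.json OK"
```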


Enable Docker at boot and start it
[root@k8s-master01 ~]# systemctl enable --now docker

# Run exactly the same commands on k8s-node01 and k8s-node02

Check the Docker version:
# docker version

Install cri-dockerd on all three nodes

wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16-3.fc35.x86_64.rpm
wget -c https://rpmfind.net/linux/almalinux/8.10/BaseOS/x86_64/os/Packages/libcgroup-0.41-19.el8.x86_64.rpm
yum install -y libcgroup-0.41-19.el8.x86_64.rpm
yum install -y cri-dockerd-0.3.16-3.fc35.x86_64.rpm

Enable the cri-docker service

systemctl enable cri-docker

Configure a domestic pause-image mirror for cri-dockerd

[root@k8s-master01 ~]#vim /usr/lib/systemd/system/cri-docker.service

[root@k8s-node01 ~]# vim /usr/lib/systemd/system/cri-docker.service
[root@k8s-node02 ~]# vim /usr/lib/systemd/system/cri-docker.service
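The usual change in cri-docker.service is to append a pause-image flag to the ExecStart line so the kubelet's sandbox image is pulled from the Aliyun registry. A sketch of the modified line; the pause:3.10 tag matches Kubernetes 1.32 but should be confirmed against `kubeadm config images list` for your version:

```
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.10
```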

# Reload and restart the Docker components
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker

# Check the status of the Docker components

[root@k8s-master01 ~]#systemctl status docker cri-docker.socket cri-docker

[root@k8s-node01 ~]#systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker

[root@k8s-node01 ~]#systemctl status docker cri-docker.socket cri-docker

[root@k8s-node02 ~]#systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker

[root@k8s-node02 ~]#systemctl status docker cri-docker.socket cri-docker

Part 3: Install the Kubernetes packages on all three nodes

# Add the Aliyun Kubernetes yum repository

[root@k8s-master01 ~]#
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF

# Install kubelet, kubeadm, kubectl and kubernetes-cni
[root@k8s-master01 ~]#yum install -y kubelet kubeadm kubectl kubernetes-cni

# To keep the cgroup driver used by Docker and by the kubelet consistent, modify the following file:

[root@k8s-master01 ~]#vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

[root@k8s-master01 ~]# systemctl enable kubelet
 

#node1

[root@k8s-node01 ~]#

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF

[root@k8s-node01 ~]#yum install -y kubelet kubeadm kubectl kubernetes-cni

[root@k8s-node01 ~]#vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

[root@k8s-node01 ~]# systemctl enable kubelet
 

#node2

[root@k8s-node02 ~]#

cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF

[root@k8s-node02 ~]#yum install -y kubelet kubeadm kubectl kubernetes-cni

[root@k8s-node02 ~]#vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"

[root@k8s-node02 ~]# systemctl enable kubelet

Part 4: Initialize the Kubernetes cluster

# Generate and edit the init configuration on the master

[root@k8s-master01 ~]#kubeadm config print init-defaults > kubeadm-init.yaml

[root@k8s-master01 ~]#vi kubeadm-init.yaml

Change advertiseAddress to: 192.168.1.11

Change criSocket to: unix:///var/run/cri-dockerd.sock

Change name to: k8s-master01

Change imageRepository to: registry.aliyuncs.com/google_containers

Change kubernetesVersion to: 1.32.2

At the end of the file, append the following to enable IPVS:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
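Putting the edits together, the relevant parts of kubeadm-init.yaml end up looking like the sketch below. Other generated fields stay at their defaults, and v1beta4 is the API version kubeadm 1.32 prints; confirm both against your own generated file:

```
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: k8s-master01
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: 1.32.2
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```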

# Initialize Kubernetes with kubeadm according to the config file
[root@k8s-master01 ~]#kubeadm init --config=kubeadm-init.yaml --upload-certs --v=6

# On the master host

[root@k8s-master01 ~]#mkdir -p $HOME/.kube 
[root@k8s-master01 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
[root@k8s-master01 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf

# Join node01 and node02 to the cluster. The token and hash below come from the kubeadm init output; if the token has expired, print a fresh join command on the master with `kubeadm token create --print-join-command` (append the --cri-socket flag when using it).

[root@k8s-node01 ~]# kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1c8cadee79fc7739c3084fa08f1d4347bb0f0ae67d7cb38b329c7f2481ee0048 --cri-socket unix:///var/run/cri-dockerd.sock

[root@k8s-node02 ~]#  kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1c8cadee79fc7739c3084fa08f1d4347bb0f0ae67d7cb38b329c7f2481ee0048 --cri-socket unix:///var/run/cri-dockerd.sock

# Check the cluster from the master

[root@k8s-master01 ~]# kubectl get node

# Run only on master01
[root@k8s-master01 ~]# curl -O https://docs.projectcalico.org/archive/v3.28/manifests/calico.yaml


[root@k8s-master01 ~]# vim calico.yaml
The two lines below are commented out by default; uncomment them and set the value on the second line to the pod network specified during kubeadm initialization.
3680             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
3681             # chosen from this range. Changing this value after installation will have
3682             # no effect. This should fall within `--cluster-cidr`.
3683             - name: CALICO_IPV4POOL_CIDR
3684               value: "10.244.0.0/16"
3685             # Disable file logging so `kubectl logs` works.


# calico.tar.gz is a pre-saved bundle of the three Calico v3.28.0 images (e.g. exported with docker save); loading it on each node avoids pulling them over the network.
[root@k8s-master ~]# ls
anaconda-ks.cfg  calico.tar.gz  calico.yaml  kubeadm-init.yaml

[root@k8s-master ~]# docker load -i calico.tar.gz
29ebc113185d: Loading layer  3.582MB/3.582MB
de34b16b5b80: Loading layer  75.58MB/75.58MB
Loaded image: calico/kube-controllers:v3.28.0
3ba0ed02b4de: Loading layer  205.4MB/205.4MB
5f70bf18a086: Loading layer  1.024kB/1.024kB
Loaded image: calico/cni:v3.28.0
30d979f3b1cb: Loading layer  354.5MB/354.5MB
Loaded image: calico/node:v3.28.0

[root@k8s-master ~]# scp calico.tar.gz k8s-node01:~
calico.tar.gz                                                   100%  610MB  92.1MB/s   00:06
[root@k8s-master ~]# scp calico.tar.gz k8s-node02:~
calico.tar.gz                                                   100%  610MB  89.3MB/s   00:06


[root@k8s-node01 ~]# docker load -i calico.tar.gz
[root@k8s-node02 ~]# docker load -i calico.tar.gz
29ebc113185d: Loading layer  3.582MB/3.582MB
de34b16b5b80: Loading layer  75.58MB/75.58MB
Loaded image: calico/kube-controllers:v3.28.0
3ba0ed02b4de: Loading layer  205.4MB/205.4MB
5f70bf18a086: Loading layer  1.024kB/1.024kB
Loaded image: calico/cni:v3.28.0
30d979f3b1cb: Loading layer  354.5MB/354.5MB
Loaded image: calico/node:v3.28.0

Deploy the Calico network
[root@k8s-master01 ~]# kubectl apply -f calico.yaml

Verify:
[root@k8s-master01 ~]# kubectl get pod -n kube-system 

[root@k8s-master01 ~]# kubectl get nodes
[root@k8s-master01 ~]# kubectl get pod -n kube-system

Extra: kubectl command auto-completion

yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
