Deploying a Kubernetes 1.32.7 Cluster on openEuler 24.03 (One Master, Two Workers)
1. Environment
1.1 Operating System
openEuler-24.03-LTS-x86_64-dvd.iso
1.2 Hardware
Prepare three servers, each with at least 2 CPU cores, 4 GB of RAM, and more than 50 GB of disk, running openEuler-24.03 and able to reach the Internet.
Hostname | IP address | Role |
---|---|---|
master1 | 192.168.48.11 | master node |
node01 | 192.168.48.14 | worker node |
node02 | 192.168.48.15 | worker node |
The software versions used in this deployment are listed below:
Component | Version |
---|---|
OS | openEuler-24.03 |
Docker | 28.3.2 |
Kubernetes | 1.32.7 |
Calico | 3.29.0 |
2. Deployment
2.1 Host Initialization
Perform the following initialization steps on all hosts.
2.1.1 Configure IP Addresses
On master1:
nmcli connection modify "ens33" ipv4.method manual ipv4.addresses 192.168.48.11/24 ipv4.gateway 192.168.48.2 ipv4.dns "223.5.5.5 114.114.114.114 8.8.8.8" && nmcli connection down "ens33" && nmcli connection up "ens33"
On node01:
nmcli connection modify "ens33" ipv4.method manual ipv4.addresses 192.168.48.14/24 ipv4.gateway 192.168.48.2 ipv4.dns "223.5.5.5 114.114.114.114 8.8.8.8" && nmcli connection down "ens33" && nmcli connection up "ens33"
On node02:
nmcli connection modify "ens33" ipv4.method manual ipv4.addresses 192.168.48.15/24 ipv4.gateway 192.168.48.2 ipv4.dns "223.5.5.5 114.114.114.114 8.8.8.8" && nmcli connection down "ens33" && nmcli connection up "ens33"
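To confirm the address took effect, a quick check (assuming the interface is named ens33 as in the commands above):
ip addr show ens33
nmcli -g ipv4.addresses connection show ens33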
2.1.2 Set Hostnames
Set each host's name according to its role.
On master1:
hostnamectl set-hostname master1
On node01:
hostnamectl set-hostname node01
On node02:
hostnamectl set-hostname node02
2.1.3 Configure Name Resolution
Configure /etc/hosts entries on all nodes:
cat >> /etc/hosts <<EOF
192.168.48.11 master1
192.168.48.14 node01
192.168.48.15 node02
EOF
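A quick sanity check that the new entries resolve (run from any node after the entries have been added):
ping -c 1 master1
ping -c 1 node01
ping -c 1 node02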
2.1.4 Disable the Firewall
Disable firewalld on all nodes:
systemctl disable --now firewalld
2.1.5 Disable SELinux
Disable SELinux on all nodes:
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
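To verify: setenforce 0 switches SELinux to permissive mode immediately, and the config file change takes effect after a reboot.
getenforce                                # Permissive now, Disabled after a reboot
grep '^SELINUX=' /etc/selinux/config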
2.1.6 Disable Swap
Turn off the swap partition on all nodes:
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
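To confirm swap is fully off:
swapon --show                             # should print nothing
free -h | grep -i swap                    # Swap totals should be 0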
2.1.7 Configure Yum Repositories
Configure the default openEuler repository and the Docker repository on all nodes:
# Switch the openEuler repo to a domestic mirror (JXUST)
sed -i 's|http://repo.openeuler.org|https://mirrors.jxust.edu.cn/openeuler/|g' /etc/yum.repos.d/openEuler.repo
yum clean all
yum makecache
# Install docker-ce dependencies
dnf install -y device-mapper-persistent-data lvm2
# Add the docker-ce repo; openEuler 24.03 maps to the CentOS 9 packages
dnf config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
sed -i 's+\$releasever+9+g' /etc/yum.repos.d/docker-ce.repo
dnf makecache
2.1.8 Install Common Tools
Install some commonly used tools on all nodes:
dnf install wget jq psmisc vim net-tools telnet git bash-completion -y
2.1.9 Configure NTP Time Synchronization
Synchronize time on all nodes. Afterwards, run date to confirm each node's clock matches.
sed -i '3 s/^/# /' /etc/chrony.conf
sed -i '4 a server ntp.aliyun.com iburst' /etc/chrony.conf
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources
2.1.10 Configure NetworkManager
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

# Explanation of the unmanaged-devices parameter:
# It lists devices that NetworkManager must not manage. It has two parts here:
#   interface-name:cali*  - interfaces whose names start with "cali" (e.g. cali0, cali1) are left alone.
#   interface-name:tunl*  - interfaces whose names start with "tunl" (e.g. tunl0, tunl1) are left alone.
# This keeps NetworkManager from touching the interfaces that Calico creates, so Calico can manage them itself.
2.1.11 Tune Resource Limits
Configure on all nodes:
ulimit -SHn 65535
cat >> /etc/security/limits.conf << EOF
* soft nofile 100000
* hard nofile 100000
* soft nproc 65535
* hard nproc 65535
* soft memlock unlimited
* hard memlock unlimited
EOF
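The limits.conf entries only apply to new login sessions; a quick check after logging in again:
ulimit -n                                 # expect 100000
ulimit -u                                 # expect 65535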
2.1.12 Install IPVS-Related Tools
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# Load the modules now; the conf file makes them load again at boot
systemctl restart systemd-modules-load.service
# Check that the modules are loaded
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
2.1.13 Tune Kernel Parameters
sed -i '/net.ipv4.ip_forward/d' /etc/sysctl.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
net.ipv4.conf.all.route_localnet = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
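If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A sketch to load it now and at boot (the conf file name below is an assumption; any name under modules-load.d works), then re-check the key values:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # assumed file name
sysctl --system
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables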
2.1.14 Configure Passwordless SSH from master1
During installation, the configuration files and certificates are all generated on master1, so master1 needs passwordless SSH access to the other nodes in order to copy the files over.
Write a setup script, free-ssh.sh:
vim free-ssh.sh
#!/bin/bash
# Install sshpass if it is not already present
if ! command -v sshpass &> /dev/null; then
    echo "Installing sshpass..."
    dnf install -y sshpass || {
        echo "Failed to install sshpass; check the network or install it manually!"
        exit 1
    }
fi

ssh-keygen -f /root/.ssh/id_rsa -P ''

# Target hosts and root password
IP_LIST=("192.168.48.11" "192.168.48.14" "192.168.48.15")
SSH_PASS="elysia123."

# Copy the key to each host
for HOST in "${IP_LIST[@]}"; do
    echo "Configuring $HOST ..."
    sshpass -p "$SSH_PASS" ssh-copy-id -o StrictHostKeyChecking=no -o ConnectTimeout=5 root@"$HOST" &> /dev/null
    # Check whether the copy succeeded
    if [ $? -eq 0 ]; then
        echo "$HOST configured successfully!"
    else
        echo "$HOST failed; check the network or the password!"
    fi
done
chmod +x free-ssh.sh
sh free-ssh.sh
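To confirm passwordless login works from master1 (BatchMode makes ssh fail instead of prompting if the key was not installed):
for h in 192.168.48.14 192.168.48.15; do ssh -o BatchMode=yes root@$h hostname; done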
2.2 Docker Deployment
2.2.1 Docker as the Runtime
Install docker-ce on all nodes:
dnf install -y docker-ce
Recent kubelet versions recommend the systemd cgroup driver, so change Docker's cgroup driver to systemd as well. Also configure registry mirrors to speed up image pulls:
cat > /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": [
    "https://jsrg2e0s.mirror.aliyuncs.com",
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://registry.docker-cn.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "live-restore": true
}
EOF
2.2.2 Install cri-dockerd
Note: Kubernetes removed the built-in dockershim in v1.24, so cri-dockerd is required to keep using Docker as the container runtime.
Download: https://github.com/Mirantis/cri-dockerd/releases/
# 1. Download cri-dockerd
mkdir k8s && cd k8s
wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16-3.fc35.x86_64.rpm
# 2. Install cri-dockerd
dnf install cri-dockerd-0.3.16-3.fc35.x86_64.rpm
# Edit the service file /usr/lib/systemd/system/cri-docker.service and change
#   ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
# to the following, so the pause image is pulled from the Aliyun mirror:
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
# 3. Start cri-docker
systemctl daemon-reload
systemctl restart docker cri-docker.socket cri-docker
systemctl enable docker cri-docker
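Before moving on, it is worth checking that Docker picked up the systemd cgroup driver and that cri-dockerd is active:
docker info | grep -i 'cgroup driver'          # should show: systemd
systemctl is-active docker cri-docker.socket cri-docker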
2.2.3 Install the Kubernetes Packages
# 1. Configure the Kubernetes repository (Aliyun mirror)
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/rpm/repodata/repomd.xml.key
EOF
# 2. List the available versions
yum list kubelet --showduplicates | sort -r | grep 1.32
# 3. Install kubelet, kubeadm, kubectl and kubernetes-cni
dnf install -y kubelet kubeadm kubectl kubernetes-cni
# 4. Set the kubelet cgroup driver to systemd so it matches the one Docker uses
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
# Only enable kubelet at boot for now; it has no configuration yet and will start automatically once the cluster is initialized
systemctl enable kubelet
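A quick check that the expected 1.32.x tooling is installed:
kubeadm version -o short
kubelet --version
kubectl version --client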
2.3 Kubernetes Cluster Configuration
2.3.1 Initialize the Cluster
Run on master1 only. Generate the initialization file kubeadm-init.yaml:
kubeadm config print init-defaults > kubeadm-init.yaml
Edit the file with vim kubeadm-init.yaml and make the following changes:

- advertiseAddress: the control-plane address (the master host IP).
  Change advertiseAddress: 1.2.3.4 to advertiseAddress: 192.168.48.11
- criSocket: the CRI socket path.
  Change criSocket: unix:///var/run/containerd/containerd.sock to criSocket: unix:///var/run/cri-dockerd.sock
- name: the node name.
  Change name: node to name: master1
- imageRepository: the Aliyun image mirror; without it the image pulls will fail.
  Change imageRepository: registry.k8s.io to imageRepository: registry.aliyuncs.com/google_containers
- kubernetesVersion: the Kubernetes version.
  Change kubernetesVersion: 1.32.0 to kubernetesVersion: 1.32.7

Note: the image mirror must be configured, otherwise the installation fails because the default registry is unreachable.

Append the following at the end of the file to enable IPVS mode for kube-proxy:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

# Initialize the cluster from the config file
kubeadm init --config=kubeadm-init.yaml --upload-certs --v=6

# Expected output on success:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.48.11:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5d9cf674b29f636cc50beacdc2dafb4faf368512da0ad38f99667ec039cf02da
2.3.2 Configure kubectl
Run on master1 only:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
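kubectl should now be able to reach the API server:
kubectl cluster-info
kubectl get nodes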
2.3.3 Join the Worker Nodes
Run on node01 and node02:
kubeadm join 192.168.48.11:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5d9cf674b29f636cc50beacdc2dafb4faf368512da0ad38f99667ec039cf02da \
    --cri-socket unix:///var/run/cri-dockerd.sock
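The bootstrap token printed by kubeadm init is valid for 24 hours. If it has expired, generate a fresh join command on master1 and append the cri-dockerd socket yourself:
kubeadm token create --print-join-command
# then add: --cri-socket unix:///var/run/cri-dockerd.sock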
2.3.4 Check the Cluster Status
[root@master1 k8s]# kubectl get nodes
NAME      STATUS     ROLES           AGE     VERSION
master1   NotReady   control-plane   3m24s   v1.32.7
node01    NotReady   <none>          54s     v1.32.7
node02    NotReady   <none>          23s     v1.32.7
# Check the container runtime
[root@master1 3.29]# kubectl describe node | grep Runtime
  Container Runtime Version:  docker://28.3.2
  Container Runtime Version:  docker://28.3.2
  Container Runtime Version:  docker://28.3.2
The nodes report NotReady at this point because no CNI network plugin has been deployed yet; that is done in the next section.
2.4 Deploy the Cluster Network Plugin
2.4.1 Download calico.yaml
Run on master1 only. Downloading from GitHub may require a proxy.
curl -O -L https://raw.githubusercontent.com/projectcalico/calico/v3.29.0/manifests/calico.yaml
2.4.2 Pull the Images
Check which images Calico needs:
[root@master1 3.29]# grep -i image: calico.yaml
          image: docker.io/calico/cni:v3.29.0
          image: docker.io/calico/cni:v3.29.0
          image: docker.io/calico/node:v3.29.0
          image: docker.io/calico/node:v3.29.0
          image: docker.io/calico/kube-controllers:v3.29.0
Pull these three images in advance:
image: docker.io/calico/cni:v3.29.0
image: docker.io/calico/node:v3.29.0
image: docker.io/calico/kube-controllers:v3.29.0
docker pull docker.io/calico/cni:v3.29.0
docker pull docker.io/calico/node:v3.29.0
docker pull docker.io/calico/kube-controllers:v3.29.0
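To confirm the images are present locally before applying the manifest:
docker images | grep calico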
2.4.3 Deploy the Calico Network
kubectl apply -f calico.yaml
This takes a while, roughly five minutes depending on your network. Important: if you used an HTTP proxy to reach the Internet, unset it as soon as the downloads are done, because kubectl and the other components also talk to the apiserver over HTTP(S), and a stray proxy setting can break cluster communication.
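The rollout can be watched until the Calico and CoreDNS pods reach Running (Ctrl-C to stop watching):
kubectl get pods -n kube-system -w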
2.4.4 Verify
[root@master1 3.29]# kubectl get nodes
NAME      STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   3h39m   v1.32.7
node01    Ready    <none>          3h37m   v1.32.7
node02    Ready    <none>          3h36m   v1.32.7
[root@master1 3.29]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-6766b7b6bb-2kntk          1/1     Running   0          3h39m
coredns-6766b7b6bb-tqrmw          1/1     Running   0          3h39m
etcd-master1                      1/1     Running   0          3h39m
kube-apiserver-master1            1/1     Running   0          3h39m
kube-controller-manager-master1   1/1     Running   0          3h39m
kube-proxy-9s6d9                  1/1     Running   0          3h39m
kube-proxy-d49wl                  1/1     Running   0          3h36m
kube-proxy-frfbm                  1/1     Running   0          3h37m
kube-scheduler-master1            1/1     Running   0          3h39m
All nodes are Ready and the core components are all Running.
At this point Kubernetes is largely installed. Common add-ons such as Helm, the Dashboard, Metrics Server, and an Ingress controller can be installed from their official documentation as needed.