Kubernetes High-Availability Cluster Installation
I. Node Planning
OS: CentOS 7
master01   IP: 192.168.1.20   Memory: 8G   CPU: 8C
master02   IP: 192.168.1.21   Memory: 8G   CPU: 8C
master03   IP: 192.168.1.22   Memory: 8G   CPU: 8C
node01     IP: 192.168.1.23   Memory: 8G   CPU: 8C
node02     IP: 192.168.1.24   Memory: 8G   CPU: 8C
lb         IP: 192.168.1.15 (keepalived VIP)
Pod CIDR: 10.96.0.0/12
Service CIDR: 172.16.0.0/12 (first Service IP: 172.16.0.1)
II. Version Selection
This installation uses Kubernetes 1.28 (the v1.28.15 server binaries are downloaded below).
III. hosts Configuration
1. Configure master01
Append the following to /etc/hosts on master01:
192.168.1.20 master01
192.168.1.21 master02
192.168.1.22 master03
192.168.1.23 node01
192.168.1.24 node02
192.168.1.15 lb
2. Sync to the other hosts
for i in master02 master03 node01 node02;do
scp /etc/hosts root@$i:/etc/hosts
done
IV. SSH Configuration
1. Initialize SSH keys on all nodes
ssh-keygen -t rsa

2. Passwordless login from master01 to the other nodes
for node in master01 master02 master03 node01 node02;do
ssh-copy-id -i .ssh/id_rsa.pub root@$node
done
V. Switch the yum Repository to the Aliyun Mirror
vim /etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
VI. CentOS Base Environment Setup
1. Install basic tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

2. Disable firewalld, dnsmasq, and SELinux on all nodes
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
3. Disable the swap partition on all nodes and comment out swap in fstab
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
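A quick sanity check that swap is really off; the Swap row should show all zeros:
free -h
# the Swap: line should read 0B total / 0B used / 0B free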
4. Synchronize time on all nodes
4.1 Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
4.2 Sync the time
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
crontab -e
# add to the crontab
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
5. Configure limits on all nodes
5.1 Raise the limit for the current session
ulimit -SHn 65535
5.2 limits.conf
vim /etc/security/limits.conf
# append the following at the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
6. Upgrade the kernel
6.1 Download
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
6.2 Sync to the other nodes
for node in master02 master03 node01 node02;doscp kernel-ml* root@$node:/root/;done;
6.3 Install on all nodes
cd /root && yum localinstall -y kernel-ml*

6.4 Change the kernel boot order
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
6.5 Check that the default kernel is 4.19
[root@master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
6.6 Reboot all nodes, then check that the running kernel is 4.19
uname -a

6.7 Install ipvsadm on all nodes
yum install ipvsadm ipset sysstat conntrack libseccomp -y

6.7.1 Load the ipvs modules on all nodes
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
6.7.2 ipvs.conf
vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run:
systemctl enable --now systemd-modules-load.service

6.7.3 Check that the modules are loaded
[root@master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0
nf_nat                 32768  1 ip_vs_ftp
ip_vs_sed              16384  0
ip_vs_nq               16384  0
ip_vs_fo               16384  0
ip_vs_dh               16384  0
ip_vs_lblcr            16384  0
ip_vs_lblc             16384  0
ip_vs_wlc              16384  0
ip_vs_lc               16384  0
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 151552  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          143360  2 nf_nat,ip_vs
nf_defrag_ipv6         20480  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs
7. Kernel parameter configuration
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
net.ipv4.conf.all.route_localnet = 1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
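A quick spot-check that the parameters took effect (the net.bridge.* values only resolve once br_netfilter is loaded, which happens in the Containerd section below):
sysctl net.ipv4.ip_forward
# expected output: net.ipv4.ip_forward = 1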
8. After all nodes are configured, reboot them and confirm the configuration survives a restart
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

VII. Installing Base Components
1. Containerd as the runtime
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-20.10.* docker-ce-cli-20.10.* containerd -y
2. Configure the kernel modules Containerd needs on all nodes
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
modprobe -- overlay
modprobe -- br_netfilter
3. Configure the kernel parameters Containerd needs on all nodes
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
4. Generate the Containerd configuration file on all nodes
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
5. Switch Containerd's cgroup driver to systemd on all nodes
vim /etc/containerd/config.toml
Find the containerd.runtimes.runc.options section and set SystemdCgroup = true.
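If you would rather script this than edit the file by hand, a sed one-liner like the following should work against the default config generated above (verify the result afterwards; it assumes the file contains exactly one SystemdCgroup = false line):
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml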

6. On all nodes, point sandbox_image at the following pause image:
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
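As a sketch, this substitution should rewrite whatever default sandbox_image the generated config contains to the Aliyun mirror above:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml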
Registry mirror configuration:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://xxxxxxx.mirror.swr.myhuaweicloud.com"]
The complete configuration file:
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_ca = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    drain_exec_sync_io_timeout = "0s"
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_deprecation_warnings = []
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      ip_pref = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          cni_conf_dir = ""
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_path = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        cni_conf_dir = ""
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_path = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://3a0ac22010f3428eb2cae4ab2aa014f5.mirror.swr.myhuaweicloud.com"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.internal.v1.tracing"]

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.service.v1.tasks-service"]
    rdt_config_file = ""

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    mount_options = []
    root_path = ""
    sync_remove = false
    upperdir_label = false

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

  [plugins."io.containerd.tracing.processor.v1.otlp"]

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0
7. Start Containerd on all nodes and enable it at boot
systemctl daemon-reload
systemctl enable --now containerd
8. Point the crictl client at the runtime socket on all nodes
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
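A quick smoke test; crictl info should print the runtime status as JSON with no connection errors:
crictl info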
VIII. Installing etcd
1. Download
wget https://github.com/etcd-io/etcd/releases/download/v3.5.7/etcd-v3.5.7-linux-amd64.tar.gz

2. Extract the etcd package into the executable directory
tar -zxvf etcd-v3.5.7-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.5.7-linux-amd64/etcd{,ctl}

3. Sync the binaries to master02 and master03
for node in master02 master03;do
scp /usr/local/bin/etcd* root@$node:/usr/local/bin/
done
IX. Installing Kubernetes
1. Download
wget https://dl.k8s.io/v1.28.15/kubernetes-server-linux-amd64.tar.gz

2. Extract the package into the executable directory
tar -zvxf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

3. Sync the binaries to the other nodes
MasterNodes='master02 master03'
WorkNodes='node01 node02'
for node in $MasterNodes;do
scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} root@$node:/usr/local/bin/
done
for node in $WorkNodes;do
scp /usr/local/bin/kube{let,-proxy} root@$node:/usr/local/bin/
done
X. Generating Certificates
1. Download the certificate tools
Run these on master01:
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2. etcd certificates
2.1 Create the etcd certificate directory on all master nodes
mkdir /etc/etcd/ssl -p

2.2 Clone the configuration repository
git clone https://gitee.com/dukuan/k8s-ha-install.git
# switch to the v1.28 branch
cd /root/k8s-ha-install && git checkout manual-installation-v1.28.x
# change into the pki directory
cd /root/k8s-ha-install/pki
2.3 Generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
-ca=/etc/etcd/ssl/etcd-ca.pem \
-ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,master01,master02,master03,192.168.1.20,192.168.1.21,192.168.1.22 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
2.4 Copy the certificates to the other master nodes
MasterNodes='master02 master03'
for NODE in $MasterNodes;do
ssh root@$NODE "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem;do
scp /etc/etcd/ssl/${FILE} root@$NODE:/etc/etcd/ssl/${FILE}
done
done
3. Kubernetes component certificates
3.1 Generate the Kubernetes CA on master01
cd /root/k8s-ha-install/pki
mkdir -p /etc/kubernetes/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# 172.16.0.1 is the first IP of the Kubernetes Service CIDR; if you change the Service CIDR, change 172.16.0.1 accordingly.
# For a non-HA cluster, 192.168.1.15 should be master01's IP.
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-hostname=172.16.0.1,192.168.1.15,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.20,192.168.1.21,192.168.1.22 \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
3.2 Generate the apiserver aggregation (front-proxy) certificates
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
This warning can be ignored:
2025/03/30 00:16:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements")
3.3 Generate the controller-manager certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
3.4 set-cluster: define a cluster entry
# Note: for a non-HA cluster, change 192.168.1.15:8443 to master01's address and change 8443 to the apiserver port (default 6443).
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.15:8443 \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
3.5 Define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
3.6 set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem \
--client-key=/etc/kubernetes/pki/controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
3.7 Use a context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
3.8 Generate the kube-scheduler certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
# Note: for a non-HA cluster, change 192.168.1.15:8443 to master01's address and change 8443 to the apiserver port (default 6443).
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.15:8443 \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem \
--client-key=/etc/kubernetes/pki/scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
3.9 Configure the admin certificate
cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem \
-ca-key=/etc/kubernetes/pki/ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
# Note: for a non-HA cluster, change 192.168.1.15:8443 to master01's address and change 8443 to the apiserver port (default 6443).
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.15:8443 \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem \
--client-key=/etc/kubernetes/pki/admin-key.pem \
--embed-certs=true \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.kubeconfig
3.10 Create the ServiceAccount key -> secret
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
4. Send the certificates to the other master nodes
for NODE in master02 master03; do
for FILE in $(ls /etc/kubernetes/pki | grep -v etcd);do
scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
done
for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig;do
scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
done
done
Check the certificate files:
ls /etc/kubernetes/pki/
ls /etc/kubernetes/pki/ | wc -l
XI. etcd Configuration
1. master01 configuration
vim /etc/etcd/etcd.config.yml
name: 'master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.20:2380'
listen-client-urls: 'https://192.168.1.20:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.20:2380'
advertise-client-urls: 'https://192.168.1.20:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master01=https://192.168.1.20:2380,master02=https://192.168.1.21:2380,master03=https://192.168.1.22:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
2. master02 configuration
vim /etc/etcd/etcd.config.yml
name: 'master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.21:2380'
listen-client-urls: 'https://192.168.1.21:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.21:2380'
advertise-client-urls: 'https://192.168.1.21:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master01=https://192.168.1.20:2380,master02=https://192.168.1.21:2380,master03=https://192.168.1.22:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
3. master03 configuration
vim /etc/etcd/etcd.config.yml
name: 'master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.22:2380'
listen-client-urls: 'https://192.168.1.22:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.22:2380'
advertise-client-urls: 'https://192.168.1.22:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master01=https://192.168.1.20:2380,master02=https://192.168.1.21:2380,master03=https://192.168.1.22:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
4. Create the service
Create and start the etcd service on all master nodes.
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all master nodes:
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
5. Check etcd status
export ETCDCTL_API=3
etcdctl \
--endpoints="192.168.1.20:2379,192.168.1.21:2379,192.168.1.22:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem \
endpoint status --write-out=table
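Besides endpoint status, endpoint health gives a simple pass/fail per member (same TLS flags):
etcdctl \
--endpoints="192.168.1.20:2379,192.168.1.21:2379,192.168.1.22:2379" \
--cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem \
--cert=/etc/kubernetes/pki/etcd/etcd.pem \
--key=/etc/kubernetes/pki/etcd/etcd-key.pem \
endpoint health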
XII. High-Availability Configuration
1. Install keepalived and haproxy on all master nodes
yum install keepalived haproxy -y

2. Configure HAProxy on all masters (the configuration is identical)
vim /etc/haproxy/haproxy.cfg
global
  maxconn 2000
  ulimit-n 16384
  log 127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode http
  option httplog
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8smaster
  bind 0.0.0.0:8443
  bind 127.0.0.1:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8smaster

backend k8smaster
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server master01 192.168.1.20:6443 check
  server master02 192.168.1.21:6443 check
  server master03 192.168.1.22:6443 check
3. master01 keepalived
Configure keepalived on all master nodes; the configuration differs per node, so be careful to distinguish them.
[root@master01 pki]# vim /etc/keepalived/keepalived.conf   # mind each node's IP and NIC (the interface parameter)
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    mcast_src_ip 192.168.1.20
    virtual_router_id 51
    priority 101
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.15
    }
    track_script {
        chk_apiserver
    }
}
4. master02 keepalived
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.1.21
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.15
    }
    track_script {
        chk_apiserver
    }
}
5. master03 keepalived
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    mcast_src_ip 192.168.1.22
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.1.15
    }
    track_script {
        chk_apiserver
    }
}
6. Health-check configuration
On all master nodes:
# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
chmod +x /etc/keepalived/check_apiserver.sh

7. Start haproxy and keepalived on all master nodes
systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived
8. VIP test
ping 192.168.1.15

Important: once keepalived and haproxy are installed, you must verify that keepalived works correctly:
telnet 192.168.1.15 8443

XIII. Kubernetes Component Configuration
Create the required directories on all nodes:
mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

1. Apiserver
Create the kube-apiserver service on all master nodes. Note: for a non-HA cluster, change 192.168.1.15 to master01's address.
1.1 master01 configuration
Note: this document uses a Kubernetes Service CIDR of 172.16.0.0/12 (written as 172.16.0.1/12 in the --service-cluster-ip-range flag below); it must not overlap with the host network or the Pod CIDR. Adjust as needed.
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logging-format=text \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.1.20 \
      --service-cluster-ip-range=172.16.0.1/12 \
      --service-node-port-range=10000-40000 \
      --etcd-servers=https://192.168.1.20:2379,https://192.168.1.21:2379,https://192.168.1.22:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
1.2 master02 configuration
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logging-format=text \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.1.21 \
      --service-cluster-ip-range=172.16.0.1/12 \
      --service-node-port-range=10000-40000 \
      --etcd-servers=https://192.168.1.20:2379,https://192.168.1.21:2379,https://192.168.1.22:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
1.3 master03 configuration
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2 \
      --logging-format=text \
      --allow-privileged=true \
      --bind-address=0.0.0.0 \
      --secure-port=6443 \
      --advertise-address=192.168.1.22 \
      --service-cluster-ip-range=172.16.0.1/12 \
      --service-node-port-range=10000-40000 \
      --etcd-servers=https://192.168.1.20:2379,https://192.168.1.21:2379,https://192.168.1.22:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
      --client-ca-file=/etc/kubernetes/pki/ca.pem \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
      --authorization-mode=Node,RBAC \
      --enable-bootstrap-token-auth=true \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
      --requestheader-allowed-names=aggregator \
      --requestheader-group-headers=X-Remote-Group \
      --requestheader-extra-headers-prefix=X-Remote-Extra- \
      --requestheader-username-headers=X-Remote-User
      # --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
1.4 Start the apiserver
Start kube-apiserver on all master nodes:
systemctl daemon-reload && systemctl enable --now kube-apiserver

Check the kube-apiserver status:
systemctl status kube-apiserver

2. ControllerManager
Configure the kube-controller-manager service on all master nodes (the configuration is identical on every master).
Note: this document uses a Pod CIDR of 10.96.0.0/12; it must not overlap with the host network or the Kubernetes Service CIDR. Adjust as needed.
[root@master01 pki]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logging-format=text \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --node-eviction-rate=0.1 \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.96.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
2.1 Start kube-controller-manager on all master nodes
systemctl daemon-reload
systemctl enable --now kube-controller-manager
3. Scheduler
Configure the kube-scheduler service on all master nodes (the configuration is identical on every master).
# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logging-format=text \
      --bind-address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable --now kube-scheduler
XIV. TLS Bootstrapping Configuration
The bootstrap configuration only needs to be created on master01.
# Note: for a non-HA cluster, change 192.168.1.15:8443 to master01's address and change 8443 to the apiserver port (default 6443).
cd /root/k8s-ha-install/bootstrap

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.15:8443 \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
Note: if you change the token-id and token-secret in bootstrap.secret.yaml, the two values must stay consistent everywhere they appear in the file (including the Secret name) and keep the same lengths, and the token used in the command above (c8ad9c.2e4d610cf3e7426e, i.e. <token-id>.<token-secret>) must match what you set.
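A quick consistency check before applying the file; this assumes bootstrap.secret.yaml follows the standard bootstrap-token Secret layout, with the token embedded in the Secret name and in the token-id/token-secret fields:
grep -E "name:|token-id|token-secret" bootstrap.secret.yaml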
1. Configure client access
mkdir -p /root/.kube
cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
2. Check cluster status
Only continue once the cluster status can be queried normally; otherwise stop and troubleshoot the Kubernetes components for faults.
[root@master01 pki]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok
3. Create the RBAC objects bootstrap needs
[root@master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
XV. Kubelet Configuration
1. Copy the certificates to the other nodes
Copy the certificates from master01 to the other nodes:
cd /etc/kubernetes/
for NODE in master02 master03 node01 node02;do
ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
for FILE in etcd-ca.pem etcd.pem etcd-key.pem;do
scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
done
for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig;do
scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
done
done
2. Create the required directories on all nodes
mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

3. Configure the kubelet service on all nodes
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
4. Configure the kubelet service drop-in on all nodes (this could also be written directly into kubelet.service)
vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
5. Create the kubelet configuration file
Note: if you change the Kubernetes Service CIDR, update the clusterDNS setting in kubelet-conf.yml to the tenth address of the Service CIDR, e.g. 172.16.0.10.
vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 172.16.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
6. Start kubelet on all nodes
systemctl daemon-reload
systemctl enable --now kubelet
At this point it is normal for the system log (/var/log/messages) to show only messages like "Unable to update cni config: no networks found in /etc/cni/net.d". If there are many other errors, or large amounts of inexplicable output, the kubelet configuration is wrong and needs to be checked.
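To inspect the kubelet logs directly instead of tailing /var/log/messages:
journalctl -u kubelet --no-pager | tail -n 50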
7. Check cluster status
[root@master01 kubernetes]# kubectl get node
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   <none>   31m   v1.28.15
master02   NotReady   <none>   72s   v1.28.15
master03   NotReady   <none>   69s   v1.28.15
node01     NotReady   <none>   72s   v1.28.15
node02     NotReady   <none>   72s   v1.28.15
The nodes stay NotReady until a CNI plugin is installed; Calico is deployed in a later step.
XVI. kube-proxy Configuration
Note: for a non-HA cluster, change 192.168.1.15:8443 to master01's address and change 8443 to the apiserver port (default 6443).
Run the following only on master01:
cd /root/k8s-ha-install

kubectl -n kube-system create serviceaccount kube-proxy

kubectl create clusterrolebinding system:kube-proxy \
--clusterrole system:node-proxier \
--serviceaccount kube-system:kube-proxy

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: kube-proxy-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: kube-proxy # bound to the kube-proxy ServiceAccount
type: kubernetes.io/service-account-token
EOF

JWT_TOKEN=$(kubectl -n kube-system get secret/kube-proxy-token --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=https://192.168.1.15:8443 \
--kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes \
--token=${JWT_TOKEN} \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes \
--cluster=kubernetes \
--user=kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes \
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
1. Send the kubeconfig to the other nodes
for NODE in master02 master03; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
for NODE in node01 node02; do
scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done
2. Add the kube-proxy configuration and service file on all nodes
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
      --config=/etc/kubernetes/kube-proxy.yaml \
      --v=2
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
If you changed the cluster Pod CIDR, update clusterCIDR in kube-proxy.yaml to your own Pod CIDR.
vim /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.96.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
3. Start kube-proxy on all nodes
systemctl daemon-reload
systemctl enable --now kube-proxy
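Because kube-proxy runs in ipvs mode here, the IPVS virtual-server table should start filling in once it is up (ipvsadm was installed earlier):
ipvsadm -Ln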
XVII. Installing Calico
1. Run the following steps only on master01
cd /root/k8s-ha-install/calico/
# change Calico's placeholder CIDR to your own Pod CIDR
sed -i "s#POD_CIDR#10.96.0.0/12#g" calico.yaml
2. Check that the CIDR is your own Pod CIDR
grep -n "10.96.0.0/12" calico.yaml
3. Create
kubectl apply -f calico.yaml

4. Check the pod status
[root@master01 CoreDNS]# kubectl get po -nkube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6d48795585-gclj7   1/1     Running   0          5m54s
calico-node-482wx                          1/1     Running   0          5m54s
calico-node-j7p69                          1/1     Running   0          5m54s
calico-node-mjr89                          1/1     Running   0          5m54s
calico-node-w8v4q                          1/1     Running   0          5m54s
calico-node-wm4nc                          1/1     Running   0          5m54s
5. Check that the cluster is Ready
[root@master01 CoreDNS]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    <none>   22h   v1.28.15
master02   Ready    <none>   21h   v1.28.15
master03   Ready    <none>   21h   v1.28.15
node01     Ready    <none>   21h   v1.28.15
node02     Ready    <none>   21h   v1.28.15
XVIII. Installing CoreDNS
If you changed the Kubernetes Service CIDR, set CoreDNS's service IP to the tenth IP of the Service CIDR.
cd /root/k8s-ha-install/
# appending "0" turns the kubernetes Service IP (172.16.0.1) into the DNS IP 172.16.0.10
COREDNS_SERVICE_IP=`kubectl get svc | grep kubernetes | awk '{print $3}'`0
sed -i "s#KUBEDNS_SERVICE_IP#${COREDNS_SERVICE_IP}#g" CoreDNS/coredns.yaml
kubectl create -f CoreDNS/coredns.yaml
[root@master01 CoreDNS]# kubectl get po -nkube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-788958459b-ncr8x   1/1     Running   0          11s
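To verify DNS end to end, a throwaway pod can query the cluster DNS; busybox:1.28 is assumed to be pullable from your network (its nslookup behaves more predictably than newer busybox tags):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default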
XIX. Installing Metrics Server
In recent Kubernetes versions, system resource collection is done by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.
cd /root/k8s-ha-install/metrics-server
kubectl create -f .
Wait for metrics-server to start, then check the status:
[root@master01 dashboard]# kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   309m         3%     2409Mi          30%
master02   218m         2%     2040Mi          25%
master03   274m         3%     2059Mi          26%
node01     97m          1%     1137Mi          14%
node02     103m         1%     1229Mi          15%
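Pod-level metrics work the same way once the metrics API is serving:
kubectl top pod -n kube-system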
XX. Installing the Dashboard
1. Install
The Dashboard displays the various resources in the cluster; it can also show Pod logs in real time and execute commands inside containers.
cd /root/k8s-ha-install/dashboard/
kubectl create -f .
2. Log in to the Dashboard
[root@master01 dashboard]# kubectl get svc -nkubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.19.236.49   <none>        8000/TCP        28m
kubernetes-dashboard        NodePort    172.17.107.71   <none>        443:20110/TCP   28m
The Dashboard is then reachable in a browser at https://<any-node-IP>:20110 (the NodePort shown above).
3. Retrieve the token
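Note: on v1.24+ clusters, ServiceAccount token Secrets are no longer auto-created, so the grep in the command below finds a secret only if the dashboard manifests create one explicitly. If it comes back empty, a short-lived token can be requested directly (assuming the manifests created an admin-user ServiceAccount in kube-system):
kubectl -n kube-system create token admin-user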
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

4. Login result
(Screenshot omitted: the Dashboard login page, where the token above is pasted to sign in.)

