
Deploying Kubernetes from Binaries

I. Environment Preparation

Disable the firewall, SELinux, and swap, and configure the Aliyun yum mirror.

1. Configure the hosts file

vim /etc/hosts
[root@master1 ~]# cat /etc/hosts
192.168.200.50 master1
192.168.200.51 master2
192.168.200.52 node1

2. Configure passwordless SSH between hosts

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub master1
ssh-copy-id -i .ssh/id_rsa.pub master2
ssh-copy-id -i .ssh/id_rsa.pub node1

3. Adjust kernel parameters

## Load the br_netfilter module
modprobe br_netfilter
## Verify the module is loaded
lsmod | grep br_netfilter
## Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
## Apply the parameters
sysctl -p /etc/sysctl.d/k8s.conf
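Before applying a sysctl drop-in like the one above, its keys can be sanity-checked with a short script. A minimal sketch, assuming the same three keys as above; the `check_sysctl_file` helper is illustrative, not part of the original steps:

```shell
# Write the same k8s.conf content to a temporary copy and verify each key equals 1.
conf=$(mktemp)
cat > "$conf" <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

check_sysctl_file() {
  # $1: file; remaining args: keys that must be set to 1
  local file=$1; shift
  local key val ok=0
  for key in "$@"; do
    # find the key, take the value after '=', strip whitespace
    val=$(grep -E "^${key}[[:space:]]*=" "$file" | awk -F= '{gsub(/ /,"",$2); print $2}')
    if [ "$val" != "1" ]; then
      echo "missing or wrong: $key"
      ok=1
    fi
  done
  return $ok
}

check_sysctl_file "$conf" net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward \
  && echo "k8s.conf OK"
```

This only checks the file's contents; `sysctl -p` is still what actually applies them to the kernel.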

4. Install utility packages

yum install openssh-clients -y
yum install ntpdate -y
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet rsync

5. Configure time synchronization

## Sync with a public NTP source
ntpdate cn.pool.ntp.org
## Turn the sync into a cron job (hourly)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
### Restart the crond service
service crond restart
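A crontab entry is five time fields followed by the command; the hourly line above can be linted with a tiny helper. A sketch; the `cron_line_ok` function is illustrative, not part of the original setup:

```shell
# Check that a crontab line has at least 5 time fields plus a command.
cron_line_ok() {
  set -f             # disable globbing so '*' fields are not expanded to filenames
  # shellcheck disable=SC2086
  set -- $1          # split the line into words
  set +f
  # minute hour day-of-month month day-of-week command...
  [ "$#" -ge 6 ]
}

if cron_line_ok '0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org'; then
  echo "cron line looks well-formed"
fi
```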

6. Enable IPVS

vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
## Copy the script to the other nodes and run it there as well
[root@master1 modules]# scp /etc/sysconfig/modules/ipvs.modules master2:/etc/sysconfig/modules/
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
[root@master1 modules]# scp /etc/sysconfig/modules/ipvs.modules node1:/etc/sysconfig/modules/
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

II. Install Docker

## Configure the repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
## Install
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io -y
## Start the Docker service
service docker start
## Configure the registry mirror and cgroup driver
vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://docker.1ms.run"]
}
systemctl daemon-reload
systemctl restart docker
## kubelet defaults to the systemd cgroup driver; Docker's driver must match it.

III. Set Up the etcd Cluster

1. Create directories for configuration and certificate files

## On all machines
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl

2. Install the certificate signing tool cfssl

mkdir -p /data/work  ## any working directory will do
cd /data/work/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
## Make them executable
chmod +x *
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

3. Configure the CA certificate

## Create the CA certificate signing request
[root@master1 work]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
## CN is the common name; C must be the two-letter country code; ST is the province, L the city, O the organization, OU the organizational unit
[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master1 work]#
## Create the CA signing policy
[root@master1 work]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

4. Generate the etcd certificates

## Create the etcd certificate request; replace the hosts entries with your own etcd node IPs
[root@master1 work]# cat etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.200.50",
    "192.168.200.51",
    "192.168.200.199"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
## 192.168.200.199 is the floating VIP, reserved in case high availability is added later
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

5. Deploy the etcd cluster

wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

Worker nodes do not need etcd; I installed it on node1 anyway.

[root@master1 work]# ls
ca-config.json  ca-csr.json  ca.pem    etcd-csr.json  etcd.pem
ca.csr          ca-key.pem   etcd.csr  etcd-key.pem   etcd-v3.4.13-linux-amd64.tar.gz
[root@master1 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz 
[root@master1 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
scp -r etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
scp -r etcd-v3.4.13-linux-amd64/etcd* node1:/usr/local/bin/
## Create the configuration file
[root@master1 work]# cat etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.50:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.50:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.50:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.50:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.200.50:2380,etcd2=https://192.168.200.51:2380,etcd3=https://192.168.200.52:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
## Explanation
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS: listen address for client access
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
## Create the systemd unit file
[root@master1 work]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

mkdir -p /var/lib/etcd   ## WorkingDirectory must exist before the service starts
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
## Copy everything to the other master node
for i in master2;do rsync -vaz etcd.conf $i:/etc/etcd/;done
for i in master2;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
for i in master2;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done
## Start the etcd cluster
[root@master2 ~]# cat /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.51:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.51:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.51:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.51:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.200.50:2380,etcd2=https://192.168.200.51:2380,etcd3=https://192.168.200.52:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master1 work]# systemctl daemon-reload
[root@master1 work]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master1 work]# systemctl start etcd.service
### Starting the first member hangs until the second one is started; it completes once master2 is up
[root@master2 ~]# systemctl daemon-reload
[root@master2 ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master2 ~]# systemctl start etcd.service
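Only ETCD_NAME and the node IP differ between the two config files above, so the per-node file can be rendered from a small template. A sketch; the `render_etcd_conf` helper is illustrative, not part of the original steps:

```shell
# Render an etcd.conf for a given member name and IP; the cluster list is fixed
# to the three nodes used throughout this guide.
render_etcd_conf() {
  local name=$1 ip=$2
  cat <<EOF
#[Member]
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.200.50:2380,etcd2=https://192.168.200.51:2380,etcd3=https://192.168.200.52:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# e.g. regenerate the master2 file shown above
render_etcd_conf etcd2 192.168.200.51
```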

6. Check the etcd cluster

[root@master1 ~]# export ETCDCTL_API=3
[root@master1 ~]# /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.200.50:2379,https://192.168.200.51:2379 endpoint health

IV. Install the Kubernetes Components

Component                                      Version requirement               Notes
kube-apiserver                                 must match                        control-plane core
kube-controller-manager                        must match                        same version as the apiserver
kube-scheduler                                 must match                        same version as the apiserver
kubelet                                        should match (±1 minor allowed)   worker-node component
kube-proxy                                     should match (±1 minor allowed)   follows the kubelet
etcd                                           compatible version                use the officially recommended version
CoreDNS                                        independent                       any compatible version
CNI plugins                                    independent                       any compatible version
Add-ons (metrics-server, dashboard, ingress)   independent                       can be upgraded at any time
kubectl                                        must match                        talks to the kube-apiserver
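The skew rule in the table (kubelet and kube-proxy may deviate from the apiserver by at most one minor version) can be checked mechanically. A sketch, assuming version strings of the form vX.Y.Z; the helper names are illustrative:

```shell
# Extract the minor version from a vX.Y.Z string, e.g. v1.28.3 -> 28.
minor_of() {
  echo "$1" | cut -d. -f2
}

# Succeeds if the node component's minor version is within 1 of the apiserver's.
skew_ok() {
  local api node diff
  api=$(minor_of "$1")
  node=$(minor_of "$2")
  diff=$(( api - node ))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

skew_ok v1.28.3 v1.27.6 && echo "kubelet version acceptable"
```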
## Download the binaries
https://www.downloadkubernetes.com/
### Kubernetes v1.28.3 server binaries
https://dl.k8s.io/v1.28.3/kubernetes-server-linux-amd64.tar.gz
tar -xf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
# Copy to the other control node and the worker
rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
scp kubelet kube-proxy node1:/usr/local/bin/
[root@master1 work]# mkdir -p /etc/kubernetes/
[root@master1 work]# mkdir -p /etc/kubernetes/ssl
[root@master1 work]# mkdir /var/log/kubernetes

1. Deploy the kube-apiserver

## The TLS Bootstrapping mechanism
The kubelet starts as a low-privilege bootstrap user and automatically requests a client certificate from the apiserver, which signs it dynamically; the kubelet then uses that certificate for all further communication with the apiserver.
## Create the token.csv file
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
This generates the Kubernetes bootstrap token file (token.csv), used to authenticate the kubelet when it registers with the apiserver:
head -c 16 /dev/urandom
  reads 16 random bytes from /dev/urandom to seed the token.
od -An -t x
  od (octal dump) renders the bytes; -An suppresses the offset column and -t x prints hexadecimal, producing a 32-character hex string.
tr -d ' '
  removes the spaces so the token is one contiguous string.
## Create the CSR request file
[root@master1 work]# cat kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.200.50",
    "192.168.200.51",
    "192.168.200.52",
    "192.168.200.199",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
## 10.255.0.1 is the ClusterIP of the in-cluster kubernetes Service
## Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
## Create the kube-apiserver configuration file
[root@master1 work]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.200.50 \
  --secure-port=6443 \
  --advertise-address=192.168.200.50 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.200.50:2379,https://192.168.200.51:2379,https://192.168.200.52:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
## Explanation
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission-control plugins
--authorization-mode: authorization; enables RBAC and node self-management
--enable-bootstrap-token-auth: enables the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default NodePort allocation range
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit-log settings
## Create the systemd unit file
[root@master1 work]# vim kube-apiserver.service
[root@master1 work]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master1 work]# cp ca*.pem /etc/kubernetes/ssl
[root@master1 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp token.csv /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master1 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz token.csv master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-apiserver.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
## Change the IPs for control node 2
[root@master2 ~]# cat /etc/kubernetes/kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.200.51 \
  --secure-port=6443 \
  --advertise-address=192.168.200.51 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.200.50:2379,https://192.168.200.51:2379,https://192.168.200.52:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"
## Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
## Check the status; the response is 401 because the request is not authenticated
[root@master1 work]# curl --insecure https://192.168.200.50:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
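The anonymous curl above returns a Status object with code 401; that field can be pulled out without jq. A sketch using the response shown above; the `extract_code` helper is illustrative:

```shell
# The response body returned by the unauthenticated request above.
resp='{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}'

# Extract the numeric "code" field with sed (no jq dependency).
extract_code() {
  echo "$1" | sed -n 's/.*"code":[[:space:]]*\([0-9][0-9]*\).*/\1/p'
}

code=$(extract_code "$resp")
echo "status code in body: $code"
```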

2. Deploy kubectl

## Create the CSR request file
[root@master1 work]# cat admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
This CSR configuration generates the Kubernetes administrator (admin) certificate; O is set to system:masters, the group with cluster-admin privileges.
## Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cp admin*.pem /etc/kubernetes/ssl/
## Create the kubeconfig file
## 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.200.50:6443 --kubeconfig=kube.config
# Explanation
kubectl config set-cluster kubernetes
  creates a cluster entry named kubernetes.
--certificate-authority=ca.pem
  the CA root certificate used to verify the API server's certificate.
--embed-certs=true
  embeds the certificate contents directly in kube.config instead of referencing a path.
--server=https://192.168.200.50:6443
  the API server address (the master node IP on port 6443).
--kubeconfig=kube.config
  writes the configuration to kube.config instead of the default ~/.kube/config.
## Inspect kube.config
[root@master1 work]# cat kube.config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVYjNudjl6NkdBSUkwNW5idmJuZGxZbmZCalJZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBkMVlXNW5lR2t4RURBT0JnTlZCQWNUQjA1aApibTVwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HYzNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEkxTURreE1qRXpNekF3TUZvWERUTTFNRGt4TURFek16QXdNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBkMVlXNW5lR2t4RURBT0JnTlZCQWNUQjA1aGJtNXBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HYzNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBeFdsQSt5TlRtRGQxWDhPaGV1LzgKbGUzTXZ5Y2FNVTVSNWczUEg3L1h4YjhVS1JwaGExZHJSenRWU2tod0QwTVJqYk9KSm9jZHFtd0MyQytqUVVZQwpoNUo4SlhsZDJkV2pETUhMVEFXcTlZcStmU0N0OGhSOWhjWnZhNzFySEU4R3gvSCtGYnNQYkVNK3ZwSGhxQkxjCnQ4TXdRdWJUeVBnV090VDZtcVJPcG5LWmJmSkFEbmdjZlZOVzFtaTZIZG1DbmpONVlLaldhMUNjTlZIa2wvQXgKWXhzNCt3WU1BM3k1bUduV0JURUpKSUw0UldOV3ZPMDZDbzlrSUlaa01UcUNFcFBFZkhXTWtnYzQxU0p2Q3JmbApYWDFhZkxXYTN0TVVBRFdvK2RJWGVBM3d6SnQ5R2JibHBuM3pTOExSbVlaeFVLSXFYUmJOb3MvSXpUVTlGUXQyCkN3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVbmNtVlFnWG1UY3dHMy9BY3N5Zk1RK0hKdUY0d0h3WURWUjBqQkJnd0ZvQVVuY21WUWdYbQpUY3dHMy9BY3N5Zk1RK0hKdUY0d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBWE1IRzdoampvaHErNGwrWWg2CjZkYmhRUXlUZm5YaXNjYTNiTGtjcFRXamJOVVR1VnN6MXpDck5hN0dCcEZXUnJrMElkTllDS00rVk5BaHMwL1QKcHAzVUhYY0dEVllyVm9PMThWMmdRM0pzYlRxR0hSREI4bFdrVzdUb3FnbFNOMEZLaG55VkV5TkJ2b2VJYUN5ZgpvbjVRb3VxRFMrQTlJRjdCQnRsL2xpZUpTcEN4Z1NhSGxRODhBZ1ljTUJYT2Y2WFRzZ3IyYkorcVFxdWNYUjhUClBQR0RvcnMzRlpPbzVwb0t6K0ptNDJqTU5vcjBNTUlsNzFuOGFYT1R2WEs5WFd4a2g2SmlPaTc5OERLUXg1VjcKUGp2cG1NVkZVdmxCdWRGdEtjaDhDenI5ZnpPR0pRanN3Y0FCSkVvTHFlZXU2bTJxMjdFOTB5aTBTTE4rK0ZYSgpKSnc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.200.50:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
[root@master1 work]#
## 2. Set the client credentials
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
## 3. Set the context in the kubeconfig file
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
## Explanation
kubectl config set-context kubernetes
  creates a new context named kubernetes.
--cluster=kubernetes
  the cluster this context connects to (the one configured earlier with kubectl config set-cluster kubernetes).
--user=admin
  the user to authenticate as (the admin certificate configured with kubectl config set-credentials admin).
--kubeconfig=kube.config
  writes the configuration to kube.config instead of the default ~/.kube/config.
## 4. Switch to the new context
[root@master1 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master1 work]# mkdir ~/.kube -p
[root@master1 work]# cp kube.config ~/.kube/config
Copying kube.config to the default location ~/.kube/config means kubectl can be used directly from now on, without passing --kubeconfig=kube.config every time.

3. Grant the kubernetes certificate access to the kubelet API

[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
## Binds the API server's identity to the permissions needed to call the kubelet API
## Explanation
clusterrolebinding
  In Kubernetes, a ClusterRoleBinding binds a ClusterRole to a User/Group/ServiceAccount; once bound, that subject holds all permissions the ClusterRole defines.
--clusterrole=system:kubelet-api-admin
  A built-in ClusterRole that allows access to the full kubelet API (for example /metrics, /logs, /spec, /exec); without it the apiserver may be unable to call some kubelet endpoints.
--user kubernetes
  The user identity the apiserver presents, i.e. the CN of the certificate configured via --kubelet-client-certificate / --kubelet-client-key, which in a binary deployment is typically kubernetes. The binding therefore says: the user named kubernetes may access all kubelet APIs.
kube-apiserver:kubelet-apis
  The name of this ClusterRoleBinding; any unique name works.
## Sync the kubectl config
[root@master2 ~]# mkdir /root/.kube/
[root@master1 work]# rsync -vaz /root/.kube/config master2:/root/.kube/
## Configure kubectl command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile

4. Deploy the kube-controller-manager

## Create the CSR request file
[root@master1 work]# cat kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.200.50",
    "192.168.200.51",
    "192.168.200.52",
    "192.168.200.199"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
## Generate the certificate
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
### Create the kube-controller-manager kubeconfig
## 1. Set the cluster parameters
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.200.50:6443 --kubeconfig=kube-controller-manager.kubeconfig
## This gives kube-controller-manager its own dedicated kubeconfig file
## 2. Set the client credentials
[root@master1 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
## 3. Set the context
[root@master1 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
## 4. Use the context
[root@master1 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
## Create the configuration file kube-controller-manager.conf
[root@master1 work]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"
## Create the systemd unit file
[root@master1 work]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

### Start the service
[root@master1 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master1 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
## Start
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

5. Deploy the kube-scheduler

## Create the CSR request
[root@master1 work]# cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.200.50",
    "192.168.200.51",
    "192.168.200.199"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
## Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
## Create the kube-scheduler kubeconfig
## 1. Set the cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.200.50:6443 --kubeconfig=kube-scheduler.kubeconfig
## 2. Set the client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
## 3. Set the context
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
## 4. Use the context
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
### Create the configuration file kube-scheduler.conf
[root@master1 work]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
## Create the systemd unit file
[root@master1 work]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

## Start the service
[root@master1 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master1 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@master1 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@master1 work]# rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
[root@master1 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
[root@master1 work]# rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

6. Pull the pause image on the node

Kubernetes version    pause version    CoreDNS version
v1.25.x               3.8              1.9.3
v1.26.x               3.9              1.9.3
v1.27.x               3.9              1.10.1
v1.28.x               3.9              1.10.1
v1.29.x               3.9              1.11.1
v1.30.x               3.10             1.11.1
## Official image registry: registry.k8s.io/pause
docker pull registry.k8s.io/pause:3.9
docker pull registry.aliyuncs.com/google_containers/pause:3.9

7. Deploy the kubelet

## Create kubelet-bootstrap.kubeconfig
[root@master1 work]# cd /data/work/
[root@master1 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
## Reads the first comma-separated field (the token) from token.csv into BOOTSTRAP_TOKEN
## A kubeconfig needs at least three parts:
## cluster: cluster information (API server address, certificates)
## user: credentials (token, certificates)
## context: ties cluster + user + namespace together
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.200.50:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
[root@master1 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
[root@master1 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
### Create the configuration file kubelet.json
## Remember to change the IP
[root@master1 work]# cat kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.200.52",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}
## Write the unit file
[root@master1 work]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
## Match the pause image tag to your Kubernetes version
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

[root@node1 ~]# mkdir /etc/kubernetes/ssl -p
[root@master1 work]# scp kubelet-bootstrap.kubeconfig kubelet.json node1:/etc/kubernetes/
[root@master1 work]# scp ca.pem node1:/etc/kubernetes/ssl/
[root@master1 work]# scp kubelet.service node1:/usr/lib/systemd/system/
## Start
[root@node1 ~]# mkdir /var/lib/kubelet
[root@node1 ~]# mkdir /var/log/kubernetes
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl status kubelet
## On master1, approve the CSR so the node can join
[root@master1 work]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-pdm4sh2KNDSWjOlAbMmhqmWZSmgrFpwI9n_2sCUW-10   86m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
[root@master1 work]# kubectl certificate approve node-csr-pdm4sh2KNDSWjOlAbMmhqmWZSmgrFpwI9n_2sCUW-10
certificatesigningrequest.certificates.k8s.io/node-csr-pdm4sh2KNDSWjOlAbMmhqmWZSmgrFpwI9n_2sCUW-10 approved
[root@master1 work]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-pdm4sh2KNDSWjOlAbMmhqmWZSmgrFpwI9n_2sCUW-10   87m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
[root@master1 work]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   21s   v1.20.7
## NotReady because no network plugin is installed yet

8. Deploy kube-proxy

## Create the CSR request
[root@master1 work]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangxi",
      "L": "Nanning",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
## Generate the certificate
[root@master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The kube-proxy certificate is used only for client authentication against the apiserver; it is not bound to any domain or IP, so the empty-hosts warning from cfssl can be ignored. hosts may stay empty because kube-proxy does client-side TLS only, never server-side TLS.
## Create the kubeconfig file
[root@master1 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.200.50:6443 --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master1 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
## Create the kube-proxy configuration file
[root@master1 work]# cat kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.200.52
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.200.0/24
healthzBindAddress: 192.168.200.52:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.200.52:10249
mode: "ipvs"

##Create the systemd service file
[root@master1 work]# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@master1 work]# scp kube-proxy.kubeconfig kube-proxy.yaml node1:/etc/kubernetes/
[root@master1 work]# scp kube-proxy.service node1:/usr/lib/systemd/system/

##Start kube-proxy on the node
[root@node1 ~]# mkdir -p /var/lib/kube-proxy
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable kube-proxy
[root@node1 ~]# systemctl start kube-proxy
[root@node1 ~]# systemctl status kube-proxy
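If kube-proxy fails to authenticate against the apiserver, a quick sanity check is to inspect the subject of the client certificate generated earlier (CN maps to the RBAC user, O to the group). A hedged sketch: since this snippet has no access to the real kube-proxy.pem, it demonstrates the inspection on a throwaway self-signed certificate with the same subject; on the master you would point `openssl x509` at kube-proxy.pem instead.

```shell
# On the real master you would run:
#   openssl x509 -in kube-proxy.pem -noout -subject
# Demo against a throwaway self-signed cert carrying the expected subject:
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=system:kube-proxy/O=k8s" 2>/dev/null
subject=$(openssl x509 -in "$tmp/cert.pem" -noout -subject)
echo "$subject"   # e.g. subject=CN = system:kube-proxy, O = k8s
rm -rf "$tmp"
```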

9. Deploy the calico component

docker pull calico/node:v3.27.2

##The calico.yaml manifest
https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml

Pod CIDR: the manifest defaults to 192.168.0.0/16; if the Pod network you specified during deployment (kubeadm or binary) is different, change it accordingly.
Image addresses: docker.io/calico/... and quay.io/... may be unreachable from mainland China; switch to an Aliyun/USTC mirror if needed.

[root@master1 work]# kubectl apply -f calico.yaml
[root@master1 work]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-45rwg   1/1     Running   0          16m
calico-node-6c7bs
##If a pod shows Pending, find the cause — the node may not have joined via kubelet:
kubectl describe pod <calico-pod-name> -n kube-system
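A small helper for this troubleshooting step: filter `kubectl get pods` output down to pods that are not in the Running state. On the cluster you would pipe `kubectl get pods -n kube-system` into it; the demo below uses canned output with a hypothetical pod name:

```shell
# Print NAME and STATUS for every pod whose STATUS column is not Running.
not_running() {
  awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
}

# Real usage (assumption: run on a master):
#   kubectl get pods -n kube-system | not_running

# Demo with canned `kubectl get pods` output:
printf '%s\n' \
  'NAME                READY  STATUS   RESTARTS  AGE' \
  'calico-node-6c7bs   0/1    Pending  0         1m' \
  'coredns-abc-k7s2h   1/1    Running  0         1m' \
  | not_running
# prints: calico-node-6c7bs Pending
```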

10. Deploy the coredns component

##The CoreDNS version must match the Kubernetes version
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns

CoreDNS is the default DNS service in Kubernetes:
  A Pod can resolve a Service's IP via service-name.namespace.svc.cluster.local
  ##e.g. kubernetes.default.svc.cluster.local
  Whenever a Service is created or deleted, CoreDNS updates its records automatically
  It can also handle external domain resolution, blocklists, and load balancing

[root@master1 ~]# kubectl apply -f coredns.yaml

[root@master1 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6949477b58-45rwg   1/1     Running   0          19m
calico-node-6c7bs                          1/1     Running   0          4m39s
coredns-7bf4bd64bd-k7s2h                   1/1     Running   0          15s

[root@master1 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.255.0.2   <none>        53/UDP,53/TCP,9153/TCP   24s
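The naming scheme CoreDNS serves can be sketched as a tiny helper: `svc_fqdn` builds the in-cluster DNS name a Pod uses for a Service. This assumes the default cluster domain `cluster.local`; change it if yours differs.

```shell
# Build the FQDN a Pod uses to resolve a Service's ClusterIP:
# <service>.<namespace>.svc.<cluster-domain>
svc_fqdn() {
  printf '%s.%s.svc.cluster.local\n' "$1" "$2"
}

svc_fqdn kubernetes default    # prints: kubernetes.default.svc.cluster.local
svc_fqdn kube-dns kube-system  # prints: kube-dns.kube-system.svc.cluster.local
```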

5. Deploy a Tomcat service (test)

##Pull the tomcat image on the node first

[root@master1 ~]# vim tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8080
    image: <the tomcat image you pulled>
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@master1 ~]# kubectl apply -f tomcat.yaml

##Expose it for external access
[root@master1 ~]# vim tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev

[root@master1 ~]# kubectl apply -f tomcat-service.yaml
[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.255.0.1      <none>        443/TCP          11h
tomcat       NodePort    10.255.20.244   <none>        8080:30080/TCP   3m37s

##Access
http://<node-ip>:30080
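The PORT(S) column in the service listing encodes the mapping as port:nodePort/protocol. A small sketch that pulls out the nodePort, handy in scripts that build the test URL; the curl line is commented out since it needs the live cluster:

```shell
# Extract the nodePort from a PORT(S) cell such as "8080:30080/TCP".
nodeport() {
  echo "$1" | awk -F'[:/]' '{ print $2 }'
}

np=$(nodeport "8080:30080/TCP")
echo "$np"   # prints: 30080

# On the cluster (assumption: 192.168.200.52 is node1's IP):
#   curl -I "http://192.168.200.52:$np"
```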

##Next steps: keepalived + nginx for control-plane high availability, the Dashboard UI, and so on

##Practice notes

