Deploying a Kubernetes Cluster with sealos and Managing It
This walkthrough uses five hosts: three as master nodes, one as a worker node, and one as the Kuboard host.
1. Host Preparation
1.1 Configure the Hostnames
# hostnamectl set-hostname k8s-master01   # on the first master
# hostnamectl set-hostname k8s-master02   # on the second master
# hostnamectl set-hostname k8s-master03   # on the third master
# hostnamectl set-hostname k8s-worker01   # on the worker
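For repeatability, the same commands can be generated from a host list. This is only a sketch: each printed command still has to be executed on its corresponding host (for example over ssh).

```shell
# Print one hostnamectl command per node; run each on the matching host itself.
cmds=$(for h in k8s-master01 k8s-master02 k8s-master03 k8s-worker01; do
  echo "hostnamectl set-hostname $h"
done)
echo "$cmds"
```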
1.2 Configure Static IP Addresses
No. | Hostname | Host IP |
---|---|---|
1 | k8s-master01 | 192.168.95.142 |
2 | k8s-master02 | 192.168.95.143 |
3 | k8s-master03 | 192.168.95.144 |
4 | k8s-worker01 | 192.168.95.145 |
Edit the interface file on each host, substituting that host's address from the table above for IPADDR:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.95.xxx"
PREFIX="24"
GATEWAY="192.168.95.2"
1.3 Configure Hostname-to-IP Resolution
The entries below are added by the administrator; sealos also adds hostname-to-IP mappings automatically while it runs.
# /etc/hosts
192.168.95.142 k8s-master01
192.168.95.143 k8s-master02
192.168.95.144 k8s-master03
192.168.95.145 k8s-worker01
192.168.95.146 nfsserver
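The same entries can be appended from a script. The sketch below stages them in a temporary file so it can be checked safely; on the real nodes the target would be /etc/hosts instead.

```shell
# Stage the resolution entries in a temp file (swap in /etc/hosts on the nodes).
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.95.142 k8s-master01
192.168.95.143 k8s-master02
192.168.95.144 k8s-master03
192.168.95.145 k8s-worker01
192.168.95.146 nfsserver
EOF
grep -c 'k8s-' "$HOSTS_FILE"   # 4 cluster entries
```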
1.4 Upgrade the Kernel
Reference: resolving failed `yum` kernel updates after CentOS 7 reached end of support.
2. sealos Preparation (run on k8s-master01 only)
Install jq:
[root@k8s-master01 ~]# sudo yum install epel-release
[root@k8s-master01 ~]# sudo yum install jq
Download the Sealos command-line tool
You can list the available versions by running:
curl --silent "https://api.github.com/repos/labring/sealos/releases" | jq -r '.[].tag_name'
Note: choose a stable release such as v4.3.0. Versions like v4.3.0-rc1 and v4.3.0-alpha1 are pre-releases; use them with caution.
Set the VERSION environment variable to the latest release tag, or replace VERSION with the specific Sealos version you want to install:
VERSION=`curl -s https://api.github.com/repos/labring/sealos/releases/latest | grep -oE '"tag_name": "[^"]+"' | head -n1 | cut -d'"' -f4`
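The grep/cut pipeline above can be sanity-checked offline against a canned API response (the JSON sample here is made up, not a real release):

```shell
# Run the same tag_name extraction on a hypothetical API payload, no network needed.
sample='{"tag_name": "v4.3.0", "name": "sealos v4.3.0"}'
VERSION=$(echo "$sample" | grep -oE '"tag_name": "[^"]+"' | head -n1 | cut -d'"' -f4)
echo "$VERSION"   # v4.3.0
```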
Because access to GitHub can be restricted from mainland China, it is worth finding a working GitHub proxy first, for example from:
https://ghproxy.link/
https://ghproxy.net/
Once you have a working GitHub proxy, set it as the PROXY_PREFIX environment variable, for example:
export PROXY_PREFIX=https://ghfast.top
Download the binary automatically:
curl -sfL ${PROXY_PREFIX}/https://raw.githubusercontent.com/labring/sealos/main/scripts/install.sh | PROXY_PREFIX=${PROXY_PREFIX} sh -s ${VERSION} labring/sealos
# sealos version
SealosVersion:
  buildDate: "2024-10-09T02:18:27Z"
  compiler: gc
  gitCommit: 2b74a1281
  gitVersion: 5.0.1
  goVersion: go1.20.14
  platform: linux/amd64
3. Deploying the Kubernetes Cluster with sealos
The Kubernetes cluster uses containerd as its container runtime by default.
--passwd takes the hosts' root password; it can be omitted once passwordless SSH trust is configured between the hosts.
sealos run labring/kubernetes:v1.24.0 labring/calico:v3.22.1 --masters 192.168.95.142,192.168.95.143,192.168.95.144 --nodes 192.168.95.145 --passwd xxxx
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 16h v1.24.0
k8s-master02 Ready control-plane 16h v1.24.0
k8s-master03 Ready control-plane 16h v1.24.0
k8s-worker01 Ready <none> 16h v1.24.0
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d4b75cb6d-fmcmc 1/1 Running 0 20m
coredns-6d4b75cb6d-kdd9d 1/1 Running 0 20m
etcd-k8s-master01 1/1 Running 0 20m
etcd-k8s-master02 1/1 Running 0 19m
etcd-k8s-master03 1/1 Running 0 19m
kube-apiserver-k8s-master01 1/1 Running 0 20m
kube-apiserver-k8s-master02 1/1 Running 1 (19m ago) 19m
kube-apiserver-k8s-master03 1/1 Running 0 18m
kube-controller-manager-k8s-master01 1/1 Running 1 (16m ago) 20m
kube-controller-manager-k8s-master02 1/1 Running 0 18m
kube-controller-manager-k8s-master03 1/1 Running 0 17m
kube-proxy-jl2gx 1/1 Running 0 20m
kube-proxy-kzn5g 1/1 Running 0 19m
kube-proxy-pscn2 1/1 Running 0 20m
kube-proxy-pvfw9 1/1 Running 0 18m
kube-scheduler-k8s-master01 1/1 Running 1 (16m ago) 20m
kube-scheduler-k8s-master02 1/1 Running 0 19m
kube-scheduler-k8s-master03 1/1 Running 0 19m
kube-sealos-lvscare-k8s-worker01 1/1 Running 0 18m
4. Managing the k8s Cluster with Kuboard
No. | Hostname | Host IP |
---|---|---|
1 | nfsserver | 192.168.95.146 |
4.1 Deploy and Access Kuboard
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable --now docker
docker run -d --restart=unless-stopped --name=kuboard \
  -p 80:80/tcp -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.95.146:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
The default username and password are admin and Kuboard123, respectively.
4.2 Add the k8s Cluster to Kuboard
[root@k8s-master01 ~]# curl -k 'http://192.168.95.146:80/kuboard-api/cluster/k8s1-sealos/kind/KubernetesCluster/k8s1-sealos/resource/installAgentToKubernetes?token=YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS' > kuboard-agent.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5613    0  5613    0     0  1203k      0 --:--:-- --:--:-- --:--:-- 1370k
[root@k8s-master01 ~]# kubectl apply -f ./kuboard-agent.yaml
namespace/kuboard created
serviceaccount/kuboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-admin-crb created
serviceaccount/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-crb created
deployment.apps/kuboard-agent-hpbezk created
deployment.apps/kuboard-agent-hpbezk-2 created
[root@k8s-master01 ~]# cat kuboard-agent.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-admin
  namespace: kuboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kuboard-admin
    namespace: kuboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-viewer
  namespace: kuboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-viewer-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kuboard-viewer
    namespace: kuboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/ingress: "false"
    k8s.kuboard.cn/service: none
    k8s.kuboard.cn/workload: kuboard-agent-hpbezk
  labels:
    k8s.kuboard.cn/name: kuboard-agent-hpbezk
  name: kuboard-agent-hpbezk
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-agent-hpbezk
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-agent-hpbezk
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    k8s.kuboard.cn/name: kuboard-v3
                namespaces:
                  - kuboard
                topologyKey: kubernetes.io/hostname
              weight: 100
      serviceAccountName: kuboard-admin
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
      containers:
        - env:
            - name: KUBOARD_ENDPOINT
              value: "http://192.168.95.146:80"
            - name: KUBOARD_AGENT_HOST
              value: "192.168.95.146"
            - name: KUBOARD_AGENT_PORT
              value: "10081"
            - name: KUBOARD_AGENT_REMOTE_PORT
              value: "35001"
            - name: KUBOARD_AGENT_PROTOCOL
              value: "tcp"
            - name: KUBOARD_AGENT_PROXY
              value: ""
            - name: KUBOARD_K8S_CLUSTER_NAME
              value: "k8s1-sealos"
            - name: KUBOARD_AGENT_KEY
              value: "32b7d6572c6255211b4eec9009e4a816"
            - name: KUBERNETES_TOKEN_NAME
              value: "kuboard-admin"
            - name: KUBOARD_ANONYMOUS_TOKEN
              value: "YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS"
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: "eipwork/kuboard-agent:v3"
          imagePullPolicy: Always
          name: kuboard-agent
      restartPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/ingress: "false"
    k8s.kuboard.cn/service: none
    k8s.kuboard.cn/workload: kuboard-agent-hpbezk-2
  labels:
    k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
  name: kuboard-agent-hpbezk-2
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    k8s.kuboard.cn/name: kuboard-v3
                namespaces:
                  - kuboard
                topologyKey: kubernetes.io/hostname
              weight: 100
      serviceAccountName: kuboard-viewer
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
      containers:
        - env:
            - name: KUBOARD_ENDPOINT
              value: "http://192.168.95.146:80"
            - name: KUBOARD_AGENT_HOST
              value: "192.168.95.146"
            - name: KUBOARD_AGENT_PORT
              value: "10081"
            - name: KUBOARD_AGENT_REMOTE_PORT
              value: "35001"
            - name: KUBOARD_AGENT_PROTOCOL
              value: "tcp"
            - name: KUBOARD_AGENT_PROXY
              value: ""
            - name: KUBOARD_K8S_CLUSTER_NAME
              value: "k8s1-sealos"
            - name: KUBOARD_AGENT_KEY
              value: "32b7d6572c6255211b4eec9009e4a816"
            - name: KUBERNETES_TOKEN_NAME
              value: "kuboard-viewer"
            - name: KUBOARD_ANONYMOUS_TOKEN
              value: "YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS"
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: "eipwork/kuboard-agent:v3"
          imagePullPolicy: Always
          name: kuboard-agent
      restartPolicy: Always
[root@k8s-master01 ~]# kubectl get pods -n kuboard
NAME READY STATUS RESTARTS AGE
kuboard-agent-hpbezk-2-74c7c76988-n84gh 1/1 Running 0 73s
kuboard-agent-hpbezk-6959ddfb74-65q29 1/1 Running 0 73s
Select the first option in the Kuboard UI.