Installing Kubernetes with Docker
Prerequisite environment setup
Host configuration
#configure name resolution
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.70 node1
192.168.100.71 node2
192.168.100.72 master
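The three mappings can also be appended with a short idempotent loop, which is handy when repeating the step on every node. This is a sketch: HOSTS_FILE defaults to a scratch file here so the loop can be dry-run safely; point it at /etc/hosts on the real hosts.

```shell
# Append each mapping only if it is not already present; re-running is a no-op.
# HOSTS_FILE defaults to a scratch file for a safe dry run.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
for entry in "192.168.100.70 node1" "192.168.100.71 node2" "192.168.100.72 master"; do
    grep -qxF "$entry" "$HOSTS_FILE" || printf '%s\n' "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```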
Install dependency packages
yum install -y vim lrzsz unzip wget net-tools tree bash-completion conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp git psmisc telnet gcc gcc-c++ make
Disable the firewall and SELinux
[root@node2 ~]# systemctl disable firewalld --now
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node2 ~]# getenforce
Disabled
Disable the swap partition
[root@node2 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Oct 14 18:44:33 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=3338adc8-96d7-4169-a81f-1b9df483c4f0 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
[root@node2 ~]# swapoff -a
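Note that swapoff -a only disables swap until the next boot; the fstab entry must stay commented out, as shown above, for the change to survive a reboot. The edit can be scripted. The sketch below rehearses the sed expression on a scratch copy so the result can be inspected before applying it to the real /etc/fstab:

```shell
# Build a scratch fstab containing a swap entry, then comment that entry out.
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
/dev/mapper/centos-root / xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix any uncommented swap line with '#'.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)|#\1|' "$scratch"
cat "$scratch"
# Once the output looks right: swapoff -a, then run the same sed on /etc/fstab.
```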
Upgrade the kernel
[root@master ~]# yum update -y kernel
#reboot once the upgrade finishes
Kernel parameters
[root@node1 ~]# vim /etc/sysctl.d/kubernetes.conf
[root@node1 ~]# cat /etc/sysctl.d/kubernetes.conf
# Let bridged traffic pass through iptables/ip6tables so packets crossing
# the Linux bridge can be filtered
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# Enable IPv4 packet forwarding
net.ipv4.ip_forward=1
# Avoid swapping as much as possible, which helps k8s performance
vm.swappiness=0
# Do not check whether enough physical memory is available (allow overcommit)
vm.overcommit_memory=1
[root@node1 ~]# sysctl --system
Configure time synchronization
[root@node1 ~]# yum -y install chrony
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^? tick.ntp.infomaniak.ch 0 7 0 - +0ns[ +0ns] +/- 0ns
^* 210.16.166.99 3 6 17 1 -2629us[-1056us] +/- 84ms
^+ time.cloudflare.com 3 6 17 2 -4025us[-2452us] +/- 148ms
^- time.nju.edu.cn 1 6 17 1 -1580us[-1580us] +/- 60ms
[root@node1 ~]# hwclock -s
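The mixed source list above comes from the default pool servers in /etc/chrony.conf. To pin the nodes to a specific NTP server instead, replace the pool lines and restart chronyd. The server below is only an example; any NTP server reachable from the nodes works:

```
# /etc/chrony.conf -- replace the default "server ...pool.ntp.org iburst"
# lines with a server of your choice, for example:
server ntp.aliyun.com iburst
```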
IPVS support
[root@node1 ~]# vim /etc/modules-load.d/ipvs.conf
[root@node1 ~]# cat /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
overlay
br_netfilter
[root@node1 ~]# systemctl restart systemd-modules-load
#check the loaded kernel modules
[root@node1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh 12688 0
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4 19149 2
nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
nf_conntrack 143411 6 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
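A quick way to confirm nothing was silently skipped is to compare the requested module list against what the kernel reports. The sketch below demonstrates the comparison using the names from this setup; on a live node, populate `have` from `lsmod` instead of the sample string:

```shell
# Modules requested in /etc/modules-load.d/ipvs.conf (sample subset).
want='ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4'
# On a real node: have=$(lsmod | awk 'NR>1{print $1}')
have='ip_vs_sh ip_vs_wrr ip_vs_rr ip_vs nf_conntrack_ipv4 nf_conntrack libcrc32c'
missing=''
for mod in $want; do
    case " $have " in
        *" $mod "*) ;;                      # present
        *) missing="$missing $mod" ;;       # not loaded
    esac
done
[ -z "$missing" ] && echo "all requested modules loaded" || echo "missing:$missing"
```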
IPVS core modules (the load-balancing components)
IPVS (IP Virtual Server) implements its core functionality in the following kernel modules, which provide the different load-balancing scheduling strategies:
- ip_vs: the base IPVS module that everything else depends on; it handles the core load-balancing logic.
- ip_vs_rr: the Round Robin scheduler, which hands requests to backend servers in turn; suited to backends of equal capacity.
- ip_vs_wrr: the Weighted Round Robin scheduler, which distributes requests according to per-server weights (higher weight, more requests); suited to backends of unequal capacity.
- ip_vs_sh: the Source Hashing scheduler, which pins a client to a backend based on a hash of the source IP, giving "same client, same server" session persistence.
Connection tracking and filtering modules (traffic-control foundations)
These modules track connections, filter packets, and enforce rules; NAT, firewalling, and iptables all depend on them:
- nf_conntrack_ipv4: IPv4 connection tracking, which records the state of IPv4 connections (establishing, transferring, closing) and underpins NAT and stateful firewalling. Note: on kernels >= 4.19 this functionality was merged into nf_conntrack, so the module no longer needs to be loaded separately.
- ip_tables: the base framework for the iptables tool, providing loading, parsing, and execution of iptables rules; the low-level support for firewalls, traffic forwarding, and port mapping on Linux.
- ipt_REJECT: implements the iptables REJECT target, which returns an explicit refusal (such as ICMP port unreachable) when a rule matches, unlike DROP, which silently discards the packet.
IP set modules (efficient bulk address matching)
These manage sets of IP addresses and ports so iptables can match many addresses with a single rule, reducing rule count and improving performance:
- ip_set: the core set-management module, supporting creation, modification, and deletion of sets of various types (single IPs, IP ranges, port ranges, and so on).
- xt_set and ipt_set: the iptables extensions for matching against IP sets; xt_set supplies the match logic and ipt_set the iptables rule syntax, together enabling rules such as "reject every address in this set".
Tunneling and bridging modules (cross-network communication)
These implement tunnels between networks, cross-host container networking, and filtering of bridged traffic; all are common in container environments (K8s, Docker):
- ipip: the IP-over-IP tunnel protocol, which encapsulates an IPv4 packet inside another IPv4 packet to connect different subnets or sites across the public Internet.
- overlay: overlay-network support, which builds a virtual network plane on top of the existing network so containers on different hosts can communicate (Docker overlay networks, the overlay modes of K8s Calico/Flannel).
- br_netfilter: lets iptables rules filter traffic crossing a Linux bridge. It must be paired with the kernel parameter net.bridge.bridge-nf-call-iptables=1, which sends bridged traffic through the iptables chains so network policies take effect.
Reverse-path filtering module (network security)
- ipt_rpfilter: reverse-path validation, which defends against IP spoofing. It checks whether the route back to a packet's source IP leaves through the interface the packet arrived on, and drops the packet if not, discarding traffic with forged source addresses.
docker-ce environment
#install prerequisites
[root@node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Aliyun repo
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
Install docker-ce
yum install -y docker-ce
Start the docker service
[root@node1 ~]# systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Registry mirror acceleration
[root@master ~]# vim /etc/docker/daemon.json
[root@master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://09def58152000fc00ff0c00057bad7e0.mirror.swr.myhuaweicloud.com",
    "https://do.nark.eu.org",
    "https://dc.j8.work",
    "https://docker.m.daocloud.io",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn",
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://mirror.baidubce.com",
    "https://docker.1panel.live"
  ]
}
Switch the cgroup driver to systemd
[root@node1 ~]# vim /etc/docker/daemon.json
[root@node1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://09def58152000fc00ff0c00057bad7e0.mirror.swr.myhuaweicloud.com",
    "https://do.nark.eu.org",
    "https://dc.j8.work",
    "https://docker.m.daocloud.io",
    "https://dockerproxy.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn",
    "https://registry.docker-cn.com",
    "https://hub-mirror.c.163.com",
    "https://hub.uuuadc.top",
    "https://docker.anyhub.us.kg",
    "https://dockerhub.jobcher.com",
    "https://dockerhub.icu",
    "https://docker.ckyl.me",
    "https://docker.awsl9527.cn",
    "https://mirror.baidubce.com",
    "https://docker.1panel.live"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
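One easy mistake when hand-editing daemon.json is breaking the JSON (a missing comma before "exec-opts" is typical), after which dockerd refuses to start. A syntax check before restarting catches this. The sketch validates a sample file and assumes a python3 interpreter is available; point DAEMON_JSON at /etc/docker/daemon.json on a real host:

```shell
# Validate daemon.json syntax before restarting docker.
# Demonstrated on a sample file so the check can be run anywhere.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
if python3 -m json.tool "$DAEMON_JSON" > /dev/null 2>&1; then
    echo "daemon.json: valid JSON"
else
    echo "daemon.json: SYNTAX ERROR -- fix before restarting docker"
fi
```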
Install cri-dockerd
[root@node1 ~]# rz -E
rz waiting to receive.
[root@node1 ~]# rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:cri-dockerd-3:0.3.4-3.el7 ################################# [100%]
#edit the service file
[root@node1 ~]# vim /usr/lib/systemd/system/cri-docker.service
//on line 10, append the following after the dockerd invocation:
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl enable cri-docker.service
[root@node1 ~]# systemctl start cri-docker.service
#check that the socket file exists
[root@node1 ~]# ls /run/cri-*
/run/cri-dockerd.sock
Kubernetes cluster deployment
Yum repo
[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[root@master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# yum clean all
[root@master ~]# yum list
Install the packages
#list the available versions
[root@master ~]# yum list kubeadm.x86_64 --showduplicates | sort -r
#install version 1.28
[root@master ~]# yum install -y kubeadm-1.28.0-0 kubelet-1.28.0-0 kubectl-1.28.0-0
Configure kubelet
[root@master ~]# vim /etc/sysconfig/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Cluster initialization (master node only)
#list the required images
[root@master ~]# kubeadm config images list --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
#pull the images
[root@master ~]# kubeadm config images pull --cri-socket=unix:///var/run/cri-dockerd.sock --kubernetes-version=v1.28.0 --image-repository=registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
#check the downloaded images
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.28.0 bb5e0dde9054 2 years ago 126MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.28.0 f6f496300a2a 2 years ago 60.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.28.0 4be79c38a4ba 2 years ago 122MB
registry.aliyuncs.com/google_containers/kube-proxy v1.28.0 ea1030da44aa 2 years ago 73.1MB
registry.aliyuncs.com/google_containers/etcd 3.5.9-0 73deb9a3f702 2 years ago 294MB
registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df8 2 years ago 53.6MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f181688397 3 years ago 744kB
#generate a default init configuration file
[root@master ~]# kubeadm config print init-defaults > kubeadm-init.yaml
[root@master ~]# ls
anaconda-ks.cfg Documents kubeadm-init.yaml Public
cri-dockerd-0.3.4-3.el7.x86_64.rpm Downloads Music Templates
Desktop initial-setup-ks.cfg Pictures Videos
#edit the init configuration file
[root@master ~]# vim kubeadm-init.yaml
[root@master ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.72
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
#initialize the cluster
[root@master ~]# kubeadm init --config=kubeadm-init.yaml --upload-certs | tee kubeadm-init.log
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.72:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:b7847537a7da6c61f4ec9f47e33012452f9910606f0f6807403701f1a554cb1c
Configure the kubectl tool
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
#configure environment variables
[root@master ~]# vim .bash_profile
[root@master ~]# source ~/.bash_profile
[root@master ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
export KUBECONFIG=/etc/kubernetes/admin.conf
#check the health of the core control-plane components
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
etcd-0 Healthy ok
scheduler Healthy ok
controller-manager Healthy ok
Join the worker nodes to the cluster
kubeadm join 192.168.100.72:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:b7847537a7da6c61f4ec9f47e33012452f9910606f0f6807403701f1a554cb1c --cri-socket unix:///var/run/cri-dockerd.sock
#before the nodes join
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 37m v1.28.0
#after node1 joins
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 40m v1.28.0
node1 NotReady <none> 43s v1.28.0
#after node2 joins
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 41m v1.28.0
node1 NotReady <none> 58s v1.28.0
node2 NotReady <none> 6s v1.28.0
Deploy the CNI network component
#check cluster status
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 53m v1.28.0
node1 NotReady <none> 12m v1.28.0
node2 NotReady <none> 12m v1.28.0
#list all Pods in the kube-system namespace with details
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-66f779496c-26pcb 0/1 Pending 0 49m <none> <none> <none> <none>
coredns-66f779496c-wpc57 0/1 Pending 0 49m <none> <none> <none> <none>
etcd-master 1/1 Running 0 49m 192.168.100.72 master <none> <none>
kube-apiserver-master 1/1 Running 0 49m 192.168.100.72 master <none> <none>
kube-controller-manager-master 1/1 Running 0 49m 192.168.100.72 master <none> <none>
kube-proxy-cd8gk 1/1 Running 0 8m44s 192.168.100.71 node2 <none> <none>
kube-proxy-q8prd 1/1 Running 0 49m 192.168.100.72 master <none> <none>
kube-proxy-xtfxw 1/1 Running 0 9m36s 192.168.100.70 node1 <none> <none>
kube-scheduler-master 1/1 Running 0 49m 192.168.100.72 master <none> <none>
Download and modify the Calico manifest
[root@master ~]# wget --no-check-certificate https://docs.tigera.io/archive/v3.25/manifests/calico.yaml
[root@master ~]# vim calico.yaml
# around line 4601: uncomment the following two lines and change the value
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
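The same edit can be scripted instead of made by hand in vim. The sketch below rehearses the two sed substitutions on a scratch copy of the relevant manifest lines (as they appear in the v3.25 manifest) so the result can be inspected before touching the real calico.yaml:

```shell
# Scratch copy of the two commented lines from calico.yaml (v3.25).
scratch=$(mktemp)
cat > "$scratch" <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment both lines and swap in the pod subnet from kubeadm-init.yaml.
sed -ri 's|# (- name: CALICO_IPV4POOL_CIDR)|\1|; s|# (  value:).*|\1 "10.244.0.0/16"|' "$scratch"
cat "$scratch"
# Apply the same substitutions to calico.yaml once the output looks right.
```

On the full manifest, anchoring the second substitution to the line after CALICO_IPV4POOL_CIDR (or restricting it to a line range) is safer, since other commented `value:` lines exist elsewhere in the file.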
Deploy Calico
#apply the manifest
[root@master ~]# kubectl apply -f calico.yaml
#watch the rollout until every pod is Running
[root@master ~]# watch kubectl get pods -n kube-system
Verify the cluster
#check the status of all components
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-658d97c59c-tk2pk 1/1 Running 0 17m
calico-node-57rqw 1/1 Running 0 17m
calico-node-fmdcl 1/1 Running 0 17m
calico-node-npfm5 1/1 Running 0 17m
coredns-66f779496c-26pcb 1/1 Running 0 69m
coredns-66f779496c-wpc57 1/1 Running 0 69m
etcd-master 1/1 Running 0 69m
kube-apiserver-master 1/1 Running 0 69m
kube-controller-manager-master 1/1 Running 0 69m
kube-proxy-cd8gk 1/1 Running 0 28m
kube-proxy-q8prd 1/1 Running 0 69m
kube-proxy-xtfxw 1/1 Running 0 29m
kube-scheduler-master 1/1 Running 0 69m
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-658d97c59c-tk2pk 1/1 Running 0 18m 10.244.166.129 node1 <none> <none>
calico-node-57rqw 1/1 Running 0 18m 192.168.100.70 node1 <none> <none>
calico-node-fmdcl 1/1 Running 0 18m 192.168.100.72 master <none> <none>
calico-node-npfm5 1/1 Running 0 18m 192.168.100.71 node2 <none> <none>
coredns-66f779496c-26pcb 1/1 Running 0 69m 10.244.166.130 node1 <none> <none>
coredns-66f779496c-wpc57 1/1 Running 0 69m 10.244.166.131 node1 <none> <none>
etcd-master 1/1 Running 0 70m 192.168.100.72 master <none> <none>
kube-apiserver-master 1/1 Running 0 70m 192.168.100.72 master <none> <none>
kube-controller-manager-master 1/1 Running 0 70m 192.168.100.72 master <none> <none>
kube-proxy-cd8gk 1/1 Running 0 29m 192.168.100.71 node2 <none> <none>
kube-proxy-q8prd 1/1 Running 0 69m 192.168.100.72 master <none> <none>
kube-proxy-xtfxw 1/1 Running 0 30m 192.168.100.70 node1 <none> <none>
kube-scheduler-master 1/1 Running 0 70m 192.168.100.72 master <none> <none>
Install kubectl command completion
[root@master ~]# yum install bash-completion -y
[root@master ~]# source /usr/share/bash-completion/bash_completion
#environment configuration
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@master ~]# source ~/.bashrc
Test: create an nginx deployment
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.28.0 ea1030da44aa 2 years ago 73.1MB
registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df8 2 years ago 53.6MB
calico/kube-controllers v3.25.0 5e785d005ccc 2 years ago 71.6MB
calico/cni v3.25.0 d70a5947d57e 2 years ago 198MB
calico/node v3.25.0 08616d26b8e7 2 years ago 245MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f181688397 3 years ago 744kB
#create nginx
[root@master ~]# kubectl create deployment nginx --image=nginx --replicas=3
deployment.apps/nginx created
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7854ff8877-4lss5 0/1 ContainerCreating 0 14s
nginx-7854ff8877-fzbfm 0/1 ContainerCreating 0 14s
nginx-7854ff8877-mrmww 0/1 ContainerCreating 0 14s
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-4lss5 1/1 Running 0 3m5s 10.244.104.1 node2 <none> <none>
nginx-7854ff8877-fzbfm 1/1 Running 0 3m5s 10.244.104.2 node2 <none> <none>
nginx-7854ff8877-mrmww 1/1 Running 0 3m5s 10.244.166.132 node1 <none> <none>
[root@master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 77m
[root@master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
#check pod and service info
[root@master ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 80m
nginx NodePort 10.109.182.190 <none> 80:30578/TCP 15s
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7854ff8877-4lss5 1/1 Running 0 6m27s
nginx-7854ff8877-fzbfm 1/1 Running 0 6m27s
nginx-7854ff8877-mrmww 1/1 Running 0 6m27s
#test access
#from the VM
[root@master ~]# curl 10.109.182.190
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p><p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p><p><em>Thank you for using nginx.</em></p>
</body>
</html>
#in a browser, the node1, node2, and master addresses all serve the page on the NodePort