apiVersion: v1
kind: Pod
metadata:
  name: pod-base
  namespace: dev
  labels:
    user: heima
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  - name: busybox
    image: busybox:1.30
The above defines a fairly simple Pod configuration containing two containers:
nginx: created from the nginx image, version 1.17.1 (nginx is a lightweight web server)
busybox: created from the busybox image, version 1.30 (busybox is a small collection of linux commands)
# Create the Pod
[root@k8s-master01 pod]# kubectl apply -f pod-base.yaml
pod/pod-base created

# Check the Pod status
# READY 1/2: the Pod contains 2 containers, 1 ready and 1 not ready
# RESTARTS: restart count; one container keeps failing, so the Pod keeps restarting it to recover
[root@k8s-master01 pod]# kubectl get pod -n dev
NAME       READY   STATUS    RESTARTS   AGE
pod-base   1/2     Running   4          95s

# Use describe to see the details
# A basic Pod is now running, even though it has a problem for the moment
[root@k8s-master01 pod]# kubectl describe pod pod-base -n dev
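In the describe output, the failing container is busybox: its default command exits as soon as it runs, so kubelet keeps restarting it and READY stays at 1/2. A common fix is to give busybox a long-running command, sketched below (the command shown is an illustrative assumption, not part of the original file):

```yaml
  - name: busybox
    image: busybox:1.30
    # keep the container alive by writing a line every 3 seconds
    command: ["/bin/sh", "-c", "while true; do echo hello >> /tmp/hello.txt; sleep 3; done"]
```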
Pod image pull policy (imagePullPolicy)
Create a pod-imagepullpolicy.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pod-imagepullpolicy
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    imagePullPolicy: Always  # image pull policy; Always forces a pull, matching the Pulling event seen below
  - name: busybox
    image: busybox:1.30
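For reference, imagePullPolicy accepts three values: Always (always pull from the registry), IfNotPresent (pull only when the image is not already on the node), and Never (never pull; only local images are used). When the field is omitted, Kubernetes defaults to Always for images tagged :latest and IfNotPresent otherwise. A minimal sketch:

```yaml
    image: nginx:1.17.1
    imagePullPolicy: IfNotPresent  # use the local copy if the node already has nginx:1.17.1
```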
# Create the Pod
[root@k8s-master01 pod]# kubectl create -f pod-imagepullpolicy.yaml
pod/pod-imagepullpolicy created

# View the Pod details
# Notice that the nginx image clearly goes through a Pulling image "nginx:1.17.1" step
[root@k8s-master01 pod]# kubectl describe pod pod-imagepullpolicy -n dev
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned dev/pod-imagepullpolicy to node1
Normal Pulling 32s kubelet, node1 Pulling image "nginx:1.17.1"
Normal Pulled 26s kubelet, node1 Successfully pulled image "nginx:1.17.1"
Normal Created 26s kubelet, node1 Created container nginx
Normal Started 25s kubelet, node1 Started container nginx
Normal Pulled 7s (x3 over 25s) kubelet, node1 Container image "busybox:1.30" already present on machine
Normal Created 7s (x3 over 25s) kubelet, node1 Created container busybox
Normal Started 7s (x3 over 25s) kubelet, node1 Started container busybox
Pod resource quota (resources)
Create a pod-resources.yaml file with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: pod-resources
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    resources:          # resource quota
      limits:           # resource limits (upper bound)
        cpu: "2"        # CPU limit, in cores
        memory: "10Gi"  # memory limit
      requests:         # resource requests (lower bound)
        cpu: "1"        # CPU request, in cores
        memory: "10Mi"  # memory request
A note on the cpu and memory units:
cpu: number of cores, either an integer or a decimal
memory: memory size, written in forms such as Gi, Mi, G, M
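To make the units concrete, the fragment below (values are illustrative) mixes the notations. Note that Mi/Gi are binary units (Mi = 2^20 bytes, Gi = 2^30 bytes) while M/G are decimal (M = 10^6 bytes), and a fractional CPU such as 0.5 can equivalently be written as 500m:

```yaml
    resources:
      requests:
        cpu: "0.5"       # half a core, equivalent to 500m
        memory: "512Mi"  # 512 * 2^20 bytes
      limits:
        cpu: "2"
        memory: "1Gi"    # 2^30 bytes; "1G" would instead mean 10^9 bytes
```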
# Run the Pod
[root@k8s-master01 ~]# kubectl create -f pod-resources.yaml
pod/pod-resources created

# Check that the pod is running normally
[root@k8s-master01 ~]# kubectl get pod pod-resources -n dev
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   1/1     Running   0          39s

# Next, stop the Pod
[root@k8s-master01 ~]# kubectl delete -f pod-resources.yaml
pod "pod-resources" deleted
# Edit the pod, changing resources.requests.memory to 10Gi
[root@k8s-master01 ~]# vim pod-resources.yaml

# Start the pod again
[root@k8s-master01 ~]# kubectl create -f pod-resources.yaml
pod/pod-resources created

# Check the Pod status: this time it fails to start
[root@k8s-master01 ~]# kubectl get pod pod-resources -n dev -o wide
NAME            READY   STATUS    RESTARTS   AGE
pod-resources   0/1     Pending   0          20s

# The pod details show the following warning
[root@k8s-master01 ~]# kubectl describe pod pod-resources -n dev
......
Warning  FailedScheduling  35s  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
Pod lifecycle
The period from a pod object's creation to its termination is generally called the pod's lifecycle. It mainly covers the following stages:
pod creation
running the init containers
running the main container
post-start hook (postStart) and pre-stop hook (preStop)
liveness probe and readiness probe
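The lifecycle stages above map onto fields of the pod spec. A minimal sketch (container names, commands, and probe paths are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-lifecycle-demo   # illustrative name
  namespace: dev
spec:
  initContainers:            # run to completion, in order, before the main containers start
  - name: init-wait
    image: busybox:1.30
    command: ["sh", "-c", "sleep 2"]
  containers:
  - name: nginx
    image: nginx:1.17.1
    lifecycle:
      postStart:             # runs right after the container starts
        exec:
          command: ["sh", "-c", "echo started > /tmp/started"]
      preStop:               # runs before the container is terminated
        exec:
          command: ["sh", "-c", "nginx -s quit"]
    livenessProbe:           # on failure, the container is restarted
      httpGet:
        path: /
        port: 80
    readinessProbe:          # on failure, the pod is removed from service endpoints
      httpGet:
        path: /
        port: 80
```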
Throughout its lifecycle, a Pod passes through 5 states (phases):
Pending: the apiserver has created the pod resource object, but it has not yet finished scheduling or is still in the process of pulling images
# Create the Pod
[root@k8s-master01 ~]# kubectl create -f pod-liveness-exec.yaml
pod/pod-liveness-exec created

# View the Pod details
[root@k8s-master01 ~]# kubectl describe pods pod-liveness-exec -n dev
......
Normal Created 20s (x2 over 50s) kubelet, node1 Created container nginx
Normal Started 20s (x2 over 50s) kubelet, node1 Started container nginx
Normal Killing 20s kubelet, node1 Container nginx failed liveness probe, will be restarted
Warning Unhealthy 0s (x5 over 40s) kubelet, node1 Liveness probe failed: cat: can't open '/tmp/hello11.txt': No such file or directory
# The information above shows that the health check runs as soon as the nginx container starts
# When the check fails, the container is killed and then restarted (this is the restart policy at work, covered later)
# After a while, look at the pod again: RESTARTS is no longer 0, it keeps growing
[root@k8s-master01 ~]# kubectl get pods pod-liveness-exec -n dev
NAME                READY   STATUS             RESTARTS   AGE
pod-liveness-exec   0/1     CrashLoopBackOff   2          3m19s

# Change it to a file that exists, such as /tmp/hello.txt, and try again; this time the probe passes ......
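The pod-liveness-exec.yaml file itself is not shown in this excerpt; based on the probe failure in the events above, an exec-type liveness probe would look roughly like this fragment (a sketch, not the original file):

```yaml
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
    livenessProbe:
      exec:
        # the probe passes only if this command exits with status 0;
        # it fails as long as /tmp/hello11.txt does not exist
        command: ["/bin/cat", "/tmp/hello11.txt"]
```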
# Create the Pod
[root@k8s-master01 ~]# kubectl create -f pod-liveness-httpget.yaml
pod/pod-liveness-httpget created

# View the Pod details
[root@k8s-master01 ~]# kubectl describe pod pod-liveness-httpget -n dev
......
Normal Pulled 6s (x3 over 64s) kubelet, node1 Container image "nginx:1.17.1" already present on machine
Normal Created 6s (x3 over 64s) kubelet, node1 Created container nginx
Normal Started 6s (x3 over 63s) kubelet, node1 Started container nginx
Warning Unhealthy 6s (x6 over 56s) kubelet, node1 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 6s (x2 over 36s) kubelet, node1 Container nginx failed liveness probe, will be restarted
# The information above shows the probe trying to access a path that is not found, producing a 404 error
# After a while, look at the pod again: RESTARTS is no longer 0, it keeps growing
[root@k8s-master01 ~]# kubectl get pod pod-liveness-httpget -n dev
NAME                   READY   STATUS    RESTARTS   AGE
pod-liveness-httpget   1/1     Running   5          3m17s

# Change the path to one that can be accessed, such as /, and try again; this time the probe passes ......
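Likewise, the httpGet-type liveness probe behind this demo would look roughly like the fragment below (a sketch; the path /hello is an assumption chosen to produce the 404 seen in the events):

```yaml
    livenessProbe:
      httpGet:
        scheme: HTTP   # HTTP or HTTPS
        port: 80
        path: /hello   # a nonexistent path, so the probe receives a 404 and fails
```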