30. Using Metrics Server
I. Preparation
1. Pull the Metrics Server image
[root@master 30]# cat metrics-server.sh
repo=registry.aliyuncs.com/google_containers
name=k8s.gcr.io/metrics-server/metrics-server:v0.6.1
src_name=metrics-server:v0.6.1
docker pull $repo/$src_name
docker tag $repo/$src_name $name
docker rmi $repo/$src_name
[root@master 30]# ./metrics-server.sh
v0.6.1: Pulling from google_containers/metrics-server
2df365faf0e3: Already exists
0ae7e0717edb: Pull complete
Digest: sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00
Status: Downloaded newer image for registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
Untagged: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
Untagged: registry.aliyuncs.com/google_containers/metrics-server@sha256:5ddc6458eb95f5c70bd13fdab90cbd7d6ad1066e5b528ad1dcb28b76c5fb2f00
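If you want to double-check that the retag worked, docker images accepts a repository name as a filter (optional check):
docker images k8s.gcr.io/metrics-server/metrics-server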
2. Pull the httpd:alpine image (it will be used later for the HPA load test)
[root@master 30]# docker pull httpd:alpine
alpine: Pulling from library/httpd
f18232174bc9: Pull complete
f6f8d7d49e24: Pull complete
aca02fb0fe83: Pull complete
4f4fb700ef54: Pull complete
da9bd0c8aef2: Pull complete
e7c0ad6e3e09: Pull complete
852b1f8ff649: Pull complete
Digest: sha256:4aec2953509e2d3aa5a8d73c580a381be44803fd2481875b15d9ad7d2810d7ca
Status: Downloaded newer image for httpd:alpine
docker.io/library/httpd:alpine
3. Download and modify the YAML file
- Open the latest release page in a browser: https://github.com/kubernetes-sigs/metrics-server/releases/latest/
- Download the components.yaml file.
- Change how Metrics Server talks to the kubelets by appending the argument --kubelet-insecure-tls (skip TLS certificate verification):
spec:
  template:
    spec:
      containers:
      - args:
        - --kubelet-insecure-tls
- Change the image to the Alibaba Cloud mirror (a sed shortcut for this edit is sketched after this list):
image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
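If you prefer not to edit the file by hand, the image swap can be scripted. This assumes the release's components.yaml still references the k8s.gcr.io image name shown above; verify the result with grep before applying:
sed -i 's#k8s.gcr.io/metrics-server/metrics-server:v0.6.1#registry.aliyuncs.com/google_containers/metrics-server:v0.6.1#' components.yaml
grep -nE 'image:|kubelet-insecure-tls' components.yaml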
II. Deploying Metrics Server
[root@master 30]# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@master 30]# kubectl get pod -n kube-system | grep metrics-server
metrics-server-66bdd9d576-d6rff 0/1 Running 0 30s
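The Pod can report 0/1 READY for a short while after creation. Two standard checks to wait for it to become ready and to confirm that the aggregated metrics API has been registered:
kubectl -n kube-system rollout status deployment/metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io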
III. Using Metrics Server
1. Check node resource usage
[root@master 30]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   185m         9%     1242Mi          33%
worker   62m          3%     656Mi           38%
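kubectl top reads these numbers from the metrics.k8s.io aggregated API that Metrics Server serves; if you are curious, the raw data can also be fetched directly (optional check):
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes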
2. Check Pod resource usage
[root@master 30]# kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-64897985d-4dx65           3m           16Mi
coredns-64897985d-89vss           3m           17Mi
etcd-master                       19m          80Mi
kube-apiserver-master             54m          305Mi
kube-controller-manager-master    30m          53Mi
kube-proxy-8ght4                  1m           21Mi
kube-proxy-g8mlb                  1m           17Mi
kube-scheduler-master             4m           22Mi
metrics-server-66bdd9d576-d6rff   5m           14Mi
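A few useful variations of kubectl top pod: -A covers all namespaces, --containers breaks usage down per container, and --sort-by orders the list by cpu or memory, for example:
kubectl top pod -A --containers
kubectl top pod -n kube-system --sort-by=memory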
IV. HorizontalPodAutoscaler (HPA)
The HorizontalPodAutoscaler exists specifically to scale the number of Pods automatically. It works with Deployments and StatefulSets, but not with DaemonSets.
Its functionality is built entirely on Metrics Server: it reads the application's current runtime metrics, mainly CPU utilization, from Metrics Server, and then increases or decreases the number of Pods according to the configured policy.
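For reference, the Kubernetes documentation gives the scaling rule the controller applies as:
desiredReplicas = ceil( currentReplicas * currentCPUUtilization / targetCPUUtilization )
With a 5% target even a modest load yields a large desired count (for example, 2 replicas at 178% give ceil(2 * 178 / 5) = 72), so the result is capped by maxReplicas and by the controller's scale-up rate limiting, which is why the watch output later in this lesson climbs 2 → 4 → 8 → 10 instead of jumping straight to the maximum.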
1. Create an Nginx application as the target of autoscaling
[root@master 30]# cat 03ngx-hpa-dep.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-hpa-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngx-hpa-dep
  template:
    metadata:
      labels:
        app: ngx-hpa-dep
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 50m
            memory: 10Mi
          limits:
            cpu: 100m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ngx-hpa-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: ngx-hpa-dep
[root@master 30]# kubectl apply -f 03ngx-hpa-dep.yml
deployment.apps/ngx-hpa-dep created
service/ngx-hpa-svc created
[root@master 30]# kubectl get svc
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP        7d7h
ngx-hpa-svc   ClusterIP   10.108.185.201   <none>        80/TCP         11s
ngx-svc       NodePort    10.102.24.158    <none>        80:30749/TCP   163m
[root@master 30]# kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
ngx-dep-6796688696-2xjz8       1/1     Running   0          167m
ngx-dep-6796688696-pkwft       1/1     Running   0          158m
ngx-hpa-dep-86f66c75f5-kwgln   1/1     Running   0          18s
redis-ds-g8mt5                 1/1     Running   0          3h5m
redis-ds-ztt5f                 1/1     Running   0          3h5m
2. Generate a YAML template
The kubectl autoscale command below takes three key parameters:
- min: the minimum number of Pods, i.e. the lower bound for scaling in.
- max: the maximum number of Pods, i.e. the upper bound for scaling out.
- cpu-percent: the target CPU utilization; the HPA scales out when usage rises above this value and scales in when it falls back below it.
[root@master 30]# kubectl autoscale deploy ngx-hpa-dep --min=2 --max=10 --cpu-percent=5 --dry-run=client -o yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: ngx-hpa-dep
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ngx-hpa-dep
  targetCPUUtilizationPercentage: 5
status:
  currentReplicas: 0
  desiredReplicas: 0
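The generated template uses the autoscaling/v1 API, which only understands CPU utilization. On clusters where autoscaling/v2 is available (GA since Kubernetes 1.23), the same HPA could be written roughly as follows; this is a sketch, not output from this environment:
kubectl apply --dry-run=client -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ngx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ngx-hpa-dep
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 5
EOF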
3. Create the HorizontalPodAutoscaler
[root@master 30]# cat 04ngx-hpa.yml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ngx-hpa
spec:
  maxReplicas: 10
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ngx-hpa-dep
  targetCPUUtilizationPercentage: 5
[root@master 30]# kubectl apply -f 04ngx-hpa.yml
horizontalpodautoscaler.autoscaling/ngx-hpa created
[root@master 30]# kubectl get deploy ngx-hpa-dep
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
ngx-hpa-dep   2/2     2            2           12m
[root@master 30]# kubectl get hpa ngx-hpa
NAME      REFERENCE                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        2          68s
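If TARGETS ever shows <unknown> instead of a percentage, the usual causes are that Metrics Server is not ready yet or that the target Pods have no resources.requests defined. The HPA's events and conditions can be inspected with:
kubectl describe hpa ngx-hpa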
Now let's put some load on Nginx. Run a test Pod based on the "httpd:alpine" image, which contains the HTTP benchmarking tool ab (Apache Bench):
[root@master ~]# kubectl run test1 -it --image=httpd:alpine -- sh
If you don't see a command prompt, try pressing enter.
/usr/local/apache2 # ab -V
This is ApacheBench, Version 2.3 <$Revision: 1923142 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
/usr/local/apache2 #
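The test Pod stays around even after you exit this shell; you can open a shell in it again at any time with:
kubectl exec -it test1 -- sh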
Then fire up to one million requests at Nginx over 60 seconds, and use kubectl get hpa to watch how the HorizontalPodAutoscaler reacts:
/usr/local/apache2 # ab -c 10 -t 60 -n 1000000 'http://ngx-hpa-svc/'
This is ApacheBench, Version 2.3 <$Revision: 1923142 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking ngx-hpa-svc (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Finished 466952 requests

Server Software:        nginx/1.27.3
Server Hostname:        ngx-hpa-svc
Server Port:            80

Document Path:          /
Document Length:        615 bytes

Concurrency Level:      10
Time taken for tests:   60.000 seconds
Complete requests:      466952
Failed requests:        6
   (Connect: 0, Receive: 0, Length: 0, Exceptions: 6)
Total transferred:      395981232 bytes
HTML transferred:       287179785 bytes
Requests per second:    7782.53 [#/sec] (mean)
Time per request:       1.285 [ms] (mean)
Time per request:       0.128 [ms] (mean, across all concurrent requests)
Transfer rate:          6445.00 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0      21
Processing:     0    1   6.2      0      88
Waiting:        0    1   6.1      0      88
Total:          0    1   6.2      1      88

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      3
  99%     12
 100%     88 (longest request)
[root@master 20]# kubectl get hpa ngx-hpa -w
NAME      REFERENCE                TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        2          32m
ngx-hpa   Deployment/ngx-hpa-dep   19%/5%    2         10        2          32m
ngx-hpa   Deployment/ngx-hpa-dep   178%/5%   2         10        4          32m
ngx-hpa   Deployment/ngx-hpa-dep   195%/5%   2         10        8          33m
ngx-hpa   Deployment/ngx-hpa-dep   188%/5%   2         10        10         33m
ngx-hpa   Deployment/ngx-hpa-dep   144%/5%   2         10        10         33m
ngx-hpa   Deployment/ngx-hpa-dep   35%/5%    2         10        10         33m
ngx-hpa   Deployment/ngx-hpa-dep   3%/5%     2         10        10         34m
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        10         34m
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        10         38m
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        6          39m
ngx-hpa   Deployment/ngx-hpa-dep   0%/5%     2         10        2          39m
Because Metrics Server scrapes metrics roughly every 15 seconds, the HorizontalPodAutoscaler also makes its scale-out and scale-in decisions step by step at that cadence.
Once it sees the target's CPU utilization exceed the configured 5%, it keeps doubling the number of Pods until it reaches the upper limit, then continues to monitor for a while; when CPU utilization falls back, it scales down again to the minimum.
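When you are done with the experiment, the objects created in this lesson can be removed (file and Pod names as used above):
kubectl delete -f 04ngx-hpa.yml
kubectl delete -f 03ngx-hpa-dep.yml
kubectl delete pod test1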