Deploying a Kubernetes StorageClass with nfs-provisioner
A Kubernetes StorageClass deployment whose storage is provided by nfs-provisioner
- 1. Deploy the NFS server
- 2. Install the NFS client tools on all Kubernetes nodes
- 3. Deploy the NFS StorageClass
- 3.1. Write the NFS provisioner resource files
- 3.2. Write the NFS StorageClass resource file
- 3.3. Create all resources
- 4. Create a PVC that uses the nfs-provisioner StorageClass to provision a PV automatically
- 5. Create a Pod that mounts the PVC
- 6. Delete the PVC
1. Deploy the NFS server
Install the NFS service:
yum -y install nfs-utils
Create the shared directory:
mkdir /data
Configure the export:
echo "/data 192.168.25.*(rw,sync,no_root_squash)" > /etc/exports
- 192.168.25.* : the client addresses allowed to access the export
Start the NFS service:
systemctl restart rpcbind
systemctl restart nfs
Check with showmount:
showmount -e
Export list for 192.168.25.247:
/data 192.168.25.*
My NFS server address is 192.168.25.247.
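Before moving on, the export can be sanity-checked from any other machine in the allowed address range with a manual mount (a quick sketch; /mnt is just an arbitrary mount point and requires the nfs-utils client to be installed):

```shell
# Mount the export temporarily, verify it is writable, then clean up.
mount -t nfs 192.168.25.247:/data /mnt
touch /mnt/.nfs-write-test && rm /mnt/.nfs-write-test
umount /mnt
```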
2. Install the NFS client tools on all Kubernetes nodes
yum -y install nfs-utils
showmount -e 192.168.25.247
Make sure every node can reach the NFS server.
3. Deploy the NFS StorageClass
3.1. Write the NFS provisioner resource files
nfs-provisioner-namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-provisioner
nfs-provisioner-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
# ClusterRole definition
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
# ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
# Role definition (leader election)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
# RoleBinding
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
nfs-provisioner-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-provisioner
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner  # provisioner name; referenced later when deploying the NFS StorageClass
            - name: NFS_SERVER
              value: 192.168.25.247  # NFS server address
            - name: NFS_PATH
              value: /data  # NFS exported path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.25.247
            path: /data  # must match NFS_PATH above
The image
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
cannot be pulled directly from within mainland China, so you can pull a mirrored copy from Docker Hub and rename it with docker tag:
$ docker pull eipwork/nfs-subdir-external-provisioner:v4.0.2
$ docker tag eipwork/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
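If your nodes run containerd rather than Docker, the retagged image also has to be made visible to the kubelet's container runtime on each node. One way to do this (a sketch, assuming SSH access to the nodes; the node names are taken from the cluster shown later in this walkthrough) is to save the image to a tarball and import it into containerd's k8s.io namespace:

```shell
# Save the retagged image and import it into containerd on each node.
docker save registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 -o nfs-provisioner.tar
for node in k8s-node01 k8s-node02; do
  scp nfs-provisioner.tar "$node":/tmp/
  ssh "$node" "ctr -n k8s.io images import /tmp/nfs-provisioner.tar"
done
```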
3.2. Write the NFS StorageClass resource file
nfs-provisioner-storageclasses.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-provisioner
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # must match the PROVISIONER_NAME set in the Deployment
parameters:
  server: 192.168.25.247  # NFS server address
  path: /data             # exported directory
  readOnly: "false"
reclaimPolicy: Delete  # reclaim policy
allowVolumeExpansion: true  # allow PVC expansion
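Optionally, this StorageClass can be marked as the cluster default, so that PVCs that omit storageClassName also use it. This is standard Kubernetes behavior rather than something the files above configure; a sketch:

```shell
# Mark nfs-provisioner as the default StorageClass for the cluster.
kubectl patch storageclass nfs-provisioner \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```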
3.3. Create all resources
kubectl apply -f ./
Check that the NFS provisioner Pods are running:
kubectl get pod -n nfs-provisioner -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-675b648f9f-6tnkm 1/1 Running 0 26m 10.224.58.229 k8s-node02 <none> <none>
nfs-client-provisioner-675b648f9f-q9hdw 1/1 Running 0 26m 10.224.85.216 k8s-node01 <none> <none>
The provisioner Pods are currently running on k8s-node02 and k8s-node01, so the NFS directory will be mounted on those two hosts.
Check that the NFS StorageClass was created:
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-provisioner k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate true 99s
4. Create a PVC that uses the nfs-provisioner StorageClass to provision a PV automatically
nginx-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi  # requested capacity
  storageClassName: nfs-provisioner  # use the nfs-provisioner StorageClass
kubectl apply -f nginx-pvc.yaml
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
nginx-pvc Bound pvc-2e842830-0385-41ea-9630-5de671a238e8 1Gi RWX nfs-provisioner <unset> 5s
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-2e842830-0385-41ea-9630-5de671a238e8 1Gi RWX Delete Bound default/nginx-pvc nfs-provisioner <unset> 21s
As shown, the PVC nginx-pvc is bound to the automatically created PV pvc-2e842830-0385-41ea-9630-5de671a238e8.
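On the NFS server, the provisioner creates one subdirectory per PV under /data. By default the directory name follows the pattern ${namespace}-${pvcName}-${pvName}; a sketch of the name it derives for this PVC, using the values from the output above:

```shell
# Directory name the provisioner derives for this PVC (namespace-pvcName-pvName).
ns=default; pvc=nginx-pvc; pv=pvc-2e842830-0385-41ea-9630-5de671a238e8
echo "${ns}-${pvc}-${pv}"
# prints "default-nginx-pvc-pvc-2e842830-0385-41ea-9630-5de671a238e8"
```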
5. Create a Pod that mounts the PVC
nginx-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:  # mount the volume
        - name: html  # must match the volume name declared below
          mountPath: /usr/share/nginx/html  # mount path inside the container
  volumes:  # declare the volume
    - name: html  # volume name
      persistentVolumeClaim:
        claimName: nginx-pvc  # name of the existing PVC
kubectl apply -f nginx-pod.yaml
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 52s
Create an index.html file:
kubectl exec nginx -- /bin/bash -c "hostname > /usr/share/nginx/html/index.html"
Use port-forward to expose the port for a quick access test:
kubectl port-forward --address 0.0.0.0 pod/nginx 8888:80
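With the forward running, a request from the host should return the Pod's hostname, since index.html was populated from the hostname command above (run this in a second terminal; port 8888 matches the forward):

```shell
# Fetch the page served from the NFS-backed volume.
curl -s http://127.0.0.1:8888/
# prints "nginx" (the Pod's hostname, which defaults to the Pod name)
```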
6. Delete the PVC
Delete the Pod first:
kubectl delete -f nginx-pod.yaml
Then delete the PVC:
kubectl delete -f nginx-pvc.yaml
Check the PVC and PV:
kubectl get pv,pvc
No resources found
Check the files in the NFS directory:
The data is still preserved. With no archiveOnDelete parameter set on the StorageClass, the provisioner archives the PV's directory rather than deleting it, renaming it with an archived- prefix.
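This can be confirmed on the NFS server by listing the archived copies of released volumes (a sketch; the exact directory name will carry the released PV's UID):

```shell
# Archived directories left behind after the PVC was deleted.
ls -d /data/archived-*
```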