
Cross-Node Data Persistence in K8s with NFS

The book 《深入剖析Kubernetes》 uses Rook to set up storage, but Kubernetes also supports NFS directly. This experiment lays the groundwork for later study: it uses an existing NFS share for cross-node data persistence and takes a brief look at how that works underneath.

Installing the NFS Client

Install it on every node:

$ apt install nfs-common
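
Optionally, you can first sanity-check what the server exports (showmount ships with nfs-common; 10.160.12.7 is the NFS server used throughout this post):

$ showmount -e 10.160.12.7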

Test the mount:

$ mount -t nfs 10.160.12.7:/data/share /mnt
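
If the mount succeeds, detach it again before moving on, since this was only a connectivity check:

$ df -h /mnt
$ umount /mnt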

Testing an NFS-backed PV and PVC

Creating the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs-client
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /data/share/bqi
    server: 10.160.12.7
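
The manifests in this post are applied in the usual way; assuming the PV above is saved as nfs-pv.yaml (the filename is illustrative):

$ kubectl apply -f nfs-pv.yaml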

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
nfs-pv   100Gi      RWO            Recycle          Bound    default/pvc-nfs-test   nfs-client              28m

(The STATUS already shows Bound here because this listing was captured after the PVC from the next step had been created.)

Creating the PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi

Note: the storageClassName must match the PV's storageClassName, otherwise the claim will not bind to this PV.

$ kubectl get pvc
NAME           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-test   Bound    nfs-pv   100Gi      RWO            nfs-client     7m41s

Note that the claim only requested 1Gi but shows a CAPACITY of 100Gi: a PVC binds a whole PV, so it gets the full capacity of the matched volume.

Testing the PVC

The test plan:

  1. Create a pod that uses the PVC
  2. Create a file in the mounted directory
  3. Create a second pod that mounts the same PVC and check the file there
  4. Delete both pods, create a new pod that uses the PVC, and check the data again
  5. Along the way, observe the behavior across nodes

1. Create a pod that uses the PVC

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod1
spec:
  containers:
  - name: pvc-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    stdin: true
    tty: true
    volumeMounts:
    - name: pvc
      mountPath: /mnt
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: pvc-nfs-test

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pvc-test-pod1   1/1     Running   0          83s   172.1.1.229   node2   <none>           <none>

2. Create a file in the mounted directory

$ kubectl exec -it pvc-test-pod1 -- /bin/sh
/ # echo "hello nfs pvc" > /mnt/test.txt
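
As a cross-check, the same file should now be visible directly on the NFS server, inside the export backing the PV:

# on the NFS server (10.160.12.7)
$ cat /data/share/bqi/test.txt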

3. Create a second pod that mounts the same PVC

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod2
spec:
  containers:
  - name: pvc-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    stdin: true
    tty: true
    volumeMounts:
    - name: pvc
      mountPath: /mnt
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: pvc-nfs-test

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
pvc-test-pod1   1/1     Running   0          5m26s   172.1.1.229   node2   <none>           <none>
pvc-test-pod2   1/1     Running   0          2s      172.1.1.231   node2   <none>           <none>

$ kubectl exec -it pvc-test-pod2 -- ls /mnt/
test.txt
$ kubectl exec -it pvc-test-pod2 -- cat /mnt/test.txt
hello nfs pvc

4. Delete the original pods and create a new one

$ kubectl delete pod pvc-test-pod1
pod "pvc-test-pod1" deleted
$ kubectl delete pod pvc-test-pod2
pod "pvc-test-pod2" deleted

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test-pod3
spec:
  containers:
  - name: pvc-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    stdin: true
    tty: true
    volumeMounts:
    - name: pvc
      mountPath: /mnt
  volumes:
  - name: pvc
    persistentVolumeClaim:
      claimName: pvc-nfs-test

$ kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pvc-test-pod3   1/1     Running   0          7s    172.1.1.232   node2   <none>           <none>

$ kubectl exec -it pvc-test-pod3 -- ls /mnt
test.txt
$ kubectl exec -it pvc-test-pod3 -- cat /mnt/test.txt
hello nfs pvc

5. Test across nodes

Since the scheduler kept placing every pod on node2, I resorted to a Deployment with multiple replicas to get pods onto other nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pvc-dp
  labels:
    app: busybox
spec:
  replicas: 5
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        stdin: true
        tty: true
        volumeMounts:
        - name: pvc
          mountPath: /mnt
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: pvc-nfs-test

$ kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP            NODE            NOMINATED NODE   READINESS GATES
nfs-pvc-dp-559d75db66-dg6th   1/1     Running   0          119s   172.1.1.233   node2           <none>           <none>
nfs-pvc-dp-559d75db66-kwrnk   1/1     Running   0          119s   172.1.2.210   bqi-k8s-node3   <none>           <none>
nfs-pvc-dp-559d75db66-qk4pr   1/1     Running   0          119s   172.1.2.209   bqi-k8s-node3   <none>           <none>
nfs-pvc-dp-559d75db66-z224z   1/1     Running   0          119s   172.1.1.234   node2           <none>           <none>
nfs-pvc-dp-559d75db66-z4xs5   1/1     Running   0          119s   172.1.2.211   bqi-k8s-node3   <none>           <none>

$ kubectl exec -it nfs-pvc-dp-559d75db66-dg6th -- cat /mnt/test.txt
hello nfs pvc
$ kubectl exec -it nfs-pvc-dp-559d75db66-kwrnk -- cat /mnt/test.txt
hello nfs pvc
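
Relying on replica count to get pods onto different nodes is not deterministic. A minimal sketch of a more explicit approach (assuming the same app: busybox labels as the Deployment above) is to add a podAntiAffinity rule to the pod template, which asks the scheduler to avoid co-locating these pods:

    # added under spec.template.spec of the Deployment above
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: busybox          # must match the pod template's labels
            topologyKey: kubernetes.io/hostname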

A look under the hood

The Linux mount command shows the mount state on each of the two worker nodes:

# node2
$ mount | grep 10.160.12.7
10.160.12.7:/data/share/bqi on /var/lib/kubelet/pods/d2375620-7164-468e-a212-0f73042983d7/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.160.18.181,local_lock=none,addr=10.160.12.7)
10.160.12.7:/data/share/bqi on /var/lib/kubelet/pods/015ee988-c22e-4a8e-aa9d-7a85d0dd6045/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.160.18.181,local_lock=none,addr=10.160.12.7)

# bqi-k8s-node3
$ mount | grep 10.160.12.7
10.160.12.7:/data/share/bqi on /var/lib/kubelet/pods/2ff007b3-6568-4b90-9c76-2c994c18f71c/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.160.18.183,local_lock=none,addr=10.160.12.7)
10.160.12.7:/data/share/bqi on /var/lib/kubelet/pods/aa545d93-0178-4adc-87b5-df27ac070a2e/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.160.18.183,local_lock=none,addr=10.160.12.7)
10.160.12.7:/data/share/bqi on /var/lib/kubelet/pods/2098028e-8be5-44fd-b5d8-b358baf40791/volumes/kubernetes.io~nfs/nfs-pv type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.160.18.183,local_lock=none,addr=10.160.12.7)

As the output shows, 10.160.12.7:/data/share/bqi is mounted once per pod, under each pod's own directory in /var/lib/kubelet/pods/.
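
The directory names under /var/lib/kubelet/pods/ are pod UIDs, so each mount can be traced back to its pod. For example, for one of the Deployment's pods:

$ kubectl get pod nfs-pvc-dp-559d75db66-dg6th -o jsonpath='{.metadata.uid}'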

Creating PVs and PVCs Automatically

Just when I thought that was all there was to it, I discovered in practice that a PV can only be bound by a single PVC. So everything above really amounts to turning one NFS directory into one PV, binding it to one PVC, and mounting that claim into pods.

Later experiments with stateful workloads will need a different PVC for each pod, and creating a PV by hand every time is not realistic. This is where nfs-client-provisioner comes in.

nfs-client-provisioner

nfs-client-provisioner uses an existing NFS server as a persistent-storage backend for Kubernetes and provisions PVs dynamically. See https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client for details.

There are several ways to install it; here I use Helm, which is very simple:

$ helm install --set nfs.server=10.160.12.7 --set nfs.path=/data/share/bqi stable/nfs-client-provisioner

Parameters:

  1. nfs.server - address of the NFS server
  2. nfs.path - directory exported by the NFS server
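
The chart accepts more overrides than these two. For instance, going by the stable chart's documented values (verify against the chart version you actually install), the StorageClass name, which defaults to nfs-client, can be set explicitly:

$ helm install --set nfs.server=10.160.12.7 --set nfs.path=/data/share/bqi \
    --set storageClass.name=nfs-client stable/nfs-client-provisioner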

$ helm list
NAME             REVISION   UPDATED                    STATUS     CHART                          APP VERSION   NAMESPACE
crusty-penguin   1          Wed Jul 22 19:07:19 2020   DEPLOYED   nfs-client-provisioner-1.2.8   3.1.0         default

$ helm status crusty-penguin
LAST DEPLOYED: Wed Jul 22 19:07:19 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME                                           CREATED AT
crusty-penguin-nfs-client-provisioner-runner   2020-07-22T11:07:19Z

==> v1/ClusterRoleBinding
NAME                                        ROLE                                                        AGE
run-crusty-penguin-nfs-client-provisioner   ClusterRole/crusty-penguin-nfs-client-provisioner-runner   15h

==> v1/Deployment
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
crusty-penguin-nfs-client-provisioner   1/1     1            1           15h

==> v1/Pod(related)
NAME                                                     READY   STATUS    RESTARTS   AGE
crusty-penguin-nfs-client-provisioner-579d48f95f-vncxw   1/1     Running   1          15h

==> v1/Role
NAME                                                   CREATED AT
leader-locking-crusty-penguin-nfs-client-provisioner   2020-07-22T11:07:19Z

==> v1/RoleBinding
NAME                                                   ROLE                                                         AGE
leader-locking-crusty-penguin-nfs-client-provisioner   Role/leader-locking-crusty-penguin-nfs-client-provisioner   15h

==> v1/ServiceAccount
NAME                                    SECRETS   AGE
crusty-penguin-nfs-client-provisioner   1         15h

==> v1/StorageClass
NAME         PROVISIONER                                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/crusty-penguin-nfs-client-provisioner   Delete          Immediate           true                   15h

As shown above, the chart mainly deploys:

  1. RBAC objects: Role, RoleBinding, ServiceAccount (plus a ClusterRole and ClusterRoleBinding)
  2. Deployment
  3. StorageClass

The most important of these is the StorageClass, here named nfs-client. When a PVC is created with its storageClassName set to nfs-client, a matching PV is provisioned and bound automatically.
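
Optionally, the class can also be marked as the cluster default, so that PVCs which omit storageClassName use it too; this is a standard Kubernetes annotation, not something the chart set up here:

$ kubectl patch storageclass nfs-client \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'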

Testing

  1. Create a PVC (the manually created PV and PVC from earlier were deleted first, since this claim reuses the name pvc-nfs-test)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
  2. Check the PV and PVC

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-test   Bound    pvc-cd18aa33-8734-461c-81a5-f5cf2aa0ad8f   1Gi        RWO            nfs-client     3s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
pvc-cd18aa33-8734-461c-81a5-f5cf2aa0ad8f   1Gi        RWO            Delete           Bound    default/pvc-nfs-test   nfs-client              10s
  3. Check the directory on the NFS server
$ ls -l
total 0
drwxrwxrwx 2 root root 10 Jul 23 11:01 default-pvc-nfs-test-pvc-cd18aa33-8734-461c-81a5-f5cf2aa0ad8f

As shown, the provisioner created a directory on the NFS server for this claim; the name follows the pattern ${namespace}-${pvcName}-${pvName}.
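
For completeness: deleting the PVC triggers the Delete reclaim policy, and the provisioner then cleans up the backing directory; depending on the chart's archiveOnDelete setting it is either removed outright or kept under an archived- prefix (my reading of the provisioner's documented behavior, not verified in this setup):

$ kubectl delete pvc pvc-nfs-test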

Summary

This experiment used NFS-backed PVs and PVCs to provide cross-node data persistence.

Note: setting up the NFS server itself is not covered here; there are plenty of tutorials online.

  1. A PV was created using the nfs volume type
  2. A PVC was bound to the NFS-backed PV, letting pods spread across multiple nodes share data
  3. nfs-client-provisioner was used to create PVs and PVCs automatically