1. Overview
In day-to-day learning environments or on enterprise intranets, a distributed storage system such as Ceph is often not available. In that case, NFS can be used as the backend for a StorageClass.
NFS stands for Network File System, a protocol that lets computers share files and storage devices over a network.
2. Environment
The NFS server runs Ubuntu 22.04.2 and has the IP address 192.168.3.10.
The Kubernetes version is 1.27.2.
The Kubernetes nodes run Rocky Linux 9.2 x64.
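As a quick sanity check of the cluster environment, the node OS image and kubelet version can be listed (assuming kubectl is already configured against this cluster):
kubectl get nodes -o wide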
3. Deploy the NFS server outside the k8s cluster
- Install the NFS server
apt install -y nfs-kernel-server
- Configure the NFS server
mkdir -p /data/nfs
chmod a+rw /data/nfs/
echo '/data/nfs *(rw,sync,no_subtree_check,no_root_squash,insecure)' >> /etc/exports
# Apply the exports configuration
exportfs -r
# Show the current exports
exportfs
systemctl restart rpcbind
systemctl enable rpcbind
systemctl restart nfs-server
systemctl enable nfs-server
# Check that the NFS service is registered with rpcbind
rpcinfo -p localhost | grep nfs
showmount -e 192.168.3.10
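Optionally, the export can be verified end to end from any machine that has an NFS client installed, by mounting it temporarily. The /mnt mount point and test file name below are only illustrative:
# mount the export, write a test file, then clean up
mount -t nfs 192.168.3.10:/data/nfs /mnt
touch /mnt/test-file && ls -l /mnt && rm /mnt/test-file
umount /mnt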
4. Create the StorageClass inside the k8s cluster
Official documentation: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
4.1 Install the NFS client on all k8s nodes
# centos/rockylinux
yum install -y nfs-utils
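Every node must be able to mount the export, because the provisioner pod (and any pod using the resulting PersistentVolumes) mounts the NFS share directly. A quick check from a node, using the showmount tool that ships with nfs-utils:
showmount -e 192.168.3.10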
4.2 Set up RBAC for the NFS provisioner
Create the file rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
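If the provisioner is going to run in a namespace other than default, every namespace: field in rbac.yaml must point at that namespace. One way to rewrite them in place (a sketch; the NAMESPACE value here is only an example):
NAMESPACE=nfs-provisioner
sed -i "s/namespace:.*/namespace: ${NAMESPACE}/g" rbac.yaml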
kubectl apply -f rbac.yaml
Output
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
4.3 Create the nfs-subdir-external-provisioner
Create the file nfs-subdir-external-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # value: <YOUR NFS SERVER HOSTNAME>
              value: 192.168.3.10
            - name: NFS_PATH
              # value: /var/nfs
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            # server: <YOUR NFS SERVER HOSTNAME>
            server: 192.168.3.10
            path: /data/nfs
kubectl apply -f nfs-subdir-external-provisioner.yaml
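Before moving on, confirm that the provisioner pod has started and pulled its image successfully:
kubectl get pods -l app=nfs-client-provisioner
kubectl logs -l app=nfs-client-provisioner --tail=20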
4.4 Deploy the StorageClass
Create the file nfs-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # alternatively, "${.PVC.namespace}/${.PVC.name}" uses the PVC name as the directory name on the NFS server
  onDelete: delete
kubectl apply -f nfs-storage-class.yaml
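Verify that the StorageClass has been registered. Optionally it can also be marked as the cluster's default StorageClass via the standard annotation (shown here as an optional extra step, not required by the rest of this guide):
kubectl get storageclass
kubectl annotate storageclass nfs-client storageclass.kubernetes.io/is-default-class="true"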
5. Test the NFS StorageClass
5.1 Create a PVC
Create the file nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
kubectl apply -f nfs-pvc.yaml
Check the PVC
kubectl get pvc
Output
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e9a9f6ad-45f8-4e3f-9b1e-abdc5f59c7ef 50Mi RWX nfs-client 11s
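A matching PersistentVolume should also have been provisioned dynamically; it can be listed with:
kubectl get pv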
5.2 Check on the NFS server
Log in to the server 192.168.3.10.
Under /data/nfs/ there is now a default directory, which corresponds to the default namespace. Inside it there is a test-path directory, which matches the nfs.io/storage-path: "test-path" annotation set when the PVC was created.
tree /data/nfs/
/data/nfs/
├── default
│ └── test-path
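As a final end-to-end check, a throwaway pod can mount the PVC and write a file into it. This is a minimal sketch modeled on the test-pod example from the provisioner's repository; the pod name, image, and file name are illustrative:
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command: ["/bin/sh"]
      # write a marker file into the NFS-backed volume and exit
      args: ["-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
      volumeMounts:
        - name: nfs-pvc
          mountPath: /mnt
  restartPolicy: Never
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
After applying this manifest and letting the pod complete, the marker file should appear under /data/nfs/default/test-path/ on the NFS server.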