Kubernetes PV and PVC
1. Volume
https://kubernetes.io/docs/concepts/storage/volumes/
- A Volume in Kubernetes provides the ability to mount external storage into a container.
- A Pod must declare both a volume source (spec.volumes) and a mount point (spec.containers[].volumeMounts) before the corresponding Volume can be used.
1.1 Local volume: emptyDir
Creates an empty volume and mounts it into the Pod's containers. When the Pod is deleted, the volume is deleted too. Typical use case: sharing data between containers in the same Pod.
# (spec.volumes)
volumes:
- name: data
  emptyDir: {}
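As a complete illustration of the data-sharing use case (all names here are hypothetical, not from this document), a Pod with two containers sharing an emptyDir might look like:

```yaml
# Illustrative only: a writer and a reader container share /cache
# through an emptyDir volume that lives as long as the Pod does.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /cache/data.txt && sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}
```

Both containers see the same /cache directory, so the file written by `writer` is visible to `reader`.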
1.2 Local volume: hostPath
Mounts a file or directory from the Node's filesystem into the Pod's containers. Typical use case: a container in the Pod needs to access files on the host.
[root@k8s-master01 demo2]# cat hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-busybox
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 36000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory
Value | Behavior
---|---
"" (empty string, the default) | For backward compatibility; no checks are performed before mounting the hostPath volume.
DirectoryOrCreate | If nothing exists at the given path, an empty directory is created there as needed, with permissions 0755 and the same group and ownership as the kubelet.
Directory | A directory must exist at the given path.
FileOrCreate | If nothing exists at the given path, an empty file is created there as needed, with permissions 0644 and the same group and ownership as the kubelet.
File | A file must exist at the given path.
Socket | A UNIX socket must exist at the given path.
CharDevice | A character device must exist at the given path.
BlockDevice | A block device must exist at the given path.
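For example (the path and volume name below are hypothetical), a hostPath volume with `type: FileOrCreate` mounts a single file and creates it on the node if it does not already exist:

```yaml
# Hypothetical fragment: mount one log file from the node, creating it
# as an empty file (mode 0644) if it is missing, per the table above.
volumes:
- name: app-log
  hostPath:
    path: /var/log/app.log
    type: FileOrCreate
```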
1.3 Network volume: nfs
Unlike emptyDir above, the data on an NFS volume is not lost when the Pod is deleted. Here we install NFS on the machine 192.168.186.143 and use it as the NFS server.
volumes:
- name: wwwroot
  nfs:
    server: 192.168.186.143
    path: /caimengzhi/kubernetes
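A complete Pod using this volume source might look like the sketch below (the Pod name is made up; it assumes the export /caimengzhi/kubernetes exists on 192.168.186.143 and that every node has nfs-utils installed):

```yaml
# Illustrative Pod serving content directly from the NFS export.
# Data written to /usr/share/nginx/html survives Pod deletion,
# because it actually lives on the NFS server.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: wwwroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: wwwroot
    nfs:
      server: 192.168.186.143
      path: /caimengzhi/kubernetes
```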
2. PersistentVolume: static provisioning
Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided and how it is consumed. Here we introduce two new API resources: PersistentVolume (PV) and PersistentVolumeClaim (PVC).
A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. Like a node, a PV is a cluster resource. Like a Volume, it is implemented as a volume plugin, but its lifecycle is independent of any Pod that uses it. The PV API object captures the implementation details of NFS, iSCSI, or other cloud storage systems.
A PersistentVolumeClaim (PVC) is a user's request for storage. It is analogous to a Pod: a Pod consumes node resources, while a PVC consumes PV resources. A Pod can request specific resources such as CPU and memory; a PVC can request a specific size and access modes (for example, mounted read-write once or read-only by many). PVCs let users consume abstract storage resources, but users also often need PVs with particular properties, such as performance. Cluster administrators therefore need to offer PVs of various sizes and access modes without exposing to users how those volumes are implemented. This need gave rise to the StorageClass resource.
A StorageClass gives administrators a way to describe the classes of storage they offer. Administrators can map different classes to different service levels or different backend policies.
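The way a PVC binds to a statically provisioned PV can be sketched as a simple filter: pick a PV whose capacity is at least the requested size and whose access modes cover the requested ones, preferring the smallest PV that fits. This is a simplified illustration of the idea, not the real kube-controller-manager algorithm:

```python
# Simplified sketch of static PV/PVC binding. All data structures here
# are illustrative stand-ins, not the real Kubernetes API objects.
def bind(pvc, pvs):
    candidates = [
        pv for pv in pvs
        if pv["capacity_gi"] >= pvc["request_gi"]
        and set(pvc["access_modes"]) <= set(pv["access_modes"])
    ]
    if not candidates:
        return None  # the PVC stays Pending until a suitable PV appears
    # Prefer the smallest PV that fits, to waste as little capacity as possible
    return min(candidates, key=lambda pv: pv["capacity_gi"])

pvs = [
    {"name": "pv001", "capacity_gi": 10,
     "access_modes": ["ReadWriteOnce", "ReadOnlyMany", "ReadWriteMany"]},
    {"name": "pv002", "capacity_gi": 5,
     "access_modes": ["ReadWriteOnce"]},
]
pvc = {"request_gi": 9, "access_modes": ["ReadWriteOnce"]}
print(bind(pvc, pvs)["name"])  # pv001: 10Gi >= 9Gi and modes match
```

A 9Gi request with ReadWriteOnce skips pv002 (too small) and binds to pv001, which mirrors how the 9Gi PVC below ends up bound to the 10Gi PV.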
1. Set up the storage backend
# Install the NFS server
yum install -y nfs-utils
# mkdir v{1,2,3,4,5}
# ls
v1 v2 v3 v4 v5
# chmod 777 v{1,2,3,4,5}
# pwd
/data/k8s_volumes
# cat /etc/exports
/data/k8s_volumes/v1 172.22.108.0/24(rw,no_root_squash)
/data/k8s_volumes/v2 172.22.108.0/24(rw,no_root_squash)
/data/k8s_volumes/v3 172.22.108.0/24(rw,no_root_squash)
/data/k8s_volumes/v4 172.22.108.0/24(rw,no_root_squash)
/data/k8s_volumes/v5 172.22.108.0/24(rw,no_root_squash)
[k8s_volumes]# exportfs -arv
exporting 172.22.108.0/24:/data/k8s_volumes/v5
exporting 172.22.108.0/24:/data/k8s_volumes/v4
exporting 172.22.108.0/24:/data/k8s_volumes/v3
exporting 172.22.108.0/24:/data/k8s_volumes/v2
exporting 172.22.108.0/24:/data/k8s_volumes/v1
2. Create the PV
# vim pv-daemon.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/k8s_volumes/v1   # must match an exported path above
    server: 10.100.106.15        # IP of the NFS server
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  capacity:
    storage: 10Gi
3. Create the PVC
[root@k8s-master01 demo2]# cat pvc-daemon.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce","ReadOnlyMany","ReadWriteMany"]
  resources:
    requests:
      storage: 9Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: caimengzhi-pv-pvc-nfs-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html/"
      name: wwwroot
  volumes:
  - name: wwwroot
    persistentVolumeClaim:
      claimName: mypvc
Notes:
Although the Pod uses a PVC, the claim is still bound to the PV created earlier, so the NFS data is shared: once the data changes, every consumer sees the change.
The above is static provisioning: first create a PV, then bind a PVC to it, and finally have the workload reference the PVC. Next we cover the other approach, in which a PVC creates its PV automatically (dynamic provisioning).
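As a preview of dynamic provisioning, the sketch below shows the general shape: a StorageClass names a provisioner, and any PVC that references the class gets a matching PV created automatically. The class name and the provisioner value `example.com/nfs` are assumptions for illustration, not from this document; the actual provisioner depends on which external NFS provisioner you deploy.

```yaml
# Hypothetical StorageClass for dynamic provisioning: a PVC that names
# this class triggers the provisioner to create a matching PV on demand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/nfs   # assumption: an external NFS provisioner
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-pvc
spec:
  storageClassName: nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi
```

With this in place no PV objects need to be pre-created by hand; the provisioner creates and deletes them as claims come and go.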
Troubleshooting
Warning FailedMount 89s kubelet, 172.22.108.12 MountVolume.SetUp failed for volume "pv002-prom" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/97d1800c-c20a-48c2-a005-1352ec2cb598/volumes/kubernetes.io~nfs/pv002-prom --scope -- mount -t nfs 10.100.106.15:/data/k8s_volumes/v2 /var/lib/kubelet/pods/97d1800c-c20a-48c2-a005-1352ec2cb598/volumes/kubernetes.io~nfs/pv002-prom
Output: Running scope as unit run-149025.scope.
mount: wrong fs type, bad option, bad superblock on 10.100.106.15:/data/k8s_volumes/v2,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
When this fault occurs, the node is missing the NFS mount helper (/sbin/mount.nfs); install nfs-utils on every node:
yum -y install nfs-utils