Kubernetes/Storage
PV - Persistent Volumes
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS or a cloud-provider-specific storage system, e.g. AWS EBS or Google Cloud persistent disks.
Access modes (these describe node capabilities, not pod capabilities; a volume can be mounted using only one access mode at a time, even if it supports several):
- ReadWriteOnce - a single node can mount the volume for reading and writing
- ReadOnlyMany - multiple nodes can mount the volume for reading only
- ReadWriteMany - multiple nodes can mount the volume for reading and writing
A PersistentVolumeClaim (PVC) is a request for storage by a user, similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., mounted once read/write or many times read-only). This is a way for a pod to "claim" already provisioned storage.
A StorageClass provides a way for administrators to describe the classes of storage they offer (IOPS, performance, SSD vs. HDD); other storage systems call these profiles.
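A StorageClass can also be marked as the cluster-wide default, so PVCs that omit storageClassName are provisioned from it automatically. A minimal sketch using the standard annotation (the class name and parameters below are illustrative, mirroring the GCE provisioner used later on this page):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard            # illustrative name
  annotations:
    # Standard annotation marking this class as the default for
    # PVCs that do not set spec.storageClassName.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
```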
Operations
In this example we use Google Cloud and run from Cloud Shell.
#Create a volume; the zone must match the zone of the cluster node the pod will run on
gcloud compute disks create --size=1GiB --zone=europe-west1-b mongodb
gcloud compute disks list
NAME                                               LOCATION        LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
gke-standard-cluster-1-default-pool-c43dab38-4qdn  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-553f  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-qc0z  europe-west1-b  zone            100      pd-standard  READY
mongodb                                            europe-west1-b  zone            1        pd-standard  READY
#Create a pod using the Pod manifest below
kubectl apply -f mongodb.yaml
kubectl describe pod mongodb
...
Volumes:
  mongodb-data:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     mongodb
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-fvhrt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fvhrt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age  From                     Message
  ----    ------                  ---- ----                     -------
  Normal  Scheduled               29s  default-scheduler        Successfully assigned default/mongodb to gke-standard-cluster-1-default-pool-c43dab38-553f
  Normal  SuccessfulAttachVolume  22s  attachdetach-controller  AttachVolume.Attach succeeded for volume "mongodb-data"
Pod (mongodb.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb #gcloud disk name
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data #volume name from spec.volumes above
      mountPath: /data/db #where MongoDB stores its data
    ports:
    - containerPort: 27017 #standard MongoDB port
      protocol: TCP

PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: mongodb
    fsType: ext4
Save data to the DB, then delete and re-create the pod to verify data persistence
kubectl exec -it mongodb -- mongo #connect to MongoDB
> use mydb
switched to db mydb
> db.foo.insert({name:'foo'})
WriteResult({ "nInserted" : 1 })
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
> exit
#Drain a node that the pod is running on
kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn Ready <none> 44m v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f Ready <none> 44m v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z Ready <none> 44m v1.12.8-gke.10
kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mongodb 1/1 Running 0 24m 10.4.2.4 gke-standard-cluster-1-default-pool-c43dab38-553f <none>
kubectl drain gke-standard-cluster-1-default-pool-c43dab38-553f --ignore-daemonsets
kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn Ready <none> 51m v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f Ready,SchedulingDisabled <none> 51m v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z Ready <none> 51m v1.12.8-gke.10
#Create pod again
kubectl apply -f mongodb.yaml
pod/mongodb created
kubectl get pods -owide #notice it's running on a different node now
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mongodb 1/1 Running 0 104s 10.4.0.9 gke-standard-cluster-1-default-pool-c43dab38-4qdn <none>
kubectl exec -it mongodb -- mongo #data we created earlier should still be there
> use mydb
switched to db mydb
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
>
Persistent volumes can be managed like other K8s resources. Use the PersistentVolume manifest above to create one
kubectl get persistentvolume -owide
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mongodb-pv   2Gi        RWO,ROX        Retain           Available                                   61s
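For quick local experiments without a cloud disk (e.g. on minikube or a single-node cluster), a PV can be backed by a directory on the node instead of a GCE disk. A minimal sketch; the name and path here are illustrative, not used elsewhere on this page:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv            # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/local-pv     # directory on the node's filesystem
```

hostPath PVs tie the data to one specific node, so they are for testing only, not production.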
PersistentVolumeClaim
Persistent Volume Claims (PVCs) are a way for an application developer to request storage for the application without having to know where the underlying storage is. The claim is then bound to the Persistent Volume (PV), and it will not be released until the PVC is deleted.
Pod using PVC:
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc

PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  resources:
    requests:
      storage: 1Gi #cannot request more than the PV's capacity
  accessModes:
  - ReadWriteOnce
  storageClassName: "" #empty string disables dynamic provisioning, so the claim binds to a pre-provisioned PV
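If the claim should bind to one specific pre-provisioned PV rather than any PV that happens to match, it can name the volume explicitly via spec.volumeName. A minimal sketch, reusing the mongodb-pv defined earlier on this page:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  volumeName: mongodb-pv    # bind to this exact PV, skip matching by size/mode
  resources:
    requests:
      storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
```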
- Storage Object in Use Protection: a PV or PVC cannot be deleted while it is in use by a Pod
PV:
kubectl describe pv mongodb-pv
...
Finalizers:  [kubernetes.io/pv-protection]
...
Source:
    Type:    ...
    PDName:  mongodb
    FSType:  ext4
...

PVC:
kubectl describe pvc mongodb-pvc
...
Finalizers:  [kubernetes.io/pvc-protection]
...
Storage Class
It's a K8s object that allows managing physical storage using provisioners. E.g., using provisioner: kubernetes.io/gce-pd is the equivalent of gcloud compute disks create --size=1GiB --zone=us-west1-a --type pd-ssd mongodb-vol-ssd.
PVC using a StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc
spec:
  storageClassName: fast
  resources:
    requests:
      storage: 100Mi
  accessModes:
  - ReadWriteOnce
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

List drives
gcloud compute disks list
NAME                                                             LOCATION    LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
gke-standard-cluster-1-default-pool-29207899-1s7d                us-west1-a  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-29207899-7sxs                us-west1-a  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-29207899-w4hh                us-west1-a  zone            100      pd-standard  READY
gke-standard-cluster-1-pvc-148a2097-b29b-11e9-b66b-42010a8a00c7  us-west1-a  zone            1        pd-ssd       READY  #created by the StorageClass
mongodb-vol                                                      us-west1-a  zone            7        pd-standard  READY
mongodb-vol-ssd                                                  us-west1-a  zone            1        pd-ssd       READY  #created with gcloud

List storage classes
kubectl get sc
NAME                 PROVISIONER            AGE
fast                 kubernetes.io/gce-pd   9m11s  #storage class created above
standard (default)   kubernetes.io/gce-pd   49m    #default storage class, used when a PVC leaves storageClassName unset

kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-pvc                 Bound    mongodb-pv                                 2Gi        RWO,ROX                       40m
mongodb-pvc-storage-class   Bound    pvc-148a2097-b29b-11e9-b66b-42010a8a00c7   1Gi        RWO            fast           12m
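A StorageClass can also control when and how dynamically provisioned volumes are created and destroyed. A sketch of commonly used optional fields (the class name below is illustrative; the fields themselves are standard StorageClass spec fields):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-wait           # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
reclaimPolicy: Delete       # delete the backing disk when the PVC is deleted
                            # (the default for dynamically provisioned PVs)
allowVolumeExpansion: true  # permit growing PVCs of this class later
volumeBindingMode: WaitForFirstConsumer
                            # delay provisioning until a pod uses the PVC, so the
                            # disk is created in the zone of the node the pod lands on
```

WaitForFirstConsumer avoids the zone-mismatch problem mentioned at the top of this page, where a disk created in the wrong zone cannot be attached to the scheduled node.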