Kubernetes/Storage
Revision as of 10:54, 26 July 2019
PV - Persistent Volumes
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster, just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS or a cloud-provider-specific storage system, e.g. AWS EBS or Google Cloud persistent disks.
Access modes (an access mode is a capability of the node, not the pod; a volume can be mounted using only one access mode at a time, even if it supports several):
- ReadWriteOnce (RWO) - a single node can mount the volume for reading and writing
- ReadOnlyMany (ROX) - multiple nodes can mount the volume for reading only
- ReadWriteMany (RWX) - multiple nodes can mount the volume for reading and writing
A PersistentVolumeClaim (PVC) is a request for storage by a user, similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., mounted once read/write or many times read-only). This is how a pod "claims" already provisioned storage.
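As a sketch, a claim for 512Mi of single-node read/write storage might look like this (the name mongodb-pvc is illustrative, not from the walkthrough below):

```yaml
# Hypothetical PVC; requests size and access mode, not a specific disk
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc      # illustrative name
spec:
  resources:
    requests:
      storage: 512Mi     # requested capacity
  accessModes:
  - ReadWriteOnce        # how the volume will be mounted
  storageClassName: ""   # empty string: bind to a pre-provisioned PV, skip dynamic provisioning
```

Kubernetes binds the claim to any available PV that satisfies the requested size and access modes.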
A StorageClass provides a way for administrators to describe the classes of storage they offer (IOPS, performance, SSD); other storage systems call these "profiles".
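For example, on Google Cloud a class backed by SSD persistent disks could be declared with the in-tree GCE provisioner (the class name "fast" is illustrative):

```yaml
# Sketch of a StorageClass using the in-tree GCE PD provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast             # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd           # SSD-backed persistent disk instead of pd-standard
```

A PVC that sets storageClassName: fast would then trigger dynamic provisioning of a pd-ssd disk.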
Operations
In this example we use Google Cloud and run from Cloud Shell.
#Create a volume; the zone must match the zone of the K8s cluster node the pod will run on
gcloud compute disks create --size=1GiB --zone=europe-west1-b mongodb

gcloud compute disks list
NAME                                               LOCATION        LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
gke-standard-cluster-1-default-pool-c43dab38-4qdn  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-553f  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-qc0z  europe-west1-b  zone            100      pd-standard  READY
mongodb                                            europe-west1-b  zone            1        pd-standard  READY

#Create a pod from the manifest in the table below
kubectl apply -f mongodb.yaml

kubectl describe pod mongodb
...
Volumes:
  mongodb-data:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     mongodb
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-fvhrt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fvhrt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age  From                     Message
  ----    ------                  ---- ----                     -------
  Normal  Scheduled               29s  default-scheduler        Successfully assigned default/mongodb to gke-standard-cluster-1-default-pool-c43dab38-553f
  Normal  SuccessfulAttachVolume  22s  attachdetach-controller  AttachVolume.Attach succeeded for volume "mongodb-data"
Pod manifest (mongodb.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb        #gcloud disk name
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data     #volume name from spec.volumes above
      mountPath: /data/db    #where MongoDB stores its data
    ports:
    - containerPort: 27017   #standard MongoDB port
      protocol: TCP

PersistentVolume manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity:
    storage: 0.5Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: mongodb
    fsType: ext4
Save data to the DB, then delete the pod and re-create it to verify data persistence
kubectl exec -it mongodb mongo   #connect to MongoDB
> use mydb
switched to db mydb
> db.foo.insert({name:'foo'})
WriteResult({ "nInserted" : 1 })
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
> exit

#Drain the node that the pod is running on
kubectl get nodes
NAME                                               STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn  Ready    <none>   44m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f  Ready    <none>   44m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z  Ready    <none>   44m   v1.12.8-gke.10

kubectl get pods -owide
NAME     READY  STATUS   RESTARTS  AGE  IP        NODE                                               NOMINATED NODE
mongodb  1/1    Running  0         24m  10.4.2.4  gke-standard-cluster-1-default-pool-c43dab38-553f  <none>

kubectl drain gke-standard-cluster-1-default-pool-c43dab38-553f --ignore-daemonsets

kubectl get nodes
NAME                                               STATUS                     ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn  Ready                      <none>   51m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f  Ready,SchedulingDisabled   <none>   51m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z  Ready                      <none>   51m   v1.12.8-gke.10

#Create the pod again
kubectl apply -f mongodb.yaml
pod/mongodb created

kubectl get pods -owide   #notice it's running on a different node now
NAME     READY  STATUS   RESTARTS  AGE   IP        NODE                                               NOMINATED NODE
mongodb  1/1    Running  0         104s  10.4.0.9  gke-standard-cluster-1-default-pool-c43dab38-4qdn  <none>

kubectl exec -it mongodb mongo   #the data we created earlier should still be there
> use mydb
switched to db mydb
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
>
Persistent volumes can be managed like any other K8s resource. Use the PersistentVolume manifest from the table above (2nd column) to create one
kubectl get persistentvolume -owide
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mongodb-pv   512Mi      RWO,ROX        Retain           Available                                   61s
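Assuming a claim named mongodb-pvc has been created and bound to mongodb-pv (the claim name is illustrative), the pod can then mount storage through the claim instead of referencing the GCE disk directly, which decouples the pod manifest from the underlying storage technology:

```yaml
# Sketch: same pod as above, but mounting via a PVC; assumes a bound claim named mongodb-pvc
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc   # hypothetical claim name; replaces the gcePersistentDisk stanza
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db      # where MongoDB stores its data
    ports:
    - containerPort: 27017
      protocol: TCP
```

Once the claim binds, the PV's STATUS changes from Available to Bound and the CLAIM column shows the claiming namespace/name.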