Kubernetes/Storage


PV - Persistent Volumes

StorageClass (SC)
In a StorageClass manifest you describe how a PersistentVolume should be created by specifying a provisioner: in Google Cloud it will be the GCE PD provisioner, in AWS the EBS provisioner. You can also pass other settings to the provisioner, such as disk type (magnetic, SSD), IOPS, etc. It gives administrators a way to describe the classes of storage they offer (IOPS, performance, SSD); other storage systems call this concept "profiles".
PersistentVolumeClaim (PVC)
is a request for storage by a user, similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., mounted once read/write or many times read-only). This is the way a pod "claims" already provisioned storage. A PVC can request space from a StorageClass or directly from an already created PV.
PersistentVolume (PV)
is a piece of storage (volume, directory, share) in the cluster that has been provisioned by an administrator (e.g. using the gcloud compute disks ... command) or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be that NFS or a cloud-provider-specific storage system, e.g. AWS EBS or GCE persistent disks.

PV Access modes (an access mode is a capability of the node, not of the pod; a volume can be mounted using only one access mode at a time, even if it supports several; the kubectl abbreviations are shown after the list):

  • ReadWriteOnce - only a single node can mount the volume for reading and writing
  • ReadOnlyMany - multiple nodes can mount the volume for reading only
  • ReadWriteMany - multiple nodes can mount the volume for reading and writing
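
These access modes appear abbreviated as RWO, ROX and RWX in kubectl output; a quick, illustrative way to list the modes per PV:

kubectl get pv -o custom-columns=NAME:.metadata.name,MODES:.spec.accessModes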

Projected volume

A projected volume maps several existing volume sources into the same directory. Currently, the following types of volume sources can be projected:

  • secret
  • downwardAPI
  • configMap
  • serviceAccountToken


All sources are required to be in the same namespace as the Pod. For more details, see the all-in-one volume design document.
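
A minimal sketch of a Pod that projects a Secret and a ConfigMap into a single directory (the Secret and ConfigMap names below are hypothetical and must exist in the Pod's namespace):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-pod
spec:
  containers:
  - image: busybox
    name: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: mysecret      #hypothetical Secret
      - configMap:
          name: myconfigmap   #hypothetical ConfigMap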

Workflow

You create a

  • Deployment, which requests storage using a
  • PersistentVolumeClaim; the PVC then uses a PV directly or a StorageClass to provision disks
    • the StorageClass provisions the disks and creates a
  • PV (PersistentVolume), as shown in the sketch below
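
A minimal sketch of that first step, a Deployment mounting storage through a claim (the claim name matches the mongodb-pvc manifest used later on this page):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - image: mongo
        name: mongodb
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
      volumes:
      - name: mongodb-data
        persistentVolumeClaim:
          claimName: mongodb-pvc #the PVC obtains storage from a PV or a StorageClass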

Operations

In this example we use Google Cloud and run the commands from Cloud Shell. We create a disk using the gcloud command and use the created disk in a PersistentVolume manifest.

#Create a volume; the zone must match the zone of the K8s cluster node the pod will run on
gcloud compute disks create --size=1GiB --zone=europe-west1-b mongodb 
gcloud compute disks list
NAME                                               LOCATION        LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
gke-standard-cluster-1-default-pool-c43dab38-4qdn  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-553f  europe-west1-b  zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-c43dab38-qc0z  europe-west1-b  zone            100      pd-standard  READY
mongodb                                            europe-west1-b  zone            1        pd-standard  READY

#Create the pod using the manifest from the table below
kubectl apply -f mongodb.yaml

kubectl describe pod mongodb
...
Volumes:
  mongodb-data:
    Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName:     mongodb
    FSType:     ext4
    Partition:  0
    ReadOnly:   false
  default-token-fvhrt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fvhrt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               29s   default-scheduler        Successfully assigned default/mongodb to gke-standard-cluster-1-default-pool-c43dab38-553f
  Normal  SuccessfulAttachVolume  22s   attachdetach-controller  AttachVolume.Attach succeeded for volume "mongodb-data"


Persistent volume management can be done directly in the Pod manifest or with a dedicated PV object; the table below shows both approaches.
Pod (inline gcePersistentDisk) | PersistentVolume (dedicated object)
apiVersion: v1
kind: Pod
metadata:
  name: mongodb 
spec:
  volumes:
  - name: mongodb-data
    gcePersistentDisk:
      pdName: mongodb #gcloud disk name
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data #volume name from spec.volumes above
      mountPath: /data/db #where MongoDb stores its data
    ports:
    - containerPort: 27017 #standard MongoDB port
      protocol: TCP
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-pv
spec:
  capacity: 
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: mongodb
    fsType: ext4


Save data to the DB, then delete the pod and re-create it to verify data persistence

kubectl exec -it mongodb mongo #connect to MongoDB
> use mydb
switched to db mydb
> db.foo.insert({name:'foo'})
WriteResult({ "nInserted" : 1 })
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
exit

#Drain a node that the pod is running on
kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn   Ready    <none>   44m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f   Ready    <none>   44m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z   Ready    <none>   44m   v1.12.8-gke.10

kubectl get pods -owide
NAME      READY   STATUS    RESTARTS   AGE   IP         NODE                                                NOMINATED NODE
mongodb   1/1     Running   0          24m   10.4.2.4   gke-standard-cluster-1-default-pool-c43dab38-553f   <none>

kubectl drain gke-standard-cluster-1-default-pool-c43dab38-553f --ignore-daemonsets
kubectl get nodes
NAME                                                STATUS                     ROLES    AGE   VERSION
gke-standard-cluster-1-default-pool-c43dab38-4qdn   Ready                      <none>   51m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-553f   Ready,SchedulingDisabled   <none>   51m   v1.12.8-gke.10
gke-standard-cluster-1-default-pool-c43dab38-qc0z   Ready                      <none>   51m   v1.12.8-gke.10

#Create pod again
kubectl apply -f mongodb.yaml
pod/mongodb created
kubectl get pods -owide #notice it's running on different node now
NAME      READY   STATUS    RESTARTS   AGE    IP         NODE                                                NOMINATED NODE
mongodb   1/1     Running   0          104s   10.4.0.9   gke-standard-cluster-1-default-pool-c43dab38-4qdn   <none>

kubectl exec -it mongodb mongo #the data we created earlier should still be there
> use mydb
switched to db mydb
> db.foo.find()
{ "_id" : ObjectId("5d3aa87f4f89408f62df4e8b"), "name" : "foo" }
>
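
#Once verified, make the drained node schedulable again
kubectl uncordon gke-standard-cluster-1-default-pool-c43dab38-553f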


Persistent volumes can be managed like any other K8s resource. Use the YAML manifest from the second column of the table above to create one

kubectl get persistentvolume -owide
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mongodb-pv   512Mi      RWO,ROX        Retain           Available                                   61s

PersistentVolumeClaim

Persistent Volume Claims (PVCs) are a way for an application developer to request storage for the application without having to know where the underlying storage is. The claim is then bound to the Persistent Volume (PV), and it will not be released until the PVC is deleted.

Pod using PVC | PersistentVolumeClaim
apiVersion: v1
kind: Pod
metadata:
  name: mongodb 
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc #claimName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc #claimName
spec:
  resources:
    requests:
      storage: 1Gi # no more than PV, units: Gi,Mi
  accessModes:
  - ReadWriteOnce
  storageClassName: "" # default class 'standard' in GCE


Storage Object in Use Protection: a PV or PVC cannot be deleted while it is in use by a Pod; both carry a protection finalizer.
PV | PVC
kubectl describe pv mongodb-pv
...
Finalizers: [kubernetes.io/pv-protection]
...
Source:
    Type:    ...
    PDName:  mongodb
    FSType:  ext4
...
kubectl describe pvc mongodb-pvc
...
Finalizers: [kubernetes.io/pvc-protection]
...
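
A quick way to see the protection in action (assuming the mongodb pod still mounts the claim): the delete call returns, but the object lingers until nothing uses it.

kubectl delete pvc mongodb-pvc
kubectl get pvc mongodb-pvc #STATUS shows Terminating until the pod using the claim is deleted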

Expanding PVC Persistent Volume Claims

In AWS you need to add allowVolumeExpansion: true to the StorageClass; then edit the PVC's requested storage and the underlying PV (EBS volume) will be resized accordingly, as shown in the sketch after the StorageClass example below. For a safe operation, you can scale your Deployment/StatefulSet down to 0 to unbind the PVC. The filesystem should pick up the new space automatically; if not, you may need to run resize2fs or xfs_growfs, depending on the filesystem type of your volume.

kubectl get sc gp2 -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: gp2
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true # <-- add, GA since v1.16
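
With expansion allowed on the class, growing a claim is just raising its requested size; a minimal sketch (claim name and target size are illustrative):

kubectl patch pvc mongodb-pvc -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
kubectl get pvc mongodb-pvc #CAPACITY updates once the volume and the filesystem have been resized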

Storage Class

It's a K8s object that allows you to manage (create) physical storage using provisioners.

  • e.g. the Google GCE provisioner kubernetes.io/gce-pd is the equivalent of the command:
  • gcloud compute disks create --size=1GiB --zone=us-west1-a --type=pd-ssd mongodb-vol-ssd
  • the provisioner creates the disk in GCE with a prefixed-ID name, unlike the native gcloud command; see below
gcloud compute disks list
NAME                                                             LOCATION        LOCATION_SCOPE  SIZE_GB  TYPE    STATUS
gke-cluster-1-b4800067-pvc-8c2655c5-b360-11e9-93cd-42010a84024e  europe-west1-b  zone            1        pd-ssd  READY


A PersistentVolumeClaim requests storage from a class, and the class provisions the disk, formats it, sets the filesystem, etc.
Deployment --> PersistentVolumeClaim --> StorageClass
apiVersion: v1
kind: Pod
metadata:
  name: mongodb 
spec:
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb-pvc #claimName
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-pvc #claimName
spec:
  storageClassName: fast #className
  resources:
    requests:
      storage: 100Mi
  accessModes:
    - ReadWriteOnce
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast #ClassName
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd


List drives

gcloud compute disks list 
NAME                                                             LOCATION       LOCATION_SCOPE  SIZE_GB  TYPE         STATUS
gke-standard-cluster-1-default-pool-29207899-1s7d                us-west1-a     zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-29207899-7sxs                us-west1-a     zone            100      pd-standard  READY
gke-standard-cluster-1-default-pool-29207899-w4hh                us-west1-a     zone            100      pd-standard  READY
gke-standard-cluster-1-pvc-148a2097-b29b-11e9-b66b-42010a8a00c7  us-west1-a     zone            1        pd-ssd       READY #StorageClass
mongodb-vol                                                      us-west1-a     zone            7        pd-standard  READY
mongodb-vol-ssd                                                  us-west1-a     zone            1        pd-ssd       READY #gcloud created


Notice the default StorageClass named 'standard' is already created in GKE

kubectl describe storageclasses.storage.k8s.io
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
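
Switching which class is the default only requires flipping the annotation; an illustrative sketch using the classes from this page:

kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass fast -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'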


List storage classes

kubectl get sc
NAME                 PROVISIONER            AGE
fast                 kubernetes.io/gce-pd   9m11s    #storage class created
standard (default)   kubernetes.io/gce-pd   49m      #Default storage class, used when a PVC omits storageClassName

kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mongodb-pvc                 Bound    mongodb-pv                                 2Gi        RWO,ROX                       40m
mongodb-pvc-storage-class   Bound    pvc-148a2097-b29b-11e9-b66b-42010a8a00c7   1Gi        RWO            fast           12m

Local-storage

There are many volume types, incl. cloud disks (GCE PD, EBS), hostPath and emptyDir.

Pod with emptyDir (useful for sharing a temporary directory between containers in a pod) | hostPath (PV)
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-pod
spec:
  containers:
  - image: busybox
    name: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /tmp/storage
      name: vol
  volumes:
  - name: vol
    emptyDir: {}
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Example

Tmux session screenshot showing a PV created using the 'local-storage' StorageClass, a PVC claiming 1Gi, and a pod deployed with the volume mounted via the PVC. The deployed pod runs MongoDB; the bottom-left pane shows the pod's data on the node running it, under '/mnt/data'.

[Condensed from the tmux capture]

  • Top-left pane: watch kubectl get all --all-namespaces -o wide; pod/mongodb is Running on node ip-10-0-1-102 of a two-node cluster (ip-10-0-1-101 control plane, ip-10-0-1-102 worker) alongside the usual kube-system pods (coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, flannel daemonsets).
  • Top-right pane: watch kubectl get pv,pvc,sc; persistentvolume/mongodb-pv (1Gi, RWO, Retain, local-storage) is Bound to default/mongodb-pvc, and persistentvolumeclaim/mongodb-pvc (1Gi, RWO, local-storage) is Bound to mongodb-pv.
  • Bottom-right panes: the three manifests applied with kubectl apply -f mongodb-pv.yaml / pvc.yaml / pod.yaml: a hostPath PersistentVolume on /mnt/data (local-storage, 1Gi), a PersistentVolumeClaim requesting 1Gi from local-storage, and the mongodb Pod mounting claimName mongodb-pvc at /data/db.
  • Bottom-left pane: listing of /mnt/data on node ip-10-0-1-102 showing the MongoDB data files written by the pod (WiredTiger*, collection-*.wt, index-*.wt, journal/, diagnostic.data/, mongod.lock).

Resources