Kubernetes/Google GKE

Create a cluster

gcloud container clusters create \
    --cluster-version=1.13 --image-type=UBUNTU \
    --disk-type=pd-ssd --disk-size=10GB \
    --no-enable-cloud-logging --no-enable-cloud-monitoring \
    --addons=NetworkPolicy --issue-client-certificate \
    --zone europe-west1-b --username=admin cluster-1
#   --cluster-ipv4-cidr *.*.*.*
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: The Pod address range limits the maximum size of the cluster. Please refer to https://cloud.google.com/kubernetes-engine/docs/how-to/flexible-pod-cidr to learn how to optimize IP address allocation.
This will disable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster cluster-1 in europe-west1-b... Cluster is being health-checked (master is healthy)...done.              
Created [https://container.googleapis.com/v1/projects/responsive-sun-*****/zones/europe-west1-b/clusters/cluster-1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-west1-b/cluster-1?project=responsive-sun-*****
kubeconfig entry generated for cluster-1.

NAME       LOCATION        MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
cluster-1  europe-west1-b  1.13.7-gke.8    35.205.145.199  n1-standard-1  1.13.7-gke.8  3          RUNNING
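
To confirm the cluster is up and that kubectl can reach it, a couple of quick checks can be run (a minimal sketch; it assumes the kubeconfig entry generated above is the active one):

gcloud container clusters list          # cluster-1 should show STATUS RUNNING
kubectl get nodes -o wide               # all three nodes should report Ready
kubectl cluster-info                    # prints the master endpoint and core services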

# Free up some resources by scaling down DNS
watch -d kubectl get pods --all-namespaces
kubectl -n kube-system scale --replicas=0 deployment/kube-dns-autoscaler  # stop the autoscaler from scaling kube-dns back up
kubectl -n kube-system scale --replicas=1 deployment/kube-dns
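
To verify the scale-down took effect, list the two deployments; kube-dns-autoscaler should show 0 desired replicas and kube-dns should show 1:

kubectl -n kube-system get deployments kube-dns kube-dns-autoscaler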


GKE creates a LimitRange resource named limits in the default namespace, which gives Pods a 100m CPU request when none is specified. This default is too high for some services and can prevent workloads from scheduling.

kubectl get limitrange limits
NAME     CREATED AT
limits   2019-07-13T12:55:57Z
kubectl delete limitrange limits
limitrange "limits" deleted

Resize cluster

To save some money you can resize your cluster to zero nodes, because you are not billed for the control plane. The example below resizes the cluster-1 cluster.

$ gcloud container clusters resize cluster-1 --size=0
Pool [default-pool] for [cluster-1] will be resized to 0.
Do you want to continue (Y/n)?  y
Resizing cluster-1...done.                                                                                            
Updated [https://container.googleapis.com/v1/projects/responsive-sun-123456/zones/europe-west1-b/clusters/cluster-1]

All pods will be terminated, and system pods, e.g. kube-dns and metrics-server, will change status to Pending. They will be rescheduled once nodes are added back to the cluster.
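
To bring the cluster back, resize the node pool to the desired number of nodes again. Depending on the gcloud version, --num-nodes may be expected instead of the older --size flag.

gcloud container clusters resize cluster-1 --size=3
# on newer gcloud releases:
# gcloud container clusters resize cluster-1 --num-nodes=3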

Cluster commands

gcloud container clusters list
gcloud container clusters describe cluster-1
gcloud container clusters delete   cluster-1
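
A few related inspection commands; the default-pool name below assumes the default node pool that GKE creates.

gcloud container clusters describe cluster-1 --format='value(currentMasterVersion)'
gcloud container node-pools list --cluster=cluster-1
gcloud container node-pools describe default-pool --cluster=cluster-1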

Configure kubectl

$ gcloud container clusters get-credentials cluster-1 #--zone <europe-west1-b> --project <project>
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1
$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: AABBCCFF112233
    server: https://35.205.145.199
  name: gke_responsive-sun-246311_europe-west1-b_cluster-1
contexts:
- context:
    cluster: gke_responsive-sun-246311_europe-west1-b_cluster-1
    user: gke_responsive-sun-246311_europe-west1-b_cluster-1
  name: gke_responsive-sun-246311_europe-west1-b_cluster-1
current-context: gke_responsive-sun-246311_europe-west1-b_cluster-1
kind: Config
preferences: {}
users:
- name: gke_responsive-sun-246311_europe-west1-b_cluster-1
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
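
With the kubeconfig entry in place, kubectl contexts can be listed and switched; the context name below matches the entry generated above.

kubectl config get-contexts
kubectl config use-context gke_responsive-sun-246311_europe-west1-b_cluster-1
kubectl config current-context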