Kubernetes/Install Master and nodes


This example is based on Ubuntu 16.04 LTS

Updates

  • Updated the Kubernetes package versions to 1.13.10-00 to align with the AWS (EKS) version. The command outputs below have not been refreshed, but all commands worked.

Install binaries

#Docker gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
#Docker repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 
#Kubernetes gpg key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
#Kubernetes repository
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Install software
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.10-00 kubeadm=1.13.10-00 kubectl=1.13.10-00
# Set packages at the current versions so they won't autoupdate
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
# Enable iptables to see bridged traffic, then apply the setting immediately
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
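
As an optional sanity check (not part of the original steps), the pinned versions can be confirmed like this:

#Verify installed versions
docker --version
kubeadm version
kubectl version --client --short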

Initialize a cluster

Run only on the master to initialize the cluster

sudo kubeadm init --pod-network-cidr=10.100.0.0/16 # flannel default is 10.244.0.0/16

I0705 06:23:54.675905   24293 version.go:237] remote version is much newer: v1.15.0; falling back to: stable-1.13
[init] Using Kubernetes version: v1.13.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster.acme.com localhost] and IPs [172.31.115.255 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster.acme.com localhost] and IPs [172.31.115.255 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster.acme.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.115.255]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.002150 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubemaster.acme.com" as an annotation
[mark-control-plane] Marking the node kubemaster.acme.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster.acme.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xkcoul.0i2m*******ockj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.115.255:6443 --token xkcoul.0i2m*******ockj --discovery-token-ca-cert-hash sha256:808*******6a


Set up local kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
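
With the kubeconfig in place, kubectl should now reach the API server; note that the master typically reports NotReady until a pod network add-on is installed:

kubectl get nodes #expect STATUS NotReady until the Flannel overlay below is applied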


Create the Flannel CNI network overlay so pods on different nodes can communicate with each other. The default Flannel pod network is 10.244.0.0/16; if your cluster was initialized with a different --pod-network-cidr (10.100.0.0/16 above), you need to update the manifest accordingly (see the example after the output below).

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
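
If your pod network CIDR differs from Flannel's default, one possible approach (a sketch, not from the original article) is to download the manifest, adjust the Network value and apply the local copy:

#Download the same manifest revision, change the pod network, then apply
curl -sSL -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
sed -i 's|10.244.0.0/16|10.100.0.0/16|' kube-flannel.yml #match the --pod-network-cidr used at kubeadm init
kubectl apply -f kube-flannel.yml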

Join worker nodes

Run on each worker node to join it to the master

sudo kubeadm join 172.31.115.255:6443 --token xkcoul.0i2m*******ockj --discovery-token-ca-cert-hash sha256:808*******6a
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.31.115.255:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.115.255:6443"
[discovery] Requesting info from "https://172.31.115.255:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.115.255:6443"
[discovery] Successfully established connection with API Server "172.31.115.255:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeworker1.acme.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.


Verify nodes joined the cluster

$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
kubemaster.acme.com   Ready    master   16m     v1.13.5
kubeworker1.acme.com  Ready    <none>   2m45s   v1.13.5
kubeworker2.acme.com  Ready    <none>   2m39s   v1.13.5

Highly Available Kubernetes Cluster

The output below shows an out-of-the-box K8s installation. Note that not all components are replicated. Even if we do replicate components, not all of them can run in active mode. For example, because the Controller Manager and the Scheduler constantly watch for cluster events, only one instance of each can be active (the leader); the remaining replicas stay in standby mode.

#View cluster components
kubectl get pods -o custom-columns=POD:metadata.name,NODE:spec.nodeName --sort-by spec.nodeName -n kube-system
POD                                            NODE
coredns-86c58d9df4-cdl5a                       kube-master.acme.com
coredns-86c58d9df4-csxca                       kube-master.acme.com
etcd-kube-master.acme.com                      kube-master.acme.com
kube-apiserver-kube-master.acme.com            kube-master.acme.com
kube-controller-manager-kube-master.acme.com   kube-master.acme.com
kube-scheduler-kube-master.acme.com            kube-master.acme.com
kube-flannel-ds-amd64-cwd74                    kube-master.acme.com
kube-proxy-z264w                               kube-master.acme.com
kube-proxy-fxl6f                               kube-worker-1.acme.com
kube-flannel-ds-amd64-c7hva                    kube-worker-1.acme.com
kube-flannel-ds-amd64-c5p9a                    kube-worker-2.acme.com
kube-proxy-jtbwm                               kube-worker-2.acme.com

#View details of a component; the kube-scheduler endpoints object holds its leader election record
kubectl get endpoints kube-scheduler -n kube-system -o yaml
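
To see which instance currently holds the leader lock, you can inspect the leader election annotation on the endpoints object (a sketch; the record contains a holderIdentity field):

kubectl get endpoints kube-controller-manager -n kube-system -o yaml | grep holderIdentity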

Testing cluster end-to-end

kubectl run nginx --image=nginx #deployment test; creates an nginx deployment
kubectl get deployments
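
The commands below use $pod_name; one way to capture it (assuming the default run=nginx label created by 'kubectl run'):

pod_name=$(kubectl get pods -l run=nginx -o jsonpath='{.items[0].metadata.name}')
echo $pod_name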

#access a pod directly using port-forwarding by creating a local listener on port 8080 that forwards traffic to the pod's port 80
kubectl port-forward $pod_name 8080:80&

#Check a response from the 'nginx' pod directly. Run from a master or where kubectl is installed
curl --head   http://127.0.0.1:8080 

kubectl logs $pod_name             #view pod's logs
kubectl exec -it nginx -- nginx -v #run a command directly in the container

#Create a service by exposing port 80 of the nginx deployment
kubectl expose deployment nginx --port 80 --type NodePort

#List the services in your cluster; the service is exposed on node port 31839
kubectl get service nginx -o wide
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
nginx   NodePort   10.110.225.169   <none>        80:31839/TCP   2m26s   run=nginx
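
The curl test below uses $node_port; it can be read from the service with jsonpath (a small helper, not in the original):

node_port=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')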

#Check a response from the service
curl -I localhost:$node_port #run on a worker node
HTTP/1.1 200 OK
Server: nginx/1.17.1
Date: Sun, 07 Jul 2019 07:36:00 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 25 Jun 2019 12:19:45 GMT
Connection: keep-alive
ETag: "5d121161-264"
Accept-Ranges: bytes

#Check information about nodes
kubectl get nodes
kubectl describe nodes

Upgrade

kubeadm allows us to upgrade cluster components in a controlled way; the upgrade process verifies which components can be upgraded. Some of the output examples below are based on Amazon EKS, but EKS does not give access to the control plane, so the API Server and kube-controller-manager are not visible there.

#Verify different components versions
kubectl version --short #check versions of the API client (kubectl) and the API server
Client Version: v1.15.0
Server Version: v1.12.6-eks-d69f1b

kubectl get pods -n kube-system kube-controller-manager-<podUID> -o yaml | grep image: #image version of the controller manager (check the scheduler pod the same way)
    image: k8s.gcr.io/kube-controller-manager:v1.12.10
    image: k8s.gcr.io/kube-controller-manager:v1.12.10
kubectl get pods -n kube-system                                          #list kube-system pods to find the controller-manager pod name

#Set the VERSION and ARCH you want to upgrade to
export VERSION=v1.13.8; export ARCH=amd64

#Download Kubernetes binaries
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > kubeadm

sudo install -o root -g root -m 0755 ./kubeadm /usr/bin/kubeadm #install kubeadm
sudo kubeadm version
sudo kubeadm upgrade plan     #plan upgrade
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.12.10
[upgrade/versions] kubeadm version: v1.13.8
I0711 07:32:05.071864   17589 version.go:237] remote version is much newer: v1.15.0; falling back to: stable-1.13
[upgrade/versions] Latest stable version: v1.13.8
[upgrade/versions] Latest version in the v1.12 series: v1.12.10

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.12.2   v1.13.8

Upgrade to the latest stable version:

COMPONENT            CURRENT    AVAILABLE
API Server           v1.12.10   v1.13.8
Controller Manager   v1.12.10   v1.13.8
Scheduler            v1.12.10   v1.13.8
Kube Proxy           v1.12.10   v1.13.8
CoreDNS              1.2.2      1.2.6
Etcd                 3.2.24     3.2.24

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.13.8

#Upgrade
sudo kubeadm upgrade apply v1.13.8 #apply the upgrade

#You can check differences between the old and new manifests if you saved them before the upgrade
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml #current
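
A possible way to keep the previous manifests for such a comparison (a sketch, not part of the original steps) is to copy them aside before the upgrade and diff afterwards:

sudo cp -a /etc/kubernetes/manifests /etc/kubernetes/manifests.bak #run before 'kubeadm upgrade apply'
sudo diff -u /etc/kubernetes/manifests.bak/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml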

#Download the latest version of kubelet:
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet
sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet #install kubelet
sudo systemctl restart kubelet.service #restart is required
kubectl get nodes -w #check nodes versions

#Now upgrade the kubelet on all worker nodes, one at a time (a possible sequence is sketched below)
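
A possible per-worker sequence (a sketch that combines the kubelet replacement above with the drain commands from the next section; adjust to your environment):

#On the master: move workloads off the node first
kubectl drain <node> --ignore-daemonsets
#On the worker: replace the kubelet binary and restart the service (export VERSION and ARCH as above)
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > kubelet
sudo install -o root -g root -m 0755 ./kubelet /usr/bin/kubelet
sudo systemctl restart kubelet.service
#On the master: let the node schedule pods again
kubectl uncordon <node>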

Controlling nodes during upgrade

  • if a node's downtime is under 5 minutes, the node controller will restart its pods on the same node; if the downtime is longer, the pods will be deleted and rescheduled elsewhere
  • so before upgrading a node, check which pods are running on it, then evict them (see the field-selector example after the commands below)
kubectl drain <node> --ignore-daemonsets
kubectl get nodes #the drained node will be in 'Ready,SchedulingDisabled' status
#the node now can be upgraded
kubectl uncordon <node> #bring back to participate in a cluster
kubectl delete node <node> #delete the node from the cluster; remember to drain it first
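
To check which pods are running on a given node before draining it (a sketch using a field selector):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node>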

Add new node to a cluster

On the master node, create a join token

sudo kubeadm token generate #generates a token string only; the value below is an example and is not registered until 'kubeadm token create'
123abc.abcabcabcabc
sudo kubeadm token list
sudo kubeadm token create 123abc.abcabcabcabc --ttl 2h --print-join-command

#Run the join command on the new worker node after installing the required packages (Docker, kubelet, kubeadm)
sudo kubeadm join <IP>:6443 --token 123abc.abcabcabcabc --discovery-token-ca-cert-hash sha256:****
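
If the --discovery-token-ca-cert-hash value is not at hand, it can be recomputed on the master from the cluster CA certificate (the standard openssl pipeline, shown here as a sketch):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'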

Verify cluster properties

Check the podCIDR range assigned to a node
kubectl get nodes <node> -oyaml | grep podCIDR
spec:
  podCIDR: 10.244.0.0/24
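
Similarly, the service CIDR can be read from the kube-apiserver static pod manifest on the master (assumes a kubeadm-provisioned control plane):

sudo grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml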
