Kubernetes/Install Master and nodes
This example is based on Ubuntu 16.04 LTS (Xenial).
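Depending on the base image, you may also need to disable swap before initializing the cluster, since kubeadm's preflight checks fail when swap is enabled. This step is an assumption and not part of the original walkthrough; a minimal sketch:

<source lang=bash>
#Assumption: the node may have swap enabled; kubeadm refuses to initialize with swap on
sudo swapoff -a
#Comment out any swap entries in /etc/fstab so the change persists across reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab
</source>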
= Install binaries =
<source lang=bash>
#Docker gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

#Docker repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

#Kubernetes gpg key
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

#Kubernetes repository
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

#Install software
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00

#Hold packages at the current versions so they won't auto-update
sudo apt-mark hold docker-ce kubelet kubeadm kubectl

#Add the iptables rule to sysctl.conf, then apply it immediately
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
</source>
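Optionally, confirm that the pinned versions were installed and that the packages are held. This is a quick sanity check, not part of the original steps:

<source lang=bash>
docker --version                  #expect 18.06.1-ce
kubeadm version -o short          #expect v1.13.5
kubectl version --client -o yaml  #client version only; the cluster is not up yet
apt-mark showhold                 #docker-ce, kubelet, kubeadm and kubectl should be listed
</source>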
= Initialize a cluster =
Run the following only on the master node to initialize the cluster:
<source lang=bash>
sudo kubeadm init --pod-network-cidr=10.100.0.0/16
I0705 06:23:54.675905   24293 version.go:237] remote version is much newer: v1.15.0; falling back to: stable-1.13
[init] Using Kubernetes version: v1.13.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster.acme.com localhost] and IPs [172.31.115.255 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster.acme.com localhost] and IPs [172.31.115.255 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster.acme.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.115.255]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.002150 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubemaster.acme.com" as an annotation
[mark-control-plane] Marking the node kubemaster.acme.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster.acme.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xkcoul.0i2m*******ockj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.31.115.255:6443 --token xkcoul.0i2m*******ockj --discovery-token-ca-cert-hash sha256:808*******6a
</source>
Set up local kubeconfig:
<source lang=bash>
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</source>
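As a quick check (not in the original notes), verify that kubectl can now reach the API server with the copied kubeconfig:

<source lang=bash>
kubectl cluster-info             #should print the master URL
kubectl get pods -n kube-system  #control plane pods; coredns stays Pending until a pod network is deployed
</source>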
Deploy the Flannel CNI network overlay so that pods running on different nodes can communicate with each other:
<source lang=bash>
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
</source>
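A hedged way to confirm the overlay is up (exact DaemonSet and pod names can differ between Flannel releases):

<source lang=bash>
#The flannel DaemonSet should report one ready pod per node
kubectl get daemonset -n kube-system
#Nodes switch to Ready once the CNI plugin is running; the coredns pods should also become Running
kubectl get nodes
kubectl get pods -n kube-system -o wide
</source>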
= Join worker nodes =
Run the following on each worker node to join it to the master:
<source lang=bash>
sudo kubeadm join 172.31.115.255:6443 --token xkcoul.0i2m*******ockj --discovery-token-ca-cert-hash sha256:808*******6a
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.31.115.255:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.115.255:6443"
[discovery] Requesting info from "https://172.31.115.255:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.115.255:6443"
[discovery] Successfully established connection with API Server "172.31.115.255:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeworker1.acme.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
</source>
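If the bootstrap token printed by <tt>kubeadm init</tt> has expired (they are valid for 24 hours by default), a fresh join command can be generated on the master. This is standard kubeadm usage rather than part of the original notes:

<source lang=bash>
sudo kubeadm token list                          #show existing bootstrap tokens
sudo kubeadm token create --print-join-command   #issue a new token and print the full 'kubeadm join' command
</source>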
Verify that the nodes have joined the cluster:
<source lang=bash>
$ kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
kubemaster.acme.com    Ready    master   16m     v1.13.5
kubeworker1.acme.com   Ready    <none>   2m45s   v1.13.5
kubeworker2.acme.com   Ready    <none>   2m39s   v1.13.5
</source>
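Worker nodes carry no role label out of the box, so their ROLES column stays empty. If you want it to read <tt>worker</tt>, you can add the label yourself; this is purely cosmetic and assumes the node names shown above:

<source lang=bash>
kubectl label node kubeworker1.acme.com node-role.kubernetes.io/worker=
kubectl label node kubeworker2.acme.com node-role.kubernetes.io/worker=
kubectl get nodes
</source>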
= Highly Available Kubernetes Cluster =
The output below shows an out-of-the-box K8s installation. Note that not all components are replicated. Even if we did replicate them, not all of them could run in active mode at the same time: because the Controller Manager and the Scheduler constantly watch for cluster events, only one instance of each can be active, while the remaining instances stay in standby mode.
<source lang=bash>
#View cluster components
kubectl get pods -o custom-columns=POD:metadata.name,NODE:spec.nodeName --sort-by spec.nodeName -n kube-system
POD                                             NODE
coredns-86c58d9df4-cdl5a                        kube-master.acme.com
coredns-86c58d9df4-csxca                        kube-master.acme.com
etcd-kube-master.acme.com                       kube-master.acme.com
kube-apiserver-kube-master.acme.com             kube-master.acme.com
kube-controller-manager-kube-master.acme.com    kube-master.acme.com
kube-scheduler-kube-master.acme.com             kube-master.acme.com
kube-flannel-ds-amd64-cwd74                     kube-master.acme.com
kube-proxy-z264w                                kube-master.acme.com
kube-proxy-fxl6f                                kube-worker-1.acme.com
kube-flannel-ds-amd64-c7hva                     kube-worker-1.acme.com
kube-flannel-ds-amd64-c5p9a                     kube-worker-2.acme.com
kube-proxy-jtbwm                                kube-worker-2.acme.com

#View details of components
kubectl get endpoints kube-scheduler -n kube-system -o yaml
</source>
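To see which instance currently holds the leader lock, you can inspect the leader-election annotation on the scheduler or controller manager endpoint; a minimal sketch, assuming the endpoint-based leader election used by this K8s version:

<source lang=bash>
#The holderIdentity field in the annotation names the node whose instance is currently active
kubectl get endpoints kube-scheduler -n kube-system -o yaml | grep holderIdentity
kubectl get endpoints kube-controller-manager -n kube-system -o yaml | grep holderIdentity
</source>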
= References =
* Creating Highly Available Kubernetes Clusters with kubeadm
* Highly Available Topologies in Kubernetes
* [https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ Operating a Highly Available etcd Cluster]
= Testing cluster end-to-end =
<source lang=bash>
kubectl run nginx --image=nginx   #deployment test, run e.g. nginx
kubectl get deployments

#Access a pod directly using port-forwarding by creating a local listener on port 8080 that forwards traffic to the pod's port 80
pod_name=$(kubectl get pods -l run=nginx -o jsonpath='{.items[0].metadata.name}')   #the pod created by the deployment carries the run=nginx label
kubectl port-forward $pod_name 8080:80 &
curl --head http://127.0.0.1:8080            #check a response from the 'nginx' pod directly
kubectl logs $pod_name                       #view the pod's logs
kubectl exec -it $pod_name -- nginx -v       #run a command directly on a container

#Create a service by exposing port 80 of the nginx deployment
kubectl expose deployment nginx --port 80 --type NodePort

#List the services in your cluster; you can see the service is exposed on port 31839
kubectl get service nginx -o wide
NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE     SELECTOR
nginx   NodePort   10.110.225.169   <none>        80:31839/TCP   2m26s   run=nginx

#Get a response from the service (run from a worker node)
node_port=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I localhost:$node_port

#Get information about nodes
kubectl get nodes
kubectl describe nodes
</source>

= References =
*[https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/e2e-tests.md Kubetest]
*[https://kubernetes.io/docs/getting-started-guides/ubuntu/ Test a Juju Cluster]

[[Category:kubernetes]]