Kubernetes/Amazon EKS
= Theory =
== Kubernetes ==
;service: consider it a software load balancer
*Kubernetes resource that provides an abstraction over your Pods, agnostic of the specific instances that are running
*emulates a software load balancer within Kubernetes
*can contain a Policy
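A minimal sketch of that abstraction, assuming a generic <code>nginx</code> image and names invented for illustration: the Service load-balances across whatever Pods currently match its selector.
<source lang="bash">
# Hypothetical example: a Deployment with 3 replicas and a Service in front of it
kubectl create deployment demo-web --image=nginx
kubectl scale deployment demo-web --replicas=3
kubectl expose deployment demo-web --port=80 --target-port=80 --type=ClusterIP
# The Service tracks the Pod endpoints it balances traffic across
kubectl get endpoints demo-web
kubectl delete service,deployment demo-web   #clean up
</source>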
= kubectl =
<source lang=bash>
kubectl get nodes
kubectl get pods --watch
kubectl describe pod <pod-name> #shows events
kubectl run letskube-deployment --image=acrtest.azurecr.io/letskube:v2 --port=80 --replicas=3
kubectl expose deployment letskube-deployment --type=NodePort
kubectl delete deployment letskube-deployment
kubectl create -f .\letskubedeploy.yml
kubectl get service --watch #for EXTERNAL-IP to be allocated
kubectl scale --replicas=55 deployment/letskube-deployment
</source>
= kubectx =
*<code>kubectx</code> helps you switch between clusters back and forth
*<code>kubens</code> helps you switch between Kubernetes namespaces smoothly
<source lang=bash>
git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
COMPDIR=$(pkg-config --variable=completionsdir bash-completion)
ln -sf ~/.kubectx/completion/kubens.bash $COMPDIR/kubens
ln -sf ~/.kubectx/completion/kubectx.bash $COMPDIR/kubectx
cat << FOE >> ~/.bashrc
#kubectx and kubens
export PATH=\$PATH:~/.kubectx
FOE
</source>
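Typical usage once installed; the context and namespace names below are placeholders.
<source lang="bash">
kubectx                  #list contexts from the current kubeconfig
kubectx my-eks-cluster   #switch to the context named 'my-eks-cluster'
kubectx -                #switch back to the previous context
kubens                   #list namespaces in the current context
kubens kube-system       #make 'kube-system' the default namespace
kubens -                 #switch back to the previous namespace
</source>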
= Azure AKS =
== Setting up kubectl ==
=== Powershell ===
<source lang="powershell">
$env:KUBECONFIG="$env:HOMEPATH\.kube\aksconfig"
PS C:\> kubectl config current-context #show current context, the default cluster managed by kubectl
PS C:\> Get-Content $env:KUBECONFIG | sls context
contexts:
- context:
current-context: aks-test-cluster
</source>
=== Bash ===
<source>
export KUBECONFIG=~/.kube/aksconfig
</source>
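Either way, a quick sanity check that <code>kubectl</code> points at the intended cluster (the context name below is the example from above):
<source lang="bash">
kubectl config get-contexts                   #list contexts known to the active kubeconfig
kubectl config current-context                #show the context in use
kubectl config use-context aks-test-cluster   #switch explicitly if needed
kubectl cluster-info                          #confirm which API server the context resolves to
</source>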
= Intro into Amazon EKS =
''This intro information is valid at the time of writing this section.''
AWS EKS supports only Kubernetes version '''1.10.3'''.
By default, Amazon EKS provides AWS CloudFormation templates to spin up your worker nodes with the Amazon EKS-optimized AMI. This AMI is built on top of '''Amazon Linux 2'''. The AMI is configured to work with Amazon EKS out of the box and it includes '''Docker 17.06.2-ce''' (with '''overlay2''' as a Docker '''storage driver'''), '''Kubelet 1.10.3''', and the '''AWS authenticator'''. The AMI also launches with specialized Amazon EC2 user data that allows it to discover and connect to your cluster's control plane automatically.
The AWS VPC '''container network interface (CNI) plugin''' is responsible for providing pod networking in Kubernetes using Elastic Network Interfaces (ENI) on AWS. Amazon EKS works with Calico by Tigera to integrate with the CNI plugin to provide fine grained networking policies.
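On a running EKS cluster the VPC CNI plugin ships as the <code>aws-node</code> DaemonSet in the <code>kube-system</code> namespace; the names and label below are assumed from the standard EKS setup.
<source lang="bash">
kubectl get daemonset aws-node -n kube-system
kubectl describe daemonset aws-node -n kube-system | grep Image
#each worker node should run one aws-node pod managing its ENIs
kubectl get pods -n kube-system -l k8s-app=aws-node -o wide
</source>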
The Amazon EKS service is available, at the time of writing this in November 2018, only in the following regions:
* US East (N. Virginia) - us-east-1
* US East (Ohio) - us-east-2
* US West (Oregon) - us-west-2
* EU (Ireland) - eu-west-1
= Bootstrap/create EKS Cluster =
<source lang="bash">
# Generate an ssh key to be used to connect to the Kubernetes EKS EC2 worker instances
ssh-keygen

# Install kubectl
mkdir -p ~/.kube
#Get the latest version
sudo curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
#Get the 1.10.3 version
sudo curl --location -o /usr/local/bin/kubectl "https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl"
sudo chmod +x /usr/local/bin/kubectl
kubectl version --short --client
kubectl <operation> <object> <resource_name> <optional_flags>

# Install aws-iam-authenticator
#download option 1: build with go, then mv
go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
sudo mv ~/go/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
#download option 2: straight to /usr/bin/aws-iam-authenticator
sudo curl https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator -o /usr/bin/aws-iam-authenticator
aws-iam-authenticator help

# Install jq
sudo yum -y install jq     #Amazon Linux
sudo apt-get install jq -y #Ubuntu

# Configure awscli
rm -vf ${HOME}/.aws/credentials
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region

# Install eksctl by Weaveworks
curl --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
eksctl version

# Create the EKS cluster
$ eksctl create cluster --name=eksworkshop-eksctl --nodes=3 --node-ami=auto --region=${AWS_REGION}
2018-11-24T12:54:41Z [ℹ]  using region eu-west-1
2018-11-24T12:54:42Z [ℹ]  setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2018-11-24T12:54:42Z [ℹ]  subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2018-11-24T12:54:42Z [ℹ]  subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2018-11-24T12:54:42Z [ℹ]  subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2018-11-24T12:54:43Z [ℹ]  using "ami-00c3b2d35bdddffff" for nodes
2018-11-24T12:54:43Z [ℹ]  creating EKS cluster "eksworkshop-eksctl" in "eu-west-1" region
2018-11-24T12:54:43Z [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-11-24T12:54:43Z [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --name=eksworkshop-eksctl'
2018-11-24T12:54:43Z [ℹ]  creating cluster stack "eksctl-eksworkshop-eksctl-cluster"
2018-11-24T13:06:38Z [ℹ]  creating nodegroup stack "eksctl-eksworkshop-eksctl-nodegroup-0"
2018-11-24T13:10:16Z [✔]  all EKS cluster resource for "eksworkshop-eksctl" had been created
2018-11-24T13:10:16Z [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2018-11-24T13:10:16Z [ℹ]  the cluster has 0 nodes
2018-11-24T13:10:16Z [ℹ]  waiting for at least 3 nodes to become ready
2018-11-24T13:10:47Z [ℹ]  the cluster has 3 nodes
2018-11-24T13:10:47Z [ℹ]  node "ip-192-168-13-5.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ]  node "ip-192-168-41-230.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ]  node "ip-192-168-79-54.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ]  kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2018-11-24T13:10:47Z [✔]  EKS cluster "eksworkshop-eksctl" in "eu-west-1" region is ready

# Verify EKS cluster nodes
kubectl get nodes
NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-13-5.eu-west-1.compute.internal     Ready     <none>    1h        v1.10.3
ip-192-168-41-230.eu-west-1.compute.internal   Ready     <none>    1h        v1.10.3
ip-192-168-79-54.eu-west-1.compute.internal    Ready     <none>    1h        v1.10.3

# Get info about the cluster
eksctl get cluster --name=eksworkshop-eksctl --region=${AWS_REGION}
NAME                    VERSION STATUS  CREATED                 VPC                     SUBNETS                SECURITYGROUPS
eksworkshop-eksctl      1.10    ACTIVE  2018-11-24T12:55:28Z    vpc-0c97f8a6dabb11111   subnet-05285b6c692711111,subnet-0a6626ec2c0111111,subnet-0c5e839d106f11111,subnet-0d9a9b34be5511111,subnet-0f297fefefad11111,subnet-0faaf1d3dedd11111   sg-083fbc37e4b011111
</source>
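Once the cluster is up, eksctl writes a kubeconfig that calls <code>aws-iam-authenticator</code> for credentials. A hedged way to confirm the wiring, assuming the kubeconfig uses the exec plugin as eksctl configures it:
<source lang="bash">
kubectl config view --minify                                                 #show only the active context and its user
kubectl config view --minify -o jsonpath='{.users[0].user.exec.command}'     #should print aws-iam-authenticator
aws-iam-authenticator token -i eksworkshop-eksctl --token-only | cut -c1-40  #the authenticator itself works
</source>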
== Deploy the Official Kubernetes Dashboard ==
<source lang="bash">
# Deploy the dashboard from the official config sources. You can also download the files and deploy them locally.
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
# Start kubectl proxy to enable access to the application (dashboard) from the Internet
# start the proxy in the background, listen on port 8080, listen on all interfaces, and will disable the filtering of non-localhost requests
kubectl proxy --port=8080 --address='0.0.0.0' --disable-filter=true &
W1124 14:47:55.308424  14460 proxy.go:138] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:8080
</source>
;Access dashboard
Generate a temporary token to log in to the dashboard
<source lang="bash">
aws-iam-authenticator token -i eksworkshop-eksctl --token-only
</source>
Go to a web browser, point it at the kube-proxy address, and append the following path to the URL
<source>
/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</source>
Select ''token'' sign-in and paste the token to log in.
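Putting the pieces together, assuming the proxy started above listens on port 8080 of the local machine:
<source lang="bash">
#full dashboard URL through kube-proxy (host/port depend on where the proxy runs)
echo "http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
#token to paste into the 'token' sign-in field
aws-iam-authenticator token -i eksworkshop-eksctl --token-only
</source>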
== Deploy sample applications ==
The containers listen on port 3000, and native service discovery will be used to locate the running containers and communicate with them.
<source lang="bash">
# Download deployable sample applications
mkdir ~/environment #place of deployables to EKS, applications, policies etc
cd ~/environment
git clone https://github.com/brentley/ecsdemo-frontend.git
git clone https://github.com/brentley/ecsdemo-nodejs.git
git clone https://github.com/brentley/ecsdemo-crystal.git
### Deploy applications
# NodeJS Backend API
cd ecsdemo-nodejs
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-nodejs # watch progress
# Crystal Backend API
cd ~/environment/ecsdemo-crystal
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-crystal
</source>
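The backends just deployed are reachable through native service discovery (cluster DNS). A hedged check from inside one of the pods, assuming the default namespace and that the container image ships <code>curl</code>:
<source lang="bash">
POD=$(kubectl get pods -l app=ecsdemo-nodejs -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD -- curl -s http://ecsdemo-crystal.default.svc.cluster.local/
</source>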
Before deploying the frontend application, let's see how the service manifest differs between the backend and frontend services.
{| class="wikitable"
|+ kubernetes/service.yaml
|-
! frontend service
! backend service
|-
| <source lang="json">
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  selector:
    app: ecsdemo-frontend
  type: LoadBalancer
  ports:
  -  protocol: TCP
      port: 80
      targetPort: 3000
</source>
| <source lang="json">
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-nodejs
spec:
  selector:
    app: ecsdemo-nodejs
  type: ClusterIP  <-- this is default
  ports:
  -  protocol: TCP
      port: 80
      targetPort: 3000
</source>
|}
Notice there is no need to specify a service type for the '''backend''' because the default type is <code>ClusterIP</code>. This exposes the service on a cluster-internal IP, making it reachable only from within the cluster. The '''frontend''', by contrast, has <code>type: LoadBalancer</code> so it can be reached from outside the cluster.
The '''frontend''' service will attempt to create an ELB and therefore requires access to the Elastic Load Balancing service. This is controlled by an IAM service-linked role that needs to be created if it does not exist.
<source lang="bash">
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
</source>
Deploy frontend service
<source lang="bash">
cd ~/environment/ecsdemo-frontend
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-frontend
# Get service address
kubectl get service ecsdemo-frontend -o wide
ELB=$(kubectl get service ecsdemo-frontend -o json | jq -r '.status.loadBalancer.ingress[].hostname')
curl -m3 -v $ELB #You can also open this in a web browser
</source>
== Scale backend services ==
<source lang="bash">
kubectl scale deployment ecsdemo-nodejs --replicas=3
kubectl scale deployment ecsdemo-crystal --replicas=3
kubectl get deployments
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
ecsdemo-crystal    3        3        3            3          38m
ecsdemo-frontend  1        1        1            1          20m
ecsdemo-nodejs    3        3        3            3          40m
# Watch scaling in action
$ i=3; kubectl scale deployment ecsdemo-nodejs --replicas=$i; kubectl scale deployment ecsdemo-crystal --replicas=$i
$ watch -d -n 0.5 kubectl get deployments
</source>
Check the browser; you should now see traffic flowing to the scaled backend services.
== Delete the applications ==
<source lang="bash">
cd ~/environment/ecsdemo-frontend
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
cd ~/environment/ecsdemo-crystal
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
cd ~/environment/ecsdemo-nodejs
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
</source>
= Networking using Calico =
;Install
The commands below install the Calico manifest, which creates the DaemonSet in the kube-system namespace.
<source lang="bash">
wget https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.2/calico.yaml
kubectl apply -f calico.yaml
kubectl get daemonset calico-node --namespace=kube-system
</source>
See more details on the [https://eksworkshop.com/calico/install_calico/ eksworkshop.com] website.
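To confirm the DaemonSet is running on every node (the pod label below is assumed from the standard Calico manifest):
<source lang="bash">
kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
</source>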
== Network policy demo ==
Before creating network policies, we will create the required resources.
<source lang="bash">
mkdir calico_resources && cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/namespace.yaml
kubectl apply -f namespace.yaml # create namespace
# Download manifests for the other resources
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/management-ui.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/backend.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/frontend.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/client.yaml
kubectl apply -f management-ui.yaml
kubectl apply -f backend.yaml
kubectl apply -f frontend.yaml
kubectl apply -f client.yaml
kubectl get pods --all-namespaces
</source>
Resources we created:
* A namespace called '''stars'''
* '''frontend''' and '''backend''' replication controllers and services within '''stars''' namespace
* A namespace called '''management-ui'''
* Replication controller and service '''management-ui''' for the user interface seen on the browser, in the '''management-ui''' namespace
* A namespace called '''client'''
* '''client''' replication controller and service in '''client''' namespace
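A quick way to confirm the resources listed above came up:
<source lang="bash">
kubectl get ns stars management-ui client
kubectl get rc,svc,pods -n stars
kubectl get rc,svc,pods -n management-ui
kubectl get rc,svc,pods -n client
</source>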
== Pod-to-Pod communication ==
In Kubernetes, the pods by default can communicate with other pods, regardless of which host they land on. Every pod gets its own IP address so you do not need to explicitly create links between pods. This is demonstrated by the management-ui.
<source>
$ cat management-ui.yaml
kind: Service
metadata:
  name: management-ui
  namespace: management-ui
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 9001
# Get Management UI dns name
kubectl get svc -o wide -n management-ui
</source>
If you open the URL you will see a visual star of connections between the Pods (B-C-F: backend, client, frontend). The UI here shows the default behaviour of all services being able to reach each other.
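The same default-open behaviour can be verified from the command line; a sketch assuming the client pod image includes <code>wget</code> (the pod name is discovered, and the frontend service listens on port 80 in the <code>stars</code> namespace):
<source lang="bash">
CLIENT_POD=$(kubectl get pods -n client -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n client -it $CLIENT_POD -- wget -qO- -T 3 http://frontend.stars:80
</source>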
== Apply network policies ==
By default all Pods can talk to each other, which is not something we should allow in production environments. So, let's apply policies:
<source lang="bash">
cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
cat default-deny.yaml #not all output shown below
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
# Create deny policies in the following namespaces: 'stars' and 'client'. The web browser won't show anything, as the UI won't have access to the pods.
kubectl apply -n stars -f default-deny.yaml
kubectl apply -n client -f default-deny.yaml
# Create allow policies
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/allow-ui.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/allow-ui-client.yaml
cat allow-ui.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  namespace: stars
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui
cat allow-ui-client.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: client
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui
kubectl apply -f allow-ui.yaml
kubectl apply -f allow-ui-client.yaml
# The website should start showing the connection star again, but the Pods still cannot communicate with each other.
</source>
== Allow Directional Traffic ==
Network policies in Kubernetes use labels to select pods, and define rules on what traffic is allowed to reach those pods. They may specify ingress or egress or both. Each rule allows traffic which matches both the from and ports sections.
<source lang="bash">
# Download
cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/directional_traffic.files/backend-policy.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/directional_traffic.files/frontend-policy.yaml
</source>
{| class="wikitable"
|+ Backend and frontend policies
|-
! backend-policy
! frontend-policy
|-
| <source lang="bash">
$ cat backend-policy.yaml:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
</source>
| <source lang="bash">
$ cat frontend-policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: client
      ports:
        - protocol: TCP
          port: 80
</source>
|}
Apply policies
<source lang="bash">
# to allow traffic from the frontend service to the backend service, apply the manifest
kubectl apply -f backend-policy.yaml
# allow traffic from the client namespace to the frontend service
kubectl apply -f frontend-policy.yaml
</source>
Let’s have a look at the backend-policy. Its spec has a podSelector that selects all pods with the label <code>role:backend</code>, and allows ingress from all pods that have the label role:frontend and on TCP port '''6379''', but not the other way round. Traffic is allowed in one direction on a specific port number.
The frontend-policy is similar, except it allows ingress from '''namespaces''' that have the label <code>role: client</code> on TCP port '''80'''.
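To see which pods each selector actually matches and what each policy allows, a quick inspection:
<source lang="bash">
kubectl get networkpolicy -n stars
kubectl describe networkpolicy backend-policy -n stars
kubectl describe networkpolicy frontend-policy -n stars
kubectl get pods -n stars --show-labels   #pod labels determine which policy applies where
</source>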
== Clean up ==
Clean up by deleting the namespaces and uninstalling Calico.
<source lang="bash">
kubectl delete ns client stars management-ui #delete namespaces
kubectl delete -f calico.yaml                #uninstall Calico using the local manifest, or use the URL below
kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.2/calico.yaml
</source>
= [https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ Health Checks] =
By default, Kubernetes will restart a container if it crashes for any reason. Additionally, you can use probes:
* '''Liveness''' probes are used to know when a pod is alive or dead. A pod can end up in a dead state for different reasons; Kubernetes kills and recreates the pod when the liveness probe does not pass.
* '''Readiness''' probes are used to know when a pod is ready to serve traffic. Only when the readiness probe passes will a pod receive traffic from the service. When the readiness probe fails, traffic is not sent to the pod until it passes again.
;liveness probe
In the example below the kubelet is instructed to send an HTTP GET request to the server hosting this Pod; if the handler for the server's <code>/health</code> path returns a success code, the container is considered healthy.
<source lang="bash">
mkdir healthchecks; cd $_
$ cat << EOF > liveness-app.yaml                                                                                                                                                                                                         
apiVersion: v1
kind: Pod
metadata:
  name: liveness-app
spec:
  containers:
  - name: liveness
    image: brentley/ecsdemo-nodejs
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Create a pod from the manifest
kubectl apply -f liveness-app.yaml
# Show the pod event history
kubectl describe pod liveness-app
NAME          READY    STATUS    RESTARTS  AGE
liveness-app  1/1      Running  0          54s
# Introduce failure. Send a kill signal to the application process in the Docker runtime
kubectl exec -it liveness-app -- /bin/kill -s SIGUSR1 1
kubectl get pod liveness-app
NAME          READY    STATUS    RESTARTS  AGE
liveness-app  1/1      Running  1          11m
# Get logs
kubectl logs liveness-app # use -f for log tailing
kubectl logs liveness-app --previous # previous container logs
</source>
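The restart counted above is driven by probe failures, which are recorded as pod events:
<source lang="bash">
kubectl describe pod liveness-app | sed -n '/Events:/,$p'        #event section only
kubectl get events --sort-by=.lastTimestamp | grep liveness-app  #same information, cluster-wide view
</source>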
;readiness probe
<source lang="bash">
cd healthchecks
cat << EOF > readiness-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readiness-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: readiness-deployment
  template:
    metadata:
      labels:
        app: readiness-deployment
    spec:
      containers:
      - name: readiness-deployment
        image: alpine
        command: ["sh", "-c", "touch /tmp/healthy && sleep 86400"]
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 5
          periodSeconds: 3
EOF
# create a deployment to test readiness probe
kubectl apply -f readiness-deployment.yaml
# Verify
kubectl get pods -l app=readiness-deployment
kubectl describe deployment readiness-deployment | grep Replicas:
# Introduce failure by deleting the file used by the probe
kubectl exec -it readiness-deployment-<POD-NAME> -- rm /tmp/healthy
kubectl get pods -l app=readiness-deployment
NAME                                    READY    STATUS    RESTARTS  AGE
readiness-deployment-59dcf5956f-jfpf6  1/1      Running  0          9m
readiness-deployment-59dcf5956f-mdqc6  0/1      Running  0          9m  #traffic won't be routed to it
readiness-deployment-59dcf5956f-wfwgn  1/1      Running  0          9m
kubectl describe deployment readiness-deployment | grep Replicas:
Replicas:              3 desired | 3 updated | 3 total | 2 available | 1 unavailable
# Recreate the probe file
kubectl exec -it readiness-deployment-<YOUR-POD-NAME> -- touch /tmp/healthy
</source>
;Clean up
<source lang="bash">
kubectl delete -f liveness-app.yaml,readiness-deployment.yaml
</source>
In the example above we used an <code>exec</code> probe checking a file, but you can instead use a <code>tcpSocket</code> probe
<source>
    readinessProbe:
      tcpSocket:
        port: 8080
</source>
= Helm - charts =
Package manager for Kubernetes that packages multiple Kubernetes resources into a single logical deployment unit called '''[https://github.com/helm/helm/blob/master/docs/charts.md Chart]'''. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
* [https://github.com/helm/charts/tree/master/stable Official Helm Chart Repository]
<source lang="bash">
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod +x get_helm.sh
# Install
./get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
</source>
Helm relies on a server-side component called '''tiller''' that requires special permissions on the Kubernetes cluster, so we need to create a '''Service Account''' for '''tiller''' to use. We'll then apply this to the cluster.
<source lang="bash">
# create a new service account manifest
cat <<EoF > ~/environment/rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EoF
# Apply config
kubectl apply -f ~/environment/rbac.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
# Initialise helm
helm init --service-account tiller
Creating /home/ec2-user/.helm
Creating /home/ec2-user/.helm/repository
Creating /home/ec2-user/.helm/repository/cache
Creating /home/ec2-user/.helm/repository/local
Creating /home/ec2-user/.helm/plugins
Creating /home/ec2-user/.helm/starters
Creating /home/ec2-user/.helm/cache/archive
Creating /home/ec2-user/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ec2-user/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
#Update
helm repo update
</source>
;Helm usage
<source lang="bash">
helm search
helm search jenkins
NAME            CHART VERSION  APP VERSION    DESCRIPTION                                               
stable/jenkins  0.22.0          lts            Open source continuous integration server...
# add repository
helm repo add bitnami https://charts.bitnami.com/bitnami
</source>
Install the <code>bitnami/nginx</code> application from the bitnami repository
<source lang="bash">
helm install --name mywebserver bitnami/nginx
NAME:  mywebserver
LAST DEPLOYED: Sun Nov 25 20:47:58 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME              AGE
mywebserver-nginx  0s
==> v1beta1/Deployment
mywebserver-nginx  0s
==> v1/Pod(related)
NAME                                READY  STATUS            RESTARTS  AGE
mywebserver-nginx-866d7bcc97-k6rg4  0/1    ContainerCreating  0        0s
</source>
The chart has created three objects: a service, a deployment, and a pod. To verify each of the objects, use the commands below:
<source lang="bash">
kubectl get      service    mywebserver-nginx -o wide # service info
kubectl get      deployment  mywebserver-nginx        # deployment info short
kubectl describe deployment  mywebserver-nginx        # deployment info
kubectl get      pods -l app=mywebserver-nginx        # pod info
</source>
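To hit the deployed nginx without waiting for an external load balancer, a hedged port-forward sketch (the service name is taken from the output above):
<source lang="bash">
kubectl port-forward svc/mywebserver-nginx 8080:80 &
curl -s http://localhost:8080 | head -n 5
kill %1   #stop the port-forward started above
</source>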
Clean up
<source lang="bash">
helm list #list running applications installed by helm
helm delete --purge mywebserver # delete deployment
</source>
= Delete EKS cluster =
As the running cluster costs $0.20 per hour, it makes sense to delete it when it is no longer needed. The command below will run CloudFormation and delete the stack named ''eksctl-eksworkshop-eksctl-cluster''.
<source lang="bash">
eksctl delete cluster --name=eksworkshop-eksctl
</source>
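To double-check that the CloudFormation stacks are actually gone (stack names follow the eksctl naming shown above):
<source lang="bash">
aws cloudformation list-stacks --stack-status-filter DELETE_COMPLETE | jq -r '.StackSummaries[].StackName' | grep eksworkshop
aws cloudformation describe-stacks --stack-name eksctl-eksworkshop-eksctl-cluster 2>&1 | grep -i 'does not exist' && echo "cluster stack deleted"
</source>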


= References =
*[https://eksworkshop.com eksworkshop] Official Amazon EKS Workshop
*[https://github.com/ramitsurana/awesome-kubernetes Awesome-Kubernetes] Git repo
