= Bootstrap/create EKS Cluster =
The Amazon EKS service is available, at the time of writing this in November 2018, only in the following regions (a way to check the current list follows below):
* US East (N. Virginia) - us-east-1
* US East (Ohio) - us-east-2
* US West (Oregon) - us-west-2
* EU (Ireland) - eu-west-1
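Since this list keeps growing, it is worth checking programmatically. A minimal sketch, assuming your awscli and account can query the public global-infrastructure parameters in SSM Parameter Store (this lookup path is an assumption, not part of the original workshop):

<source lang="bash">
# List the regions where the EKS service is available, using the
# public global-infrastructure parameters in SSM Parameter Store
aws ssm get-parameters-by-path \
    --path /aws/service/global-infrastructure/services/eks/regions \
    --query 'Parameters[].Value' --output text
</source>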
<source lang="bash">
# Generate ssh key to be used to connect to Kubernetes EKS EC2 worker instances
ssh-keygen

# Install kubectl
mkdir -p ~/.kube
sudo curl --location -o /usr/local/bin/kubectl "https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl"
sudo chmod +x /usr/local/bin/kubectl
kubectl version --short --client

# Install aws-iam-authenticator
go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
sudo mv ~/go/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator help

# Install jq
sudo yum -y install jq        # Amazon Linux
sudo apt-get install -y jq    # Ubuntu

# Configure awscli
rm -vf ${HOME}/.aws/credentials
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region

# Install eksctl by Weaveworks
curl --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
eksctl version

# Create EKS cluster
$ eksctl create cluster --name=eksworkshop-eksctl --nodes=3 --node-ami=auto --region=${AWS_REGION}
2018-11-24T12:54:41Z [ℹ] using region eu-west-1
2018-11-24T12:54:42Z [ℹ] setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2018-11-24T12:54:43Z [ℹ] using "ami-00c3b2d35bdddffff" for nodes
2018-11-24T12:54:43Z [ℹ] creating EKS cluster "eksworkshop-eksctl" in "eu-west-1" region
2018-11-24T12:54:43Z [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-11-24T12:54:43Z [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --name=eksworkshop-eksctl'
2018-11-24T12:54:43Z [ℹ] creating cluster stack "eksctl-eksworkshop-eksctl-cluster"
2018-11-24T13:06:38Z [ℹ] creating nodegroup stack "eksctl-eksworkshop-eksctl-nodegroup-0"
2018-11-24T13:10:16Z [✔] all EKS cluster resource for "eksworkshop-eksctl" had been created
2018-11-24T13:10:16Z [✔] saved kubeconfig as "/home/ec2-user/.kube/config"
2018-11-24T13:10:16Z [ℹ] the cluster has 0 nodes
2018-11-24T13:10:16Z [ℹ] waiting for at least 3 nodes to become ready
2018-11-24T13:10:47Z [ℹ] the cluster has 3 nodes
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-13-5.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-41-230.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-79-54.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2018-11-24T13:10:47Z [✔] EKS cluster "eksworkshop-eksctl" in "eu-west-1" region is ready

# Verify EKS cluster nodes
kubectl get nodes
NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-13-5.eu-west-1.compute.internal     Ready     <none>    1h        v1.10.3
ip-192-168-41-230.eu-west-1.compute.internal   Ready     <none>    1h        v1.10.3
ip-192-168-79-54.eu-west-1.compute.internal    Ready     <none>    1h        v1.10.3

# Get info about the cluster
eksctl get cluster --name=eksworkshop-eksctl --region=${AWS_REGION}
NAME                 VERSION  STATUS  CREATED               VPC                    SUBNETS                                                                                                                                    SECURITYGROUPS
eksworkshop-eksctl   1.10     ACTIVE  2018-11-24T12:55:28Z  vpc-0c97f8a6dabb11111  subnet-05285b6c692711111,subnet-0a6626ec2c0111111,subnet-0c5e839d106f11111,subnet-0d9a9b34be5511111,subnet-0f297fefefad11111,subnet-0faaf1d3dedd11111  sg-083fbc37e4b011111
</source>
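For reference, eksctl wires kubectl to aws-iam-authenticator through the exec credential plugin in the generated kubeconfig. A sketch of what to look for; the exact apiVersion and args may vary with the eksctl version:

<source lang="bash">
# Inspect the generated kubeconfig: the user entry execs
# aws-iam-authenticator to mint a token for every kubectl call
kubectl config view --minify
# Expect something along these lines under users[0].user:
#   exec:
#     apiVersion: client.authentication.k8s.io/v1alpha1
#     command: aws-iam-authenticator
#     args: ["token", "-i", "eksworkshop-eksctl"]
</source>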
= Deploy the Official Kubernetes Dashboard =
<source lang="bash">
# Deploy dashboard from the official config sources. You can also download the files and deploy them locally.
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# Run kubectl proxy to enable access to the application (dashboard) from the Internet:
# start the proxy in the background, listen on port 8080 on all interfaces, and disable the filtering of non-localhost requests
kubectl proxy --port=8080 --address='0.0.0.0' --disable-filter=true &
W1124 14:47:55.308424 14460 proxy.go:138] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:8080
</source>
;Access dashboard
Generate a temporary token to log in to the dashboard:
<source lang="bash">
aws-iam-authenticator token -i eksworkshop-eksctl --token-only
</source>
In a web browser, point to the kube-proxy address and append the following path to the URL:
<source lang="bash">
/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</source>
Select token sign-in and paste the token to log in.
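Putting the pieces together, assuming kubectl proxy is listening on port 8080 as started above, the token generation and the full dashboard URL look like this:

<source lang="bash">
# Generate the sign-in token and print the full dashboard URL
aws-iam-authenticator token -i eksworkshop-eksctl --token-only
echo "http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
</source>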
= Deploy sample applications =
The containers listen on port 3000, and native service discovery will be used to locate the running containers and communicate with them.
<source lang="bash">
# Download deployable sample applications
mkdir ~/environment    # place for artifacts deployable to EKS: applications, policies etc.
cd ~/environment
git clone https://github.com/brentley/ecsdemo-frontend.git
git clone https://github.com/brentley/ecsdemo-nodejs.git
git clone https://github.com/brentley/ecsdemo-crystal.git

### Deploy applications
# NodeJS Backend API
cd ecsdemo-nodejs
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-nodejs    # watch progress

# Crystal Backend API
cd ~/environment/ecsdemo-crystal
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-crystal
</source>
Before deploying the frontend application, let's see how the service manifest differs between the backend and frontend services:
;frontend service
<source lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  selector:
    app: ecsdemo-frontend
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</source>

;backend service
<source lang="yaml">
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-nodejs
spec:
  selector:
    app: ecsdemo-nodejs
  type: ClusterIP    # this is the default
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
</source>
Notice there is no need to specify a service type for the backend, because the default type is ClusterIP. This exposes the service on a cluster-internal IP, making it reachable only from within the cluster. The frontend, in contrast, has type: LoadBalancer.
The frontend service will attempt to create an ELB, so it requires access to the Elastic Load Balancing service. This is controlled by an IAM service-linked role that needs to be created if it does not already exist:
<source lang="bash">
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
</source>
;Deploy frontend service
<source lang="bash">
cd ecsdemo-frontend
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-frontend

# Get service address
kubectl get service ecsdemo-frontend -o wide
ELB=$(kubectl get service ecsdemo-frontend -o json | jq -r '.status.loadBalancer.ingress[].hostname')
curl -m3 -v $ELB    # you can also open this in a web browser
</source>
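A freshly created ELB can take a couple of minutes before its DNS name resolves and the instances register, so the first curl may fail. A small sketch, reusing the ELB variable from above, that polls until the endpoint answers:

<source lang="bash">
# Poll the frontend ELB until it responds; new ELBs need a few
# minutes for DNS propagation and instance registration
until curl -m3 -s "http://${ELB}" > /dev/null; do
    echo "waiting for ${ELB} ..."
    sleep 10
done
curl -m3 -v "http://${ELB}"
</source>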
== Scale backend services ==
<source lang="bash">
kubectl scale deployment ecsdemo-nodejs --replicas=3
kubectl scale deployment ecsdemo-crystal --replicas=3
kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecsdemo-crystal    3         3         3            3           38m
ecsdemo-frontend   1         1         1            1           20m
ecsdemo-nodejs     3         3         3            3           40m

# Watch scaling in action
$ i=3; kubectl scale deployment ecsdemo-nodejs --replicas=$i; kubectl scale deployment ecsdemo-crystal --replicas=$i
$ watch -d -n 0.5 kubectl get deployments
</source>
Check the browser; you should now see traffic flowing to the multiple backend replicas.
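Manual kubectl scale is fine for a demo. As an alternative sketch, the backends could scale automatically on CPU usage, assuming a metrics source such as metrics-server or heapster is running in the cluster (EKS does not install one by default):

<source lang="bash">
# Create a HorizontalPodAutoscaler instead of scaling by hand
kubectl autoscale deployment ecsdemo-nodejs --min=3 --max=6 --cpu-percent=80
kubectl get hpa
</source>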
== Delete the applications ==
<source lang="bash">
cd ~/environment/ecsdemo-frontend
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml

cd ~/environment/ecsdemo-crystal
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml

cd ~/environment/ecsdemo-nodejs
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
</source>
= Networking using Calico =
;Install
The commands below install the Calico manifest. This creates the daemonset in the kube-system namespace.
<source lang="bash">
wget https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.2/calico.yaml
kubectl apply -f calico.yaml
kubectl get daemonset calico-node --namespace=kube-system
</source>
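To confirm the daemonset actually rolled out a calico-node pod to every worker, you can wait on its rollout status; this is a generic kubectl technique, not something the workshop prescribes:

<source lang="bash">
# Block until calico-node is running on all nodes, then list the pods
kubectl rollout status daemonset/calico-node --namespace=kube-system
kubectl get pods --namespace=kube-system | grep calico
</source>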
See more details on the eksworkshop.com website.
== Policy demo ==
Before creating network policies, we will create the required resources.
<source lang="bash">
mkdir calico_resources && cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/namespace.yaml
kubectl apply -f namespace.yaml    # create namespace

# Download manifests for the other resources
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/management-ui.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/backend.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/frontend.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/create_resources.files/client.yaml

kubectl apply -f management-ui.yaml
kubectl apply -f backend.yaml
kubectl apply -f frontend.yaml
kubectl apply -f client.yaml
kubectl get pods --all-namespaces
</source>
Resources we created (verification commands follow the list):
* A namespace called stars
* frontend and backend replication controllers and services within the stars namespace
* A namespace called management-ui
* A management-ui replication controller and service for the user interface seen in the browser, in the management-ui namespace
* A namespace called client
* A client replication controller and service in the client namespace
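To double-check that everything listed above exists, query each namespace:

<source lang="bash">
# Verify the namespaces and the workloads the manifests created
kubectl get ns stars management-ui client
kubectl get rc,svc --namespace=stars
kubectl get rc,svc --namespace=management-ui
kubectl get rc,svc --namespace=client
</source>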
== Pod-to-Pod communication ==
In Kubernetes, pods can by default communicate with any other pods, regardless of which host they land on. Every pod gets its own IP address, so you do not need to explicitly create links between pods. This is demonstrated by the management-ui.
<source lang="bash">
$ cat management-ui.yaml    # partial output
kind: Service
metadata:
  name: management-ui
  namespace: management-ui
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9001

# Get the Management UI DNS name
kubectl get svc -o wide -n management-ui
</source>
If you open the URL you will see a visual star of connections between the pods: backend (B), client (C) and frontend (F). The UI here shows the default behavior of all services being able to reach each other.
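You can also probe connectivity from inside a pod. A hypothetical spot check, assuming the upstream demo's service naming (a frontend service in the stars namespace) and that the client image ships wget; neither is guaranteed by this guide:

<source lang="bash">
# Exec into the client pod and hit the frontend service directly;
# the pod name is looked up at runtime, the service name is assumed
CLIENT_POD=$(kubectl get pods -n client -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n client ${CLIENT_POD} -- wget -qO- --timeout=2 http://frontend.stars:80
</source>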
== Apply network policies ==
By default all pods can talk to each other, which is not something we should allow in production environments. So, let's apply policies:
<source lang="bash">
cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
cat default-deny.yaml    # not all output shown below
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}

# Apply the deny policy to the 'stars' and 'client' namespaces.
# The web browser won't show anything, as the UI won't have access to the pods.
kubectl apply -n stars -f default-deny.yaml
kubectl apply -n client -f default-deny.yaml

# Create allow policies
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/allow-ui.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/apply_network_policies.files/allow-ui-client.yaml

cat allow-ui.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  namespace: stars
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui

cat allow-ui-client.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: client
  name: allow-ui
spec:
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: management-ui

kubectl apply -f allow-ui.yaml
kubectl apply -f allow-ui-client.yaml
# The website should start showing the connection star again, but pods still cannot communicate with each other.
</source>
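To see what is actually in effect, list and describe the policies per namespace:

<source lang="bash">
# List the network policies now active in each namespace
kubectl get networkpolicy --namespace=stars
kubectl get networkpolicy --namespace=client
kubectl describe networkpolicy allow-ui --namespace=stars
</source>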
== Allow Directional Traffic ==
Network policies in Kubernetes use labels to select pods, and define rules on what traffic is allowed to reach those pods. They may specify ingress or egress or both. Each rule allows traffic which matches both the from and ports sections.
<source lang="bash">
# Download the policies
cd calico_resources
wget https://eksworkshop.com/calico/stars_policy_demo/directional_traffic.files/backend-policy.yaml
wget https://eksworkshop.com/calico/stars_policy_demo/directional_traffic.files/frontend-policy.yaml
</source>
;backend-policy
<source lang="yaml">
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
</source>

;frontend-policy
<source lang="yaml">
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: stars
  name: frontend-policy
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: client
      ports:
        - protocol: TCP
          port: 80
</source>
;Apply policies
<source lang="bash">
# Allow traffic from the frontend service to the backend service
kubectl apply -f backend-policy.yaml
# Allow traffic from the client namespace to the frontend service
kubectl apply -f frontend-policy.yaml
</source>
Let's have a look at the backend-policy. Its spec has a podSelector that selects all pods with the label role: backend, and allows ingress from all pods that have the label role: frontend, on TCP port 6379 only, but not the other way round. Traffic is allowed in one direction on a specific port number.
The frontend-policy is similar, except it allows ingress from namespaces that have the label role: client on TCP port 80.
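A hypothetical way to verify the direction of the rules, assuming the demo pods carry the role labels referenced by the policies and ship a netcat binary (neither is confirmed by this guide):

<source lang="bash">
# frontend -> backend:6379 should connect; client -> backend:6379 should time out
FRONTEND_POD=$(kubectl get pods -n stars -l role=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n stars ${FRONTEND_POD} -- nc -zv -w2 backend.stars 6379    # expected: open
CLIENT_POD=$(kubectl get pods -n client -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n client ${CLIENT_POD} -- nc -zv -w2 backend.stars 6379     # expected: timeout
</source>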
== Clean up ==
Clean up by deleting the namespaces and uninstalling Calico:
<source lang="bash">
kubectl delete ns client stars management-ui    # delete namespaces
# Uninstall Calico using the downloaded manifest
kubectl delete -f calico.yaml
# or directly from the source URL
kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.2/calico.yaml
</source>
= Delete EKS cluster =
As the running cluster costs $0.20 per hour, it makes sense to kill it when finished. The command below will run CloudFormation and delete the stack named eksctl-eksworkshop-eksctl-cluster:
<source lang="bash">
eksctl delete cluster --name=eksworkshop-eksctl
</source>
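To double-check that the CloudFormation stacks created by eksctl are actually gone (stack names as shown during cluster creation above):

<source lang="bash">
# Listing the stack should fail once deletion has completed
aws cloudformation describe-stacks --stack-name eksctl-eksworkshop-eksctl-cluster \
    || echo "cluster stack deleted"
# Deleted stacks remain visible for a while under DELETE_COMPLETE
aws cloudformation list-stacks --stack-status-filter DELETE_COMPLETE \
    --query 'StackSummaries[?contains(StackName, `eksworkshop`)].StackName'
</source>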
= References =
* [https://eksworkshop.com/ eksworkshop] - Official Amazon EKS Workshop