Kubernetes/Amazon EKS
Bootstrap/create EKS Cluster
# Generate an SSH key to be used to connect to the Kubernetes EKS EC2 worker instances
ssh-keygen
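Note: the key is only attached to the worker nodes if it is passed at cluster-creation time. A minimal sketch using eksctl's SSH flags (flag names vary between eksctl versions, so check `eksctl create cluster --help` for yours):

```bash
# Hypothetical invocation: --ssh-public-key points at the key generated above
eksctl create cluster --name=eksworkshop-eksctl --nodes=3 --node-ami=auto \
  --region=${AWS_REGION} --ssh-access --ssh-public-key=~/.ssh/id_rsa.pub
```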
# Install kubectl
mkdir -p ~/.kube
sudo curl --location -o /usr/local/bin/kubectl "https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/kubectl"
sudo chmod +x /usr/local/bin/kubectl
kubectl version --short --client
# Install aws-iam-authenticator
go get -u -v github.com/kubernetes-sigs/aws-iam-authenticator/cmd/aws-iam-authenticator
sudo mv ~/go/bin/aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
aws-iam-authenticator help
# Install jq
sudo yum -y install jq #Amazon Linux
sudo apt-get install -y jq #Ubuntu
# Configure awscli
rm -vf ${HOME}/.aws/credentials
# detect the region from the EC2 instance metadata service
export AWS_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
echo "export AWS_REGION=${AWS_REGION}" >> ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region
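A quick sanity check that the instance's credentials are actually picked up (plain awscli, no extra flags assumed):

```bash
aws sts get-caller-identity  # prints the account, ARN and user/role in use
```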
# Install eksctl by Weaveworks
curl --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv -v /tmp/eksctl /usr/local/bin
eksctl version
# Create EKS cluster
$ eksctl create cluster --name=eksworkshop-eksctl --nodes=3 --node-ami=auto --region=${AWS_REGION}
2018-11-24T12:54:41Z [ℹ] using region eu-west-1
2018-11-24T12:54:42Z [ℹ] setting availability zones to [eu-west-1b eu-west-1a eu-west-1c]
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1b - public:192.168.0.0/19 private:192.168.96.0/19
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1a - public:192.168.32.0/19 private:192.168.128.0/19
2018-11-24T12:54:42Z [ℹ] subnets for eu-west-1c - public:192.168.64.0/19 private:192.168.160.0/19
2018-11-24T12:54:43Z [ℹ] using "ami-00c3b2d35bdddffff" for nodes
2018-11-24T12:54:43Z [ℹ] creating EKS cluster "eksworkshop-eksctl" in "eu-west-1" region
2018-11-24T12:54:43Z [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-11-24T12:54:43Z [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --name=eksworkshop-eksctl'
2018-11-24T12:54:43Z [ℹ] creating cluster stack "eksctl-eksworkshop-eksctl-cluster"
2018-11-24T13:06:38Z [ℹ] creating nodegroup stack "eksctl-eksworkshop-eksctl-nodegroup-0"
2018-11-24T13:10:16Z [✔] all EKS cluster resource for "eksworkshop-eksctl" had been created
2018-11-24T13:10:16Z [✔] saved kubeconfig as "/home/ec2-user/.kube/config"
2018-11-24T13:10:16Z [ℹ] the cluster has 0 nodes
2018-11-24T13:10:16Z [ℹ] waiting for at least 3 nodes to become ready
2018-11-24T13:10:47Z [ℹ] the cluster has 3 nodes
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-13-5.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-41-230.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] node "ip-192-168-79-54.eu-west-1.compute.internal" is ready
2018-11-24T13:10:47Z [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2018-11-24T13:10:47Z [✔] EKS cluster "eksworkshop-eksctl" in "eu-west-1" region is ready
# Verify EKS cluster nodes
kubectl get nodes
NAME                                           STATUS    ROLES     AGE       VERSION
ip-192-168-13-5.eu-west-1.compute.internal     Ready     <none>    1h        v1.10.3
ip-192-168-41-230.eu-west-1.compute.internal   Ready     <none>    1h        v1.10.3
ip-192-168-79-54.eu-west-1.compute.internal    Ready     <none>    1h        v1.10.3
# Get info about the cluster
eksctl get cluster --name=eksworkshop-eksctl --region=${AWS_REGION}
NAME                 VERSION  STATUS  CREATED               VPC                    SUBNETS                                                                                                                                                    SECURITYGROUPS
eksworkshop-eksctl   1.10     ACTIVE  2018-11-24T12:55:28Z  vpc-0c97f8a6dabb11111  subnet-05285b6c692711111,subnet-0a6626ec2c0111111,subnet-0c5e839d106f11111,subnet-0d9a9b34be5511111,subnet-0f297fefefad11111,subnet-0faaf1d3dedd11111  sg-083fbc37e4b011111
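If the kubeconfig written by eksctl is ever lost, it can be regenerated with the awscli (assumes an awscli version recent enough to include the `eks` subcommands):

```bash
aws eks update-kubeconfig --name eksworkshop-eksctl --region ${AWS_REGION}
```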
Deploy the Official Kubernetes Dashboard
# Deploy the dashboard from the official config sources. You can also download the files and deploy them locally.
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
# Run kubectl proxy to enable access to the application (dashboard) from the Internet
# start the proxy in the background, listening on port 8080 on all interfaces, with filtering of non-localhost requests disabled
kubectl proxy --port=8080 --address='0.0.0.0' --disable-filter=true &
W1124 14:47:55.308424 14460 proxy.go:138] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:8080
- Access the dashboard
Generate a temporary token to log in to the dashboard:
aws-iam-authenticator token -i eksworkshop-eksctl --token-only
In a web browser, point to the kubectl proxy address and append the following path to the URL:
/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Select token sign-in and paste the token to log in.
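A small convenience sketch that prints both the token and the full URL in one go (assumes the proxy above is running on port 8080 on this host):

```bash
TOKEN=$(aws-iam-authenticator token -i eksworkshop-eksctl --token-only)
echo "Token: ${TOKEN}"
echo "URL:   http://localhost:8080/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/"
```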
Deploy sample applications
The containers listen on port 3000, and native service discovery will be used to locate the running containers and communicate with them.
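Once the backend services below are deployed, the service-discovery side of this can be observed directly; a minimal sketch (the busybox image and the throwaway pod name are assumptions):

```bash
# Resolve the backend service name via the cluster DNS from a temporary pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup ecsdemo-nodejs
# Should return the ClusterIP that fronts the ecsdemo-nodejs pods
```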
# Download deployable sample applications
mkdir ~/environment # location for files deployed to EKS: applications, policies, etc.
cd ~/environment
git clone https://github.com/brentley/ecsdemo-frontend.git
git clone https://github.com/brentley/ecsdemo-nodejs.git
git clone https://github.com/brentley/ecsdemo-crystal.git
### Deploy applications
# NodeJS Backend API
cd ecsdemo-nodejs
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-nodejs # watch progress
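To block until the rollout finishes instead of polling, kubectl's rollout status can be used:

```bash
kubectl rollout status deployment ecsdemo-nodejs  # waits until all replicas are ready
```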
# Crystal Backend API
cd ~/environment/ecsdemo-crystal
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-crystal
Before deploying the frontend application, let's see how the service definition differs between the frontend and backend services:
frontend service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  selector:
    app: ecsdemo-frontend
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```

backend service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-nodejs
spec:
  selector:
    app: ecsdemo-nodejs
  type: ClusterIP  # <-- this is the default
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
Notice there is no need to specify a service type for the backend, because the default type is ClusterIP. This exposes the service on a cluster-internal IP, making it reachable only from within the cluster. The frontend, by contrast, has type: LoadBalancer so it is reachable from outside.
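Once the frontend below is deployed, the difference shows up in the TYPE and EXTERNAL-IP columns:

```bash
kubectl get service ecsdemo-nodejs ecsdemo-frontend  # ClusterIP vs LoadBalancer
```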
The frontend service will attempt to create an ELB and therefore needs access to the Elastic Load Balancing service. This is controlled by an IAM service-linked role, which must be created if it does not already exist:
aws iam get-role --role-name "AWSServiceRoleForElasticLoadBalancing" || aws iam create-service-linked-role --aws-service-name "elasticloadbalancing.amazonaws.com"
Deploy frontend service
cd ~/environment/ecsdemo-frontend
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
kubectl get deployment ecsdemo-frontend
# Get service address
kubectl get service ecsdemo-frontend -o wide
ELB=$(kubectl get service ecsdemo-frontend -o json | jq -r '.status.loadBalancer.ingress[].hostname')
curl -m3 -v $ELB #You can also open this in a web browser
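The ELB's DNS name can take a few minutes to start resolving, so the first curl may fail; a small retry loop (a sketch) avoids the confusion:

```bash
until curl -s -m3 "http://${ELB}" >/dev/null; do echo "waiting for ELB DNS..."; sleep 10; done
```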
Scale backend services
kubectl scale deployment ecsdemo-nodejs --replicas=3
kubectl scale deployment ecsdemo-crystal --replicas=3
kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecsdemo-crystal    3         3         3            3           38m
ecsdemo-frontend   1         1         1            1           20m
ecsdemo-nodejs     3         3         3            3           40m
# Watch scaling in action
$ i=3; kubectl scale deployment ecsdemo-nodejs --replicas=$i; kubectl scale deployment ecsdemo-crystal --replicas=$i
$ watch -d -n 0.5 kubectl get deployments
Check the browser; you should now see traffic flowing to the multiple backend replicas.
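To confirm from the CLI that each backend service now fronts three pod IPs, list the service endpoints:

```bash
kubectl get endpoints ecsdemo-nodejs ecsdemo-crystal  # one IP:port per replica
```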
Delete the applications
cd ~/environment/ecsdemo-frontend
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
cd ~/environment/ecsdemo-crystal
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
cd ~/environment/ecsdemo-nodejs
kubectl delete -f kubernetes/service.yaml
kubectl delete -f kubernetes/deployment.yaml
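When completely finished, the whole cluster (both CloudFormation stacks) can be removed with eksctl, assuming the cluster name and region used above:

```bash
eksctl delete cluster --name=eksworkshop-eksctl --region=${AWS_REGION}
```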
References
- [eksworkshop](https://eksworkshop.com) - the official Amazon EKS Workshop