A Kubernetes Introduction – Setting up a Docker Cluster on AWS with kops

1. Overview

Docker offers a great built-in orchestration tool with Docker Swarm. However, Kubernetes is much more widely used and offers a broader range of functionality.

In this tutorial you will learn how to set up a Docker cluster with Kubernetes. Specifically, we will use kops to set up multiple machines on AWS and deploy a simple app (the Kubernetes Dashboard).

For this tutorial we assume a fresh installation of Ubuntu 18.04 (Bionic Beaver) and an AWS account.

2. Installing the AWS CLI

We will need to set up an S3 bucket on AWS to store the Kubernetes configuration and state later. This is easily done with the AWS CLI tool from our Ubuntu environment.

Since the AWS CLI is a Python-based tool, we can use pip to get and install it. We first install pip, then awscli with the --user flag so it lands in the ~/.local/bin folder, which we in turn add to our PATH in .bashrc for persistence.

sudo apt install python-pip
pip install --user awscli
echo 'export PATH=~/.local/bin:$PATH' >> ~/.bashrc # global availability and persistence
source ~/.bashrc # reload config
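The PATH line above appends a new entry every time it runs. A slightly more defensive sketch (my variation, not part of the original setup) only prepends the directory when it is missing, so re-sourcing .bashrc does not grow PATH with duplicates:

```shell
# Prepend ~/.local/bin to PATH only if it is not already there, so that
# re-sourcing .bashrc stays idempotent
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                    # already present: do nothing
  *) export PATH="$HOME/.local/bin:$PATH" ;;    # otherwise prepend it
esac
echo "$PATH"
```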

3. Setting up kops

The next step is to configure the AWS CLI with our access key ID and secret access key. After that we install curl to be able to download the latest kops release.

We make kops executable and move it to /usr/local/bin.
aws configure
apt install curl
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
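The nested curl in the download command extracts the latest release tag from the GitHub API's JSON response. Here is what the grep/cut pipeline does to a sample tag_name line (the version string below is just an example):

```shell
# Simulate the relevant line of the GitHub API response and extract the tag:
# splitting on double quotes, field 4 is the version string itself
echo '  "tag_name": "1.9.1",' | grep tag_name | cut -d '"' -f 4
# prints: 1.9.1
```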

4. Creating an S3 storage bucket for Kubernetes

We are now ready to create an AWS S3 bucket to store our Kubernetes configuration data and state. This can be done with the AWS CLI.

Note that the bucket name has to be globally unique! We also enable bucket versioning and export the name variables that will be used in the next step for cluster creation.
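Since bucket names are global across all of S3, one simple way to get a unique ${bucket_name} is to append a suffix such as a timestamp. This is a sketch; the prefix follows this article's naming, so pick your own:

```shell
# Build a (very likely) globally unique bucket name by appending the current
# Unix timestamp to a project-specific prefix
export bucket_name="pushcommit-kops-state-store-$(date +%s)"
echo "$bucket_name"
```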

aws s3api create-bucket --bucket ${bucket_name} --region us-east-1
aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled
export KOPS_CLUSTER_NAME=pushcommit.k8s.local
export KOPS_STATE_STORE=s3://${bucket_name}

We should see the following output after the bucket creation:

{
    "Location": "/pushcommit-kops-state-store-123"
}

5. Starting the cluster machines

Before we can use kops, we need to generate an SSH public key and store it as a kops secret.
ssh-keygen -t rsa -b 4096 -C "info@pushcommit.com"
kops create secret --name pushcommit.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub
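If you are scripting this setup, ssh-keygen can also run non-interactively. A sketch; the key path and the empty passphrase are my choices, not from the original:

```shell
# Generate a 4096-bit RSA key pair without prompts: -f sets the output path,
# -N "" sets an empty passphrase, -q suppresses the banner
rm -f /tmp/kops_id_rsa /tmp/kops_id_rsa.pub   # avoid the overwrite prompt
ssh-keygen -q -t rsa -b 4096 -C "info@pushcommit.com" -f /tmp/kops_id_rsa -N ""
ls /tmp/kops_id_rsa /tmp/kops_id_rsa.pub
```

Point the kops create secret command's -i flag at the resulting .pub file.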

Finally, we can create a two-node cluster:
kops create cluster --node-count=2 --node-size=t2.micro --zones=us-east-1a --name=${KOPS_CLUSTER_NAME}

I0601 13:17:12.781892 8566 create_cluster.go:1318] Using SSH public key: /root/.ssh/id_rsa.pub
I0601 13:17:13.908079 8566 create_cluster.go:472] Inferred --cloud=aws from zone "us-east-1a"
I0601 13:17:14.639951 8566 subnets.go:184] Assigned CIDR xxx/19 to subnet us-east-1a
Previewing changes that will be made:

Must specify --yes to apply changes

Cluster configuration has been created.

* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster pushcommit.k8s.local
* edit your node instance group: kops edit ig --name=pushcommit.k8s.local nodes
* edit your master instance group: kops edit ig --name=pushcommit.k8s.local master-us-east-1a

Finally configure your cluster with: kops update cluster pushcommit.k8s.local --yes

As the message says, we just have to run kops update and watch our machines start:
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes

I0601 13:21:23.870854 8589 apply_cluster.go:456] Gossip DNS: skipping DNS validation
I0601 13:21:24.898754 8589 executor.go:91] Tasks: 0 done / 77 total; 30 can run
I0601 13:21:26.811994 8589 vfs_castore.go:731] Issuing new certificate: "apiserver-aggregator-ca"
I0601 13:21:27.206702 8589 vfs_castore.go:731] Issuing new certificate: "ca"
I0601 13:21:28.988747 8589 executor.go:91] Tasks: 30 done / 77 total; 24 can run
I0601 13:21:31.174577 8589 vfs_castore.go:731] Issuing new certificate: "kubelet"
I0601 13:21:31.522636 8589 vfs_castore.go:731] Issuing new certificate: "kubecfg"
I0601 13:21:31.878678 8589 vfs_castore.go:731] Issuing new certificate: "kubelet-api"
I0601 13:21:32.033865 8589 vfs_castore.go:731] Issuing new certificate: "apiserver-proxy-client"
I0601 13:21:32.293296 8589 vfs_castore.go:731] Issuing new certificate: "kube-proxy"
I0601 13:21:32.808335 8589 vfs_castore.go:731] Issuing new certificate: "apiserver-aggregator"
I0601 13:21:32.921519 8589 vfs_castore.go:731] Issuing new certificate: "kops"
I0601 13:21:32.926458 8589 vfs_castore.go:731] Issuing new certificate: "kube-scheduler"
I0601 13:21:32.948814 8589 vfs_castore.go:731] Issuing new certificate: "kube-controller-manager"
I0601 13:21:34.713297 8589 executor.go:91] Tasks: 54 done / 77 total; 19 can run
I0601 13:21:36.718314 8589 executor.go:91] Tasks: 73 done / 77 total; 3 can run
I0601 13:21:38.710598 8589 vfs_castore.go:731] Issuing new certificate: "master"
I0601 13:21:39.944498 8589 executor.go:91] Tasks: 76 done / 77 total; 1 can run
I0601 13:21:40.428498 8589 executor.go:91] Tasks: 77 done / 77 total; 0 can run
I0601 13:21:40.428767 8589 kubectl.go:134] error running kubectl config view --output json
I0601 13:21:40.428794 8589 kubectl.go:135]
I0601 13:21:40.428807 8589 kubectl.go:136]
W0601 13:21:40.428830 8589 update_cluster.go:279] error reading kubecfg: error getting config from kubectl: error running kubectl: exec: "kubectl": executable file not found in $PATH
I0601 13:21:40.552287 8589 update_cluster.go:291] Exporting kubecfg for cluster
kops has set your kubectl context to pushcommit.k8s.local

Cluster changes have been applied to the cloud.

Changes may require instances to restart: kops rolling-update cluster

When it's done, we can validate our Kubernetes cluster:
kops validate cluster

master-us-east-1a Master m3.medium 1 1 us-east-1a
nodes Node t2.micro 2 2 us-east-1a

ip-xxx.ec2.internal node True
ip-xxx.ec2.internal master True
ip-xxx.ec2.internal node True

Your cluster pushcommit.k8s.local is ready

6. Deploying the management dashboard

For testing purposes we want to deploy the Kubernetes Dashboard as an app. For that we need kubectl.

We download the latest stable release, make it executable, and move it to /usr/local/bin. Applying the Kubernetes Dashboard's .yaml manifest then deploys it to our nodes.

We need the secret password to be able to log in, and we can retrieve it with kops get secrets.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kops get secrets kube --type secret -oplaintext

We copy the secret and get our hostname with:

kubectl cluster-info
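cluster-info prints, among other lines, the URL of the master. Here is a small sketch of pulling just that URL out of such a line (the hostname below is made up for illustration):

```shell
# Extract the https URL from a typical cluster-info line with grep -o,
# which prints only the part of the line that matches the pattern
line='Kubernetes master is running at https://api.pushcommit.example.com'
echo "$line" | grep -o 'https://[^ ]*'
# prints: https://api.pushcommit.example.com
```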

Now we can open https://<cluster-hostname>/ui in any browser and log in with the user admin and the previously retrieved secret as the password.

The dashboard then asks for an admin permission token, which can be obtained with the following kops command:
kops get secrets admin --type secret -oplaintext

After entering the token we are done.

7. Tearing down the cluster

Shutting down the cluster is a simple one-liner:

kops delete cluster --name ${KOPS_CLUSTER_NAME} --yes

8. Conclusion

Kubernetes is pretty simple to set up on AWS using kops. We have also seen how to deploy an app straight from a remote .yaml file.

The Kubernetes Dashboard is actually quite useful for keeping a bird's-eye view of our cluster.

Docker Swarm is even easier to set up and ideal for less demanding deployments.
