Deploying Kubernetes on AWS

A beginner’s introduction to Kubernetes and a step-by-step guide to launching your first EKS cluster

Shubham Agarwal
9 min read · Mar 23, 2021

Kubernetes: What is it?

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.

That doesn’t exactly clear things up, does it?

Let’s take an example to grasp this concept.

You are a baker, and you need a website so that customers can order your tasty treats online.
You set up a website: https://oldschoolbaker.com

Now your customers can visit this website, scroll through the menu and place orders.
But what if your popularity grows leaps and bounds and thousands of people are trying to access this website all at once?

The servers hosting the website won’t be able to handle the traffic, causing them to shut down or crash.

How to solve this:

  1. Upgrade the server’s specs (memory, CPU, and so on) so that it can handle massive amounts of requests.
  2. Add more servers (analogous to distributed computing).

Option 1 isn’t sustainable in the long run, since there is a limit to how far the specs can be upgraded, not to mention the exponentially increasing cost.

Adding more servers to support the load seems to be the better option.

STEP IN Kubernetes!!

Kubernetes handles this on its own, with the added benefit of resource encapsulation.
Resource encapsulation means that different services are launched in separate resource spaces.

Kubernetes terminologies:

Node


A node is the actual machine that will be running the service (it will be the piece of hardware handling the wrath of your hungry customers!)

A node is the entity providing computing power. It can be an on-premises machine or a compute resource hosted on AWS or Google Cloud.

In Kubernetes, a node is a worker machine and may be a VM or a physical machine.

Cluster


Simply put, a cluster is a collection of nodes.
A single node (machine) can’t handle the load on its own, so multiple nodes pool together to form THE CLUSTER.

Just think of the cluster as a huge computing entity.

Pods

A pod is a group of one or more containers. A container is an enclosed, self-contained execution process, much like a process in an operating system. Kubernetes uses pods to run your code and images in the cluster.

A pod is a unit of computation on which the services actually run.
In Kubernetes, nodes can (and usually do) contain multiple pods.

Think of it this way:

Baker analogy incoming!!

Suppose you, the young baking prodigy, have taken on a project that involves making sandwiches. While grocery shopping, you decide to buy multiple 20-inch loaves of bread since they are on sale.
But you see there is an option: you can buy the entire 20 inches as a single loaf, or break it down into multiple smaller loaves and the supermarket will repackage them individually.

You ponder over it and realize that if you buy the 20-inch loaf as a whole, you will be stuck with it no matter what.
Even if you only have to make toast from a single slice, you will have to open the entire 20-inch loaf and risk wasting the rest.
But if you opt for multiple smaller loaves, this wastage is reduced.

Same goes for Nodes and Pods.

If you commit an entire node to a particular service, then even if only 10% of the node is being utilized, the rest is wasted.
But since a node holds multiple pods, it can run multiple services, eliminating the rigidity of committing the entire node to one service.

High Level Overview

Kubernetes Resources Overview

10000 miles high view: Cluster

1000 miles high view: Cluster

100 miles high view: Cluster

10 miles high view: Nodes

1 mile high view: Pods

Deploying EKS:

Now that we are familiar with Kubernetes, let’s move on to deploying your first AWS EKS Cluster!!

Installations:
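Before creating the cluster, you’ll need the AWS CLI and kubectl on the machine you’ll be working from (we’ll spin up an EC2 instance for this in Step 3). A rough sketch, assuming an Amazon Linux 2, x86_64 instance; the version numbers and download URLs are illustrative, so check the AWS and Kubernetes docs for the current ones:

# AWS CLI v2
$ curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install

# kubectl (pick a version close to your cluster version, e.g. 1.18.x)
$ curl -LO "https://dl.k8s.io/release/v1.18.20/bin/linux/amd64/kubectl"
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/

# Verify the installations
$ aws --version
$ kubectl version --client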

Step 1: Creating an EKS role

Our first step is to set up a new IAM role with EKS permissions.

What are IAM roles?
An IAM role is an AWS Identity and Access Management (IAM) entity with permissions to make AWS service requests.
IAM roles allow one AWS service to access another service.

Open the IAM console, select Roles from the left panel and click on Create Role.

From the list of AWS services, select EKS and then click on Next: Permissions at the bottom of the page.

Keep the selected policies unchanged, and proceed to the Review page.

Choose a relevant name for the role (e.g. eksrole) and hit the Create role button to create the IAM role.

Ta-da!!

Your IAM Role is now created! Note down the Role ARN as it’ll be used later on.
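If you’d rather stay on the command line, roughly the same role can be created with the AWS CLI. This is a sketch rather than the exact console flow: the role name eksrole and the file name eks-trust-policy.json are just examples, and the two managed policies attached are the ones the console pre-selects for an EKS cluster role.

$ cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role that EKS will assume
$ aws iam create-role --role-name eksrole \
    --assume-role-policy-document file://eks-trust-policy.json

# Attach the EKS managed policies
$ aws iam attach-role-policy --role-name eksrole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
$ aws iam attach-role-policy --role-name eksrole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy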

Step 2: Creating a VPC for EKS

Next, we’re going to create a separate VPC.

What is a VPC?
The easiest way to describe a VPC is as your own private data center within the AWS infrastructure.

For our EKS cluster, this Virtual Private Cloud will protect communication between the worker nodes and the AWS-managed Kubernetes API server.

To do this, we’re going to use a CloudFormation template that contains all the EKS-specific infrastructure needed to set up the VPC.

Open up CloudFormation, and click the Create new stack button.

On the Select template page, enter the URL of the CloudFormation YAML:

https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml

Give the VPC a name, leave the default network configurations, and click Next.

On the Review page, simply hit the Create button to create the VPC.

CloudFormation will begin to create the VPC. Once done, be sure to note the various values created for SecurityGroups, VpcId and SubnetIds.
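The same stack can also be launched from the CLI; here’s a rough sketch (the stack name eks-vpc is just an example):

$ aws cloudformation create-stack \
    --region us-west-2 \
    --stack-name eks-vpc \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml

# Wait for the stack to finish, then print its outputs (SecurityGroups, VpcId, SubnetIds)
$ aws cloudformation wait stack-create-complete --stack-name eks-vpc --region us-west-2
$ aws cloudformation describe-stacks --stack-name eks-vpc --region us-west-2 \
    --query "Stacks[0].Outputs"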

Step 3: Creating the EKS cluster

Now on to the big one: creating the EKS cluster!

There are multiple ways to create an EKS cluster; we’ll use the AWS CLI to create ours.

To do this, spin up an EC2 instance in the VPC created in step 2, and in the subnets created in the same step.

Once the instance is up and running, use PuTTY (or any SSH client) to connect to it and perform the installations mentioned above.

Once all the installations are done, we can proceed and create the cluster using the following command:

$ aws eks --region <<us-west-2>> create-cluster --name <<my_first_cluster>> \
    --kubernetes-version 1.18 --role-arn <<EKS-role-ARN>> \
    --resources-vpc-config subnetIds=<<subnet-id1>>,<<subnet-id2>>,<<subnet-id3>>,securityGroupIds=<<security-group-id>>

*Replace the bracketed parameters with the values for the respective entities.

Example:

For <<EKS-role-ARN>>, replace this with the ARN of the EKS role you created in Step 1.

Once you execute this command, you will receive a response similar to:

{
    "cluster": {
        "status": "CREATING",
        "logging": {
            "clusterLogging": [
                {
                    "enabled": false,
                    "types": [
                        "api",
                        "audit",
                        "authenticator",
                        "controllerManager",
                        "scheduler"
                    ]
                }
            ]
        },
        "name": "my_first_cluster",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::12200099912:role/eksrole",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-0011c5ddeeg019l0g",
                "subnet-1ffhm2b8x418y8867"
            ],
            "vpcId": "vpc-9m46kqe0",
            "endpointPrivateAccess": false,
            "endpointPublicAccess": true,
            "securityGroupIds": [
                "sg-1y7gl90lp147n3c22"
            ]
        },
        "version": "1.18",
        "arn": "arn:aws:eks:us-west-2:988378309856:cluster/my_first_cluster",
        "platformVersion": "eks.5",
        "createdAt": 1566903086.416
    }
}

You can check the status of the cluster by logging on to the AWS Console and searching for EKS under Services.

The cluster will take around 5 minutes to be active.
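You can also poll the status from the CLI; once the cluster is ready, this should print ACTIVE:

$ aws eks --region us-west-2 describe-cluster --name my_first_cluster \
    --query "cluster.status" --output text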

After that, we need to bind the EC2 machine we used to create the cluster to the cluster itself, so that we can run commands against it.

This is done via Kubeconfig.

Execute the following commands:

$ aws sts get-caller-identity 
$ aws eks --region us-west-2 update-kubeconfig --name <<my_first_cluster>>

This should create a link between the EC2 instance and your cluster. Now you should be able to run commands against the cluster. Let’s try one!

$ kubectl get svc

Tip: You need to type “kubectl” for every command you execute. Create an alias to make this easier.

$ alias k=kubectl
$ k get svc

The output for “kubectl get svc” or “k get svc” should be the same:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   2m

Step 4: Creating Kubernetes worker nodes

Now that the cluster is created, we need to launch the worker nodes: the entities that will actually perform the computation.

Since we are associating these nodes (EC2 machines) with EKS, the AMI ID for them is different. If it is not entered correctly, the nodes won’t join the cluster.

We’ll use CloudFormation to deploy the worker nodes along with the supporting infrastructure they need.

Open CloudFormation, click Create Stack, and this time use the following template URL:

https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml

Click Next, name your stack, and in the EKS Cluster section enter the following details:

  • ClusterName — the name of Kubernetes cluster (my_first_cluster)
  • ClusterControlPlaneSecurityGroup — the same security group you used for creating the cluster in the previous step.
  • NodeGroupName — a name for the node group.
  • NodeAutoScalingGroupMinSize — leave unchanged.
  • NodeAutoScalingGroupDesiredCapacity — leave unchanged.
  • NodeAutoScalingGroupMaxSize — leave unchanged.
  • NodeInstanceType — leave unchanged.
  • NodeImageId — the Amazon EKS worker node AMI ID for the region you’re using. Find the AMI ID for your region here.
  • KeyName — the name of AWS EC2 SSH key pair for connecting with the worker nodes.
  • BootstrapArguments — leave empty.
  • VpcId — enter the VPC ID created in Step 2 and used while creating the cluster.
  • Subnets — select the three subnets created in Step 2 above and used while creating the cluster.

Proceed to the Review page, select the check-box acknowledging that the stack might create IAM resources, and click Create.

CloudFormation creates the worker nodes with the VPC settings we entered — three new EC2 instances are created.

Open the Outputs tab and note the value of NodeInstanceRole, as you will need it for the next step: allowing the worker nodes to join our Kubernetes cluster.
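If you’d rather create this stack from the CLI as well, here’s a rough sketch. The parameter names come from the list above; the stack name, node group name and <<...>> placeholders are illustrative, and CAPABILITY_IAM is required because the stack creates IAM resources:

$ aws cloudformation create-stack \
    --region us-west-2 \
    --stack-name eks-worker-nodes \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters \
      ParameterKey=ClusterName,ParameterValue=my_first_cluster \
      ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=<<security-group-id>> \
      ParameterKey=NodeGroupName,ParameterValue=my-node-group \
      ParameterKey=NodeImageId,ParameterValue=<<eks-optimized-ami-id>> \
      ParameterKey=KeyName,ParameterValue=<<ec2-key-pair-name>> \
      ParameterKey=VpcId,ParameterValue=<<vpc-id>> \
      'ParameterKey=Subnets,ParameterValue=<<subnet-id1>>\,<<subnet-id2>>\,<<subnet-id3>>'

# The commas inside the Subnets value are escaped so the CLI treats them as one list parameter.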

To allow the worker nodes to join the cluster, we need to update the aws-auth file.

$ kubectl describe configmap -n kube-system aws-auth 
$ curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/aws-auth-cm.yaml

The aws-auth-cm.yaml file will be downloaded. We need to edit this file and add the NodeInstanceRole created by CloudFormation.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <<ARN of instance role>>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Edit the rolearn attribute, set it to the value of the NodeInstanceRole ARN, and execute the following commands:

$ kubectl apply -f aws-auth-cm.yaml
$ kubectl get nodes -o wide

You should see the worker nodes along with their IP addresses.

Congratulations! You did it!!

Now you can deploy any application and sit back while Kubernetes takes over!!

Debugging issues

Issue verifying the AWS user:

The first time we use the EC2 instance, we need to enter the AWS IAM user’s access key and secret key so that AWS knows which user is making the request.
To configure this, run:

$ aws configure 
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

That should do it!!

Nodes aren’t joining the cluster:

There can be multiple reasons why this is happening:

  1. Verify cluster subnets

Make sure you are launching the worker nodes in subnets that are part of your EKS cluster.
You can log on to the AWS Console → EKS and verify that the subnets are the same as the ones you specified while creating the CloudFormation stack.
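You can also cross-check the cluster’s subnets from the CLI:

$ aws eks --region us-west-2 describe-cluster --name my_first_cluster \
    --query "cluster.resourcesVpcConfig.subnetIds"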

2. Tags missing on EC2 and subnets

Mandatory tag on EC2 (worker nodes):

key   = "kubernetes.io/cluster/<cluster-name>"
value = "owned"

Mandatory tag on subnets:

key   = "kubernetes.io/cluster/<cluster-name>"
value = "shared"
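If a tag is missing, you can add it from the CLI with create-tags (the <<...>> placeholders stand for your own instance and subnet IDs):

$ aws ec2 create-tags --region us-west-2 --resources <<instance-id>> \
    --tags Key=kubernetes.io/cluster/my_first_cluster,Value=owned

$ aws ec2 create-tags --region us-west-2 --resources <<subnet-id>> \
    --tags Key=kubernetes.io/cluster/my_first_cluster,Value=shared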

3. Incorrect AMI ID for worker nodes

If the AMI ID provided whilst creating the CloudFormation stack is invalid, the nodes won’t be able to join the cluster.

The AMI ID can be invalid for 2 reasons:

a. The AMI ID provided is in a different region than the cluster: make sure the region of the AMI ID is the same as that of the cluster.

b. The AMI ID provided is not an EKS-optimized AMI: since we need the EC2 nodes to join EKS, we need to use the EKS-optimized AMI. You can find it for your region here.
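One convenient way to look up the EKS-optimized AMI ID for your region and Kubernetes version is the public SSM parameter AWS publishes for it (adjust the version in the path to match your cluster):

$ aws ssm get-parameter --region us-west-2 \
    --name /aws/service/eks/optimized-ami/1.18/amazon-linux-2/recommended/image_id \
    --query "Parameter.Value" --output text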

That’s all from my side folks!

Please reach out to me in case of any queries; I know how frustrating and vexing EKS can be :D

LinkedIn: www.linkedin.com/in/shubham-s-agarwal

Gmail: shubhamsagarwal10@gmail.com
