Write your Kubernetes Infrastructure as Go code - Combine cdk8s with AWS CDK
In the previous lesson, you imported existing Kubernetes Custom Resource Definitions using cdk8s and deployed DynamoDB along with a client application.
But what if you could continue using AWS CDK for AWS infrastructure and harness the power of cdk8s (and cdk8s-plus!) to define Kubernetes resources using regular code? Thanks to the native integration between the AWS EKS module and cdk8s, you can have the best of both worlds!
The goal of this lesson is to demonstrate that with a few examples. We will start off with a simple (Nginx-based) example before moving on to a full-fledged application stack (including DynamoDB, etc.). Both will use the Go programming language, which is well supported in AWS CDK as well as cdk8s.
All the code is available in this GitHub repo: https://github.com/abhirockzz/cdk8s-for-go-developers
Prerequisites
To follow along step-by-step, in addition to an AWS account, you will need the following CLIs: AWS CLI, cdk8s CLI, and kubectl. Also, don't forget to install AWS CDK, the Go programming language (v1.16 or above), and Docker, if you don't have them already.
Keeping it simple with Nginx on EKS
As with most things in life, there are two ways - the easy way and the hard way ;) You will see both of them! Let's try things out first, see them working, and then look at the code.
To start off, clone the repo and change to the right directory:
git clone https://github.com/abhirockzz/cdk8s-for-go-developers
cd cdk8s-for-go-developers/part6-cdk-eks-cdk8s/cdk-cdk8s-nginx-eks
To setup everything, all you need is a single command:
cdk deploy
You can also use cdk synth to generate and inspect the CloudFormation template first.
You will be prompted to confirm. Once you do that, the process will kick off - it will take some time since lots of AWS resources will be created, including the VPC, the EKS cluster, etc. Feel free to check the AWS CloudFormation console to track the progress.
Once the process is complete, you need to connect to the EKS cluster using kubectl. The command required for this will be available in the output of the cdk deploy process (in the terminal), or you can refer to the Outputs section of the AWS CloudFormation stack.
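If you prefer to construct the command yourself, the standard AWS CLI invocation looks like this (a sketch, assuming the cluster name demo-eks-cluster used in the stack below and your own region):
aws eks update-kubeconfig --name demo-eks-cluster --region <enter region e.g. us-east-1>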
Once you've configured kubectl to point to your EKS cluster, you can check the Nginx Deployment and Service.
kubectl get deployment
# output
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment-cdk8s 1/1 1 1 1m
nginx-deployment-cdk 1/1 1 1 1m
You will see that two Deployments have been created - more on this soon. Similarly, if you check the Services (kubectl get svc), you should see two of them: nginx-service-cdk and nginx-service-cdk8s.
To access Nginx, pick the EXTERNAL-IP of either of the two Services. For example:
APP_URL=$(kubectl get service/nginx-service-cdk -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo $APP_URL
# to access nginx (notice we are using port 9090)
curl -i http://$APP_URL:9090
If you get a Could not resolve host error while accessing the load balancer URL, wait for a minute or so and retry.
Behind the scenes
Let's look at the code now - this will clarify why we have two Nginx Deployments.
Thanks to AWS CDK, VPC creation is a one-liner with the awsec2.NewVpc function, and creating an EKS cluster isn't too hard either!
func NewNginxOnEKSStack(scope constructs.Construct, id string, props *CdkStackProps) awscdk.Stack {
//...
vpc := awsec2.NewVpc(stack, jsii.String("demo-vpc"), nil)
eksSecurityGroup := awsec2.NewSecurityGroup(stack, jsii.String("eks-demo-sg"),
&awsec2.SecurityGroupProps{
Vpc: vpc,
SecurityGroupName: jsii.String("eks-demo-sg"),
AllowAllOutbound: jsii.Bool(true)})
eksCluster := awseks.NewCluster(stack, jsii.String("demo-eks"),
&awseks.ClusterProps{
ClusterName: jsii.String("demo-eks-cluster"),
Version: awseks.KubernetesVersion_V1_21(),
Vpc: vpc,
SecurityGroup: eksSecurityGroup,
VpcSubnets: &[]*awsec2.SubnetSelection{
{Subnets: vpc.PrivateSubnets()}},
DefaultCapacity: jsii.Number(2),
DefaultCapacityInstance: awsec2.InstanceType_Of(awsec2.InstanceClass_BURSTABLE3, awsec2.InstanceSize_SMALL),
DefaultCapacityType: awseks.DefaultCapacityType_NODEGROUP,
OutputConfigCommand: jsii.Bool(true),
EndpointAccess: awseks.EndpointAccess_PUBLIC()})
//...
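Although not shown above, the remainder of the stack presumably invokes both deployment helpers covered next, which is why you saw two Deployments and two Services earlier. A minimal sketch of that wiring, under that assumption:
// assumption: the elided part of the stack calls both helper functions,
// one for each of the two approaches described below
deployNginxUsingCDK(eksCluster)
deployNginxUsingCDK8s(eksCluster)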
Nginx on Kubernetes, the hard way!
Now we look at two different ways of creating Nginx, starting with the "hard" way. In this case, we use AWS CDK (not cdk8s) to define the Deployment and Service resources.
func deployNginxUsingCDK(eksCluster awseks.Cluster) {
appLabel := map[string]*string{
"app": jsii.String("nginx-eks-cdk"),
}
deployment := map[string]interface{}{
"apiVersion": jsii.String("apps/v1"),
"kind": jsii.String("Deployment"),
"metadata": map[string]*string{
"name": jsii.String("nginx-deployment-cdk"),
},
"spec": map[string]interface{}{
"replicas": jsii.Number(1),
"selector": map[string]map[string]*string{
"matchLabels": appLabel,
},
"template": map[string]interface{}{
"metadata": map[string]map[string]*string{
"labels": appLabel,
},
"spec": map[string][]map[string]interface{}{
"containers": {
{
"name": jsii.String("nginx"),
"image": jsii.String("nginx"),
"ports": []map[string]*float64{
{
"containerPort": jsii.Number(80),
},
},
},
},
},
},
},
}
service := map[string]interface{}{
"apiVersion": jsii.String("v1"),
"kind": jsii.String("Service"),
"metadata": map[string]*string{
"name": jsii.String("nginx-service-cdk"),
},
"spec": map[string]interface{}{
"type": jsii.String("LoadBalancer"),
"ports": []map[string]*float64{
{
"port": jsii.Number(9090),
"targetPort": jsii.Number(80),
},
},
"selector": appLabel,
},
}
eksCluster.AddManifest(jsii.String("app-deployment"), &service, &deployment)
}
Finally, to create this in EKS, we invoke AddManifest (think of it as the programmatic equivalent of kubectl apply). This works, but there are a few gaps in this approach:
- We are not able to reap the benefits of Go being a strongly typed language. That's because the API is loosely typed, thanks to map[string]interface{} everywhere. This makes it very error prone (I made a few mistakes too!)
- The verbosity is apparent as well. It seems as if we are writing YAML in Go - not too much of an improvement!
Is there a better way..?
Let's look at the second function, deployNginxUsingCDK8s - by the name, it's obvious that we used cdk8s (not just CDK):
func deployNginxUsingCDK8s(eksCluster awseks.Cluster) {
app := cdk8s.NewApp(nil)
eksCluster.AddCdk8sChart(jsii.String("nginx-eks-chart"), NewNginxChart(app, "nginx-cdk8s", nil), nil)
}
This looks "too easy" to to be true! But it's made possible due to the inter-operability between CDK and cdk8s
. What this implies is that, you can use define Kubernetes resources using cdk8s
Chart
s and apply them to an EKS cluster created with CDK (this makes it sort of a hybrind system).
The hero of our story is the AddCdk8sChart function, which accepts a constructs.Construct (remember, everything is a construct!). In this case, the Construct happens to be a cdk8s.Chart that's returned by the NewNginxChart function - so let's take a look at that.
func NewNginxChart(scope constructs.Construct, id string, props *MyChartProps) cdk8s.Chart {
//....
dep := cdk8splus22.NewDeployment(chart, jsii.String("nginx-deployment"),
&cdk8splus22.DeploymentProps{
Metadata: &cdk8s.ApiObjectMetadata{
Name: jsii.String("nginx-deployment-cdk8s")}})
dep.AddContainer(&cdk8splus22.ContainerProps{
Name: jsii.String("nginx-container"),
Image: jsii.String("nginx"),
Port: jsii.Number(80)})
dep.ExposeViaService(&cdk8splus22.DeploymentExposeViaServiceOptions{
Name: jsii.String("nginx-service-cdk8s"),
ServiceType: cdk8splus22.ServiceType_LOAD_BALANCER,
Ports: &[]*cdk8splus22.ServicePort{{
Port: jsii.Number(9090),
TargetPort: jsii.Number(80)}}})
return chart
}
This should look familiar - a strongly typed, compact, and expressive API! I don't even need to walk you through this since it's so readable - we use cdk8s-plus to create an Nginx Deployment, add the container info, and finally expose it via a Service so that we can access Nginx from outside of EKS.
This was a simple enough example to help bring out the difference between the two approaches. The next scenario is different - in addition to the EKS cluster, it has DynamoDB along with a URL shortener application that will be deployed to EKS.
End to end example: DynamoDB along with an application on EKS
Instead of creating a new EKS cluster from scratch, we will re-use the existing cluster created as a result of the previous example - this is a good opportunity to look at how you can reference an existing EKS cluster in your CDK code. As expected, we need to create the DynamoDB table as well.
Just like in the previous example, let's try out the solution before digging into the code. Change into the right directory first:
cd part6-cdk-eks-cdk8s/cdk-cdk8s-dynamodb-app-eks
Since the URL shortener application has to make API calls to DynamoDB, we need to configure IAM Roles for Service Accounts (also known as IRSA). Refer to https://docs.aws.amazon.com/eks/latest/userguide/create-service-account-iam-policy-and-role.html
Define IAM roles for the application
Start by creating a Kubernetes Service Account:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-dynamodb-app-sa
EOF
To confirm -
kubectl get serviceaccount/eks-dynamodb-app-sa -o yaml
Set the required environment variables - your AWS account ID, the EKS cluster name and region, and the OIDC identity provider:
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export EKS_CLUSTER_NAME=<enter cluster name>
export AWS_REGION=<enter region e.g. us-east-1>
OIDC_PROVIDER=$(aws eks describe-cluster --name $EKS_CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
Create a JSON file with Trusted Entities for the role:
read -r -d '' TRUST_RELATIONSHIP <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:default:eks-dynamodb-app-sa"
        }
      }
    }
  ]
}
EOF
echo "${TRUST_RELATIONSHIP}" > trust.json
Check -
cat trust.json
Now, create the IAM role:
export ROLE_NAME=dynamodb-app-irsa
aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document file://trust.json --description "IRSA for DynamoDB app on EKS"
We will need to create and attach a policy to the role, since we only want to allow PutItem and GetItem operations from our application. Here is the policy JSON - save it to a file named policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PutandGet",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:GetItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/urls"
    }
  ]
}
Create and attach the policy to the role we just created:
aws iam create-policy --policy-name dynamodb-irsa-policy --policy-document file://policy.json
aws iam attach-role-policy --role-name $ROLE_NAME --policy-arn=arn:aws:iam::<enter AWS account ID>:policy/dynamodb-irsa-policy
Finally, we need to associate the IAM role and Service Account:
kubectl annotate serviceaccount -n default eks-dynamodb-app-sa eks.amazonaws.com/role-arn=arn:aws:iam::<enter AWS account ID>:role/dynamodb-app-irsa
Get the EKS kubectl role ARN
To reference an existing EKS cluster in AWS CDK, you need the EKS cluster name and the kubectl role ARN. You can find the role ARN in the Outputs section of the AWS CloudFormation stack.
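If you prefer the CLI, you can also list the stack outputs (a sketch - substitute the name of the CloudFormation stack created in the previous example):
aws cloudformation describe-stacks --stack-name <enter stack name> --query "Stacks[0].Outputs"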
We are ready to deploy the application using CDK. Set the required environment variables, followed by cdk deploy:
You can also use cdk synth to generate and inspect the CloudFormation template first.
export EKS_CLUSTER_NAME=<enter name of EKS cluster>
export KUBECTL_ROLE_ARN=<enter kubectl role ARN>
export SERVICE_ACCOUNT_NAME=eks-dynamodb-app-sa
export APP_PORT=8080
export AWS_REGION=<enter region e.g. us-east-1>
cdk deploy
CDK (and cdk8s) will do all the heavy lifting (we will look at the code very soon):
- A new DynamoDB table will be created
- The Docker image for our application will be built and pushed to ECR
- Kubernetes resources for the URL shortener application will be deployed to the existing EKS cluster
Once the stack creation is complete, check the Kubernetes Deployment and Service:
kubectl get deployment/dynamodb-app
kubectl get pods
kubectl get service/dynamodb-app-service
Testing the URL shortener service is easy, but I will not repeat it here since it's already covered in [a previous blog post](https://dev.to/abhirockzz/write-your-kubernetes-infrastructure-as-go-code-manage-aws-services-3pgi). All you need is the load balancer URL to access the service; then use your browser or curl to save and access URLs.
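For convenience, here is one way to grab that load balancer URL, mirroring the Nginx example (a sketch, assuming the Service publishes a hostname under status.loadBalancer):
APP_URL=$(kubectl get service/dynamodb-app-service -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
echo $APP_URL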
Back to exploring Go code again
Within the stack, we define the DynamoDB table (using awsdynamodb.NewTable) along with the Docker image for our application (with awsecrassets.NewDockerImageAsset):
func NewDynamoDBAppStack(scope constructs.Construct, id string, props *CdkStackProps) awscdk.Stack {
//...
table := awsdynamodb.NewTable(stack, jsii.String("dynamodb-table"),
&awsdynamodb.TableProps{
TableName: jsii.String(tableName),
PartitionKey: &awsdynamodb.Attribute{
Name: jsii.String(dynamoDBPartitionKey),
Type: awsdynamodb.AttributeType_STRING,
},
BillingMode: awsdynamodb.BillingMode_PAY_PER_REQUEST,
RemovalPolicy: awscdk.RemovalPolicy_DESTROY,
})
appDockerImage := awsecrassets.NewDockerImageAsset(stack, jsii.String("app-image"),
&awsecrassets.DockerImageAssetProps{
Directory: jsii.String(appDirectory)})
//...
Then comes the interesting part, where we get a reference to our existing EKS cluster and use AddCdk8sChart (just like before) to deploy the application to EKS.
//...
eksCluster := awseks.Cluster_FromClusterAttributes(stack, jsii.String("existing cluster"),
&awseks.ClusterAttributes{
ClusterName: jsii.String(eksClusterName),
KubectlRoleArn: jsii.String(kubectlRoleARN)})
app := cdk8s.NewApp(nil)
appProps := NewAppChartProps(appDockerImage.ImageUri(), table.TableName())
eksCluster.AddCdk8sChart(jsii.String("dynamodbapp-chart"), NewDynamoDBAppChart(app, "dynamodb-cdk8s", &appProps), nil)
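The AppChartProps type itself is not shown in this post; here is a hypothetical sketch, inferred purely from how the props are used in the chart below (the field names and the use of environment variables are assumptions, not the author's actual code):
// hypothetical props type - field names inferred from usage in the chart below
type AppChartProps struct {
	image              *string
	tableName          *string
	serviceAccountName string
	region             string
}

func NewAppChartProps(image *string, tableName *string) AppChartProps {
	// assumption: the service account name and region come from the
	// SERVICE_ACCOUNT_NAME and AWS_REGION environment variables exported earlier
	return AppChartProps{
		image:              image,
		tableName:          tableName,
		serviceAccountName: os.Getenv("SERVICE_ACCOUNT_NAME"),
		region:             os.Getenv("AWS_REGION"),
	}
}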
The NewDynamoDBAppChart function defines the Deployment and Service. Unlike the earlier Nginx example, which had static values, this application takes in dynamic values - specifically the DynamoDB table name (which is used as the container environment variable TABLE_NAME). Also notice that we explicitly add the name of the Kubernetes Service Account (for IRSA) that we created in the previous step.
func NewDynamoDBAppChart(scope constructs.Construct, id string, props *AppChartProps) cdk8s.Chart {
//...
dep := cdk8splus22.NewDeployment(chart, jsii.String("dynamodb-app-deployment"), &cdk8splus22.DeploymentProps{
Metadata: &cdk8s.ApiObjectMetadata{
Name: jsii.String("dynamodb-app")},
ServiceAccount: cdk8splus22.ServiceAccount_FromServiceAccountName(
chart,
jsii.String("aws-irsa"),
jsii.String(props.serviceAccountName))})
container := dep.AddContainer(//.. omitted for brevity)
container.Env().AddVariable(jsii.String("TABLE_NAME"), cdk8splus22.EnvValue_FromValue(props.tableName))
container.Env().AddVariable(jsii.String("AWS_REGION"), cdk8splus22.EnvValue_FromValue(&props.region))
dep.ExposeViaService(//.. omitted for brevity)
return chart
}
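The container and service calls are omitted for brevity in the post; they likely mirror the Nginx chart shown earlier. A minimal sketch under that assumption (the container name is illustrative, the image comes from the hypothetical props sketched above, and the port matches the APP_PORT value of 8080 exported earlier):
container := dep.AddContainer(&cdk8splus22.ContainerProps{
	// illustrative values - the image URI is passed in via the chart props
	Name:  jsii.String("dynamodb-app-container"),
	Image: props.image,
	Port:  jsii.Number(8080)})
//...
dep.ExposeViaService(&cdk8splus22.DeploymentExposeViaServiceOptions{
	Name:        jsii.String("dynamodb-app-service"),
	ServiceType: cdk8splus22.ServiceType_LOAD_BALANCER})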
Wrap up
We started off with a simple example to showcase the integration between AWS CDK and cdk8s, and how easy it makes things (compared to just using CDK to deploy apps to EKS). Then we moved on to explore a full-fledged scenario where you deployed the infrastructure (DynamoDB, etc.) along with the client application on EKS.
Happy Building!