Use AWS Controllers for Kubernetes to deploy a Serverless data processing solution with SQS, Lambda and DynamoDB

In this blog post, you will use AWS Controllers for Kubernetes on an Amazon EKS cluster to put together a solution in which data from an Amazon SQS queue is processed by an AWS Lambda function and persisted to an Amazon DynamoDB table.


AWS Controllers for Kubernetes (also known as ACK) leverage Kubernetes Custom Resources and Custom Resource Definitions, giving you the ability to manage and use AWS services directly from Kubernetes without needing to define resources outside of the cluster. The idea behind ACK is to enable Kubernetes users to describe the desired state of AWS resources using the Kubernetes API and configuration language; ACK then takes care of provisioning and managing the AWS resources to match that desired state. This is achieved through service controllers, each responsible for managing the lifecycle of a particular AWS service. Each ACK service controller is packaged into a separate container image, published in its own public repository.
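
To give a sense of the processing path described above, here is a minimal sketch (not the post's actual code) of a Go Lambda handler that consumes SQS messages and persists them to DynamoDB using the AWS SDK for Go v2; the table name and attribute names are illustrative.

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

var client *dynamodb.Client

func init() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		panic(err)
	}
	client = dynamodb.NewFromConfig(cfg)
}

// handler persists each incoming SQS message as an item in a DynamoDB table.
// "demo-table" and the attribute names are illustrative.
func handler(ctx context.Context, event events.SQSEvent) error {
	for _, record := range event.Records {
		_, err := client.PutItem(ctx, &dynamodb.PutItemInput{
			TableName: aws.String("demo-table"),
			Item: map[string]types.AttributeValue{
				"messageId": &types.AttributeValueMemberS{Value: record.MessageId},
				"body":      &types.AttributeValueMemberS{Value: record.Body},
			},
		})
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```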

[Read More]

Use CDK to deploy a complete solution with Kafka, App Runner, EKS and DynamoDB

A previous blog post covered how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster.

This blog will take it a notch further.

  • The solution consists of an MSK Serverless cluster, a producer application running on AWS App Runner, and a consumer application in Kubernetes (EKS) that persists data to DynamoDB.
  • The core components (MSK cluster, EKS and DynamoDB) and the producer application will be provisioned using Infrastructure-as-code with AWS CDK.
  • Since the consumer application on EKS will interact with both MSK and DynamoDB, you will also need to configure appropriate IAM roles.

All the components in this solution have been written in Go.
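
To give a flavour of the infrastructure code, here is a minimal AWS CDK (Go) sketch, with illustrative names, showing how one of the core components (the DynamoDB table) could be declared; it is not the post's actual stack.

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsdynamodb"
	"github.com/aws/jsii-runtime-go"
)

func main() {
	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("msk-app-stack"), nil)

	// DynamoDB table that the consumer application persists records to
	// (table and attribute names are illustrative).
	awsdynamodb.NewTable(stack, jsii.String("orders-table"), &awsdynamodb.TableProps{
		PartitionKey: &awsdynamodb.Attribute{
			Name: jsii.String("id"),
			Type: awsdynamodb.AttributeType_STRING,
		},
		BillingMode: awsdynamodb.BillingMode_PAY_PER_REQUEST,
	})

	// The MSK Serverless cluster, EKS cluster, App Runner service and IAM
	// roles would be declared on the same stack in a similar fashion.

	app.Synth(nil)
}
```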

[Read More]

Write your Kubernetes Infrastructure as Go code - Combine cdk8s with AWS CDK

In an earlier blog post you saw how to use cdk8s with AWS Controllers for Kubernetes (also known as ACK), thanks to the fact that you can import existing Kubernetes Custom Resource Definitions using cdk8s! This made it possible to deploy DynamoDB along with a client application, using cdk8s and Kubernetes.

But what if you could continue using AWS CDK for AWS infrastructure and harness the power of cdk8s (and cdk8s-plus!) to define Kubernetes resources using regular code? Thanks to the native integration between the AWS CDK EKS module and cdk8s, you can have the best of both worlds!
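
Concretely, the integration point looks something like the sketch below. The addCdk8sChart method comes from the AWS CDK EKS module, but all names here are illustrative and the exact Go signature may vary slightly across CDK and cdk8s releases.

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awseks"
	"github.com/aws/jsii-runtime-go"
	"github.com/cdk8s-team/cdk8s-core-go/cdk8s/v2"
)

func main() {
	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("eks-cdk8s-stack"), nil)

	// EKS cluster defined with AWS CDK.
	cluster := awseks.NewCluster(stack, jsii.String("demo-cluster"), &awseks.ClusterProps{
		Version: awseks.KubernetesVersion_V1_27(), // pick a version supported by your CDK release
	})

	// Kubernetes resources defined with cdk8s: add Deployments, Services, etc.
	// to this chart using cdk8s / cdk8s-plus constructs.
	cdk8sApp := cdk8s.NewApp(nil)
	chart := cdk8s.NewChart(cdk8sApp, jsii.String("app-chart"), nil)

	// Native integration: the cdk8s chart is applied to the EKS cluster as
	// part of the CDK deployment (the trailing options argument may not be
	// present in older CDK versions).
	cluster.AddCdk8sChart(jsii.String("app-chart-manifest"), chart, nil)

	app.Synth(nil)
}
```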

[Read More]

Write your Kubernetes Infrastructure as Go code - Manage AWS services

Deploy DynamoDB and a client app using cdk8s along with AWS Controllers for Kubernetes

AWS Controllers for Kubernetes (also known as ACK) are built around the Kubernetes extension concepts of Custom Resources and Custom Resource Definitions. You can use ACK to define and use AWS services directly from Kubernetes. This helps you take advantage of managed AWS services for your Kubernetes applications without needing to define resources outside of the cluster.
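
The general shape of the approach is sketched below. It assumes you have installed the ACK DynamoDB controller and run cdk8s import against its CRDs; the generated package and type names are purely illustrative (your import will produce its own).

```go
package main

import (
	"github.com/aws/jsii-runtime-go"
	"github.com/cdk8s-team/cdk8s-core-go/cdk8s/v2"

	// Hypothetical package generated by `cdk8s import` from the ACK DynamoDB
	// CRDs; your generated names will differ.
	ddbcrd "example.com/myapp/imports/dynamodbservicesk8saws"
)

func main() {
	app := cdk8s.NewApp(nil)
	chart := cdk8s.NewChart(app, jsii.String("ack-dynamodb"), nil)

	// Declare a DynamoDB table as a Kubernetes custom resource; the ACK
	// controller running in the cluster reconciles it into an actual table.
	ddbcrd.NewTable(chart, jsii.String("demo-table"), &ddbcrd.TableProps{
		Spec: &ddbcrd.TableSpec{
			TableName: jsii.String("demo-table"),
			AttributeDefinitions: &[]*ddbcrd.TableSpecAttributeDefinitions{
				{AttributeName: jsii.String("id"), AttributeType: jsii.String("S")},
			},
			KeySchema: &[]*ddbcrd.TableSpecKeySchema{
				{AttributeName: jsii.String("id"), KeyType: jsii.String("HASH")},
			},
			BillingMode: jsii.String("PAY_PER_REQUEST"),
		},
	})

	app.Synth()
}
```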

[Read More]

Manage Redis on AWS from Kubernetes

Using AWS Controllers for Kubernetes and CDK for Kubernetes

In this blog post, you will learn how to use ACK with Amazon EKS for creating a Redis cluster on AWS (with Amazon MemoryDB).

AWS Controllers for Kubernetes (also known as ACK) leverage Kubernetes Custom Resources and Custom Resource Definitions, giving you the ability to manage and use AWS services directly from Kubernetes without needing to define resources outside of the cluster. It supports many AWS services, including S3, DynamoDB, and MemoryDB.

[Read More]

Write your Kubernetes Infrastructure as Go code - Extend cdk8s with custom Constructs

Build a Wordpress deployment as a cdk8s construct

Constructs are the fundamental building blocks of cdk8s (Cloud Development Kit for Kubernetes) - an open-source framework (part of CNCF) with which you can define your Kubernetes applications using regular programming languages (instead of YAML). In Getting started with cdk8s, you saw how to use the core cdk8s library.

You can also use the cdk8s-plus library (also covered in a previous blog post) to reduce the amount of boilerplate code you need to write. With cdk8s-plus, creating a Kubernetes Deployment, specifying its container (and other properties), and exposing it via a Service is three function calls away.
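
Roughly, those three calls look like the sketch below. The cdk8s-plus Go module is versioned against a Kubernetes release, so treat the import path as indicative, and field names can differ slightly between releases.

```go
package main

import (
	"github.com/aws/jsii-runtime-go"
	"github.com/cdk8s-team/cdk8s-core-go/cdk8s/v2"
	kplus "github.com/cdk8s-team/cdk8s-plus-go/cdk8splus26/v2"
)

func main() {
	app := cdk8s.NewApp(nil)
	chart := cdk8s.NewChart(app, jsii.String("nginx"), nil)

	// 1. Create a Deployment.
	deployment := kplus.NewDeployment(chart, jsii.String("web"), &kplus.DeploymentProps{
		Replicas: jsii.Number(2),
	})

	// 2. Add a container to it (PortNumber may be named Port in older releases).
	deployment.AddContainer(&kplus.ContainerProps{
		Image:      jsii.String("nginx"),
		PortNumber: jsii.Number(80),
	})

	// 3. Expose the Deployment via a Service.
	deployment.ExposeViaService(nil)

	app.Synth()
}
```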

[Read More]

Write your Kubernetes Infrastructure as Go code - Using Custom Resource Definitions with cdk8s

cdk8s (Cloud Development Kit for Kubernetes) is an open-source framework (part of CNCF) with which you can define your Kubernetes applications using regular programming languages (instead of YAML). Previous blog posts on this topic covered the getting-started experience and how the cdk8s-plus library improves upon the core cdk8s features. We are going to push cdk8s even further. This blog post will demonstrate how to use Kubernetes Custom Resource Definitions with cdk8s. We will start off with a simple Nginx example and then use Strimzi project CRDs along with Go and cdk8s to define and deploy a Kafka cluster on Kubernetes!
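
The overall pattern is sketched below, assuming you have run cdk8s import against the relevant Strimzi CRD; the generated package and type names are purely illustrative (your import will produce its own), and the KafkaTopic shown here stands in for whichever Strimzi resources you define.

```go
package main

import (
	"github.com/aws/jsii-runtime-go"
	"github.com/cdk8s-team/cdk8s-core-go/cdk8s/v2"

	// Hypothetical package generated by running `cdk8s import` against the
	// Strimzi KafkaTopic CRD; your generated names will differ.
	strimzi "example.com/myapp/imports/kafkastrimziio"
)

func main() {
	app := cdk8s.NewApp(nil)
	chart := cdk8s.NewChart(app, jsii.String("kafka"), nil)

	// Declare a Kafka topic as a Strimzi custom resource; the Strimzi
	// operator running in the cluster reconciles it into an actual topic.
	strimzi.NewKafkaTopic(chart, jsii.String("orders-topic"), &strimzi.KafkaTopicProps{
		Metadata: &cdk8s.ApiObjectMetadata{
			Labels: &map[string]*string{"strimzi.io/cluster": jsii.String("my-cluster")},
		},
		Spec: &strimzi.KafkaTopicSpec{
			Partitions: jsii.Number(3),
			Replicas:   jsii.Number(1),
		},
	})

	app.Synth()
}
```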

[Read More]

Write your Kubernetes Infrastructure as Go code - cdk8s-plus in action!

The previous blog post covered how to get started with cdk8s (Cloud Development Kit for Kubernetes), an open-source framework (part of CNCF) with which you can define your Kubernetes applications using regular programming languages (instead of YAML).

You were able to set up a simple Nginx Deployment and access it via a Service - all of this was done in Go, which was then converted to YAML (using cdk8s synth) and submitted to the cluster using kubectl. This was a good start. However, since the core cdk8s library is pretty low-level (for a good reason!), the code involved a lot of boilerplate (you can refer to the code here).

[Read More]

Write your Kubernetes Infrastructure as Go code - Getting started with cdk8s

Infrastructure as Code (IaC) is a well-established paradigm that refers to the standard practice of treating infrastructure (network, disk, storage, databases, message queues, etc.) in the same way as application code and applying general software engineering practices, including source control versioning, testing, and more. For example, Terraform and AWS CloudFormation are widely adopted technologies that use configuration files/templates to represent the infrastructure components.

Infrastructure-IS-Code - A different way of thinking about this

Imagine you have an application that comprises a Serverless function fronted by an API Gateway, along with a NoSQL database as the backend. Instead of defining it in a static way (using JSON, YAML, etc.), one can represent these components using standard programming language constructs such as classes, methods, etc. Here is a pseudo-code example:
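
Something along these lines, where every type and method is hypothetical and only meant to illustrate the idea:

```go
// Pseudo-code: all constructs below are hypothetical; the point is that
// infrastructure is expressed with classes, methods and variables.
table := NewNoSQLTable("orders", WithPartitionKey("id"))

fn := NewServerlessFunction("order-processor", WithEnv("TABLE_NAME", table.Name()))
table.GrantReadWrite(fn)

api := NewAPIGateway("orders-api")
api.AddRoute("POST /orders", fn)
```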

[Read More]

Extracting Value from IoT Using Azure Cosmos DB, Azure Synapse Analytics, and Confluent Cloud

It’s possible to build a seamless device-to-cloud experience with the integrations between Azure and Confluent Cloud. The example provided in this blog post showcases how the following capabilities of Azure and Confluent Cloud are leveraged:

  • A fully managed platform for data in motion and self-managed connectors from Confluent
  • A secure connection from Confluent to Azure
  • A highly distributed and scalable data management solution from Azure
  • A limitless analytics service offered by Azure that brings together data integration, enterprise data warehousing, and big data analytics
  • A fully managed microservice development platform on Azure

The following diagram provides a high-level architecture of the end-to-end solution:

[Read More]