Use CDK to deploy a complete solution with Kafka, App Runner, EKS and DynamoDB

A previous blog post covered how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster.

This blog post takes it a notch further.

  • The solution consists of an MSK Serverless cluster, a producer application on AWS App Runner, and a consumer application on Kubernetes (EKS) that persists data to DynamoDB.
  • The core components (MSK cluster, EKS and DynamoDB) and the producer application will be provisioned using infrastructure-as-code with AWS CDK (see the sketch after this list).
  • Since the consumer application on EKS will interact with both MSK and DynamoDB, you will also need to configure appropriate IAM roles.

All the components in this solution have been written in Go.
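
To give a flavour of the CDK (in Go) part, here is a minimal, illustrative sketch of provisioning the MSK Serverless cluster and the DynamoDB table - the stack, cluster, and table names are placeholders, and the EKS cluster, App Runner service, and IAM wiring are omitted:

```go
package main

import (
	"github.com/aws/aws-cdk-go/awscdk/v2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsdynamodb"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsec2"
	"github.com/aws/aws-cdk-go/awscdk/v2/awsmsk"
	"github.com/aws/jsii-runtime-go"
)

func main() {
	defer jsii.Close()

	app := awscdk.NewApp(nil)
	stack := awscdk.NewStack(app, jsii.String("MskAppRunnerEksStack"), nil) // placeholder stack name

	// VPC to host the MSK Serverless cluster (and, in the full solution, EKS).
	vpc := awsec2.NewVpc(stack, jsii.String("VPC"), nil)

	// MSK Serverless has no high-level CDK construct, so the L1
	// CfnServerlessCluster is used. IAM is the only supported auth mode.
	awsmsk.NewCfnServerlessCluster(stack, jsii.String("MSKServerless"), &awsmsk.CfnServerlessClusterProps{
		ClusterName: jsii.String("demo-cluster"), // illustrative name
		VpcConfigs: []interface{}{
			&awsmsk.CfnServerlessCluster_VpcConfigProperty{
				SubnetIds: vpc.SelectSubnets(&awsec2.SubnetSelection{
					SubnetType: awsec2.SubnetType_PRIVATE_WITH_EGRESS,
				}).SubnetIds,
			},
		},
		ClientAuthentication: &awsmsk.CfnServerlessCluster_ClientAuthenticationProperty{
			Sasl: &awsmsk.CfnServerlessCluster_SaslProperty{
				Iam: &awsmsk.CfnServerlessCluster_IamProperty{Enabled: jsii.Bool(true)},
			},
		},
	})

	// DynamoDB table that the consumer application writes to.
	awsdynamodb.NewTable(stack, jsii.String("Table"), &awsdynamodb.TableProps{
		PartitionKey: &awsdynamodb.Attribute{
			Name: jsii.String("id"), // illustrative key name
			Type: awsdynamodb.AttributeType_STRING,
		},
		BillingMode: awsdynamodb.BillingMode_PAY_PER_REQUEST,
	})

	app.Synth(nil)
}
```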

[Read More]

Getting started with MSK Serverless and AWS Lambda using Go

In this blog post you will learn how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster.

The following topics have been covered:

  • Use the franz-go Go Kafka client to connect to MSK Serverless using IAM authentication (see the sketch after this list).
  • Write a Go Lambda function to process data in an MSK topic (a handler sketch follows the prerequisites below).
  • Create the infrastructure: VPC, subnets, MSK cluster, Cloud9, etc.
  • Configure Lambda and Cloud9 to access MSK using IAM roles and fine-grained permissions.
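
As a preview of the first item, here is a minimal sketch of connecting to MSK Serverless with franz-go using IAM authentication - the broker endpoint and topic name are placeholders, and credentials are resolved via the default AWS credential chain:

```go
package main

import (
	"context"
	"crypto/tls"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/twmb/franz-go/pkg/kgo"
	saslaws "github.com/twmb/franz-go/pkg/sasl/aws"
)

func main() {
	ctx := context.Background()

	// Load the default AWS credential chain (env vars, shared config, IAM role, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client, err := kgo.NewClient(
		// Placeholder; use your cluster's bootstrap endpoint.
		kgo.SeedBrokers("boot-xxxx.kafka-serverless.us-east-1.amazonaws.com:9098"),
		kgo.DialTLSConfig(&tls.Config{}), // MSK IAM auth requires TLS
		kgo.ConsumeTopics("demo-topic"),  // illustrative topic name
		kgo.SASL(saslaws.ManagedStreamingIAM(func(ctx context.Context) (saslaws.Auth, error) {
			// Hand the current AWS credentials to the SASL mechanism.
			creds, err := cfg.Credentials.Retrieve(ctx)
			if err != nil {
				return saslaws.Auth{}, err
			}
			return saslaws.Auth{
				AccessKey:    creds.AccessKeyID,
				SecretKey:    creds.SecretAccessKey,
				SessionToken: creds.SessionToken,
			}, nil
		})),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```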

Prerequisites

You will need an AWS account, the AWS CLI installed, and a recent version of Go (1.18 or above).
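
To give a sense of the Lambda function itself, here is an illustrative handler built on aws-lambda-go - the event source mapping delivers batches of base64-encoded Kafka records, and the logging here is a stand-in for real processing:

```go
package main

import (
	"context"
	"encoding/base64"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// handler is invoked with a batch of records from the MSK event source mapping.
func handler(ctx context.Context, event events.KafkaEvent) error {
	// Records is a map of "topic-partition" to the records in that batch.
	for topicPartition, records := range event.Records {
		for _, record := range records {
			// Record values arrive base64-encoded.
			payload, err := base64.StdEncoding.DecodeString(record.Value)
			if err != nil {
				return err
			}
			log.Printf("%s offset %d: %s", topicPartition, record.Offset, payload)
		}
	}
	return nil
}

func main() {
	lambda.Start(handler)
}
```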

[Read More]

MySQL to DynamoDB: Build a streaming data pipeline on AWS using Kafka

Use change data capture with MSK Connect to sync data between Aurora MySQL and DynamoDB

This is the second part of the blog series, which provides a step-by-step walkthrough of building data pipelines with Kafka and Kafka Connect. I will be using AWS for demonstration purposes, but the concepts apply to any equivalent setup (e.g. running these components locally in Docker).

This part will show Change Data Capture (CDC) in action. CDC lets you track row-level changes in database tables in response to create, update, and delete operations. For example, in MySQL, these change data events are exposed via the MySQL binary log (binlog).
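
To make that concrete, a Debezium-style change event carries the row state before and after the operation, along with an operation code. Here is an illustrative Go sketch of decoding such an event - the User struct and sample payload are made up for demonstration, while the before/after/op envelope follows Debezium's convention:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// User mirrors an illustrative MySQL table being tracked.
type User struct {
	ID    int    `json:"id"`
	Email string `json:"email"`
}

// ChangeEvent models the Debezium event envelope: row state before and
// after the change, plus the operation ("c"=create, "u"=update, "d"=delete).
type ChangeEvent struct {
	Before *User  `json:"before"`
	After  *User  `json:"after"`
	Op     string `json:"op"`
}

func main() {
	// Illustrative payload for an update captured from the binlog.
	payload := []byte(`{"before":{"id":1,"email":"old@example.com"},"after":{"id":1,"email":"new@example.com"},"op":"u"}`)

	var event ChangeEvent
	if err := json.Unmarshal(payload, &event); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("op=%s before=%+v after=%+v\n", event.Op, event.Before, event.After)
}
```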

[Read More]

Build a data pipeline on AWS with Kafka, Kafka connect and DynamoDB

Integrate DynamoDB with MSK and MSK Connect

There are many ways to stitch together data pipelines - open source components, managed services, ETL tools, etc. In the Kafka world, Kafka Connect is the tool of choice for “streaming data between Apache Kafka and other systems”. It has an extensive set of pre-built source and sink connectors, as well as a common framework for Kafka connectors which standardises the integration of other data systems with Kafka and makes it simpler to develop your own connectors, should the need arise.
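
As a quick illustration of the framework, a connector is typically registered by POSTing its JSON configuration to the Kafka Connect REST API. Here is a hedged Go sketch using Kafka's built-in FileStreamSinkConnector as a stand-in for a real sink - the endpoint, connector name, topic, and file path are all placeholders:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Illustrative connector definition; a real pipeline would use a
	// purpose-built sink (e.g. a DynamoDB sink connector) and its config keys.
	connector := map[string]interface{}{
		"name": "demo-sink",
		"config": map[string]string{
			"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
			"tasks.max":       "1",
			"topics":          "demo-topic",
			"file":            "/tmp/demo-sink.txt",
		},
	}

	body, err := json.Marshal(connector)
	if err != nil {
		log.Fatal(err)
	}

	// POST to the Connect worker's REST endpoint (placeholder address).
	resp, err := http.Post("http://localhost:8083/connectors", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```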

[Read More]

Getting started with Kafka Connector for Azure Cosmos DB using Docker

Having a local development environment is quite handy when trying out a new service or technology. Docker has emerged as the de-facto choice in such cases. It is especially useful in scenarios where you’re trying to integrate multiple services, and gives you the ability to start fresh before each run.

This blog post is a getting started guide for the Kafka Connector for Azure Cosmos DB. All the components (including Azure Cosmos DB) will run on your local machine, thanks to:

[Read More]

Processing Time-Series Data with Redis and Apache Kafka

RedisTimeSeries is a Redis Module that brings native Time Series data structure to Redis. Time Series solutions which were earlier built on top of Sorted Sets (or Redis Streams) can benefit from RedisTimeSeries features such as high volume inserts, low latency reads, flexible query language, down-sampling and much more!
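
As a quick taste, RedisTimeSeries commands such as TS.ADD and TS.RANGE can be issued from Go through a generic command interface - a minimal sketch using go-redis, where the server address and key name are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	client := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // placeholder address

	// Append a sample (timestamp "*" = server time) to a time series key;
	// TS.ADD creates the key if it does not already exist.
	if err := client.Do(ctx, "TS.ADD", "sensor:temperature", "*", 24.5).Err(); err != nil {
		log.Fatal(err)
	}

	// Read back every sample in the series ("-" to "+" covers the full range).
	res, err := client.Do(ctx, "TS.RANGE", "sensor:temperature", "-", "+").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(res)
}
```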

Generally speaking, Time Series data is (relatively) simple. Having said that, we need to factor in other characteristics as well:

[Read More]

Getting started with Kafka and Rust: Part 2

This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate, which itself is based on librdkafka (a C library).

In this post we will cover the Kafka Consumer API.

Initial setup

Make sure you have a Kafka broker installed - a local setup should suffice. You will also need Rust installed (version 1.45 or above).

[Read More]

Getting started with Kafka and Rust: Part 1

This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate, which itself is based on librdkafka (a C library).

In this post we will cover the Kafka Producer API.

Initial setup

Make sure you have a Kafka broker installed - a local setup should suffice. You will also need Rust installed (version 1.45 or above).

[Read More]