A previous blog post covered how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster.
[Read More]
Getting started with MSK Serverless and AWS Lambda using Go
In this blog post you will learn how to deploy a Go Lambda function and trigger it in response to events sent to a topic in an MSK Serverless cluster.
[Read More]
MySQL to DynamoDB: Build a streaming data pipeline on AWS using Kafka
Use change data capture with MSK Connect to sync data between Aurora MySQL and DynamoDB
This is the second part of a blog series that provides a step-by-step walkthrough of building data pipelines with Kafka and Kafka Connect.
[Read More]
Build a data pipeline on AWS with Kafka, Kafka Connect and DynamoDB
Integrate DynamoDB with MSK and MSK Connect
There are many ways to stitch together data pipelines: open source components, managed services, ETL tools, and more.
[Read More]
Getting started with Kafka Connector for Azure Cosmos DB using Docker
Having a local development environment is quite handy when trying out a new service or technology, and Docker has emerged as the de facto choice in such cases.
[Read More]
Processing Time-Series Data with Redis and Apache Kafka
RedisTimeSeries is a Redis module that brings a native time series data structure to Redis. Time series solutions that were previously built on top of Sorted Sets (or Redis Streams) can benefit from RedisTimeSeries features such as high-volume inserts, low-latency reads, a flexible query language, down-sampling, and much more!
[Read More]
Getting started with Kafka and Rust: Part 2
This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate, which is itself based on librdkafka (a C library).
[Read More]
Getting started with Kafka and Rust: Part 1
This is a two-part series to help you get started with Rust and Kafka. We will be using the rust-rdkafka crate, which is itself based on librdkafka (a C library).
[Read More]
Real-Time Search and Analytics with Confluent, Azure, Redis, and Spring Cloud
Self-managing a distributed system like Apache Kafka®, along with building and operating Kafka connectors, is complex and resource-intensive. It requires significant Kafka skills and expertise in your organization's development and operations teams.
[Read More]
PostgreSQL pgoutput plugin for change data capture
Set up a Change Data Capture architecture on Azure using Debezium, Postgres and Kafka was a tutorial on using Debezium for change data capture from Azure PostgreSQL and sending the change events to Azure Event Hubs for Kafka; it used the wal2json output plugin.
[Read More]