Enhance local development experience using the Azure Cosmos DB Linux emulator and VS Code

This blog post provides a quick overview and demo of how you can use the Azure Cosmos DB Linux Emulator on Docker (in preview at the time of writing) along with Visual Studio Code in order to enhance your local development experience.

Since the Azure Cosmos DB Linux Emulator is readily available as a Docker image (simply docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator), it’s easy to incorporate it into your existing setup. For instance, it could be part of a larger stack in a docker-compose file (here is an example of how to use it with Apache Kafka). Complementing it with the Visual Studio Code Remote - Containers extension goes a step further, giving you the ability to use a Docker container as a full-fledged development environment.
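To give a sense of what “incorporating it into your setup” can look like, here is a minimal Go sketch (not from the post) that connects an application to a locally running emulator using the azcosmos SDK; the endpoint, key handling, and database name are assumptions for illustration, and the emulator’s self-signed certificate needs to be trusted on the host first:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
)

func main() {
	// The emulator listens on https://localhost:8081 by default; the account key
	// is read from the environment here purely for illustration. Note: the
	// emulator's self-signed TLS certificate must be trusted by the host.
	endpoint := "https://localhost:8081/"
	key := os.Getenv("COSMOS_EMULATOR_KEY")

	cred, err := azcosmos.NewKeyCredential(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azcosmos.NewClientWithKey(endpoint, cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Create a (hypothetical) database to confirm the emulator is reachable.
	if _, err := client.CreateDatabase(context.Background(), azcosmos.DatabaseProperties{ID: "demo"}, nil); err != nil {
		log.Fatal(err)
	}
	log.Println("connected to the Cosmos DB emulator and created database 'demo'")
}
```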

[Read More]

Kubexpose: A Kubernetes Operator, for fun and profit!

Say you have a web service running as a Kubernetes Deployment. There are a bunch of ways to access it over a public URL, but Kubexpose makes it easy to do so. It’s a Kubernetes Operator backed by a Custom Resource Definition and the corresponding controller implementation.

Kubexpose is built using kubebuilder and is available on GitHub.
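For a rough idea of the moving parts, here is a hypothetical, kubebuilder-style controller skeleton in Go; the type name and the steps in the comments are placeholders, not the actual Kubexpose code (see the repo for that):

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// KubexposeReconciler is a hypothetical reconciler for the Kubexpose custom resource.
type KubexposeReconciler struct {
	client.Client
}

// Reconcile is invoked whenever a Kubexpose resource (or an object it owns) changes;
// it drives the cluster toward the state declared in the resource's spec.
func (r *KubexposeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the Kubexpose resource named in the request.
	// 2. Create or refresh the objects needed to expose the target Deployment
	//    over a public URL.
	// 3. Update the resource status (e.g. with the URL) and requeue if needed.
	return ctrl.Result{}, nil
}
```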

To try it out, jump to the next section, or scroll down to “How it works?” to learn more.

[Read More]

Getting started with Kafka Connector for Azure Cosmos DB using Docker

Having a local development environment is quite handy when trying out a new service or technology, and Docker has emerged as the de-facto choice in such cases. It is especially useful in scenarios where you’re trying to integrate multiple services, and it gives you the ability to start fresh before each run.

This blog post is a getting started guide for the Kafka Connector for Azure Cosmos DB. All the components (including Azure Cosmos DB) will run on your local machine, thanks to:

[Read More]

Redis Streams in Action - Part 4 (Serverless Go app to monitor tweets processor)

Welcome to this series of blog posts, which covers Redis Streams with the help of a practical example. We will use a sample application to make Twitter data available for search and query in real time. RediSearch and Redis Streams serve as the backbone of this solution, which consists of several co-operating components, each of which will be covered in a dedicated blog post.

The code is available in this GitHub repo - https://github.com/abhirockzz/redis-streams-in-action

We will continue from where we left off in the previous blog post and see how to build a monitoring app that makes the overall system more robust in the face of high load or failures. Very often, data processing applications either slow down (due to high data volumes) or may even crash/stop due to circumstances beyond our control. If this happens to our tweets processing application, the messages that were assigned to a specific instance are left unprocessed. The monitoring component covered in this blog post checks for pending tweets (using XPENDING), claims them (XCLAIM), processes them (stores them as a HASH using HSET), and finally acknowledges them (XACK).
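To make that flow concrete, here is a rough Go sketch of the same check-claim-process-acknowledge loop using the go-redis client; the stream, group, and consumer names are hypothetical, and the real implementation lives in the repo linked above:

```go
package main

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

func monitorPending(ctx context.Context, rdb *redis.Client) error {
	// 1. Find entries that were delivered to a consumer but never acknowledged (XPENDING).
	pending, err := rdb.XPendingExt(ctx, &redis.XPendingExtArgs{
		Stream: "tweets_stream",
		Group:  "tweets_group",
		Start:  "-",
		End:    "+",
		Count:  10,
	}).Result()
	if err != nil {
		return err
	}

	for _, p := range pending {
		// 2. Claim the stuck entry for this monitoring consumer (XCLAIM).
		msgs, err := rdb.XClaim(ctx, &redis.XClaimArgs{
			Stream:   "tweets_stream",
			Group:    "tweets_group",
			Consumer: "monitor",
			MinIdle:  30 * time.Second,
			Messages: []string{p.ID},
		}).Result()
		if err != nil {
			return err
		}
		for _, m := range msgs {
			// 3. Process it: store the tweet as a HASH so RediSearch can index it (HSET).
			if err := rdb.HSet(ctx, "tweet:"+m.ID, m.Values).Err(); err != nil {
				return err
			}
			// 4. Acknowledge it so it drops off the pending entries list (XACK).
			if err := rdb.XAck(ctx, "tweets_stream", "tweets_group", m.ID).Err(); err != nil {
				return err
			}
		}
	}
	return nil
}
```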

[Read More]

Redis Streams in Action - Part 3 (Java app to process tweets with Redis Streams)

Welcome to this series of blog posts, which covers Redis Streams with the help of a practical example. We will use a sample application to make Twitter data available for search and query in real time. RediSearch and Redis Streams serve as the backbone of this solution, which consists of several co-operating components, each of which will be covered in a dedicated blog post.

The code is available in this GitHub repo - https://github.com/abhirockzz/redis-streams-in-action

This blog post covers a Java-based tweets processor application whose role is to pick up tweets from Redis Streams and store them (as a HASH) so that they can be queried using RediSearch (the accurate term for this in RediSearch is “indexing documents”). You will deploy the application to Azure, validate it, and run a few RediSearch queries to search the tweets. Finally, there is a section where we walk through the code to understand “how things work”.
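The processor in the post is written in Java, but purely to illustrate the flow, here is a rough Go sketch (go-redis client, hypothetical key/group/consumer names) of reading a batch of tweets via a consumer group, storing each one as a HASH, and acknowledging it:

```go
package main

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

func processOnce(ctx context.Context, rdb *redis.Client) error {
	// Read a batch of new tweets on behalf of the consumer group (XREADGROUP).
	streams, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
		Group:    "tweets_group",
		Consumer: "processor-1",
		Streams:  []string{"tweets_stream", ">"}, // ">" = only never-delivered entries
		Count:    10,
		Block:    5 * time.Second,
	}).Result()
	if err == redis.Nil {
		return nil // nothing arrived within the block window
	}
	if err != nil {
		return err
	}
	for _, s := range streams {
		for _, m := range s.Messages {
			// Store the tweet as a HASH so RediSearch can index it (HSET)...
			if err := rdb.HSet(ctx, "tweet:"+m.ID, m.Values).Err(); err != nil {
				return err
			}
			// ...then acknowledge it so it leaves the pending entries list (XACK).
			if err := rdb.XAck(ctx, s.Stream, "tweets_group", m.ID).Err(); err != nil {
				return err
			}
		}
	}
	return nil
}
```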

[Read More]

Redis Streams in Action — Part 2 (Tweets consumer app)

Welcome to this series of blog posts, which covers Redis Streams with the help of a practical example. We will use a sample application to make Twitter data available for search and query in real time. RediSearch and Redis Streams serve as the backbone of this solution, which consists of several co-operating components, each of which will be covered in a dedicated blog post.

The code is available in this GitHub repo - https://github.com/abhirockzz/redis-streams-in-action

In this part, we look at the service that interacts with the Twitter Streaming API to consume tweets and move them on to the next stage in the processing pipeline.
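Conceptually, handing a tweet over to the pipeline boils down to an XADD on a Redis Stream. Here is a tiny Go sketch of that step (go-redis client; the stream key and field names are hypothetical, and the Twitter API plumbing is omitted):

```go
package main

import (
	"context"

	"github.com/go-redis/redis/v8"
)

// publishTweet appends a consumed tweet to the stream; the processor covered in
// the next part reads it from here via a consumer group.
func publishTweet(ctx context.Context, rdb *redis.Client, id, user, text string) error {
	return rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: "tweets_stream",
		Values: map[string]interface{}{"id": id, "user": user, "text": text},
	}).Err()
}
```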

[Read More]

Redis Streams in Action: Part 1 (Intro and overview)

Welcome to this series of blog posts, which covers Redis Streams with the help of a practical example. We will use a sample application to make Twitter data available for search and query in real time. RediSearch and Redis Streams serve as the backbone of this solution, which consists of several co-operating components, each of which will be covered in a dedicated blog post.

The code is available in this GitHub repo - https://github.com/abhirockzz/redis-streams-in-action

This first part explores the use case and motivations, and provides a high-level overview of the Redis features used in the solution.

[Read More]

RediSearch in Action

Redis has a versatile set of data structures, ranging from simple Strings all the way to powerful abstractions such as Redis Streams. The native data types can take you a long way, but there are certain use cases that may require a workaround. One example is the need for secondary indexes in Redis, in order to go beyond key-based search/lookup and get richer query capabilities. Though you can use Sorted Sets, Lists, and so on to get the job done, you’ll need to factor in some trade-offs.
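As a taste of what RediSearch adds, here is a small, hypothetical Go sketch that sends the raw FT.CREATE and FT.SEARCH commands through the go-redis client to build a secondary index over HASHes and run a full-text query; the index name, key prefix, and fields are made up for illustration:

```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Create a secondary index over HASHes whose keys start with "tweet:".
	if err := rdb.Do(ctx, "FT.CREATE", "tweets-index", "ON", "HASH",
		"PREFIX", "1", "tweet:",
		"SCHEMA", "user", "TEXT", "text", "TEXT").Err(); err != nil {
		fmt.Println("index may already exist:", err)
	}

	// Full-text query across the indexed documents, beyond key-based lookup.
	res, err := rdb.Do(ctx, "FT.SEARCH", "tweets-index", "redis", "LIMIT", "0", "10").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(res)
}
```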

[Read More]