Using AWS Lambda Function URL to build a Serverless backend for Slack

A combination of AWS Lambda and Amazon API Gateway is a widely used architecture for serverless microservices and API-based solutions. They enable developers to focus on their applications instead of spending time provisioning and managing servers.

API Gateway is a feature-rich offering that includes support for different API types (HTTP, REST, WebSocket), multiple authentication schemes, API versioning, canary deployments and much more! However, if your requirements are simpler and all you need is an HTTP(S) endpoint for your Lambda function (for example, to serve as a webhook), you can use Lambda Function URLs! When you create a function URL, Lambda automatically generates a unique HTTP(S) endpoint that is dedicated to your Lambda function.
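
To make this concrete, here is a minimal sketch of a Python handler sitting behind a function URL, assuming the endpoint has been registered as a Slack Events API webhook (the Slack subscription itself is an assumption; the event shape is the API Gateway HTTP API v2.0 payload that function URLs use):

```python
import json

def handler(event, context):
    # Function URLs deliver the API Gateway HTTP API v2.0 payload:
    # the HTTP method sits under requestContext.http, the path in rawPath
    method = event["requestContext"]["http"]["method"]
    body = event.get("body") or ""
    payload = json.loads(body) if body else {}

    # Slack's one-time URL verification handshake: echo back the challenge
    if payload.get("type") == "url_verification":
        return {"statusCode": 200, "body": payload["challenge"]}

    # ... handle actual Slack events here ...
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"ok": True, "method": method}),
    }
```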

[Read More]

Now Generally Available – Partial Document Update in Azure Cosmos DB

Originally posted on https://devblogs.microsoft.com/cosmosdb/partial-document-update-ga/

We’re excited to announce the general availability of partial document update for the Azure Cosmos DB Core (SQL) API, first announced at Microsoft Ignite! This new feature makes it possible to perform path-level updates to specific fields/properties in a single document without needing to perform a full document read-replace operation. Partial document update is currently supported in the Azure Cosmos DB .NET SDK, Java SDK, Node.js SDK, and in stored procedures.

Document updates – then and now

To put things in context, here is a refresher on how one would typically use a document replace operation: read the document, update it locally (client side), including any optimistic concurrency control (OCC) checks if necessary, and finally call the replace operation with the updated document.
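
As a rough illustration, here is that read-replace round trip (with an OCC check via the document's _etag) sketched with the Python SDK, followed by the single-call patch equivalent. The account, database, and item names are placeholders; note the post lists .NET, Java, and Node.js SDK support, so the Python patch_item call is shown purely to illustrate the shape of a patch operation:

```python
from azure.core import MatchConditions
from azure.cosmos import CosmosClient

# hypothetical account, database, and container
client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("mydb").get_container_client("items")

# then: read the whole document, change it locally, replace it,
# using its _etag as the optimistic concurrency (OCC) check
doc = container.read_item(item="item-1", partition_key="item-1")
doc["status"] = "shipped"
container.replace_item(
    item=doc["id"],
    body=doc,
    etag=doc["_etag"],
    match_condition=MatchConditions.IfNotModified,  # fail if modified since read
)

# now: send just the path-level change in a single patch call
container.patch_item(
    item="item-1",
    partition_key="item-1",
    patch_operations=[{"op": "set", "path": "/status", "value": "shipped"}],
)
```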

[Read More]

Extracting Value from IoT Using Azure Cosmos DB, Azure Synapse Analytics, and Confluent Cloud

With the integrations between Azure and Confluent Cloud, it’s possible to build a seamless device-to-cloud experience. The example in this blog post showcases how the following capabilities of Azure and Confluent Cloud are leveraged:

  • A fully managed platform for data in motion and self-managed connectors from Confluent
  • A secure connection from Confluent to Azure
  • A highly distributed and scalable data management solution from Azure
  • A limitless analytics service offered by Azure that brings together data integration, enterprise data warehousing, and big data analytics
  • A fully managed microservice development platform on Azure

The following diagram provides a high-level architecture of the end-to-end solution:

[Read More]

Enhance local development experience using the Azure Cosmos DB Linux emulator and VS Code

This blog post provides a quick overview and demo of how you can use the Azure Cosmos DB Linux Emulator on Docker (in preview at the time of writing) along with Visual Studio Code to enhance your local development experience.

Since the Azure Cosmos DB Linux Emulator is readily available as a Docker image (simply docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator), it’s easy to incorporate it into your existing setup. For instance, it could sit in a docker-compose file as part of a larger stack (here is an example of how to use it with Apache Kafka). However, complementing it with the Visual Studio Code Remote - Containers extension gives you the ability to leverage a Docker container as a full-fledged development environment.
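
For example, once the emulator container is up, pointing a client at it takes only a few lines. A small sketch with the Python SDK, assuming the emulator is listening on its default port 8081 (the key below is the emulator's fixed, publicly documented key, not a secret; the database and container names are made up):

```python
import urllib3
from azure.cosmos import CosmosClient, PartitionKey

EMULATOR_ENDPOINT = "https://localhost:8081"
# every emulator installation uses this same well-known key
EMULATOR_KEY = "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy69XkXIw/Jw=="

urllib3.disable_warnings()  # the emulator's TLS certificate is self-signed
client = CosmosClient(EMULATOR_ENDPOINT, EMULATOR_KEY, connection_verify=False)

db = client.create_database_if_not_exists("localdb")
container = db.create_container_if_not_exists(
    "items", partition_key=PartitionKey(path="/id")
)
container.upsert_item({"id": "1", "note": "hello from the local emulator"})
```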

[Read More]

Kubexpose: A Kubernetes Operator, for fun and profit!

Say you have a web service running as a Kubernetes Deployment. There are a bunch of ways to access it over a public URL, but Kubexpose makes it easy to do so. It’s a Kubernetes Operator backed by a Custom Resource Definition and the corresponding controller implementation.

Kubexpose is built using kubebuilder and is available on GitHub.

To try it out, jump into the next section, or scroll down to "How it works?" to learn more.

[Read More]

Getting started with Azure Data Explorer and Azure Synapse Analytics for Big Data processing

Azure Data Explorer is a fully managed data analytics service that can handle large volumes of diverse data from any data source, such as websites, applications, IoT devices, and more. Azure Data Explorer makes it simple to ingest this data and enables you to run complex ad hoc queries on it in seconds. It scales quickly to terabytes of data in minutes, allowing rapid iterations of data exploration to discover relevant insights. It is already integrated with Apache Spark via the Data Source and Data Sink Connector, and is used to power solutions for near real-time data processing, data archiving, machine learning, and more.
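
To give a feel for that Spark integration, a read via the connector might look roughly like this in PySpark (run inside an existing Spark session such as a notebook; the cluster, database, query, and AAD app values are placeholders):

```python
# a sketch of reading from Azure Data Explorer with the Spark connector
df = (
    spark.read.format("com.microsoft.kusto.spark.datasource")
    .option("kustoCluster", "https://<cluster>.<region>.kusto.windows.net")
    .option("kustoDatabase", "mydb")
    .option("kustoQuery", "StormEvents | take 100")  # any KQL query works here
    .option("kustoAadAppId", "<app-id>")
    .option("kustoAadAppSecret", "<app-secret>")
    .option("kustoAadAuthorityID", "<tenant-id>")
    .load()
)
df.show(5)
```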

[Read More]

Securely access Azure SQL Database from Azure Synapse

The Apache Spark connector for Azure SQL Database (and SQL Server) enables these databases to be used as input data sources and output data sinks for Apache Spark jobs. You can use the connector in Azure Synapse Analytics for big data analytics on real-time transactional data and to persist results for ad-hoc queries or reporting.

At the time of writing, there is no linked service or AAD pass-through support for the Azure SQL connector in Azure Synapse Analytics, but you can use other options, such as Azure Active Directory authentication or direct SQL authentication (username and password based). A secure way of doing this is to store the Azure SQL Database credentials in Azure Key Vault (as a Secret); this is what's covered in this short blog post.
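
Sketched in a Synapse PySpark notebook, the flow looks something like this; the Key Vault, secret, server, and table names are placeholders:

```python
from notebookutils import mssparkutils  # available inside Azure Synapse notebooks

# pull the SQL password out of Key Vault instead of hard-coding it
password = mssparkutils.credentials.getSecret("my-key-vault", "sql-password")

# read from Azure SQL Database via the Apache Spark connector
df = (
    spark.read.format("com.microsoft.sqlserver.jdbc.spark")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
    .option("dbtable", "dbo.Orders")
    .option("user", "<sql-user>")
    .option("password", password)
    .load()
)
```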

[Read More]

Getting started with Kafka Connector for Azure Cosmos DB using Docker

Having a local development environment is quite handy when trying out a new service or technology, and Docker has emerged as the de facto choice in such cases. It is especially useful in scenarios where you’re integrating multiple services, and it gives you the ability to start fresh before each run.

This blog post is a getting started guide for the Kafka Connector for Azure Cosmos DB. All the components (including Azure Cosmos DB) will run on your local machine, thanks to:

[Read More]

Redis Streams in Action - Part 4 (Serverless Go app to monitor tweets processor)

Welcome to this series of blog posts, which covers Redis Streams with the help of a practical example. We will use a sample application to make Twitter data available for search and query in real time. RediSearch and Redis Streams serve as the backbone of this solution, which consists of several cooperating components, each of which will be covered in a dedicated blog post.

The code is available in this GitHub repo - https://github.com/abhirockzz/redis-streams-in-action

We will continue from where we left off in the previous blog post and see how to build a monitoring app that makes the overall system more robust in the face of high load or failure scenarios. Very often, data processing applications either slow down (due to high data volumes) or even crash or stop due to circumstances beyond our control. If this happens to our tweets processing application, the messages that were assigned to a specific instance will be left unprocessed. The monitoring component covered in this blog post checks pending tweets (using XPENDING), claims them (XCLAIM), processes them (storing them as a HASH using HSET), and finally acknowledges them (XACK).
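
The actual app in the post is written in Go, but the XPENDING, XCLAIM, HSET, XACK loop can be sketched in a few lines of Python with redis-py (the stream, group, and key names below are made up):

```python
import redis

r = redis.Redis()  # assumes a local Redis server
STREAM, GROUP, CONSUMER = "tweets_stream", "tweets_group", "monitor-1"

# 1. find messages that were delivered but never acknowledged (XPENDING)
pending = r.xpending_range(STREAM, GROUP, min="-", max="+", count=10)

for entry in pending:
    # 2. take over messages idle for more than 10 seconds (XCLAIM)
    claimed = r.xclaim(
        STREAM, GROUP, CONSUMER,
        min_idle_time=10000, message_ids=[entry["message_id"]],
    )
    for msg_id, fields in claimed:
        # 3. process: store the tweet's fields as a HASH (HSET)
        r.hset(f"tweet:{msg_id.decode()}", mapping=fields)
        # 4. acknowledge so it leaves the pending entries list (XACK)
        r.xack(STREAM, GROUP, msg_id)
```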

[Read More]

Processing Time-Series Data with Redis and Apache Kafka

RedisTimeSeries is a Redis module that brings a native time series data structure to Redis. Time series solutions that were previously built on top of Sorted Sets (or Redis Streams) can benefit from RedisTimeSeries features such as high-volume inserts, low-latency reads, a flexible query language, down-sampling and much more!
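
To give a feel for the API, here is a small redis-py sketch, assuming a Redis instance with the RedisTimeSeries module loaded (the key and label names are made up):

```python
import redis

r = redis.Redis()  # assumes RedisTimeSeries is loaded on this server

# create a series with a label, then append samples
r.ts().create("temp:sensor-1", labels={"location": "garage"})
r.ts().add("temp:sensor-1", "*", 24.5)  # "*" lets the server assign the timestamp

# range query, down-sampled into 60-second average buckets
samples = r.ts().range(
    "temp:sensor-1", "-", "+",
    aggregation_type="avg", bucket_size_msec=60000,
)
print(samples)  # [(timestamp, value), ...]
```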

Generally speaking, Time Series data is (relatively) simple. Having said that, we need to factor in other characteristics as well:

[Read More]