Redis Enterprise for Microservices

Build resilient and highly available microservices with Redis Enterprise

Accelerate innovation with a real-time data layer for your microservices architecture

Microservice architectures make it possible to launch new products faster, scale them more easily, and respond better to customer demands. With its multiple modern data models, fault tolerance in any scenario, multi-tenancy for isolation, and the flexibility to deploy across multiple environments, Redis Enterprise enables developers and operators to optimize their data layer for a microservices architecture.

What is a microservices architecture?

As defined by Chris Richardson, noted microservices expert, microservices architecture is an architectural style for structuring applications as a collection of loosely coupled services that are highly maintainable and testable, independently deployable, bounded by specific business domains, and owned by small teams. A microservice architecture enables the rapid, frequent, and reliable delivery of large, complex applications. 

Monolith versus microservices architecture


Why microservices matter

Microservices-based applications enable strategic digital transformation and cloud migration initiatives

Microservices is an architectural style that has helped development teams create better software faster while minimizing the costs and complexity of application modernization. As a result, microservices architectures have been adopted across all industries, for projects that justifiably can be labeled “digital transformation initiatives” as well as for more mundane but important tasks such as bootstrapping cloud deployments.

This architecture style and its related software development culture enable development teams to operate on their own release cycles, embrace end-to-end product ownership, and adopt a DevOps framework built on continuous integration/continuous delivery. The result is that enterprises can reduce time-to-market for new services, often from months to days.

Microservices also accelerate data-tier cloud migrations, because they primarily rely on cloud-native NoSQL databases. According to a 2021 IDC InfoBrief survey, NoSQL databases are replacing on-premises relational databases that were built neither for the cloud nor for independent release cycles.

In addition, some organizations cannot migrate their legacy monolith applications to cloud native all at once. Microservices enable incremental migration of subdomains from monolithic architecture to modern technology stacks.

Redis Enterprise: a perfect solution for microservices

Performance at microservices scale

In a microservices environment, services that need to run in real-time must compensate for networking overhead. Redis Enterprise delivers sub-millisecond latency for all Redis data types and models. In addition, it scales instantly and linearly to almost any throughput needed.

Designed for fault tolerance and resilience

To ensure your applications are failure resilient, Redis Enterprise uses a shared-nothing cluster architecture. It is fault tolerant at all levels: with automated failover at the process level, for individual nodes, and even across infrastructure availability zones. It also includes tunable persistence and disaster recovery.

Reduce complexity with fast and flexible data models

Redis Enterprise lets developers choose the data model best suited to the performance and data-access requirements of their microservices architecture and domain-driven design, while retaining isolation through multi-tenant deployment on a single data platform.

Kubernetes

Simplify operations with native Kubernetes deployment

Redis Enterprise provides a unified operational interface that reduces technology sprawl, simplifies operations, and reduces service latency. The Redis Enterprise Operator for Kubernetes gives you consistent, automated deployments to reduce risk. That lets development teams focus on innovation and business value.

Adaptable across clouds and geographies

Choose where your database should run. Redis Enterprise can be deployed anywhere: on any cloud platform, on-premises, or in a multi-cloud or hybrid cloud architecture.

Design patterns for microservices architectures

Data tier for Domain-Driven Design

Isolation, or bounded context, is a key characteristic of a microservices architecture. Each service can have its own unique data model and service-level agreement goals, which traditionally requires a dedicated database per service. That approach does not scale well as the number of microservices grows. The Redis Enterprise in-memory data platform provides multiple data models that can be deployed multi-tenant, yet remain isolated, all without sacrificing performance.

Asynchronous messaging for inter-service communication 

Microservices use lightweight mechanisms such as application program interfaces (APIs) to communicate among the services that comprise an application. That makes it easier to isolate specific services – and that has a host of benefits.

Microservices must communicate state, events, and data among one another without breaking isolation, and they have to stay decoupled. A common solution is to bring a publish-subscribe messaging broker into the architecture – that is, to make inter-service communication event-driven and eventually consistent – and to treat every message between microservices as an event.
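The decoupling this pattern buys can be sketched with a minimal in-memory broker. This is a hypothetical stand-in for a real publish-subscribe broker (such as Redis pub/sub), intended only to show that producers and consumers never reference each other directly; the service and channel names are illustrative.

```python
from collections import defaultdict

class Broker:
    """Tiny in-memory publish-subscribe broker (stand-in for a real one)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, event):
        # Deliver the event to every subscriber of the channel.
        for callback in self._subscribers[channel]:
            callback(event)

broker = Broker()
audit_log = []

# Two independent services subscribe to the same event channel.
broker.subscribe("orders", lambda e: audit_log.append(("billing", e)))
broker.subscribe("orders", lambda e: audit_log.append(("shipping", e)))

# The producer emits an event; both consumers react, yet none of the
# three services holds a direct reference to any other.
broker.publish("orders", {"order_id": 42, "status": "created"})
```

The producer only knows the channel name, so new consumers can be added without touching the producing service, which is exactly the isolation property described above.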

Redis Streams is an immutable, time-ordered log data structure that lets a service (the producer) publish asynchronous messages to which multiple consumers can subscribe. A development team can use it as a lightweight message broker or as an event store. Redis Streams can be configured for different delivery guarantees, and it supports consumer groups and other features comparable to Apache Kafka topic partitions. Even better, Redis Streams makes it straightforward to build reporting, analytics, auditing, and forensic analysis on the backend.
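A rough sketch of the append-then-replay flow, assuming the redis-py client shape (`xadd` to append, `xrange` to read a range). `FakeStreamClient` below is a hypothetical in-memory stand-in implementing only the two calls used, so the example runs without a Redis server; with a real server, `r` would be a `redis.Redis()` connection instead.

```python
class FakeStreamClient:
    """In-memory stand-in mimicking redis-py's xadd/xrange calls."""

    def __init__(self):
        self._log = []   # append-only list of (entry_id, fields)
        self._seq = 0

    def xadd(self, stream, fields):
        # Real Redis generates ids like "<ms>-<seq>"; a counter suffices here.
        self._seq += 1
        entry_id = f"0-{self._seq}"
        self._log.append((entry_id, dict(fields)))
        return entry_id

    def xrange(self, stream, min="-", max="+"):
        # Return all entries in insertion (time) order.
        return list(self._log)

r = FakeStreamClient()  # with a real server: r = redis.Redis()

# A producer service appends order events to the stream.
r.xadd("orders", {"order_id": "42", "status": "created"})
r.xadd("orders", {"order_id": "42", "status": "paid"})

# A consumer, or a backend reporting/auditing job, replays the log.
history = [fields["status"] for _, fields in r.xrange("orders")]
```

Because the stream is an immutable log, the same replay works for auditing and forensic analysis long after the events were first consumed.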

Caching cross-domain shared data via CQRS

Microservices need fast access to data, but when dozens or hundreds of microservices try to read from the same slow, disk-based database, that presents a challenge. Cross-domain data needs to be available to each microservice in real time, without breaking the scope of the service’s focused business context and goal.

Command Query Responsibility Segregation (CQRS) is a pre-fetch caching pattern for microservices architectures that decouples reads (queries) from writes (commands). An application writes data to a slower, disk-based database while pre-fetching and caching that data in Redis Enterprise for reads, making it available in real time to the other microservices that need it.
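The split above can be sketched in a few lines. Plain dicts stand in for the disk-based system of record and for the Redis Enterprise cache, and the function and key names are illustrative, not part of any real API.

```python
system_of_record = {}   # stand-in for the slow, disk-based database
read_cache = {}         # stand-in for Redis Enterprise

def handle_command(product_id, data):
    """Write path: persist to the system of record, then pre-fetch
    the same data into the cache so readers never touch the disk."""
    system_of_record[product_id] = data
    read_cache[product_id] = data

def handle_query(product_id):
    """Read path: served entirely from the fast cache."""
    return read_cache.get(product_id)

# A write (command) lands in both stores; reads (queries) hit only the cache.
handle_command("sku-1", {"name": "widget", "price": 9.99})
result = handle_query("sku-1")
```

The key design point is that the query path never depends on the slow store, so read latency is decoupled from write durability.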

Redis Enterprise features for microservice architecture

Active-Active replication

A microservices architecture has many connected services, yet it faces the same performance demands as monolithic applications. To minimize latency, data should reside as close to the services as possible, and the databases must remain consistent with one another in the event of failures or conflicting updates. Redis Enterprise can be deployed as an Active-Active, conflict-free replicated database that handles updates from multiple local installations of your services without compromising latency or data consistency, while providing continuity in the event of failures.

Multiple data models

Redis Enterprise provides multiple data structures (hashes, strings, Streams, lists, etc.) and models including JSON, search, time-series, and graph that let you choose the data model best suited for your microservice domain, performance, and data-access requirements. And it’s all in a single data platform.

Multi-tenant databases

Within a microservices database design, a single Redis Enterprise cluster can provide databases to many different services, each with its own isolated instance tuned for its workload. Each database instance is deployed, scaled, and modeled independently of the others while sharing the same cluster environment, isolating data between services without increasing operational complexity.
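One way to picture this from the application side: each service owns a dedicated database endpoint on the shared cluster. The hostnames, ports, and service names below are hypothetical; with redis-py, each entry would become a `redis.Redis(host=..., port=...)` connection for that service alone.

```python
# Hypothetical per-service endpoints on one shared Redis Enterprise cluster.
# Each service talks only to its own database; isolation comes from the
# cluster serving a separate database (distinct port) per tenant.
SERVICE_DATABASES = {
    "orders":    {"host": "redis-cluster.internal", "port": 12000},
    "inventory": {"host": "redis-cluster.internal", "port": 12001},
    "sessions":  {"host": "redis-cluster.internal", "port": 12002},
}

def endpoint_for(service):
    """Look up the dedicated database endpoint for a given service."""
    return SERVICE_DATABASES[service]

orders_db = endpoint_for("orders")
```

Because each service resolves only its own endpoint, one service's data model or scaling changes never leak into another's, even though all databases run on the same cluster.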

Flexible across clouds

Microservices provide a great deal of technology flexibility, and choosing where you want to run your database should be no exception. Redis Enterprise can be deployed anywhere: on any cloud platform, on-premises, or in a multi-cloud or hybrid-cloud architecture. It is also available on Kubernetes, Pivotal Container Service (PKS), and Red Hat OpenShift.

Native Kubernetes container orchestration and management

Containers are closely aligned with microservices and help enterprises implement microservice applications. Kubernetes is the de facto standard platform for container deployment, scheduling, and orchestration. Redis is the top database technology running on containers, with over two billion Docker Hub launches. The Redis Enterprise Operator for Kubernetes provides automatic scalability, persistent storage volumes, simplified database endpoint management, and zero-downtime rolling upgrades. It is available on multiple Kubernetes platforms and managed cloud services, including Red Hat OpenShift, VMware Tanzu Kubernetes Grid (formerly Enterprise PKS), upstream Kubernetes, Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and Amazon Elastic Kubernetes Service (EKS).

FAQ

  • What are microservices?
    • Microservices architecture (often shortened to microservices) refers to an architectural style for developing applications. Microservices allow a large application to be separated into smaller independent parts, with each part having its own realm of responsibility. To serve a single user request, a microservices-based application can call on many internal microservices to compose its response.
  • What is the difference between monolithic architecture and microservices architecture?
    • In a monolithic architecture, processes are tightly coupled and run as a single deployable artifact. While this is relatively simple to begin with, scaling up or modifying one part of the application requires updating and redeploying the entire application, resulting in inefficient scaling and increased complexity as the codebase grows.
    • Microservices architecture involves a collection of loosely coupled services that can be independently updated and scaled by smaller teams. Because individual services are easier to build, deploy, and manage than a single monolithic application, microservices enable more frequent deployments, data store autonomy, and increased flexibility.
    • Organizations are transitioning entire applications to microservices architecture in order to drastically decrease time to market, more easily adopt new technologies, and respond faster to customer needs.
  • What is Kubernetes?
    • Kubernetes, also known as k8s, is an open-source orchestration system for automating deployment, scaling, and management of containerized applications, typically used as part of microservice and cloud native architectures. 
  • What are Docker containers?
    • Docker container images are lightweight, standalone, executable packages of software that include everything needed to run an application.
  • What is an API gateway?
    • An API gateway is a software application for API management that sits between a client and a set of backend microservices. The API gateway serves as a reverse proxy, accepting API calls from the client application and forwarding the traffic to the appropriate service.

Next steps