
Redis Enterprise, The High-Performance Caching Solution For Your Mission Critical Applications

A fast, highly available, resilient, and scalable caching layer that spans across clouds

What is application caching?

Caching improves application response time by storing copies of the most frequently used data on ephemeral but very fast storage. In-memory caching solutions, which hold the working set in speedy DRAM instead of slow spinning disks, can be extremely effective at achieving these goals. While caching is commonly used to improve application latency, a highly available and resilient cache can also help applications scale. Offloading responsibilities from the application’s main logic to the caching layer frees up compute resources to process more incoming requests.

Use cases for application caching


Storing DBMS data

Most traditional databases are designed to provide robust functionality rather than speed at scale. The database cache is often used for storing copies of lookup tables and the replies to costly queries from the DBMS, to improve the application’s performance and reduce the load on the data source.


User session data

Caching user session data is an integral part of building scalable and responsive applications. Because every user interaction requires access to the session’s data, keeping that data in the cache speeds response time to the application user. Holding session data in the cache is better than keeping sessions sticky at the load-balancer level, because caching allows the requests to be processed by any app server without losing users’ states, while a load-balancer approach effectively forces all requests in a session to be processed by a single app server.
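The contrast with sticky sessions can be sketched as follows: any app server reads and writes the shared store by session ID, so no request is pinned to one server. The dict here is an illustrative stand-in for a shared Redis instance (for example, a hash per session).

```python
# In-memory stand-in for a shared Redis instance (assumption);
# in production every app server would talk to the same Redis endpoint.
session_store = {}

def save_session(session_id, data):
    """Any app server can write the user's state under the session ID."""
    session_store[session_id] = dict(data)

def load_session(session_id):
    """Any other app server can pick up the next request without sticky routing."""
    return session_store.get(session_id, {})
```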


Fast access to API responses

Modern applications are built using loosely coupled components that communicate via APIs. Application components use the APIs to make requests for service from other components, whether inside (microservices architecture) or outside (in a Software-as-a-Service use case) the application itself. Storing the API’s reply in cache, even if only briefly, improves application performance by avoiding this inter-process communication.


Top 3 challenges a high-performance caching solution must address


Consistent high performance

The first requirement a caching layer has to deliver is high performance under any load. Research indicates that for users to perceive an experience as "instant," end-to-end response time must be within 100 milliseconds. Simply put, a high-performance caching layer must consistently provide high throughput at low latency to avoid becoming a performance bottleneck.


Scalability

A high-performance caching layer should be able to scale to meet demand arising from business growth or sudden surges (such as Valentine’s Day, Black Friday, natural disasters, or pandemics). Additionally, scaling should happen dynamically, without downtime, offline migrations, or increased response time.

Multi-cloud and geo-distribution

More and more organizations are adopting multi-cloud strategies, whether to avoid vendor lock-in or take advantage of the best tools from a variety of cloud providers. But it’s significantly more challenging to manage a geographically distributed caching system that guarantees sub-millisecond latency while also resolving dataset conflicts across multiple clouds.

Redis Enterprise provides the best-in-class caching solution

Cache-aside (Lazy-loading)

This is the most common way to use Redis as a cache. In this strategy, the application first looks in the cache to retrieve the data. If the data is not found (a cache miss), the application retrieves it directly from the operational data store. Data is loaded into the cache only when necessary (which is why this method is also referred to as lazy loading). Read-heavy applications can benefit greatly from a cache-aside approach.
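The cache-aside read path can be sketched in a few lines. The dicts standing in for Redis and the operational data store, and the key `user:42`, are illustrative assumptions so the example runs without a live server.

```python
cache = {}                            # stand-in for Redis (assumption)
db = {"user:42": "Ada Lovelace"}      # stand-in for the operational data store

def get_with_cache_aside(key):
    """Cache-aside read path: try the cache first, load on a miss."""
    value = cache.get(key)
    if value is not None:
        return value                  # cache hit: no data-store round trip
    value = db.get(key)               # cache miss: go to the data store
    if value is not None:
        cache[key] = value            # lazy-load the copy into the cache
    return value
```

With a real client, the `cache.get`/`cache[key] =` calls become Redis `GET` and `SET` (typically with a TTL so stale entries expire).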


Write-Behind (Write-Back)

In this strategy, data is first written to the cache (for example, Redis), and then asynchronously updated in the operational data store. This approach improves write performance and eases application development, since the developer writes to only one place (Redis). RedisGears provides both write-through and write-behind capabilities.
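The write-behind flow can be sketched with a pending-write queue. The dicts and the `flush_pending` worker are illustrative stand-ins (in practice a background process, such as a RedisGears function, drains the queue), not a real RedisGears recipe.

```python
from collections import deque

cache = {}          # stand-in for Redis (assumption)
db = {}             # stand-in for the operational data store
pending = deque()   # writes waiting to be applied to the data store

def write_behind(key, value):
    """Write to the cache immediately; queue the data-store update."""
    cache[key] = value
    pending.append((key, value))     # the caller returns without waiting on the DB

def flush_pending():
    """Normally run asynchronously by a background worker."""
    while pending:
        key, value = pending.popleft()
        db[key] = value
```

The data store briefly lags the cache; that window is the price paid for faster writes.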



Write-Through

This strategy is similar to the write-behind approach, in that the cache sits between the application and the operational data store, except the updates are done synchronously. The write-through pattern favors data consistency between the cache and the data store, as writing is done on the server’s main thread. RedisGears provides both write-through and write-behind capabilities.
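The synchronous variant is the same sketch minus the queue: both copies are updated before the write returns. The dicts standing in for Redis and the data store are again illustrative assumptions.

```python
cache = {}   # stand-in for Redis (assumption)
db = {}      # stand-in for the operational data store

def write_through(key, value):
    """Update the cache and the data store in the same synchronous call."""
    cache[key] = value   # the cache stays consistent with the data store...
    db[key] = value      # ...because both writes complete before returning
```

The trade-off versus write-behind: writes are slower (they wait on the data store), but the two copies never diverge.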



Change data capture (CDC)

In an environment where you have a large amount of historical data (e.g., a mainframe) or a requirement that every write be recorded in an operational data store, Redis Enterprise change data capture (CDC) connectors can capture individual data changes and propagate exact copies with near-real-time consistency, without disrupting ongoing operations. CDC, coupled with Redis Enterprise’s support for multiple data models, can give you valuable insight into data that had previously been locked up.


An enterprise grade caching layer for globally distributed applications

High availability, resilience, and durability

Application performance relies on the caching layer. As your cache may serve millions of operations per second, even a second of downtime can severely affect performance and your ability to meet SLAs. Redis Enterprise’s automatic backups, instant failure detection, fast recovery across racks, zones, and regions, and multiple data-persistence options are key to ensuring a highly available caching layer and delivering a consistent user experience.

Unmatched performance at any scale

A high-performance application-caching layer needs to be scaled easily and instantly to meet growth requirements and peak demand. Redis Enterprise’s high throughput at sub-millisecond latency, true shared-nothing architecture to enable linear scaling, support for multi-tenancy, and a multi-core architecture ensure compute resources are fully utilized while providing superior performance.

Global distribution at local latency

Regardless of your deployment environment, the caching layer should deliver high availability and low latency across geographies. Redis Enterprise’s CRDT-based Active-Active technology provides local latency for read and write operations, seamless conflict resolution for both simple and complex data types, and business continuity even when a majority of replicas are down. The result: less development hassle and operational burden.

Open source DNA

Many available solutions are based on niche technologies or built for specific use cases and are not widely adopted. Open source Redis, by contrast, supports 50+ programming languages and 150+ client libraries, and is the default caching layer in most deployment environments. Redis is the home of open source Redis, voted the most loved database four years in a row, and brings enterprise-grade capabilities to your caching layer.

Multi-cloud or hybrid deployments

Developing a caching layer should be easy and fast, without adding operational burden on your team. Redis Enterprise can be deployed as a fully managed service on public clouds, freeing you from provisioning, patching, monitoring, and other management tasks. It can also be deployed as software on your own infrastructure, giving you full control of management and configuration. A hybrid model is also supported to preserve operational flexibility.

How to implement caching?

Redis is designed around the concept of data structures and can store your dataset across Strings, Hashes, Sorted Sets, Sets, Lists, Streams, and other data structures or Redis modules.

Using Node.js, you can retrieve from and save key-value pairs with simple strings through the GET and SET commands of the client object, as shown here:

// connecting redis client to local instance
const redis = require('redis');
const client = redis.createClient(6379);

// Retrieving a string value from Redis if it already exists for this key
client.get('myStringKey', (err, value) => {
  if (value) {
    console.log('The value associated with this key is: ' + value);
  } else {
    // key not found
    // Storing a simple string in the Redis store
    client.set('myStringKey', 'Redis Enterprise Tutorial');
  }
});

This snippet tries to retrieve the string value associated with the myStringKey key using the GET command. If the key is not found, the SET command stores the value Redis Enterprise Tutorial under myStringKey.

The same code can be written in Python, as shown here:

# connecting redis client to local instance
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# Retrieving a string value from Redis if it already exists for this key
value = r.get('myStringKey')
if value is None:
    # key not found
    # Storing a simple string in the Redis store
    r.set('myStringKey', 'Redis Enterprise Tutorial')
else:
    print('The value associated with this key is:', value)