Redis modules enrich the Redis core data structures with search capability and modern data models like JSON, graph, time series, and artificial intelligence (AI). Redis modules allow developers to build new application services on top of Redis while continuing to enjoy Redis’ sub-millisecond speed.
Additionally, including multiple data models inside the database layer removes the needless expense of multiple databases, maintains low latency, reduces overhead, and eliminates tedious communication and connection management between the application layer and multiple databases.
Furthermore, when deploying over Redis Enterprise, modules enjoy Redis’ five-nines availability with linear scalability, together with enhanced capabilities like Active-Active and Active-Passive deployments in multi-cloud and hybrid deployments. And with RedisGears, users can easily control—in a fully programmable manner—cluster-wide operations across shards, nodes, data models, and data structures.
Typically, search engines index new data slowly, so recently added data takes a long time to show up in search results.
RediSearch is a real-time search engine that runs on your Redis dataset and allows you to query data that has just been indexed. It can be used as a secondary index for datasets hosted on other datastores, as a fast text search or auto-complete engine, and as a search engine that powers other modules such as RedisGraph and RedisTimeSeries.
Written in C, built from the ground up on modern data structures, and using the efficient Redis protocol, RediSearch is the fastest search engine on the market. Furthermore, RediSearch is feature-rich, supporting powerful capabilities including ranking, Boolean queries, geo-filters, synonyms, numeric filters and ranges, aggregation, and more.
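RediSearch's indexing is implemented natively in C, but the core idea behind full-text search with Boolean queries can be sketched with a toy inverted index in a few lines of Python (an illustration of the concept, not RediSearch's actual implementation):

```python
from collections import defaultdict

# Toy inverted index: maps each term to the set of document IDs containing it.
# RediSearch maintains structures like this natively, per indexed field.
index = defaultdict(set)

def add_document(doc_id, text):
    for term in text.lower().split():
        index[term].add(doc_id)

def search_and(*terms):
    """Boolean AND query: documents containing every term."""
    sets = [index[t.lower()] for t in terms]
    return set.intersection(*sets) if sets else set()

add_document(1, "fast real-time search engine")
add_document(2, "real-time time series data")
add_document(3, "search engine ranking")

print(search_and("search", "engine"))  # {1, 3}
```

Because the index is updated as each document is added, a query issued immediately afterward already sees the new document, which is the "query data that has just been indexed" property described above.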
To store a JSON object in core Redis, you must either serialize it to a String or break it up into Hash fields. Either approach imposes a translation overhead on your application and prevents in-place updates as well as operations like increment.
RedisJSON makes JSON a native data structure in Redis. It’s tailored for fast, efficient, in-memory manipulation of JSON documents at high speed and volume. As a result, you can store your document data in a hierarchical, tree-like format that can be modified and queried efficiently.
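The difference between the two approaches can be sketched in plain Python. The helper below is a hypothetical stand-in that mimics the behavior of a path-based increment such as RedisJSON's JSON.NUMINCRBY; it is not the module's implementation:

```python
import json

# String approach: every update round-trips the entire document.
doc = json.dumps({"user": {"name": "ada", "visits": 41}})
obj = json.loads(doc)        # deserialize everything
obj["user"]["visits"] += 1   # touch a single field
doc = json.dumps(obj)        # reserialize everything

# Tree approach: walk the path and update one leaf in place.
# Hypothetical helper mimicking a path-based increment (JSON.NUMINCRBY-style).
def num_incr_by(tree, path, amount):
    *parents, leaf = path.split(".")
    node = tree
    for key in parents:
        node = node[key]
    node[leaf] += amount
    return node[leaf]

tree = {"user": {"name": "ada", "visits": 41}}
print(num_incr_by(tree, "user.visits", 1))  # 42
```

With the tree representation, only the affected node is touched, which is what makes in-place updates and increments efficient.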
RedisGears is a programmable engine for Redis that runs inside Redis, closer to where your data lives. RedisGears allows cluster-wide operations across shards, nodes, data structures, and data models at a sub-millisecond speed. Using Python—and soon Java, Scala, and other JVM languages—you can program RedisGears to 1) support advanced caching use cases like write-behind/write-through; 2) control event-driven processing in a reliable way; 3) perform cluster-wide real-time data analytics; and 4) orchestrate AI serving.
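The write-behind pattern mentioned above can be sketched conceptually in plain Python (a toy model of the pattern, not the RedisGears API): writes land in the fast in-memory store first, and a deferred flush later propagates them to a slower backing store.

```python
from collections import deque

cache = {}             # fast in-memory store (stands in for Redis)
backing_store = {}     # stands in for a slower external database
write_queue = deque()  # pending writes awaiting flush

def put(key, value):
    cache[key] = value                # fast path: served from memory
    write_queue.append((key, value))  # defer the slow write

def flush():
    # In a real deployment this runs asynchronously, off the request path.
    while write_queue:
        key, value = write_queue.popleft()
        backing_store[key] = value

put("user:1", "ada")
put("user:2", "alan")
flush()
print(backing_store)  # {'user:1': 'ada', 'user:2': 'alan'}
```

In RedisGears, logic like this is registered against keyspace events and runs inside the database, so the deferred propagation happens reliably and close to the data.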
Machine learning and AI inference workloads traditionally operate from the application or specialized serving layers. In many cases, AI inferencing needs to be enriched with reference data that originates in a database. As a result, multiple round-trips are required between the application/AI serving layer and the database, which significantly increases the end-to-end inferencing latency.
RedisAI implements an inference engine inside the database layer. This enables data locality between the engine and the target data, drastically reducing latency. Additionally, it provides a common layer among different formats and platforms, including PyTorch, TensorFlow/TensorRT, and ONNXRuntime. RedisAI is fully integrated with state-of-the-art AI/ML pipeline tools like MLflow and, soon, Kubeflow.
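A back-of-the-envelope calculation illustrates the round-trip argument; the latency figures below are assumed values, not benchmarks:

```python
# Assumed, illustrative numbers -- not measurements.
rtt_ms = 1.0      # one application <-> database network round-trip
n_lookups = 5     # reference-data fetches needed per inference
compute_ms = 2.0  # model execution time

# Engine in the serving layer: every reference-data fetch pays a round-trip.
remote_engine = n_lookups * rtt_ms + compute_ms
# Engine co-located with the data: the round-trips disappear from the path.
in_database = compute_ms

print(remote_engine, in_database)  # 7.0 2.0
```

The more reference data an inference needs, the larger the gap grows, which is why data locality dominates end-to-end inferencing latency in such workloads.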
Queries on highly connected data are inefficient over traditional databases because those databases take sub-optimal approaches to graph processing. Most graph databases improve on this behavior, but they still require queries to traverse each connection individually.
RedisGraph is based on a novel approach that translates graph queries to matrix operations, enabling you to evaluate relationships in a highly parallelized way. RedisGraph supports the industry-standard Cypher as a query language and incorporates the state-of-the-art SuiteSparse GraphBLAS engine for matrix operations on sparse matrices.
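The matrix approach can be sketched with a toy adjacency matrix: multiplying the matrix by itself counts the two-hop paths between every pair of nodes in one bulk operation, instead of traversing connections one by one. (RedisGraph delegates such operations to SuiteSparse GraphBLAS on sparse matrices; this is only a dense, pure-Python illustration.)

```python
# Adjacency matrix A: A[i][j] == 1 means an edge from node i to node j.
nodes = ["alice", "bob", "carol"]
A = [
    [0, 1, 0],  # alice -> bob
    [0, 0, 1],  # bob   -> carol
    [0, 0, 0],  # carol has no outgoing edges
]

# (A x A)[i][j] = number of distinct two-hop paths from i to j.
n = len(A)
two_hops = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# alice reaches carol in exactly one two-hop path (alice -> bob -> carol):
print(two_hops[0][2])  # 1
```

Because the whole multiplication is a single bulk linear-algebra operation, it parallelizes naturally, which is what enables the highly parallelized relationship evaluation described above.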
This new design allows use cases like social graph operations, fraud detection, and real-time recommendations to be executed up to two orders of magnitude faster than with any other graph database.
Redis has been used for years to store time-series data. Its in-memory architecture makes it a natural fit for this insert-heavy workload. While the built-in data structures of Redis provide several time-series options, they have limited ability to query and aggregate the data.
With RedisTimeSeries, capabilities like automatic downsampling, aggregations, labeling and search, compression, and enhanced multi-range queries are natively supported in Redis. Built-in connectors to popular monitoring tools like Prometheus and Grafana enable the extraction of data into useful formats for visualization and monitoring. RedisTimeSeries provides all this functionality while still leveraging the linear scalability of Redis Enterprise.
Bloom and Cuckoo filters, TopK, and CountMinSketch are widely used to support data-membership queries, thanks to their space-efficiency and constant-time membership functionality. That said, fast and efficient probabilistic data-structure implementations aren’t easy to develop in the application layer.
RedisBloom provides these probabilistic data structures natively in Redis. When deployed over Redis Enterprise, it enjoys linear scalability, single-digit-seconds failover, and durability, along with easy provisioning and built-in monitoring capabilities.
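The space/accuracy trade-off behind a Bloom filter can be sketched in a few lines: k hash functions set k bits per added item, and a lookup answers "definitely absent" or "probably present". This is a toy pure-Python version with assumed sizing, not RedisBloom's implementation:

```python
import hashlib

M, K = 1024, 3          # bit-array size and hash count -- assumed values
bits = bytearray(M // 8)  # 128 bytes of state, regardless of item size

def _positions(item):
    # Derive K independent bit positions from seeded SHA-256 digests.
    for seed in range(K):
        digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
        yield int.from_bytes(digest[:8], "big") % M

def add(item):
    for pos in _positions(item):
        bits[pos // 8] |= 1 << (pos % 8)

def might_contain(item):
    # False means definitely absent; True means present, with a small
    # false-positive probability governed by M, K, and the item count.
    return all(bits[pos // 8] & (1 << (pos % 8)) for pos in _positions(item))

add("alice")
print(might_contain("alice"))  # True
print(might_contain("bob"))    # almost certainly False
```

Both `add` and `might_contain` touch exactly K bits, which is the constant-time membership property, and the filter never stores the items themselves, which is the space efficiency.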