Redis Enterprise – A New Approach to Multi-Model Database
Break the data matrix. Explore what Redis has to offer.
Multi-model databases aim to reduce the overhead of provisioning and maintaining a separate database for each data model. The traditional approach to multi-model databases, however, fails to deliver true interaction between the different models: once a data model is selected, it is very hard to access a different model on the same database. To work around this, users usually develop separate logic that runs outside the database, either at the application tier or on serverless infrastructure, acting as glue between the data models. Such external logic adds significant execution overhead and cannot meet today's requirements for an instant experience.
Powered by the Redis modules architecture, Redis Enterprise solves this complex problem by allowing multi-model operations across and between modules and the core Redis data structures to be executed in a fully programmable, distributed manner, while maintaining instant, sub-millisecond latency.
Enable inter-module and module-to-core data structure operations efficiently, at sub-millisecond latency.
Program any database logic that combines Redis modules and core operations in Redis using RedisGears.
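As a minimal sketch of what that looks like in practice: a RedisGears function is a plain Python string that the server executes internally via the `RG.PYEXECUTE` command. The gear below (a hypothetical example; the `user:*` key pattern, `score` field, and `leaderboard` key are illustrative, not from the original) scans core Hash keys and writes their scores into a Sorted Set, combining two core data structures inside the database. The helper function only builds the command arguments, so it runs without a live server.

```python
# Hedged sketch: a RedisGears function combining core Redis operations.
# The gear script is executed by the Python interpreter embedded in Redis,
# not by this process; we only ship it as a string via RG.PYEXECUTE.

GEAR_SCRIPT = """
# Runs inside Redis: for every hash under user:*, copy its 'score'
# field into the 'leaderboard' sorted set.
GB('KeysReader') \\
  .filter(lambda x: x['type'] == 'hash') \\
  .map(lambda x: (x['key'], x['value'].get('score', '0'))) \\
  .foreach(lambda kv: execute('ZADD', 'leaderboard', kv[1], kv[0])) \\
  .run('user:*')
"""

def pyexecute_command(script):
    """Build the argument list for RG.PYEXECUTE (no connection required)."""
    return ["RG.PYEXECUTE", script]

# With a live server and redis-py you would run, e.g.:
#   import redis
#   redis.Redis().execute_command(*pyexecute_command(GEAR_SCRIPT))
```

Because the gear executes where the data lives, no round trips or external glue logic are needed between the read and the write.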
Easily migrate an existing dataset to a pure multi-model database in Redis by following a few simple principles, without changing your database architecture.
Ingest and process millions of time-stamped data points per second with minimal latency using minimum resources.
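High-throughput ingestion of time-stamped data typically maps to the RedisTimeSeries `TS.ADD` command, batched through a pipeline. The sketch below (the `temp:*` keys and readings are hypothetical) builds the commands locally; passing `*` as the timestamp lets the server assign the arrival time.

```python
# Hedged sketch of RedisTimeSeries ingestion with TS.ADD.
# The helpers only construct command argument lists, so they run
# without a live Redis server.

def ts_add_command(key, value, timestamp_ms=None):
    """Build a TS.ADD command; '*' asks the server to timestamp the sample."""
    ts = "*" if timestamp_ms is None else str(timestamp_ms)
    return ["TS.ADD", key, ts, str(value)]

def batch(readings):
    """Turn (key, value) readings into a pipeline-ready list of commands."""
    return [ts_add_command(k, v) for k, v in readings]

# With a live server, batching through a redis-py pipeline amortizes
# round trips across many samples:
#   import redis
#   pipe = redis.Redis().pipeline(transaction=False)
#   for cmd in batch([("temp:sensor1", 21.5), ("temp:sensor2", 19.8)]):
#       pipe.execute_command(*cmd)
#   pipe.execute()
```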
Collect telemetry data from multiple remote IoT devices, on-premises, in any cloud, or on the edge for data-driven insights.
Gain deep insights into infrastructure and application health with integrations into Prometheus, Grafana, and Telegraf.
Apply machine-learning models in real-time to any data in Redis to score transactions, classify data, and detect needle-in-the-haystack anomalous activity.
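Real-time scoring of data already in Redis is typically done with RedisAI: set an input tensor with `AI.TENSORSET`, then run a pre-loaded model with `AI.MODELEXECUTE`. The sketch below (the `fraud:model`, `fraud:in`, and `fraud:out` keys are illustrative assumptions) builds those commands locally, including the raw little-endian float blob the tensor command expects.

```python
# Hedged sketch of RedisAI scoring commands (AI.MODELEXECUTE syntax,
# RedisAI >= 1.2). Only argument lists are built here; no server needed.
import struct

def tensorset_command(key, values):
    """Build an AI.TENSORSET command for a 1-D float32 tensor."""
    blob = struct.pack("<%df" % len(values), *values)
    return ["AI.TENSORSET", key, "FLOAT", str(len(values)), "BLOB", blob]

def modelexecute_command(model_key, in_key, out_key):
    """Build an AI.MODELEXECUTE command with one input and one output tensor."""
    return ["AI.MODELEXECUTE", model_key,
            "INPUTS", "1", in_key, "OUTPUTS", "1", out_key]

# With a live server holding a model at fraud:model:
#   r = redis.Redis()
#   r.execute_command(*tensorset_command("fraud:in", [0.1, 0.7, 0.2]))
#   r.execute_command(*modelexecute_command("fraud:model", "fraud:in", "fraud:out"))
#   score = r.execute_command("AI.TENSORGET", "fraud:out", "VALUES")
```

Because the model runs inside the database, the transaction data never leaves Redis for scoring.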
Enable automated discovery of patterns across large volumes of streaming transactions to weed out false positives and identify real fraudulent activity.
Customize experiences by providing relevant content instantly, based on user preferences, trends, and interactions.
Eliminate the overhead associated with storing multiple copies of a dataset and avoid memory-copy operations.