Earlier this week, thousands of data architects, engineers, and DevOps practitioners attended Kafka Summit SF 2019 in San Francisco to learn more about Apache Kafka—a distributed streaming platform originally developed at LinkedIn—as well as streaming data and data management in the modern world of microservices and real-time digital businesses.
Of course, the folks at Confluent, which was founded by several Kafka co-creators, made a number of product announcements of their own.
Beyond the day-to-day product and positioning news, though, Confluent CEO and Kafka co-creator Jay Kreps used his keynote address to declare that “databases are only half done.” As people continue building applications that need to handle massive volumes of real-time data, Kreps explained, databases will need to evolve for new use cases, and our notion of what databases are (and could be) will continue to be challenged.
Naturally, his bold statements prompted questions at the Redis booth about whether we agreed, and what Redis has to offer in the fast-evolving database ecosystem. For the record, we do agree that databases need to evolve to support new use cases, requirements, and roles, and that traditional databases haven’t kept up. But that was only the beginning of the many interesting questions we received—let’s take a look at the four most common Redis-related questions that popped up at the Kafka Summit:
It’s not unheard of for Kafka and Redis to appear in the same environment, but they typically aren’t part of the same path in an application. Redis is increasingly configured as an in-memory database with persistence and durability using its data structures and Redis modules, but is still most frequently used for cases like caching, session management, leaderboards, and fast data ingest, while Kafka tends to be employed mostly for messaging and stream processing.
Using Redis as a database means that it can act as a data source for Kafka or consume data from Kafka topics, in addition to providing capabilities such as text search and indexing, time-series analysis, and Bloom filters. If you want to start writing data from Kafka topics to Redis, check out the Redis Sink Connector for Kafka Connect.
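The Redis Sink Connector is the turnkey path, but to make the topic-to-Redis flow concrete, here is a minimal hand-rolled sketch in Python using the `kafka-python` and `redis-py` client libraries. The `orders` topic, the localhost addresses, and the key-naming scheme are all illustrative assumptions, not part of the connector itself.

```python
import json


def message_to_hash(topic: str, key: str, value: dict) -> tuple[str, dict]:
    """Map one Kafka record onto a Redis key and a flat hash mapping."""
    redis_key = f"{topic}:{key}"  # e.g. "orders:42" -- naming scheme is our choice
    # Redis hash fields are flat strings, so serialize any non-string values.
    mapping = {
        field: val if isinstance(val, str) else json.dumps(val)
        for field, val in value.items()
    }
    return redis_key, mapping


def main() -> None:
    # Third-party clients are imported here so the pure helper above
    # stays importable without them installed.
    import redis
    from kafka import KafkaConsumer

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    consumer = KafkaConsumer(
        "orders",  # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        key, mapping = message_to_hash(msg.topic, msg.key.decode(), msg.value)
        r.hset(key, mapping=mapping)  # one Redis hash per Kafka record


if __name__ == "__main__":
    main()
```

In production you would add batching, error handling, and offset management—which is exactly the machinery the Kafka Connect framework provides for you.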
This question was often asked by people who weren’t yet using Kafka, but were familiar with Redis and searching for a data store that could support messaging and event streaming. Redis Streams can be the answer, depending on your requirements! Redis Streams can act as a persistent log for messaging and event streaming, and it is arguably simpler to implement given that you don’t have to deal with ZooKeeper—plus you get the sub-millisecond performance that Redis is known for.
Additionally, these features come with the operational characteristics of Redis. In other words, if you already know how to operate Redis, implementing Redis Streams for messaging should be relatively straightforward. To get started, check out the Introduction to Redis Streams.
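To give a feel for the Streams API, here is a small sketch using the `redis-py` client: the producer appends entries with `XADD` and the consumer reads them back with `XREAD`. The `events` stream name and the field values are placeholder assumptions.

```python
def parse_entries(reply):
    """Flatten an XREAD reply, shaped [(stream, [(entry_id, fields), ...]), ...],
    into a simple list of (stream, entry_id, fields) tuples."""
    return [
        (stream, entry_id, fields)
        for stream, entries in reply
        for entry_id, fields in entries
    ]


def main():
    # redis-py is a third-party dependency, imported here so the helper
    # above stays importable without it.
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Producer side: append an entry; "*" (the default) lets Redis assign
    # a monotonically increasing entry ID, so the stream is an append-only log.
    r.xadd("events", {"type": "click", "page": "/home"})

    # Consumer side: read every entry after ID 0, blocking up to 5s for new ones.
    reply = r.xread({"events": "0"}, block=5000)
    for stream, entry_id, fields in parse_entries(reply):
        print(stream, entry_id, fields)


if __name__ == "__main__":
    main()
```

Consumer groups (`XGROUP`/`XREADGROUP`) add Kafka-style load balancing and acknowledgment on top of this basic read path.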
Redis Enterprise combines the features and performance characteristics of Redis with platform advantages that make Redis databases more flexible, reliable, and easier to scale, along with a number of additional capabilities beyond open source Redis.
As a Redis Enterprise user, you can continue leveraging all of the commands, data structures, and modules that you love in open source Redis, while also benefiting from five-nines (99.999%) uptime and sub-millisecond latency.
Redis Enterprise is designed to integrate tightly with Kubernetes. One benefit of Kubernetes is that it will automatically add or remove pods to scale applications in response to load, and will restart them in the event of failures to maintain availability. However, this traditionally meant that additional work was required to orchestrate stateful applications (e.g. databases), especially when it came to maintaining data persistence and proper database cluster configuration.
With the Redis Enterprise Kubernetes Operator, you can deploy Redis Enterprise to Kubernetes with specific logic that not only makes it easier to create and update Redis Enterprise clusters, but also provides failover instructions that ensure data isn’t lost when pods are terminated.
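With the Operator installed, a cluster is declared as a custom resource and the Operator reconciles the pods for you. A minimal sketch of such a resource is shown below; the API group and field names are based on the Operator’s `RedisEnterpriseCluster` CRD as published at the time, so treat this as illustrative and check the current Operator documentation for the exact schema.

```yaml
# Hypothetical minimal RedisEnterpriseCluster custom resource
apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec
spec:
  nodes: 3   # the Operator maintains three cluster pods and handles failover
```

Applying this with `kubectl apply -f` hands day-to-day orchestration—pod creation, upgrades, and failover—to the Operator rather than to hand-written StatefulSet logic.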
You can also deploy Redis Enterprise as a multi-tenant database in a microservices architecture. Multi-tenancy allows for multiple databases to be administered under one cluster, with each database being isolated from one another. Each of your services can have its own database or cache, configured individually to support the service in the best way.
Clearly, the world of data is changing fast and the Kafka Summit was a great opportunity to meet and hear Kafka and Redis community members talk about their challenges around instant data requirements—and the innovative technologies and techniques they’re using to overcome them. I can’t wait to see how the community members continue to evolve the usage of Redis and Kafka.