Redis has a great reputation – but where’s it used? Developers rely on Redis Enterprise for critical use cases across several industries. Learn several scenarios where Redis has made a difference in application development for gaming, retail, IoT networking, and travel.
How are developers using Redis for their database needs? We began as Redis open source, but in the subsequent 13 years, the must-haves for any growing digital business have shifted, and now what’s required is more of everything. That means more availability, more persistence, and never any room for lags in performance, so add backups and instant failover to the mix of required features.
Painless business scalability is a goal for any programming team. Building applications with an in-memory Redis cache and Redis database cuts through complexity and latency, since Redis Enterprise provides dual support in a single system.
Everything comes down to speed. It means faster application interactions (such as quick data retrieval and a balanced load of backend services with caching) and software that scales on user demand (build low latency microservice architectures with Redis’ multi-model database).
Let’s explore some popular Redis uses and customer real-world Redis performance examples.
Fraud costs money, and the cost is only going up. Opportunists follow the growth, leading them to the digital space, with retail, gaming, and finance among the verticals hit hardest by fraudsters. “Every $1 of fraud now costs U.S. retail and e-commerce merchants $3.75,” a LexisNexis report asserts; that’s up 19.8% since 2019.
BioCatch is an Israeli digital-identity company that uses groundbreaking biometrics tracking to stay ahead of fraudsters. As the company’s business rapidly grew to 70 million users, 40,000 operations per second, and 5 billion transactions per month, the BioCatch team needed a way to deal with significant database scaling issues.
This isn’t a unique challenge. Online transactions soared as a result of COVID-19. According to Morgan Stanley’s global e-commerce growth forecast 2022 report, the market is estimated to soar from $3.3 trillion today to $5.4 trillion by 2026. With that growth comes cybersecurity dangers: digital identity threats, cybercrime, and customer fraud. Phishing and counterfeit pages increased by 53% in 2021, reports Bolster.AI.
Your data layer needs lightning-fast speed to build finely-tuned fraud detection algorithms that respond in under 40 milliseconds before anything can negatively influence the customer experience.
Data breaches have become their own epidemic. IBM reports that 83% of organizations have had multiple data breaches. The average cost of each instance is approximately $4.3 million. The United States earned the top spot for the 12th year in a row for countries with the highest average total cost of a data breach.
The increased complexity, volume, and sophistication of threats require more advanced fraud detection methods to keep up with malicious actors and build more substantial fortifications against them. Because traditional data platforms often struggle to keep up with modern online transactions’ speed, scale, and complexity, it is difficult to detect and stop fraud in real-time.
In need of blazing performance, high availability, and seamless scalability from its data layer, BioCatch turned to Redis Enterprise to decouple the compute from the data. After initially considering Redis Enterprise as a cache, the team soon realized that Redis would also make a great system configuration NoSQL database.
BioCatch uses Redis features and various data structures to create a single source-of-truth database that serves mission-critical information across the entire organization. BioCatch captures behavioral, meta, and API data during active user sessions. It also creates user behavior profile subsets and predefined fraudulent-behavior profiles.
With 3 petabytes of data, 300 million keys, and 40 databases running on Microsoft Azure, BioCatch relies on Redis Enterprise to serve data for all its microservices. Since operating with our enterprise-grade Redis cache, BioCatch has had zero downtime and no operational hassles, giving its team breathing room to focus on the strategic projects that serve its core mission.
Learn more about real-time fraud detection and exabyte analytics in our Data Economy podcast vlog.
In 2021, the global gaming market topped $198.40 billion and is expected to reach a value of $339.95 billion by 2027, according to Mordor Intelligence. This massive estimated growth is due in large part to mobile gaming.
Successful mobile games require a great user experience, which can pose significant infrastructure challenges, especially for real-time multiplayer games. Users must be able to launch the game, connect to a server, and collaborate with other players; any lag or hiccup can ruin the experience. The gaming experience also includes transactions in real time, sometimes with real money involved. For the customer, the expectation is an immediate transaction, with personal payment details cached at the ready.
Developers rely on Redis’ low latency to deliver high performance and virtually unlimited scale critical in gaming situations where large volumes of data arrive at high speed. Take fantasy sports, for instance, which is estimated to become a $48 billion market by 2028. American football is the most popular fantasy sport in the United States, with 35 million players. But that pales in comparison to India’s fantasy cricket leagues, which stand at around 100 million players, according to a study by the Federation of Indian Fantasy Sports (FIFS).
In India, teams release their game rosters before a match, and online players have only 10-15 minutes to update their respective fantasy teams. This is a massive amount of data to ingest, and it should not impact the customer’s experience, especially when time is of the essence.
For game developers, serving game elements such as graphics, pictures, thumbnails, and music requires a robust caching solution that can reduce the load on datastores operating on a relational database such as MySQL while ensuring blazing fast response times.
Caching helps provide a responsive user experience with minimal overhead. Case in point: Scopely.
Scopely needed to support a variety of data structures and features, such as customizable expiration, eviction, intelligent caching, request pipelining, data persistence, and high availability. These needs can’t be fulfilled with a SQL database, at least not without a complex load-balancing cluster.
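Two of the needs on that list, customizable expiration and eviction, are things Redis handles natively with per-key TTLs (EXPIRE) and configurable eviction policies such as allkeys-lru. The plain-Python sketch below is only an illustration of that behavior, not Scopely's implementation: a bounded cache where entries expire after a TTL and the least-recently-used entry is evicted when the cache is full.

```python
import time
from collections import OrderedDict

# Illustrative sketch only: Redis provides per-key expiration (EXPIRE) and
# eviction policies (e.g. allkeys-lru) natively. This plain-Python cache just
# demonstrates the expiration-plus-eviction behavior described above.
class TTLCache:
    def __init__(self, max_items=1000):
        self.max_items = max_items
        self.store = OrderedDict()  # key -> (value, expires_at)

    def set(self, key, value, ttl_seconds=60):
        self.store[key] = (value, time.monotonic() + ttl_seconds)
        self.store.move_to_end(key)
        while len(self.store) > self.max_items:
            self.store.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() > expires_at:
            del self.store[key]  # lazily expire, much as Redis does on access
            return None
        self.store.move_to_end(key)  # mark as recently used
        return value
```

In Redis itself this whole class collapses to `SET key value EX 60` plus a `maxmemory-policy` setting; the point of the sketch is only to make the expiration and eviction semantics concrete.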
Discover more detailed information in our e-book, Why Your Game Needs a Real-Time Leaderboard.
Creating a comprehensive digital presence as an online business is a massive endeavor with a steep learning curve. An excellent digital business needs a backend that ensures store pages are available, an inventory management system, a fast cache for the site’s autocomplete functions, a search engine for products, and machine learning technology for creating personalized customer experiences in real time. And it all must be performant; according to a 2020 YOTTA study, 90% of shoppers say they will abandon a site if it is too slow.
Modern multi-channel retailers are turning to real-time inventory systems to optimize their inventory, yield, and supply-chain logistics, with the aim of improving the customer experience. Building and maintaining these complex systems is daunting for application developers.
Here too, performance is critical. Delayed or inaccurate inventory information can frustrate customers, leading to shopping cart abandonment (an $18 billion problem of its own) and order cancellations, lost revenues, higher costs, and brand damage.
Apparel retailer Gap Inc. wanted to give its e-commerce customers real-time shipping information for each item shoppers added to their carts. The company faced issues with delays and inaccurate inventory information.
This issue created a poor customer experience that inflated costs and eroded brand loyalty.
Application developers at Gap Inc. found Redis Enterprise’s linear scalability and sub-millisecond performance at massive scale to be a huge assist, particularly for seasonal Black Friday peaks. In microservice environments, fast and flexible data models protect against overprovisioning infrastructure that sits idle during slower periods.
Availability, speed, performance, and experiences: a true 360° omni-channel journey keeps these balls in the air and never lets them drop.
In the era of big data, businesses require software that instantly collects, stores, and processes large volumes of data. Yet many of today’s solutions supporting fast data ingest are complex and over-engineered for simple requirements such as streaming real-time data from the internet of things (IoT) and event-driven applications.
In these applications, data must be analyzed quickly to make rapid business decisions. Data loss is typically not permissible for these use cases.
However, data loss does occur, predominantly when working with a relational database. A SQL database is usually created around a known use case at the outset. Introducing another data structure or data model into a SQL stack can bog down the system with slower speed, slower ingestion, and lost data, as the data has to be altered to fit the database’s chosen model.
Lost data equals lost opportunities; any of it could be the gateway to an entirely new revenue stream.
A notable Redis usage example of fast data ingestion is Inovonics, which provides high-performance wireless sensor networks with more than 10 million devices deployed worldwide. For most of its 30-year history, Inovonics considered itself primarily a wireless technology provider. But the rise of big data helped the company realize that the unique data sets collected by its wireless devices and sensors could also have tremendous value.
Inovonics’ edge platform required robust data platform capabilities for resilience and performance while minimizing the operational footprint and operating costs. With Redis Enterprise Cloud, a fully automated Database-as-a-Service (DBaaS), Inovonics centralized all its data on Google Cloud, opening up new product offerings in the form of insightful, easy-to-access data analytics.
Inovonics uses Redis Enterprise on its IoT edge devices to push data to its gateways and to the company’s virtual private Google Cloud from those gateways.
On Google Cloud, Redis Enterprise handles data ingestion, storing the millions of daily messages coming from Inovonics’ sensor networks and providing a central, aggregated view from which to analyze the data. Redis Enterprise also stores the application data model so incoming messages can correlate with representational information such as sensor location.
In addition to challenging in-store retail, the COVID-19 crisis has forced technology vendors to re-calibrate and customize their operations and application delivery models. To maintain business continuity at scale without any downtime, businesses need the right tools and techniques to scale their infrastructure and accelerate their application response times.
For example, consider Freshworks, which builds cloud-based suites of business software. Due to extraordinary growth over the past six years, the company was straining the capabilities of its application architecture and development operations. As the company’s database load grew, it struggled to maintain performance. Looking to dynamically scale its cluster without compromising availability, the team also wanted to reduce the burden on Freshworks’ primary MySQL database and speed application responses.
After evaluating NoSQL in-memory databases like Aerospike and Hazelcast, Freshworks chose the high performance and flexibility of Redis. Ultimately, the team chose Redis Enterprise Cloud to ensure high availability and a seamless database experience as an infrastructure service for developers.
In addition to using Redis Enterprise as a frontend cache for its MySQL database, Freshworks uses Redis Enterprise’s highly optimized hashes, lists, and sorted set data structures and built-in Redis commands to meter the API requests coming into its Freshdesk software.
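A common way to meter API requests with Sorted Sets is a sliding-window limiter: each request's timestamp is added as a score (ZADD), entries older than the window are trimmed (ZREMRANGEBYSCORE), and the remaining count (ZCARD) is compared to the limit. Freshworks' exact scheme isn't detailed here, so the sketch below just mirrors that generic pattern in plain Python, with a list of timestamps standing in for the per-client Sorted Set.

```python
import time

# Hedged sketch of sliding-window API metering with Sorted Set semantics.
# In Redis: ZADD to record a hit, ZREMRANGEBYSCORE to drop old hits,
# ZCARD to count hits still inside the window.
class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = {}  # client_id -> timestamps (stand-in for a Sorted Set)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        window_start = now - self.window
        # Trim hits that fell out of the window (ZREMRANGEBYSCORE).
        timestamps = [t for t in self.hits.get(client_id, []) if t > window_start]
        if len(timestamps) >= self.limit:  # ZCARD >= limit: reject
            self.hits[client_id] = timestamps
            return False
        timestamps.append(now)             # ZADD the new hit
        self.hits[client_id] = timestamps
        return True
```

Because the trim happens on every call, a burst of rejected requests never extends a client's lockout; the window simply slides forward with the clock.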
Redis Enterprise also serves as a persistent store for background jobs stored on disk. And as Freshworks transitions to microservices, the company has started to separate critical workloads out of its monolithic Ruby on Rails web application framework. One of the first microservices to result from this effort is dedicated to authentication and uses Redis Enterprise as a session store.
Finally, Freshworks leverages Redis Enterprise’s powerful data structures, including HyperLogLog, bitmaps, and sets, as a frontend database for user analytics.
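Bitmaps make this kind of user analytics cheap: one bit per user ID marks activity (SETBIT), and counting set bits (BITCOUNT) yields, say, daily active users. The sketch below uses a Python int as a stand-in bitmap purely to illustrate the idea; it is not Freshworks' code. HyperLogLog serves the same counting goal with even less memory by accepting a small approximation error.

```python
# Illustration of the bitmap counting pattern Redis exposes via SETBIT/BITCOUNT.
# A Python int stands in for the Redis bitmap; the key name is hypothetical.
class DailyActiveUsers:
    def __init__(self):
        self.bitmap = 0

    def mark_active(self, user_id):
        self.bitmap |= 1 << user_id       # SETBIT dau:2024-01-01 <user_id> 1

    def count(self):
        return bin(self.bitmap).count("1")  # BITCOUNT dau:2024-01-01
```

Marking the same user active twice flips a bit that is already set, so duplicates are deduplicated for free, which is exactly why bitmaps suit unique-visitor metrics.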
The case studies above are merely a sample of the use cases where Redis Enterprise is an optimal choice. But there are a few features worth highlighting.
A caching layer stores repeatedly requested data. Ideally, that data is served with a sub-millisecond response time that enables faster loads and eases backend costs.
At an enterprise-grade level, an in-memory cache scales linearly across clouds and suffers no performance degradation. Quick data retrieval equals a faster response time for the user. A fast cache also balances backend service loads, allowing existing hardware to operate at peak performance.
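The standard way to get that load balancing is the cache-aside pattern: check the cache first, and only on a miss hit the backend and populate the cache for the next request. The sketch below uses a plain dict where production code would use a Redis client (GET/SETEX); the function and key names are illustrative.

```python
# Minimal cache-aside sketch. In production the dict would be a Redis client
# fronting the primary database; names here are illustrative assumptions.
cache = {}

def fetch_product(product_id, db_lookup):
    """Return a product, consulting the cache before the backend."""
    key = f"product:{product_id}"
    if key in cache:                   # cache hit: sub-millisecond in Redis
        return cache[key]
    value = db_lookup(product_id)      # cache miss: query the slower backend
    cache[key] = value                 # populate so the next request hits
    return value
```

Every repeated read after the first is served from memory, which is where the "eases backend costs" claim above comes from: the primary database only sees misses.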
(Note: Some Redis service providers, such as Amazon ElastiCache, support Redis only as a cache and not as a primary database.)
Robust messaging solutions are vital in a microservices architecture. These various collections of services need to communicate with one another in a loosely connected environment.
Messaging paradigms such as Pub/Sub support live broadcast notifications and are extremely useful for message dispersal when minimal latency and massive throughput are essential. These building blocks make a difference: Sorted Sets and Hashes power chat rooms, social media feeds, real-time comment streams, and server intercommunication, while another structure, Lists, can serve as a lightweight messaging queue.
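The Lists-as-queue pattern is simple: producers push onto one end (LPUSH) and consumers pop from the other (RPOP, or BRPOP to block until work arrives), giving first-in-first-out delivery. The sketch below mirrors that with a deque standing in for the Redis list; it is an illustration, not a production queue.

```python
from collections import deque

# Sketch of the Lists-as-queue pattern: LPUSH on one end, (B)RPOP on the
# other yields FIFO delivery. A deque stands in for the Redis list here.
queue = deque()

def lpush(message):
    queue.appendleft(message)  # LPUSH jobs <message>

def rpop():
    return queue.pop() if queue else None  # RPOP jobs
```

In real deployments BRPOP is usually preferred over RPOP so idle consumers block on the server instead of polling in a loop.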
Session management is all about personalization. Applications need to handle high volumes of data, all while caching and retrieving user-profiles and session metadata. With a reliable session store, it’s possible to scale to billions of field-value pairs with sub-millisecond response times and to handle unexpected spikes in traffic.
A session store manages session data and improves usability, authentication, user profiles, and logging performance by caching IDs and tokens. This reduces the load on the primary database and compute resources, saving money in the end.
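A typical Redis session store keeps one Hash per session (HSET session:<id> field value ...) with a TTL (EXPIRE) so abandoned sessions disappear on their own. The sketch below mirrors that shape in plain Python; the field names and the 30-minute TTL are assumptions chosen for illustration.

```python
import time

# Illustrative session store mirroring the Redis pattern: one Hash per
# session plus a TTL. The 30-minute TTL and field names are assumptions.
SESSION_TTL = 30 * 60
sessions = {}  # session_id -> (fields dict, expires_at)

def save_session(session_id, fields, now=None):
    now = time.monotonic() if now is None else now
    # HSET session:<id> ... then EXPIRE session:<id> 1800
    sessions[session_id] = (dict(fields), now + SESSION_TTL)

def load_session(session_id, now=None):
    now = time.monotonic() if now is None else now
    entry = sessions.get(session_id)
    if entry is None:
        return None
    fields, expires_at = entry
    if now > expires_at:
        del sessions[session_id]  # session expired; Redis does this for you
        return None
    return fields  # HGETALL session:<id>
```

Refreshing the TTL on each authenticated request (a rolling expiration) is the usual next step, so active users stay logged in while idle sessions lapse.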
Geospatial data lets an application integrate location-based features. Common examples include estimating distances, arrival times, and nearby recommendations.
With a geospatial index, it’s possible to store and search for coordinates using geospatial commands such as GEOADD (which adds one or more geospatial items to a Sorted Set), GEODIST, GEOHASH, and many others.
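GEODIST returns the great-circle distance between two members of the index. To make that concrete, the sketch below reimplements the underlying haversine formula in plain Python, using Palermo and Catania, the example pair from the Redis documentation; the earth-radius constant approximates the one Redis' implementation uses.

```python
from math import radians, sin, cos, asin, sqrt

# What GEODIST computes under the hood: the haversine great-circle distance.
# The radius constant approximates Redis' internal EARTH_RADIUS_IN_METERS.
def haversine_meters(lon1, lat1, lon2, lat2):
    earth_radius = 6_372_797.56  # meters
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * earth_radius * asin(sqrt(a))

palermo = (13.361389, 38.115556)  # longitude, latitude
catania = (15.087269, 37.502669)
distance = haversine_meters(*palermo, *catania)  # roughly 166 km
```

In Redis this whole calculation is one command, `GEODIST Sicily Palermo Catania`, after the two cities have been GEOADDed to the `Sicily` key.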
See what Redis geospatial indexes can do in the video below.
Developers have to scramble to deliver new applications with compelling features, and user expectations are hard to match. That keeps them busy.
To speed time to market, development teams must unlock sub-millisecond response time performance and find easy ways to apply multiple data models to get the freedom they need to build software the right way.
What way will you go when you start building tomorrow’s next great application? Explore these impactful ways to use Redis and build powerful applications with speed and performance.