A Buyer’s Guide for Enterprise Caching, a companion e-book on enterprise caching solutions that deliver consistently high performance at scale, is now available. Download it for free below.
For decades, databases have worked behind the scenes, driving applications and websites and making digital experiences more dynamic and adaptable. But there’s a fundamental problem with this model: those same databases have also made applications slower.
That’s where caching comes in. It takes data stored in a database on your server’s hard disk and copies it to a temporary location where it can be accessed far more quickly and efficiently. As a result, the complex, energy-intensive, and time-consuming operation to acquire the data only needs to be performed once. From that point on, the data can be retrieved quickly and efficiently from the cache.
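This pattern, checking the cache first and falling back to the database only on a miss, is usually called cache-aside. A minimal sketch in Python, using a plain dict in place of a real cache like Redis and a made-up `query_database` function standing in for the slow disk-backed lookup:

```python
import time

# The cache here is a plain dict standing in for a store like Redis.
cache = {}

def query_database(key):
    """Hypothetical stand-in for an expensive database read."""
    time.sleep(0.05)  # simulate disk / network latency
    return f"value-for-{key}"

def get(key):
    # Fast path: serve from the cache if the key is already there.
    if key in cache:
        return cache[key]
    # Slow path: hit the database once, then cache the result
    # so every later read skips the expensive lookup.
    value = query_database(key)
    cache[key] = value
    return value
```

In production, the dict would be a networked cache and entries would typically carry a TTL so stale data eventually expires.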
Of course, as your company gets bigger and its reach grows, the stakes get higher and your margin for error becomes razor thin. Suddenly, caching is no longer a nice-to-have—it’s a must-have. What’s convenient for a small-scale company becomes essential for a large-scale competitive enterprise. And failure is not an option.
Enter the enterprise cache. Built on the solid foundation of the basic cache, it provides a suite of features that enterprises require in order to keep pace with growing demands, including high availability, genuine product support, sub-millisecond performance, fully distributed replication, and a cost-effective way of managing your complex data sets. It’s more scalable, more failure resistant, and yes, more affordable.
The rationale for adopting an enterprise caching solution is simple: you need to be able to scale, and you can’t afford to fail. How can you tell when it’s time to adopt an enterprise caching solution? There are a number of factors to consider:
1. Your original database won’t scale effectively
The humble ant is one of the most amazing creatures on the planet. It’s capable of lifting close to 5,000 times its own body weight. Over the centuries, numerous scientists (and science fiction filmmakers) have wondered what would happen if we took the immense strength of the tiny ant and expanded it to human size. Unfortunately, an ant doesn’t scale. If we produced one the size of your coworker, its legs would collapse under the weight of its own body.
Although the perils of expanding a cache aren’t quite as spectacular, they share some similarities. As they expand, standard caches typically run into two types of roadblocks: storage and resource limits. The former describes the amount of space available to cache data. The latter refers to the capacity to perform necessary functions, including storing and retrieving cached data.
The solutions can be straightforward, but potentially never-ending. When you hit storage limits, the standard remedy is obvious: increase your storage. When you hit resource limits, increase your bandwidth and processing power. With vertical scaling, you increase the resources allotted to your cache.
On-prem, this usually means replacing your current server with a more powerful one that has more RAM, processing power, network bandwidth, or all three. If your cache is in the cloud, it may mean moving to a larger instance. Another alternative, horizontal scaling, involves adding more nodes to the cluster of instances that are handling your cache without altering the size of an individual cache instance. In short, vertical scaling means increasing by size, while horizontal scaling involves increasing by number.
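Horizontal scaling depends on some scheme for deciding which node owns which key. A toy sketch, with hypothetical node names, that shards keys by hashing:

```python
import hashlib

# Hypothetical cache nodes; adding an entry here adds capacity.
NODES = ["cache-node-0", "cache-node-1", "cache-node-2"]

def node_for(key):
    # Stable hash so the same key always maps to the same node.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]
```

A naive modulo like this remaps most keys whenever the node count changes; real systems use consistent hashing or fixed hash slots (Redis Cluster, for example, assigns every key to one of 16,384 slots) to limit that churn.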
2. The cost of caching is becoming prohibitive
Steadily expanding the size of your cache to meet a growing demand may temporarily solve your problem, but at what cost? If you’re like many people, your belongings have outgrown your house or apartment and you may have had to rent one or more storage units. If so, then you realize that in most cases, it costs the same amount to stuff a storage locker with worthless knick-knacks as it does to fill it with priceless antiques.
Basic caching works in a similar fashion. Frequently used or high-value data is treated identically to less common or less important keys and values. Not only that, but when you run out of cache space, the nature of that data is irrelevant. You’re out of space, and adding cache space can be expensive. Redis on Flash (a component of Redis Enterprise) helps keep your cache costs in check by setting up a caching hierarchy: more actively used cache values are stored in RAM, while less frequently used ones are kept in much larger and less expensive flash memory.
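The hierarchy idea can be illustrated with a toy two-tier cache: a small “hot” tier standing in for RAM and a larger “cold” tier standing in for flash. This is a conceptual sketch only (both tiers are plain dicts, and the capacity is arbitrary), not how Redis on Flash is actually implemented:

```python
from collections import OrderedDict

class TieredCache:
    def __init__(self, hot_capacity=2):
        self.hot = OrderedDict()   # small, fast "RAM" tier
        self.cold = {}             # larger, cheaper "flash" tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        if len(self.hot) > self.hot_capacity:
            # Demote the least recently used entry to the cold tier.
            old_key, old_val = self.hot.popitem(last=False)
            self.cold[old_key] = old_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)  # keep hot entries hot
            return self.hot[key]
        if key in self.cold:
            # Promote back to the hot tier on access.
            value = self.cold.pop(key)
            self.put(key, value)
            return value
        return None
```

The effect is that frequently touched values stay in the fast tier while cold data overflows to cheaper storage instead of being evicted outright.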
3. You can no longer rely on a single master
Adding nodes to your cache can meet the demands of increased traffic, but it only tackles part of the issue. Basic caching allows for additional read replicas, a method of horizontal scaling that improves read performance by distributing the read load across multiple servers. Unfortunately, you are still limited to one master to handle all writes.
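The resulting topology can be sketched as a simple router: read commands fan out across replicas, while anything that mutates state is funneled to the lone master. The endpoints and the command list here are illustrative, not a real client API:

```python
import random

MASTER = "master:6379"  # hypothetical endpoints
REPLICAS = ["replica-1:6379", "replica-2:6379", "replica-3:6379"]

READ_ONLY = {"GET", "MGET", "EXISTS"}  # small illustrative subset

def route(command):
    # Read-only commands can be served by any replica,
    # spreading the read load horizontally.
    if command.upper() in READ_ONLY:
        return random.choice(REPLICAS)
    # Every write, wherever it originates, must hit the single master.
    return MASTER
```

Adding replicas scales the read path indefinitely, but the `MASTER` line never changes: that is the write bottleneck the next section addresses.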
Being limited to a single master can create problems if your deployments span multiple regions, providers, or clouds. If your application has a far-flung customer base, relying on a single master can create a debilitating bottleneck, because all of your write requests, regardless of their origin, must be directed to a single location.
It’s a little like going to a carry-out restaurant where you can pick up your order from multiple windows, but there’s only one cash register open where you’re supposed to pay. With Active-Active Geo-Deployment from Redis Enterprise, any master instance, regardless of its region or its provider, can handle both read and write requests.
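Active-Active replication rests on conflict-free replicated data types (CRDTs), which let every region accept writes and still converge to the same state. The simplest example is a grow-only counter, sketched minimally here: each region increments only its own slot, and merging takes the per-region maximum, so replicas agree no matter the order in which updates arrive:

```python
def increment(counter, region):
    # Each region only ever touches its own slot.
    counter[region] = counter.get(region, 0) + 1

def merge(a, b):
    # Per-region maximum: applying updates in any order converges.
    regions = set(a) | set(b)
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in regions}

def value(counter):
    # The observed count is the sum across all regions.
    return sum(counter.values())
```

Real Active-Active deployments cover far richer types than a counter, but the principle is the same: conflict resolution is built into the data type, so no region has to wait on a central master to write.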
4. High availability has gone from luxury to necessity
With a small-scale application, those occasional times when your app goes down can be an annoyance and an embarrassment. An enterprise-level outage is a game changer. A fielding error in a little league baseball game is unfortunate. A similar error in the World Series can cost millions.
Likewise, a failure in availability is no longer just an inconvenience. It’s a genuine liability. In fact, depending on the SLA you have with your customers, it can put you in legal jeopardy. Unfortunately, basic caching provides no inherent guarantees of scaling, security, or high availability. Although it’s theoretically possible to build many of these safeguards atop your open-source cache, these home-grown solutions often come with their own special headaches and hidden costs.
Of course, some third-party Redis caches offer three-nines (99.9%) availability, but only across a single region and with no data persistence, just snapshots. If your application is limited to one region, that may suffice. But if your company and customer base are international, it isn’t enough. Redis Enterprise Cloud offers a five-nines (99.999%) SLA across one or more regions. It supports data persistence and backup without impacting performance. In addition, it provides automatic cluster recovery and pure in-memory replication.
All of these increased demands can translate into slower load times as your company expands, alienating long-time customers and driving away potential new ones. Like it or not, response time is a critical component of the online experience.
According to Unbounce, 70% of users said that load time influenced their willingness to buy from online retailers. Research has shown that applications have roughly 100 ms before users get the sense they are waiting. That’s one-third of the time that it takes to blink. If your customers are able to blink while your application is loading, there’s a good chance you’ve already lost them.
And it’s not just a question of purchasing decisions. A study by Salesforce found that 83% of customers considered the experience to be as important as a company’s products and services.
Finally, in the age of viral media, one customer’s isolated bad experience is unlikely to remain isolated for long. When people have an unsatisfying experience on a website, they don’t usually keep it to themselves. On the contrary, according to Salesforce, 61% of customers share that bad experience with others. As a result, your application’s deficiencies can trigger a chain reaction of bad will as the news gets rapidly disseminated throughout your potential customer base.
Luckily, there’s a silver lining to this last sobering set of statistics. The same Salesforce study found that 70% of customers are apt to share their good experiences with others. If your company is growing and you want to build a base of happy new customers instead of shedding old ones, enterprise caching may prove to be just what you need to lay the groundwork for potentially limitless expansion of your customers’ digital experiences.
For more information, check out our complimentary A Buyer’s Guide for Enterprise Caching.