2.1 Login and cookie caching

Whenever we sign in to services on the internet, such as bank accounts or web mail, these services remember who we are by using cookies. Cookies are small pieces of data that websites ask our web browsers to store and resend on every request to that service. For login cookies, there are two common methods of storing login information in cookies: a signed cookie or a token cookie.

Signed cookies typically store the user’s name, maybe their user ID, when they last logged in, and whatever else the service may find useful. Along with this user-specific information, the cookie also includes a signature that allows the server to verify that the information that the browser sent hasn’t been altered (like replacing the login name of one user with another).

Token cookies use a series of random bytes as the data in the cookie. On the server, the token is used as a key to look up the user who owns that token by querying a database of some kind. Over time, old tokens can be deleted to make room for new tokens. Some pros and cons for both signed cookies and token cookies for referencing information are shown in table 2.1.

Table 2.1 Pros and cons of signed cookies and token cookies

Signed cookie
  Pros: Everything needed to verify the cookie is in the cookie. Additional information can be included and signed easily.
  Cons: Correctly handling signatures is hard. It's easy to forget to sign and/or verify data, allowing security vulnerabilities.

Token cookie
  Pros: Adding information is easy. Very small cookie, so mobile and slow clients can send requests faster.
  Cons: More information to store on the server. If using a relational database, cookie loading/storing can be expensive.
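
To make the signature idea concrete, here's a minimal sketch of signing and verifying a cookie value with an HMAC. It isn't part of Fake Web Retailer's code; the SECRET value and function names are ours, and a real deployment would also need to manage that key carefully.

import hashlib
import hmac

SECRET = b'server-side secret key'    # hypothetical key; kept on the server, never sent to browsers

def sign_cookie(data):
    # Append an HMAC-SHA256 signature so the server can detect tampering.
    sig = hmac.new(SECRET, data.encode('utf-8'), hashlib.sha256).hexdigest()
    return data + '|' + sig

def verify_cookie(cookie):
    # Split off the signature, recompute it, and compare in constant time.
    data, _, sig = cookie.rpartition('|')
    expected = hmac.new(SECRET, data.encode('utf-8'), hashlib.sha256).hexdigest()
    return data if hmac.compare_digest(sig, expected) else None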

For the sake of not needing to implement signed cookies, Fake Web Retailer chose to use a token cookie to reference an entry in a relational database table, which stores user login information. By storing this information in the database, Fake Web Retailer can also store information like how long the user has been browsing, or how many items they’ve looked at, and later analyze that information to try to learn how to better market to its users.

As is expected, people will generally look through many different items before choosing one (or a few) to buy, and recording information about all of the different items seen, when the user last visited a page, and so forth, can result in substantial database writes. In the long term, that data is useful, but even with database tuning, most relational databases are limited to inserting, updating, or deleting roughly 200–2,000 individual rows every second per database server. Though bulk inserts/updates/deletes can be performed faster, a customer will only be updating a small handful of rows for each web page view, so higher-speed bulk insertion doesn’t help here.

At present, due to the relatively large load through the day (averaging roughly 1,200 writes per second, close to 6,000 writes per second at peak), Fake Web Retailer has had to set up 10 relational database servers to deal with the load during peak hours. It’s our job to take the relational databases out of the picture for login cookies and replace them with Redis.

To get started, we’ll use a HASH to store our mapping from login cookie tokens to the user that’s logged in. To check the login, we need to fetch the user based on the token and return it, if it’s available. The following listing shows how we check login cookies.

Listing 2.1 The check_token() function
def check_token(conn, token):
    # Fetch and return the given user, if available.
    return conn.hget('login:', token)

Checking the token isn’t very exciting, because all of the interesting stuff happens when we’re updating the token itself. For the visit, we’ll update the login HASH for the user and record the current timestamp for the token in the ZSET of recent users. If the user was viewing an item, we also add the item to the user’s recently viewed ZSET and trim that ZSET if it grows past 25 items. The function that does all of this can be seen next.

Listing 2.2 The update_token() function
import time

def update_token(conn, token, user, item=None):
    # Get the current timestamp.
    timestamp = time.time()
    # Keep a mapping from the token to the logged-in user.
    conn.hset('login:', token, user)
    # Record when the token was last seen.
    conn.zadd('recent:', token, timestamp)
    if item:
        # Record that the user viewed the item.
        conn.zadd('viewed:' + token, item, timestamp)
        # Remove old items, keeping the most recent 25.
        conn.zremrangebyrank('viewed:' + token, 0, -26)

And you know what? That’s it. We’ve now recorded when a user with the given session last viewed an item and what item that user most recently looked at. On a server made in the last few years, you can record this information for at least 20,000 item views every second, which is more than three times what we needed to perform against the database. This can be made even faster, which we’ll talk about later. But even for this version, we’ve improved performance by 10–100 times over a typical relational database in this context.
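
To see how check_token() and update_token() might fit together, here's a hypothetical bit of glue for a single page view. The handler, its arguments, and the use of uuid4() for token generation are our assumptions, not part of the book's listings.

import uuid

def handle_item_view(conn, request_token, user, item_id):
    # Hypothetical request handler: reuse a valid token, or issue a new one.
    if request_token and check_token(conn, request_token):
        token = request_token
    else:
        token = str(uuid.uuid4())    # random value to use as the token cookie
    update_token(conn, token, user, item=item_id)
    return token    # send this back to the browser as the cookie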

Over time, memory use will grow, and we’ll want to clean out old data. As a way of limiting our data, we’ll only keep the most recent 10 million sessions.1 For our cleanup, we’ll fetch the size of the ZSET in a loop. If the ZSET is too large, we’ll fetch the oldest items up to 100 at a time (because we’re using timestamps, this is just the first 100 items in the ZSET), remove them from the recent ZSET, delete the login tokens from the login HASH, and delete the relevant viewed ZSETs. If the ZSET isn’t too large, we’ll sleep for one second and try again later. The code for cleaning out old sessions is shown next.

Listing 2.3 The clean_sessions() function
QUIT = False
LIMIT = 10000000

def clean_sessions(conn):
    while not QUIT:
        # Find out how many tokens are known.
        size = conn.zcard('recent:')
        if size <= LIMIT:
            # We're still under our limit; sleep and try again.
            time.sleep(1)
            continue

        # Fetch the token IDs that should be removed.
        end_index = min(size - LIMIT, 100)
        tokens = conn.zrange('recent:', 0, end_index - 1)

        # Prepare the key names for the tokens to delete.
        session_keys = []
        for token in tokens:
            session_keys.append('viewed:' + token)

        # Remove the oldest tokens and their session data.
        conn.delete(*session_keys)
        conn.hdel('login:', *tokens)
        conn.zrem('recent:', *tokens)

How could something so simple scale to handle five million users daily? Let’s check the numbers. If we expect five million unique users per day, then in two days (if we always get new users every day), we’ll run out of space and will need to start deleting tokens. In one day there are 24 x 3600 = 86,400 seconds, so there are 5 million / 86,400 < 58 new sessions every second on average. If we ran our cleanup function every second (as our code implements), we’d clean up just under 60 tokens every second. But this code can actually clean up more than 10,000 tokens per second across a network, and over 60,000 tokens per second locally, which is 150–1,000 times faster than we need.
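
If you want to sanity-check those numbers, a few lines of Python reproduce the back-of-envelope estimate:

seconds_per_day = 24 * 3600                      # 86,400 seconds in a day
new_per_second = 5000000.0 / seconds_per_day     # just under 58 new sessions per second
print(new_per_second)                            # ~57.9
print(10000 / new_per_second)                    # ~172x headroom cleaning up over a network
print(60000 / new_per_second)                    # ~1,036x headroom cleaning up locally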

Where to Run Cleanup Functions

This and other examples in this book will sometimes include cleanup functions like listing 2.3. Depending on the cleanup function, it may be written to be run as a daemon process (like listing 2.3), to be run periodically via a cron job, or even to be run during every execution (section 6.3 actually includes the cleanup operation as part of an “acquire” operation). As a general rule, if the function includes a while not QUIT: line, it’s supposed to be run as a daemon, though it could probably be modified to be run periodically, depending on its purpose.
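
As one hypothetical way to run listing 2.3 as a daemon during development, you could start it in a background thread; this launcher isn't part of the book's code.

import threading

def start_cleanup_daemon(conn):
    # Run clean_sessions() in a background daemon thread; the thread
    # stops when QUIT is set to True or when the process exits.
    cleaner = threading.Thread(target=clean_sessions, args=(conn,))
    cleaner.daemon = True
    cleaner.start()
    return cleaner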

PYTHON SYNTAX FOR PASSING AND RECEIVING A VARIABLE NUMBER OF ARGUMENTS

Inside listing 2.3, you’ll notice that we called three functions with a syntax similar to conn.delete(*session_keys). Basically, we’re passing a sequence of arguments to the underlying function without having to unpack them individually ourselves. For further details on the semantics of how this works, you can visit the Python language tutorial via this short URL: https://mng.bz/8I7W.
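
As a quick illustration of that syntax (the names here are made up, not from the book), the * operator expands a sequence into separate positional arguments:

def show(*args):
    # Receives any number of positional arguments as a tuple.
    print(args)

keys = ['viewed:a', 'viewed:b', 'viewed:c']
show(*keys)    # equivalent to show('viewed:a', 'viewed:b', 'viewed:c')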

Expiring Data in Redis

As you learn more about Redis, you’ll likely discover that some of the solutions we present aren’t the only ways to solve the problem. In this case, we could omit the recent ZSET, store login tokens as plain key-value pairs, and use Redis EXPIRE to set a future date or time to clean out both sessions and our recently viewed ZSETs. But using EXPIRE prevents us from explicitly limiting our session information to 10 million users, and prevents us from performing abandoned shopping cart analysis during session expiration, if necessary in the future.
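
A minimal sketch of that alternative might look like the following; the per-token key names, the 30-day TTL, and the function name are our assumptions, and the redis-py call style follows the book's other listings.

import time

def update_token_with_expire(conn, token, user, item=None, ttl=30*86400):
    # Store the token-to-user mapping as its own key with a TTL,
    # instead of keeping it in the shared 'login:' HASH.
    conn.set('login:' + token, user)
    conn.expire('login:' + token, ttl)
    if item:
        conn.zadd('viewed:' + token, item, time.time())
        conn.zremrangebyrank('viewed:' + token, 0, -26)
        # Let the recently viewed ZSET expire along with the session.
        conn.expire('viewed:' + token, ttl)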

Those familiar with threaded or concurrent programming may have seen that the preceding cleanup function has a race condition where it’s technically possible for a user to manage to visit the site in the same fraction of a second when we were deleting their information. We’re not going to worry about that here because it’s unlikely, and because it won’t cause a significant change in the data we’re recording (aside from requiring that the user log in again). We’ll talk about guarding against race conditions and about how we can even speed up the deletion operation in chapters 3 and 4.

We’ve reduced how much we’ll be writing to the database by millions of rows every day. This is great, but it’s just the first step toward using Redis in our web application. In the next section, we’ll use Redis for handling another kind of cookie.

1 Remember that these sorts of limits are meant as examples that you could use in a large-scale production situation. Feel free to reduce these to a much smaller number in your testing and development to see that they work.