Back in chapter 6, we introduced an autocomplete procedure that used a ZSET to
store user names to be autocompleted on.
As you may remember, we calculated a pair of strings that would surround all of
the values that we wanted to autocomplete on. When we had those values, we’d insert
our data into the ZSET, and then WATCH the ZSET for anyone else making similar
changes. We’d then fetch 10 items between the two endpoints and remove them
between a MULTI/EXEC pair. Let’s take a quick look at the code that we used.
def autocomplete_on_prefix(conn, guild, prefix):
    # Find the start/end range for the prefix.
    start, end = find_prefix_range(prefix)
    identifier = str(uuid.uuid4())
    start += identifier
    end += identifier
    zset_name = 'members:' + guild

    # Add the start/end range items to the ZSET.
    conn.zadd(zset_name, start, 0, end, 0)
    pipeline = conn.pipeline(True)
    while 1:
        try:
            pipeline.watch(zset_name)
            # Find the ranks of our endpoints.
            sindex = pipeline.zrank(zset_name, start)
            eindex = pipeline.zrank(zset_name, end)
            erange = min(sindex + 9, eindex - 2)
            pipeline.multi()
            # Get the values inside our range, and clean up.
            pipeline.zrem(zset_name, start, end)
            pipeline.zrange(zset_name, sindex, erange)
            items = pipeline.execute()[-1]
            break
        except redis.exceptions.WatchError:
            # Retry if someone else modified our autocomplete ZSET.
            continue

    # Remove start/end entries if an autocomplete was in progress.
    return [item for item in items if '{' not in item]
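The find_prefix_range() helper comes from chapter 6 and isn’t repeated in this listing. As a refresher, a minimal sketch of that helper, assuming member names are limited to the lowercase character set used in chapter 6, looks roughly like this:

import bisect

# The valid characters, with '`' just before 'a' and '{' just after 'z'.
valid_characters = '`abcdefghijklmnopqrstuvwxyz{'

def find_prefix_range(prefix):
    # Find the position of the prefix's last character in our character set.
    posn = bisect.bisect_left(valid_characters, prefix[-1:])
    # Step back to the character that immediately precedes it.
    suffix = valid_characters[(posn or 1) - 1]
    # Return the start and end members that bracket everything with this prefix.
    return prefix[:-1] + suffix + '{', prefix + '{'

For the prefix 'abc', this sketch returns the pair ('abb{', 'abc{'), which sorts just before and just after every member starting with 'abc'.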
If few autocomplete operations are being performed at one time, this shouldn’t cause
many retries. But regardless of how often they happen, we still have a lot of code
dedicated to handling these hopefully rare retries: roughly 40% of the function,
depending on how we count lines. Let’s get rid of all of that retry code, and move the
core functionality of this function into a Lua script. The result of this change is
shown in the next listing.
def autocomplete_on_prefix(conn, guild, prefix):
    # Get the range and identifier.
    start, end = find_prefix_range(prefix)
    identifier = str(uuid.uuid4())

    # Fetch the data from Redis with the Lua script.
    items = autocomplete_on_prefix_lua(conn,
        ['members:' + guild],
        [start+identifier, end+identifier])

    # Filter out any items that we don't want.
    return [item for item in items if '{' not in item]

autocomplete_on_prefix_lua = script_load('''
-- Add our placeholder endpoints to the ZSET.
redis.call('zadd', KEYS[1], 0, ARGV[1], 0, ARGV[2])

-- Find the endpoint positions in the ZSET.
local sindex = redis.call('zrank', KEYS[1], ARGV[1])
local eindex = redis.call('zrank', KEYS[1], ARGV[2])

-- Calculate the proper range of values to fetch.
eindex = math.min(sindex + 9, eindex - 2)

-- Remove the placeholder endpoints.
redis.call('zrem', KEYS[1], unpack(ARGV))

-- Fetch and return our results.
return redis.call('zrange', KEYS[1], sindex, eindex)
''')
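The script_load() wrapper was defined earlier in this chapter. If you’re reading this section on its own, a minimal sketch of such a wrapper (not necessarily identical to the earlier listing) could look like the following:

import redis

def script_load(script):
    # Cache the script's SHA1 in a closure so we only load it once per process.
    sha = [None]
    def call(conn, keys=[], args=[], force_eval=False):
        if not force_eval:
            if not sha[0]:
                # Load the script into Redis and remember its SHA1 hash.
                sha[0] = conn.script_load(script)
            try:
                # Try to run the already-loaded script by its SHA1.
                return conn.evalsha(sha[0], len(keys), *(keys + args))
            except redis.exceptions.NoScriptError:
                # The script cache was flushed; fall back to EVAL below.
                pass
        # Send the full script body with EVAL.
        return conn.eval(script, len(keys), *(keys + args))
    return call

Calling the returned function sends EVALSHA when the script is already cached and falls back to EVAL otherwise, so repeated autocomplete calls avoid re-sending the script body.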
The body of the Lua script should be somewhat familiar; it’s a direct translation of the
chapter 6 code. Not only is the resulting code significantly shorter, but it also executes
much faster. Using a benchmarking method similar to the one we used in chapter 6, we
ran 1, 2, 5, and 10 concurrent processes that performed autocomplete requests
against the same guild as fast as possible. To keep our table simple, we only counted
attempts to autocomplete and successful autocompletes over the course of 10 seconds.
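The benchmark driver itself isn’t shown here; a minimal sketch of that kind of harness, assuming a Redis server on localhost and that the autocomplete functions above are defined in the same module, might look like the following (it counts completed autocompletes; tallying raw tries for the WATCH-based version would additionally mean counting each pass through its retry loop):

import time
import redis
from multiprocessing import Process, Queue

def benchmark_worker(queue, guild, prefix, duration=10):
    # Each client process uses its own connection and issues autocomplete
    # requests against the same guild as fast as it can.
    conn = redis.Redis()
    completed = 0
    stop = time.time() + duration
    while time.time() < stop:
        autocomplete_on_prefix(conn, guild, prefix)
        completed += 1
    queue.put(completed)

def run_benchmark(clients, guild='test-guild', prefix='abc'):
    # Start the requested number of concurrent clients and sum their totals.
    queue = Queue()
    workers = [Process(target=benchmark_worker, args=(queue, guild, prefix))
               for _ in range(clients)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(queue.get() for _ in workers)

if __name__ == '__main__':
    for clients in (1, 2, 5, 10):
        print(clients, 'clients:', run_benchmark(clients), 'autocompletes')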
Table 11.3 shows the results of this test.
Benchmark configuration | Tries in 10 seconds | Autocompletes in 10 seconds
---|---|---
Original autocomplete, 1 client | 26,339 | 26,339
Original autocomplete, 2 clients | 35,188 | 17,551
Original autocomplete, 5 clients | 59,544 | 10,989
Original autocomplete, 10 clients | 57,305 | 6,141
Lua autocomplete, 1 client | 64,440 | 64,440
Lua autocomplete, 2 clients | 89,140 | 89,140
Lua autocomplete, 5 clients | 125,971 | 125,971
Lua autocomplete, 10 clients | 128,217 | 128,217

Looking at our table, when executing the older autocomplete function that uses
WATCH/MULTI/EXEC transactions, the probability of finishing a transaction is reduced
as we add more clients, and the total attempts over 10 seconds hit a peak limit. On the
other hand, our Lua autocomplete can attempt and finish far more operations every
second, primarily due to the reduced overhead of fewer network round trips, as well as
not running into any WATCH errors due to contention. Looking at just the 10-client
version of both, the Lua autocomplete is able to complete more than 20 times as many
autocomplete operations as the original (128,217 versus 6,141).
Now that we’ve seen how well we can do on one of our simpler examples, let’s look
at how we can improve our marketplace.