6.3.2 Fair semaphores

Because we can’t assume that all system clocks are exactly the same on all systems, our earlier basic counting semaphore will have issues where clients on systems with slower system clocks can steal the semaphore from clients on systems with faster clocks. Any time there’s this kind of sensitivity, locking itself becomes unfair. We want to reduce the effect of incorrect system times on acquiring the semaphore to the point where as long as systems are within 1 second, system time doesn’t cause semaphore theft or early semaphore expiration.

To minimize problems with inconsistent system times, we’ll add a counter and a second ZSET. The counter creates a steadily increasing timer-like mechanism that ensures that whoever increments the counter first should be the one to get the semaphore. We enforce that requirement with an “owner” ZSET that uses the counter-produced value as the score; checking our identifier’s rank in this new ZSET tells us which clients got the semaphore. The new owner ZSET appears in figure 6.7.

Figure 6.7 Fair semaphore owner ZSET
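To make this concrete, here’s a minimal sketch (assuming redis-py 3.x; the remote-lock key names and client identifiers are illustrative, and the counter is assumed to start fresh) of how counter-derived scores make rank reflect arrival order rather than clock order:

import redis

conn = redis.Redis()

# Two clients increment the shared counter; the earlier client
# receives the smaller value (assuming the counter starts at 0).
first = conn.incr('remote-lock:counter')    # 1
second = conn.incr('remote-lock:counter')   # 2

# Scoring the owner ZSET by counter value makes rank follow
# counter order, regardless of each client's system clock.
conn.zadd('remote-lock:owner', {'client-a': first, 'client-b': second})
print(conn.zrank('remote-lock:owner', 'client-a'))   # 0 -- would get the semaphore with limit=1
print(conn.zrank('remote-lock:owner', 'client-b'))   # 1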

We continue to handle timeouts the same way as our basic semaphore, by removing entries from the system time ZSET. We propagate those timeouts to the new owner ZSET by the use of ZINTERSTORE and the WEIGHTS argument.
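Here’s a small sketch of how that propagation works (again assuming redis-py 3.x, with made-up key names): once a timed-out member has been removed from the timeout ZSET, intersecting the two ZSETs with a weight of 0 on the timeout ZSET drops that member from the owner ZSET while preserving the counter scores:

import redis

conn = redis.Redis()

# 'stale' has already been removed from the timeout ZSET by
# ZREMRANGEBYSCORE; 'fresh' is still live in both ZSETs.
conn.zadd('sem', {'fresh': 1326437039.0})
conn.zadd('sem:owner', {'stale': 1, 'fresh': 2})

# Intersection keeps only members present in both ZSETs.  The 0 weight
# zeroes out the timeout ZSET's contribution, so the stored scores are
# exactly the owner ZSET's counter values.
conn.zinterstore('sem:owner', {'sem:owner': 1, 'sem': 0})

print(conn.zrange('sem:owner', 0, -1, withscores=True))
# [(b'fresh', 2.0)] -- 'stale' is gone, its counter score dropped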

Bringing it all together in listing 6.14, we first time out an entry by removing old entries from the timeout ZSET, and then intersect the timeout ZSET with the owner ZSET, saving to and overwriting the owner ZSET. We then increment the counter and add our counter value to the owner ZSET, while at the same time adding our current system time to the timeout ZSET. Finally, we check whether our rank in the owner ZSET is low enough, and if so, we have a semaphore. If not, we remove our entries from the owner and timeout ZSETs.

Listing 6.14 The acquire_fair_semaphore() function
import time
import uuid

def acquire_fair_semaphore(conn, semname, limit, timeout=10):
    # A 128-bit random identifier.
    identifier = str(uuid.uuid4())
    czset = semname + ':owner'
    ctr = semname + ':counter'

    now = time.time()
    pipeline = conn.pipeline(True)
    # Time out old entries, propagating the removals to the owner ZSET.
    pipeline.zremrangebyscore(semname, '-inf', now - timeout)
    pipeline.zinterstore(czset, {czset: 1, semname: 0})

    # Get the counter.
    pipeline.incr(ctr)
    counter = pipeline.execute()[-1]

    # Try to acquire the semaphore (zadd() takes a member -> score
    # mapping in redis-py 3.x).
    pipeline.zadd(semname, {identifier: now})
    pipeline.zadd(czset, {identifier: counter})

    # Check the rank to determine whether we got the semaphore.
    pipeline.zrank(czset, identifier)
    if pipeline.execute()[-1] < limit:
        # We got the semaphore.
        return identifier

    # We didn't get the semaphore; clean out the bad data.
    pipeline.zrem(semname, identifier)
    pipeline.zrem(czset, identifier)
    pipeline.execute()
    return None

This function has a few different pieces. We first clean up timed-out semaphores, updating the owner ZSET and fetching the next counter ID for this item. After we’ve added our time to the timeout ZSET and our counter value to the owner ZSET, we’re ready to check to see whether our rank is low enough.

FAIR SEMAPHORES ON 32-BIT PLATFORMS

On 32-bit Redis platforms, integer counters are limited to 2³¹ – 1, the standard signed integer limit. An overflow situation could occur on heavily used semaphores roughly once every 2 hours in the worst case (at roughly 300,000 increments per second, 2³¹ increments take about two hours). Though there are a variety of workarounds, the simplest is to switch to a 64-bit platform for any machine using any counter-based ID.

Let’s look at figure 6.8, which shows the sequence of operations that are performed when process ID 8372 wants to acquire the semaphore at time 1326437039.100 when there’s a limit of 5.

Figure 6.8 Call sequence for acquire_fair_semaphore()

Releasing the semaphore is almost as easy as before, only now we remove our identifier from both the owner and timeout ZSETs, as can be seen in this next listing.

Listing 6.15 The release_fair_semaphore() function
def release_fair_semaphore(conn, semname, identifier):
    pipeline = conn.pipeline(True)
    pipeline.zrem(semname, identifier)
    pipeline.zrem(semname + ':owner', identifier)
    # Returns True if the semaphore was properly released,
    # False if it had already timed out.
    return pipeline.execute()[0]
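Here’s a quick usage sketch (the connection setup and the protected work are placeholders):

import redis

conn = redis.Redis()

# Try to become one of at most 5 holders of the 'remote-lock' semaphore.
identifier = acquire_fair_semaphore(conn, 'remote-lock', 5, timeout=10)
if identifier:
    try:
        ...  # do the work the semaphore protects
    finally:
        release_fair_semaphore(conn, 'remote-lock', identifier)
else:
    ...  # the semaphore was full; retry or give up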

 


If we wanted to be lazy, in most situations we could just remove our semaphore identifier from the timeout ZSET, since one of the steps in the acquire sequence refreshes the owner ZSET by removing any identifiers that are no longer in the timeout ZSET. But removing our identifier from only the timeout ZSET opens a rare (but possible) race: another client’s acquire_fair_semaphore() call may have already performed its ZINTERSTORE refresh of the owner ZSET, but not yet added its own identifier to the timeout and owner ZSETs. In that window, our stale entry stays in the owner ZSET and counts against the limit, which could prevent that client from acquiring the semaphore when it should’ve been able to. To ensure correct behavior in as many situations as possible, we’ll stick to removing the identifier from both ZSETs.
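For comparison, the lazy variant would be just the single removal below (a hypothetical sketch, shown only to make the race concrete); the stale owner entry it leaves behind is exactly what can block a client whose ZINTERSTORE refresh ran before our ZREM:

def release_fair_semaphore_lazy(conn, semname, identifier):
    # Remove only the timeout entry; our entry in the owner ZSET
    # lingers until a later acquirer's ZINTERSTORE purges it.  A client
    # that refreshed the owner ZSET just before this call will still
    # count our stale entry against the limit.
    return conn.zrem(semname, identifier)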

Now we have a semaphore that doesn’t require that all hosts have the same system time, though system times do need to be within 1 or 2 seconds in order to ensure that semaphores don’t time out too early, too late, or not at all.