
e-Book - Redis in Action

This book covers the use of Redis, an in-memory database/data structure server


    4.1.1 Persisting to disk with snapshots

    In Redis, we can create a point-in-time copy of in-memory data by creating a snapshot. After creation, these snapshots can be backed up, copied to other servers to create a clone of the server, or left for a future restart.

    On the configuration side of things, snapshots are written to the file referenced as dbfilename in the configuration, and stored in the path referenced as dir. Until the next snapshot is performed, data written to Redis since the last snapshot started (and completed) would be lost if there were a crash caused by Redis, the system, or the hardware.
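As a sketch, the snapshot-related directives in redis.conf might look like the following (the file name and directory shown are illustrative defaults, not required values; the save rules are examples discussed below):

```
# Illustrative redis.conf excerpt (values are examples, not requirements)
dbfilename dump.rdb      # file the snapshot is written to
dir ./                   # directory where the snapshot file is stored
save 900 1               # BGSAVE if at least 1 write in 900 seconds
save 60 10000            # BGSAVE if at least 10,000 writes in 60 seconds
```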

    As an example, say that we have Redis running with 10 gigabytes of data currently in memory. A previous snapshot had been started at 2:35 p.m. and had finished. Now a snapshot is started at 3:06 p.m., and 35 keys are updated before the snapshot completes at 3:08 p.m. If some part of the system were to crash and prevent Redis from completing its snapshot operation between 3:06 p.m. and 3:08 p.m., any data written between 2:35 p.m. and now would be lost. But if the system were to crash just after the snapshot had completed, then only the updates to those 35 keys would be lost.

    There are five ways to initiate a snapshot:

    • Any Redis client can initiate a snapshot by calling the BGSAVE command. On platforms that support BGSAVE (basically all platforms except for Windows), Redis will fork,¹ and the child process will write the snapshot to disk while the parent process continues to respond to commands.
    • A Redis client can also initiate a snapshot by calling the SAVE command, which causes Redis to stop responding to any/all commands until the snapshot completes. This command isn’t commonly used, except in situations where we need our data on disk, and either we’re okay waiting for it to complete, or we don’t have enough memory for a BGSAVE.
    • If Redis is configured with save lines, such as save 60 10000, Redis will automatically trigger a BGSAVE
      operation if 10,000 writes have occurred within 60 seconds since the last successful save started. When multiple save lines are present, a BGSAVE is triggered any time one of the rules matches.
    • When Redis receives a request to shut down by the SHUTDOWN command, or it
      receives a standard TERM signal, Redis will perform a SAVE, blocking clients from
      performing any further commands, and then shut down.
    • If a Redis server connects to another Redis server and issues the SYNC command
      to begin replication, the master Redis server will start a BGSAVE operation if one
      isn’t already executing or recently completed. See section 4.2 for more information
      about replication.

    When using only snapshots for saving data, you must remember that if a crash were to happen, you’d lose any data changed since the last snapshot. For some applications, this kind of loss isn’t acceptable, and you should look into using append-only file persistence, as described in section 4.1.2. But if your application can live with data loss, snapshots can be the right answer. Let’s look at a few scenarios and how you may want to configure Redis to get the snapshot persistence behavior you’re looking for.


    For my personal development server, I’m mostly concerned with minimizing the overhead
    of snapshots. To this end, and because I generally trust my hardware, I have a
    single rule: save 900 1. The save option tells Redis that it should perform a BGSAVE
    operation based on the subsequent two values. In this case, if at least one write has
    occurred in at least 900 seconds (15 minutes) since the last BGSAVE, Redis will automatically
    start a new BGSAVE.

    If you’re planning on using snapshots on a production server, and you’re going to
    be storing a lot of data, you’ll want to try to run a development server with the same or
    similar hardware, the same save options, a similar set of data, and a similar expected
    load. By setting up an environment equivalent to what you’ll be running in production,
    you can make sure that you’re not snapshotting too often (wasting resources) or
    too infrequently (leaving yourself open for data loss).


    In the case of aggregating log files and analysis of page views, we really only need to
    ask ourselves how much time we’re willing to lose if something crashes between
    dumps. If we’re okay with losing up to an hour of work, then we can use save 3600 1
    (there are 3600 seconds in an hour). But how might we recover if we were processing logs?

    To recover from data loss, we need to know what we lost in the first place. To
    know what we lost, we need to keep a record of our progress while processing logs.
    Let’s imagine that we have a function that’s called when new logs are ready to be processed.
    This function is provided with a Redis connection, a path to where log files are
    stored, and a callback that will process individual lines in the log file. With our function,
    we can record which file we’re working on and the file position information as
    we’re processing. A log-processing function that records this information can be seen
    in the next listing.

    Listing 4.2 The process_logs() function that keeps progress information in Redis

    def process_logs(conn, path, callback):
        # Our function is provided with a callback that takes a connection
        # and a log line, calling methods on the pipeline as necessary.

        # Get the current progress.
        current_file, offset = conn.mget(
            'progress:file', 'progress:position')

        pipe = conn.pipeline()

        def update_progress():
            # This closure is meant primarily to reduce the number of
            # duplicated lines later. It updates our file and line number
            # offsets into the log file, and its execute() call writes any
            # outstanding log updates, as well as our file and line number
            # updates, to Redis.
            pipe.mset({
                'progress:file': fname,
                'progress:position': offset
            })
            pipe.execute()

        # Iterate over the log files in sorted order.
        for fname in sorted(os.listdir(path)):
            # Skip over files that are before the current file.
            if fname < current_file:
                continue

            inp = open(os.path.join(path, fname), 'rb')

            # If we're continuing a file, skip over the
            # parts that we've already processed.
            if fname == current_file:
                inp.seek(int(offset, 10))
            else:
                offset = 0

            current_file = None

            # The enumerate function iterates over a sequence (in this case
            # lines from a file), and produces pairs consisting of a numeric
            # sequence starting from 0, and the original data.
            for lno, line in enumerate(inp):
                # Handle the log line.
                callback(pipe, line)

                # Update our information about the offset into the file.
                offset = int(offset) + len(line)

                # Write our progress back to Redis every 1000 lines,
                # or when we're done with a file.
                if not (lno + 1) % 1000:
                    update_progress()
            update_progress()

            inp.close()

    By keeping a record of our progress in Redis, we can pick up with processing logs if at
    any point some part of the system crashes. And because we used MULTI/EXEC pipelines
    as introduced in chapter 3, we ensure that the dump will only include processed log
    information when it also includes progress information.
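The resume-from-offset bookkeeping can be sketched in isolation. This minimal example (using io.BytesIO as a stand-in for a real log file; the function name is hypothetical) shows why storing a byte offset is enough to continue exactly where processing stopped:

```python
import io

def replay_from(offset, data):
    """Resume reading 'data' from a saved byte offset, the way
    process_logs() seeks into a partially processed log file."""
    inp = io.BytesIO(data)
    inp.seek(offset)
    lines = []
    for line in inp:
        lines.append(line)
        offset += len(line)   # accumulate the byte offset, line by line
    return offset, lines

data = b"line one\nline two\nline three\n"

# First pass: process everything, remembering the final offset.
end, all_lines = replay_from(0, data)

# Simulate a crash after the first line was processed and its
# offset (len(b"line one\n") == 9) was saved to Redis.
resumed_offset, remaining = replay_from(9, data)
```

Because offsets are counted in bytes, the saved position is valid regardless of how many lines were processed before the crash.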


    When the amount of data that we store in Redis tends to be under a few gigabytes,
    snapshotting can be the right answer. Redis will fork, save to disk, and finish the snapshot
    faster than you can read this sentence. But as our Redis memory use grows over
    time, so does the time to perform a fork operation for the BGSAVE. In situations where
    Redis is using tens of gigabytes of memory, there isn’t a lot of free memory, or if we’re
    running on a virtual machine, letting a BGSAVE occur may cause the system to pause
    for extended periods of time, or may cause heavy use of system virtual memory, which
    could degrade Redis’s performance to the point where it’s unusable.

    This extended pausing (and how significant it is) will depend on what kind of system
    we’re running on. Real hardware, VMWare virtualization, or KVM virtualization will generally
    allow us to create a fork of a Redis process at roughly 10–20ms per gigabyte of memory that Redis is using. If our system is running within Xen virtualization, those
    numbers can be closer to 200–300ms per gigabyte of memory used by Redis, depending
    on the Xen configuration. So if we’re using 20 gigabytes of memory with Redis, running
    BGSAVE on standard hardware will pause Redis for 200–400 milliseconds for the fork. If
    we’re using Redis inside a Xen-virtualized machine (as is the case with Amazon EC2 and
    some other cloud providers), that same fork will cause Redis to pause for 4–6 seconds.
    You need to decide for your application whether this pause is okay.
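The arithmetic above can be captured in a tiny helper (a sketch with a hypothetical function name; the per-gigabyte rates are the rough figures quoted above, not exact measurements):

```python
def fork_pause_seconds(gb_in_use, low_ms_per_gb, high_ms_per_gb):
    """Rough (low, high) estimate in seconds of the fork pause for a BGSAVE."""
    return (gb_in_use * low_ms_per_gb / 1000.0,
            gb_in_use * high_ms_per_gb / 1000.0)

# 20 gigabytes on real hardware, VMWare, or KVM (~10-20 ms per gigabyte):
hardware = fork_pause_seconds(20, 10, 20)    # roughly (0.2, 0.4) seconds
# The same 20 gigabytes under Xen virtualization (~200-300 ms per gigabyte):
xen = fork_pause_seconds(20, 200, 300)       # roughly (4.0, 6.0) seconds
```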

    To prevent forking from causing such issues, we may want to disable automatic saving
    entirely. When automatic saving is disabled, we then need to manually call BGSAVE
    (which has all of the same potential issues as before, only now we know when they will
    happen), or we can call SAVE. With SAVE, Redis does block until the save is completed,
    but because there’s no fork, there’s no fork delay. And because Redis doesn’t have to
    fight with itself for resources, the snapshot will finish faster.

    As a point of personal experience, I’ve run Redis servers that used 50 gigabytes of
    memory on machines with 68 gigabytes of memory inside a cloud provider running
    Xen virtualization. When trying to use BGSAVE with clients writing to Redis, forking
    would take 15 seconds or more, followed by 15–20 minutes for the snapshot to complete.
    But with SAVE, the snapshot would finish in 3–5 minutes. For our use, a daily
    snapshot at 3 a.m. was sufficient, so we wrote scripts that would stop clients from trying
    to access Redis, call SAVE, wait for the SAVE to finish, back up the resulting snapshot,
    and then signal to the clients that they could continue.

    Snapshots are great when we can deal with potentially substantial data loss in
    Redis, but for many applications, 15 minutes or an hour or more of data loss or processing
    time is too much. To let Redis keep a more up-to-date record of in-memory
    data on disk, we can use append-only file persistence.

    ¹ When a process forks, the underlying operating system makes a copy of the process. On Unix and Unix-like systems, the copying process is optimized such that, initially, all memory is shared between the child and parent processes. When either the parent or child process writes to memory, that memory will stop being shared.