Use Redis for scaling distributed deep learning

By Medha Atre

At Eydle, we are reimagining distributed deep learning technology to optimize training speed and cost. Our approach handles fault tolerance, variable network latency, and device heterogeneity, reducing cost by 70-90%. By using Redis as an eventually consistent key-value store for model parameters, we have achieved 1.5x faster transaction times.
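As a rough illustration of the idea, parameters can be serialized and written to Redis with plain SET/GET, accepting last-writer-wins semantics as the eventual-consistency trade-off. This is only a minimal sketch, not Eydle's actual implementation: the function names (`push_params`, `pull_params`), the key scheme, and the use of pickle and the redis-py client are all assumptions for illustration.

```python
import pickle  # stdlib serializer; a production system may use a faster codec


def pack_params(params):
    """Serialize a dict of parameter arrays/lists into bytes for Redis."""
    return pickle.dumps(params)


def unpack_params(blob):
    """Inverse of pack_params."""
    return pickle.loads(blob)


def push_params(client, version, params):
    # Plain SET is last-writer-wins: concurrent workers may overwrite each
    # other's updates, which is the eventual-consistency trade-off accepted
    # here in exchange for low-latency parameter exchange.
    client.set(f"model:params:v{version}", pack_params(params))


def pull_params(client, version):
    # Workers may briefly read a stale version; training loops that tolerate
    # slightly stale parameters can proceed without coordination.
    blob = client.get(f"model:params:v{version}")
    return None if blob is None else unpack_params(blob)


if __name__ == "__main__":
    import redis  # requires the redis-py package and a running Redis server

    r = redis.Redis(host="localhost", port=6379)
    push_params(r, 1, {"w": [0.1, 0.2], "b": [0.0]})
    print(pull_params(r, 1))
```

Any object exposing `set`/`get` (a real Redis client, a cluster proxy, or a test double) can be passed as `client`, which keeps the parameter-exchange logic independent of the transport.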

Join our session to learn the detailed results of our experiments using Redis in various scenarios, and see how the results measure up against MySQL.