
THE DATA ECONOMY PODCAST

HOSTED BY MICHAEL KRIGSMAN


How to Create Immersive Player Experiences with Real-Time Data

Farah Ali, VP of Growth Technology / Electronic Arts

https://www.youtube.com/embed/c0Lx7kepBFc

“AI and ML is where we can extract the most value when it comes to data… if you look at the raw potential of a technology and where we can extract the most value for gaming, it’s with artificial intelligence and ML.”

Farah Ali
VP of Growth Technology / Electronic Arts

Farah Ali

Farah Ali, VP of Technology Growth Strategy at Electronic Arts, is responsible for creating world-class, immersive player experiences for 500 million global users. Prior to EA, Farah was Co-Founder and CTO at FreightWeb, a venture-capital-backed logistics company. In addition to her current role at EA, she is Founder and President of the non-profit One Good Act and Co-Founder of PWIC, which advocates for underrepresented populations in STEM fields.

In this episode, Farah explains how her team uses petabytes of data to build a more competitive gaming experience. She shares how data is used to drive personalization at scale, improve game quality, create new worlds, and cultivate realism. She also shares insight into how you can leverage data to ensure better performance and reliability, fight fraud, and cultivate a fair, inclusive environment for gamers.

Transcript

MICHAEL KRIGSMAN: Today, we’re talking about real-time data at scale with Farah Ali, the VP of Technology Growth Strategy at Electronic Arts. Before we go forward, I have to say a huge thank you, shout-out to Redis for making this conversation possible. So thank you, Redis. Farah, how are you? It’s great to see you today.

FARAH ALI: You too, Michael. I’m doing good.

MICHAEL KRIGSMAN: Farah, tell us about Electronic Arts. It’s a brand name. We all know the name Electronic Arts, but give us a little bit of the insider view.

FARAH ALI: Well, we say our mission at EA is to inspire the world to play. It’s kind of like that as a company. We really focus on building the best games, and digital content, and services, to make the customer experience, the player experience, delightful. And we really think about gaming as a way to build connection, to build meaning, and keep curiosity alive. No matter your age, no matter your demographic. So it’s really about fun, entertaining, connective experiences.

MICHAEL KRIGSMAN: I love that. And I’m excited to talk about data, and how data supports this mission that you just described. So Farah, what is your role? I know you’re VP of Technology Growth Strategy, but what does that translate to? What are the activities and things that you do at Electronic Arts?

FARAH ALI: So, it’s a very interesting role. It’s a mix between a tech strategy, a corp strategy, and a corp dev role. And what I primarily do is I look at emerging and future technologies, and how that relates to what our tech strategy should be. So think of it as creating the future fit for our technology strategy. As part of that, there’s special projects, incubation projects that I’m working on. I look at technology M&A. Should we be investing in certain technologies? Should we be buying certain tech?

And it’s really just about, how do we make sure that we stay competitive? And every new trend, or idea, or tech, that you hear about, we have somebody who is evaluating the merits of it. And evaluating it, but in context with what we do as a company, what our goals are. And so the technology isn’t the end, but it’s really about how can this be used to further delight our players and power the player experience?

MICHAEL KRIGSMAN: So you’re looking at technology as the support or enabler for that great player experience, and immersive experience that you were describing earlier.

FARAH ALI: Exactly, exactly. And how do we make sure that we’re competitive, right? We’re not behind. When we’re thinking about things like blockchain technology, where is it relevant? When VR/AR came on first, there’s always been constant experimentation in this space. But we don’t go to scale until we really see the opportunity, the killer experience, the revenue growth opportunity there.

MICHAEL KRIGSMAN: So it’s all very much strategically in support of your customers.

FARAH ALI: Exactly. It’s very much in support of our customers, our players. And if it doesn’t make sense for our player, if it doesn’t enhance the player experience in some way, if it doesn’t add to the game experience in some way, then we’re not looking at it purely from the technology aspect.

MICHAEL KRIGSMAN: I know that you are very data-focused. You use lots of real-time data. Can you describe how that data fits into this overall strategic picture that you just laid out for us?

FARAH ALI: Yeah, I mean we get petabytes of data coming in daily, generated by our users. This is data that’s coming from games that are being played, from services that are being run, from content that’s being dropped, from campaigns that we’re running. So it’s all kinds of data. And we look at it for a bunch of different use cases.

And I would say, particularly, there are four different personas when you think about how our data is used. So there’s the business leaders. They mostly don’t need real-time data. They’re really looking at, what are some key customer metrics? And they’re looking at trends. They’re making sure that things are trending the right way. Is the average spend per player in the right place? Are customers, on average, engaging for a particular amount of time in a session with a game? What’s the sentiment of the players?

And so sometimes daily fluctuations are just noise. They don’t really point to a direction. And so that’s really a trend over time. But then, there may be cases where it’s a privacy or compliance issue, or there’s a geo-sensitive issue. And so there’s also a real-time aspect to that data, where executives can react in real time. And so that sort of data is for business leaders, for executive decision-making, and for strategy.

Then, there are the producers or the game developers. They’re really focused on the player experience. The producer, the game designer, is looking at data and saying, I put this game map together this way. I have a building here. I have these pieces of content that you can interact with. Are the players actually interacting with the game the way that I intended?

And so it’s a constant loop of playtesting, getting that data, going back to the drawing board, and iterating over that. So that, we are building the best game. And we’re building the experience the way that it’s actually being used, versus how we think it should be. So there’s a lot of real-time telemetry, but it’s not always just for live games. It’s also for games that are currently under production.

Then, you have analysts. And analysts are looking at, how do you get insights from the data? So you have metrics, you have some quantitative data, you have some qualitative data. How do you piece that together to actually make sense of it? And either have experiments that you can prove or disprove. Or hypotheses that you can test out. Or just look at forecasting and trends. And analysts are usually paired with business leaders, or producers, game developers, to actually help them mine this data.

And then you have your engineers. You can have data scientists. You have data analysts. So you have data engineers, and then you have just your regular software engineers who are actually building the system. So they’re figuring out, how do you build the right standardized telemetry? Where is this data going to be collected and stored? If it’s real-time, what’s the scalability of our systems, allowing the data to come in?

And essentially, making it such that the systems allow for ease of access for everybody who needs access to that data. And then, the workflows on top are self-serve. So, for example, the game developer spends most of their time focusing on the game and developing the game. And they don’t have to worry about telemetry, or building systems, or capturing data. So that can be done at scale by these engineers. That’s primarily kind of the four ways we think about data.

MICHAEL KRIGSMAN: It sounds like data is woven very tightly into the fabric, into the DNA of the company.

FARAH ALI: Yep, it is. And some of the personas that I talked about, you can think about the use cases. So, for example, game producers, game developers, they really are thinking about personalization at scale. So when you are playing a game, you should feel like it’s made just for you. The fun aspect of it is built in a way that the difficulty is not too much, but it’s not too little. So there’s a dynamic difficulty piece of it, which is different for different people.

If you are playing an online game that is with other players, a multiplayer game, how do we matchmake you with the right skill level? Not too high and not too low. So there’s tons of ways to think about how that data can be used. So we think about personalization at scale and the data needed for that. We think about– we call it intelligent quality applications. And that can be anywhere from finding bugs in games, to live site issues, to fraud detection, to cheat detection.
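To make that concrete, here is a minimal sketch of skill-based matchmaking along the lines Farah describes: pair a waiting player with an opponent whose rating is neither too high nor too low. The class, function names, and the rating gap are illustrative assumptions, not EA’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Player:
    player_id: str
    skill_rating: float  # e.g. an Elo- or TrueSkill-style number

def find_match(candidate: Player, pool: list[Player], max_gap: float = 100.0) -> Player | None:
    """Return the waiting player whose skill is closest to the candidate's,
    as long as the gap stays within max_gap ("not too high and not too low")."""
    eligible = [p for p in pool if abs(p.skill_rating - candidate.skill_rating) <= max_gap]
    if not eligible:
        return None  # caller can widen the gap or fall back to an NPC match
    return min(eligible, key=lambda p: abs(p.skill_rating - candidate.skill_rating))

# Usage: p2 (rating 1180) is within 100 points of p1 (1200), so they get matched.
pool = [Player("p2", 1180.0), Player("p3", 1420.0)]
match = find_match(Player("p1", 1200.0), pool)
print(match.player_id if match else "no match yet")
```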

Then there’s things like procedural generation. So how do you use data to actually create epic worlds? Create characters? Create faces? So you’re not manually creating the thing. You’re actually using data and learning from it. And algorithmically creating worlds, and characters, and games, and so forth.

And then there’s things like believable characters in games. You have non-player characters. You have– in FIFA, you’ll see stuff like the grass that the players are running on. And it kind of moves in the wind, the way that you’d expect real grass to. So there’s tiny, minute things that we pay attention to. Tiny details that make your experience more immersive and more real. So yeah, those are kind of the different ways that we think about data.

MICHAEL KRIGSMAN: You said something really intriguing. Using data to create worlds. And there’s been all this discussion of the metaverse with Facebook, recently. Can you give us a glimpse of how you use data to create worlds? It’s such an intriguing statement.

FARAH ALI: I think it’s slightly different from the metaverse idea. If you think about it, if you had to build a game today. And let’s say you want to take some sci-fi book that you’re really excited about, and you want to convert that into a game. Just think of the sheer number of characters. Think of the worlds that you’d have to actually draw. And then the scenes you’d have to create, to animate. It’s a lot of manual effort. And then, the time to market that that results in.

And so part of it is an efficiency play. So we don’t need to actually go and film and motion capture 100 different people just to create the audience in, let’s say, FIFA, or Madden, or one of these other sports games. But if we have enough data, we can actually go and create crowds. So they’re not actual people, but they’re algorithmically generated. So part of it is for efficiency, part of it is for realism. And you see this in movies, you see this with computer-generated imagery.

On the metaverse side, what we’re thinking about it is in terms of immersiveness. So, we really think about how your interaction with the game transports you into feeling like you’re in the game, without sort of even thinking about AR, VR, or those techniques. And one of the ways that you do it is to make the gameplay experience very, very seamless. When you’re in the character, you feel like you are the character. You are part of the narrative in the story. And it’s really about the storytelling. That’s really about the gameplay mechanics. That’s really about the psychology of gaming.

The metaverse itself is a fascinating topic, and I think there’s so much conversation going on around it. I think, to me, primarily it’s about identity at the very core of it. So how do you have this online identity that’s the same across every online property that you’re on? So if I have a particular identity, I want that same identity tied to YouTube, or Twitter, or my EA account, or when I go to Facebook, or when I go play an Epic game. And how does that happen, that interoperability between organizations?

I think, to me, those are some of the challenges, the interoperability and the core identity. Because how do you do that, and still keep a decentralized environment? And I think finally on the metaverse, the final piece on data is that, who’s responsible for the data? So there’s a lot of content that people will create. Who’s responsible for governing that?

Who’s responsible for looking at it? We look at data for personalization for gamers. It’s because we have a mission, and we have a duty towards our gamers. But the metaverse is not really owned or controlled by anyone. It’s the collective community that will decide what those experiences look like, whether something should be personalized or not. So I’m very curious how that will evolve, how the governance will evolve around that, and how the moderation will evolve around that.

MICHAEL KRIGSMAN: A lot of very interesting, and very open, questions that I suspect are not going to be resolved for years. [LAUGHS] When you are collecting data around the gameplay, are the games heavily instrumented? How are you getting that data, and what are you doing with that data as the player is moving through the game?

FARAH ALI: So we do have standardized telemetry. And that’s very important because then a game team could actually decide the kind of metrics they want to look at, the kind of analysis they want to do. And for that, what kind of instrumentation they should enable. And so say you’re instrumenting something as simple as: this game is in mode x, and x means something.

You can actually instrument that in the game in a very standard way. Which means that when you’re actually ingesting that data, and transforming that data, it comes out how you’d expect it to. And that means you could have standardized dashboards, standardized reporting. So telemetry and instrumentation standardization is a very important tenet of how we handle and manage data.
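To illustrate the idea of standardized telemetry, here is a minimal sketch of a common event envelope that every game could emit the same way, so ingestion, transformation, and dashboards always see the same shape. The field names and the example event are illustrative assumptions, not EA’s actual schema.

```python
import json
import time
import uuid

def make_telemetry_event(game_id: str, event_name: str, payload: dict) -> str:
    """Wrap a game-specific payload in a standard envelope so ingestion,
    transformation, and dashboards all see the same shape."""
    event = {
        "event_id": str(uuid.uuid4()),
        "game_id": game_id,
        "event_name": event_name,       # e.g. "game_mode_changed"
        "timestamp_ms": int(time.time() * 1000),
        "payload": payload,             # e.g. {"mode": "x"}
    }
    return json.dumps(event)

# Usage: instrument "this game is in mode x" the same way in every title.
print(make_telemetry_event("fifa", "game_mode_changed", {"mode": "x"}))
```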

The other piece is, how do you handle it when, for example, you’re going through a game? Let’s say it’s in production, it’s on the live site. One of the most important things is performance, and reliability, and uptime. If you’re playing a game, you’re actually about to win, and then, boom, you get disconnected. And you don’t get back on for another minute. The game’s over. You lost that history. That data was gone. Any coins you’d won in that match, for example, might be gone. So how do you make sure that that doesn’t happen?

You cannot guarantee zero downtime all the time. But then how do you have reliability, so you have a consistent experience? So we really think about looking at anomaly detection. We have instrumentation around that. One example is we look at the peak simultaneous users (PSU) at any given time. And we will draw curves to look at here’s the actual PSU, and here’s the predicted.

And then we’ll measure the variance between the actual and predicted. And if the variance is greater than what we’d expect it to be, then we’d actually go and flag that as an anomaly. And then for that anomaly, usually we have more data that will point to, oh, this is an outage in the data center. Or this is a fiber cut in a channel. Or this ISP is interacting this way. Or we actually have a crash on the server side.
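A minimal sketch of that variance check, assuming a simple relative-difference threshold: compare actual peak simultaneous users (PSU) against the predicted curve and flag the points where the gap is larger than expected. The tolerance value and function name are illustrative.

```python
def flag_psu_anomalies(actual: list[float], predicted: list[float],
                       tolerance: float = 0.15) -> list[int]:
    """Return indices (e.g. time buckets) where actual PSU deviates
    from predicted PSU by more than `tolerance` (relative variance)."""
    anomalies = []
    for i, (a, p) in enumerate(zip(actual, predicted)):
        if p == 0:
            continue
        variance = abs(a - p) / p
        if variance > tolerance:
            anomalies.append(i)  # downstream: correlate with data-center, ISP, or crash signals
    return anomalies

# Usage: the sudden dip at index 2 gets flagged for investigation.
print(flag_psu_anomalies([100, 102, 40, 99], [100, 101, 103, 100]))
```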

And then some of these are actually auto-remediated. So for example, maybe there was a bug in the game. And after the game server ran for x hours, the memory just kind of kept adding up. And it ran out of memory. And so when that particular anomaly is detected, it might just go and reboot that machine. But then, it might also open up a ticket in the incident management flow that might say, this out-of-memory exception was found in this particular code. And this needs to be remediated and put into our next patch.

So there’s a number of automatic ways to handle it. There’s also manual ways to handle it. Another example would be when we’re matchmaking. So if somebody’s been online, trying to match to another player, and they haven’t been matched, and it’s been more than a certain number of minutes or even seconds, then something would kick in. And we’d say, here, you play against the machine. Here’s an NPC character you can play with, because maybe there are just not that many people online, or maybe we’re unable to connect to that geo. So that’s another way.
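Here is a minimal sketch of that matchmaking fallback, assuming a simple wait-time threshold: if no suitable opponent has been found within the limit, hand the player an NPC match instead of leaving them in the queue. Names and the threshold are illustrative.

```python
def resolve_match(wait_seconds: float, matched_player_id: str | None,
                  max_wait_seconds: float = 30.0) -> dict:
    """Decide what to do with a player who is waiting in the matchmaking queue."""
    if matched_player_id is not None:
        return {"type": "pvp", "opponent": matched_player_id}
    if wait_seconds >= max_wait_seconds:
        # Nobody suitable online (or the geo is unreachable): play against the machine.
        return {"type": "npc", "opponent": "npc_default"}
    return {"type": "keep_waiting"}

# Usage: after 45 seconds with no human opponent, fall back to an NPC match.
print(resolve_match(wait_seconds=45, matched_player_id=None))
```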

And a final sort of example would be around, let’s say, security or fraud. So we’d have ways to detect if somebody in the game is a bad actor. So an example could be, we know all the Xbox devices that the players are on. And we could just know which devices have one banned account, or more than one banned account, or no banned accounts, and how they are connected to each other.

And then, we assign risk scores to players based on that. And then depending on the type of intrusion we’re detecting, it could lead to some sort of temporary blacklisting. It could lead to temporary suspension. It could lead to some other sort of mechanism where there’s some communication with the player. Or, if it’s really egregious, then outright banning of the player, which happens rarely and can be contested.
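A minimal sketch of that device-based risk scoring and tiered response, with made-up weights and thresholds purely for illustration; the real signals and enforcement policy would be far richer.

```python
def device_risk_score(banned_accounts_on_device: int, linked_risky_devices: int) -> float:
    """Crude risk score: more banned accounts and more links to risky devices => higher risk."""
    return min(1.0, 0.4 * banned_accounts_on_device + 0.1 * linked_risky_devices)

def enforcement_action(risk: float) -> str:
    """Map a risk score to a tiered response (temporary measures first; bans are rare)."""
    if risk >= 0.9:
        return "ban"                   # egregious cases only, and contestable
    if risk >= 0.6:
        return "temporary_suspension"
    if risk >= 0.3:
        return "warn_player"
    return "no_action"

# Usage: one banned account plus three links to risky devices => temporary suspension.
print(enforcement_action(device_risk_score(banned_accounts_on_device=1, linked_risky_devices=3)))
```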

So it’s really all of these things that can happen. And when you have a real-time live game happening with lots of people on it, you have a duty to make sure that that experience is safe, that experience is secure, and it’s fair. Everybody in that ecosystem has a fair shot at winning or losing. And so we look at a lot of that when we’re actually thinking about live site and data.

MICHAEL KRIGSMAN: As you said earlier, it sounds like you’re working with a tremendous volume of real-time data, non-real time data, quantitative data, qualitative data, all mixed together and flowing simultaneously.

FARAH ALI: Correct. Yes. Yeah, I mean petabytes of data generated every day. So we have something like 500 million players worldwide, and more than 300 games. So you can just imagine the type of data that we get. And EA has been around since ’82, so we also have a lot of historical data. And we use that data, too, when we look at newer versions of our game.

And I would say some of the real-time use cases, like I said, are around security, compliance, live site, uptime, performance, reliability. It’s also for things like, are we right-sized? Do we have too many servers up, and too few players on? So do we need to actually scale back, and make sure that we’re cost-effective?

And then the second piece of it, which is the non-real time. That’s where you probably don’t need the level of granularity that you would need with some of these live site reporting metrics. So if you want to look at forecasting, or budget, or sentiment over time, or average session length over time in a game. Or average spend per player, like I said, over time on a game. You want to look at that in historical data. You want to be able to slice and dice it multiple ways.

So what’s important with the non-real time data is the categorization of that data. Can I see what were the top customer support issues in a particular area? Can I do it by franchise? Can I do it by channel ID? Can I do it by customer service rep? Can I do it by number of incidents opened and closed? And so what do you do with that data down the line, is part of how that data is collected.
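To illustrate that kind of categorized slicing, here is a small sketch that counts the same support incidents by franchise, by channel, or by rep. The field names and records are illustrative, not EA’s actual data model.

```python
from collections import Counter

incidents = [
    {"franchise": "FIFA", "channel": "chat", "rep": "A", "status": "closed"},
    {"franchise": "FIFA", "channel": "email", "rep": "B", "status": "open"},
    {"franchise": "Sims", "channel": "chat", "rep": "A", "status": "closed"},
]

def slice_by(records: list[dict], key: str) -> Counter:
    """Count incidents by any categorical dimension (franchise, channel, rep, status)."""
    return Counter(r[key] for r in records)

print(slice_by(incidents, "franchise"))  # Counter({'FIFA': 2, 'Sims': 1})
print(slice_by(incidents, "channel"))    # Counter({'chat': 2, 'email': 1})
```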

And I think the final piece is when you have that mix of real-time and non-real time. When I mentioned that business case of leaders needing to look at trends, but then also really needing to know if something really egregious is happening right now. And how do you react to it? Sometimes, you have issues where you need to actually go and, as an executive, make a statement about the state of the world. You see all kinds of incidents happening around that.

And so it’s also about our employees. It’s about our environment. It’s about the world we live in. So we’re not just here to build games and profit from that, but we’re also existing within an ecosystem. And how do we actually interact with that ecosystem is a big part of it, too.

MICHAEL KRIGSMAN: Farah, you’re collecting so much data of so many different types. For many businesses, when they collect that data, it’s very hard to figure out where to focus. So how do you prioritize what data you look at, and what you let go, and so forth?

FARAH ALI: That’s a great question. And, you know, I don’t think we have it perfect either. And I think the mistake that a lot of people, and a lot of companies, make is not asking themselves: what are the questions you want to get answered? What are the experiments that you want to run? And how do you want to evaluate them?

And so if I want to see a trend over time, it doesn’t matter whether I collect the data every five minutes or every 15 minutes. The hourly granularity could be fine. The two-hourly or the daily granularity could be fine. So not asking yourself those questions also means that you’re over-engineering, or you’re ending up building stacks that end up costing a lot of money.

And I think the second piece is sometimes this stuff doesn’t scale well horizontally. And so you’re creating bespoke systems, and you’re tied to on-prem hosting solutions. So I think it’s really about asking the right questions and really going back to the personas. You’ve got your business leader or executive persona. You’ve got your engineer persona. You’ve got your producer, game designer persona, in our case. In some other case, it would be the medical expert persona and what kind of data they would need, for example. And then you have the analyst.

And not all of these people need real-time data. Some of these people are not technical, and they don’t need to understand how the data is organized, or the data structures behind it. They just need a self-service way to access it, access it immediately, and be able to process it.

So it’s really about building those workflows on top of your storage pipeline and your query processing pipeline that allow everybody to plug and play. You might have a hub and spoke model in that case, where you have analysts who can maybe even create their own ML models, or their plug and play components that fit in. So they can do the processing that they need.

But it’s really about asking the right questions upfront, understanding what percentage of data needs to be real-time versus not, and then factoring that in when you’re actually designing your system. And then making it extensible. So building a plug and play or modular component, so you can actually extend it over time for new use cases that you cannot foresee right now.

MICHAEL KRIGSMAN: You have a lot of clarity around the roles and, with each one of those roles, the type of data and the type of analysis that needs to be done. I have to imagine that part of that clarity comes from your longevity as an organization. As you said, you’ve been around for decades as a company.

FARAH ALI: Yes. Part of it is that, I mean my own personal kind of clarity, comes because I’ve been an operator for a very long time. So I started as an individual contributor, as an engineer. And I was a people manager. I ran big engineering teams. So I’ve dealt with these pain points firsthand.

I dealt with not having well-understood telemetry systems or instrumentation systems. So having to build these bespoke systems and then having to build a translation layer every single time. Not being able to get real-time data when you needed it. Not being able to go debug and troubleshoot on the live site when something happened. So part of that is because I’ve actually learned from past experience, past mistakes, and failures.

And then the second piece is, yes, when you actually have historical data. When you’ve seen how franchises did well, or didn’t do well. And then when you have so much sentiment data. The great thing about gamers is that they’re such a vocal bunch. You will not have a problem getting feedback from this customer segment. And so they’re there on Twitter. They’re there on Reddit. They’re calling us. They’re on chat. They’re on email. And we mine all of this information.

So we can actually go and look at sentiment, and then we create topics around those sentiments. So we have a good sense of here’s the top areas where we see issues, and here’s where we can actually remediate them. And here’s where this data is actually stuff we hadn’t thought about, and does this go back into our game production? And do we use this to actually build our next live content drop? Are people asking for more weapons of a certain kind? Or more masks or skins of a certain kind? And should we go and do that?

For example, over time you’ve learned that people like to see themselves in the game, especially The Sims, which is a simulation. And so we had a lot of feedback around, what about Halloween in there? What about all of these things that we celebrate? And so we’ve taken that feedback into account over time.

And we actually have a very cool division. It’s called Positive Play, I believe. And that looks only at how we make our games more inclusive for players. So they’re looking at data in an entirely different way. Because they’re looking at it from the perspective of, what’s our corporate social responsibility to our players? And how do we make sure that every player feels like they’re in the game with us? So, yeah. So I think it’s over time being in the shoes of the engineers and the analysts, but then also being in the shoes of the players. And really thinking like a player. I think that’s what helps us get that clarity.

MICHAEL KRIGSMAN: So you’re bringing both of these sides together. Again, I keep using this term clarity. The behind the scenes together with the player perspective, and they layer on top of each other. So you’re dealing with so much data, and it’s coming at you so rapidly. What kind of infrastructure do you have to manage this explosive amount of data?

FARAH ALI: Right. Yeah, and I think the other piece of that puzzle is you don’t always know what you need it for. Like, we don’t know what the metaverse will require, or what other future use cases will require. And so at the infrastructure level, you have to think about storage. And then you have to think about, do you need access to that storage immediately? Or is it kind of cold storage? And so how do you decide immediate need versus longer-term archival?

And so we have the storage piece, then you have the ingestion layer. So how do you actually ingest all of the data coming from your games, but then coming from social channels? So Twitter, and Metacritic, and so forth. From your financial systems, from your accounting systems, from many other sources. So a place where you can actually ingest and aggregate that data.

And then on top of that, you need a way to access that data. So you need to build APIs that are self-service. You need to build tools that are self-service. You need to build workflows so that anybody can go and create the workflow that they would need to extract the data. There could be ETL workflows, for example, that data analysts use. There are experimentation pipelines that an analyst might use.

And so you might actually build, on top of your workflows, a whole AI and experimentation pipeline. That’s actually what we have. It’s your train, test, predict/evaluate pipeline of that data. So you can actually go and build models for anomaly detection, models for fraud detection, all of those different types of cases.
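Here is a minimal sketch of such a train/test/predict-evaluate pipeline with pluggable components, assuming a trivial placeholder model; it only illustrates the shape of the pipeline, not the models EA actually runs.

```python
from typing import Callable

def run_experiment(data: list[tuple[list[float], int]],
                   train: Callable, predict: Callable,
                   split: float = 0.8) -> float:
    """Train on the first `split` fraction, predict on the rest, return accuracy.
    Teams plug in their own `train` / `predict` (anomaly, fraud, etc.)."""
    cut = int(len(data) * split)
    train_set, test_set = data[:cut], data[cut:]
    model = train(train_set)
    correct = sum(1 for features, label in test_set if predict(model, features) == label)
    return correct / max(1, len(test_set))

# Plug-in example: a trivial majority-class "model" just to exercise the pipeline.
def train_majority(rows):
    labels = [label for _, label in rows]
    return max(set(labels), key=labels.count)  # most common label in the training set

def predict_majority(model, _features):
    return model

data = [([0.1], 0), ([0.2], 0), ([0.9], 1), ([0.15], 0), ([0.12], 0)]
print(run_experiment(data, train_majority, predict_majority))
```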

And so you have your infrastructure. You have your stack. And then you have your teams that need to do something with it, and they are building their particular piece of the application. Think of it as a plug-in provider model, where they write their own plug-ins. Perhaps for their piece of the thing, or the app, or their service. But we provide– the core data team provides the infrastructure for storage, for ingestion, for access, and the key workflows in the core AI and experimentation platform.

MICHAEL KRIGSMAN: This infrastructure that you’re describing, how much of it is cloud-based? And how much of it is on-premises? And how do you decide whether to put pieces of your infrastructure in the cloud or on-premise?

FARAH ALI: Yeah, a great question. So we actually made a deliberate strategy several years ago to move to the cloud. And I would say most of our workloads are on the cloud. I think the other reason for that is because the gaming use cases became so important that cloud providers out there have also started building the right SKUs. So initially we needed GPUs, and that just wasn’t something that we could get from some of these providers. Now, it’s pretty common.

So most of our workloads are cloud-based, and it gives us the advantage of scale. It gives us the advantage of regional coverage, geo-distribution. And then it’s ease of access for all of the data pipelines, for our developers, and our analysts.

When I think about on-prem, historically we’ve used it when it’s data that’s PII. Or for some security reason. Or some compliance reason. Or if there’s a very customized game server, a very bespoke sort of game service that we can only run on-prem. So that’s, I would say, an exception. And our goal there, too, is once we see an offering that’s commoditized, that’s in the cloud, we want to move it there. And the primary reason for that is scale and efficiency. It really helps us plan better.

So one thing that we do in our games is while we’re doing production, and while the game is being developed on the infrastructure side, we’re constantly testing the game server density. So we’re seeing how many players per game server, and how do we rightsize it. And so we can actually go and say, OK, we can save x million by rightsizing to this particular density.

And then we do a lot of performance and scale tests. So we look at past data. And we say, OK, for this same type of franchise launched last year, this was the peak users on day one, and day two, and day three. And we go look back as many years as we need to. And then we use that, and we look at presales, to predict what the peak usage would be going forward. Then we actually go and test it at 2x, 3x, 5x. And then we try to start at a reasonable scale, but we’re provisioned so we can just spin up servers in the back end.
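A back-of-the-envelope sketch of that planning step: estimate this launch’s day-one peak from prior launches and presales, then derive the 2x/3x/5x load-test targets. All numbers and function names are illustrative assumptions.

```python
def predict_peak(prior_day1_peaks: list[int], prior_presales: list[int],
                 current_presales: int) -> int:
    """Scale the average historical day-one peak by the presales ratio."""
    avg_peak = sum(prior_day1_peaks) / len(prior_day1_peaks)
    avg_presales = sum(prior_presales) / len(prior_presales)
    return int(avg_peak * (current_presales / avg_presales))

def load_test_targets(predicted_peak: int, multipliers=(2, 3, 5)) -> list[int]:
    """Peak-load figures to test against before launch (2x, 3x, 5x the prediction)."""
    return [predicted_peak * m for m in multipliers]

# Usage with made-up numbers for two prior launches of the same franchise.
peak = predict_peak(prior_day1_peaks=[800_000, 950_000],
                    prior_presales=[400_000, 500_000],
                    current_presales=600_000)
print(peak, load_test_targets(peak))
```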

You can do that with on-premise, but it’s very, very difficult. You have to buy the machines. You’ve already paid for them, and what if you don’t get the usage you thought? And so you end up wasting a lot of money. And that’s how we think about it. And we’re not tied to any particular provider. So we really try to have a hybrid strategy, so we can leverage the benefits of the services across all providers.

MICHAEL KRIGSMAN: Does latency and speed of being able to access, transmit, and communicate that data ever play a role in your cloud versus on-premise decision making?

FARAH ALI: Absolutely, absolutely. The great thing about these cloud providers is that, in the last 10 years or so, there are very few regions where they don’t exist. I think, initially, that was one of the challenges. We have a lot of players in the Middle East, but we didn’t have any data centers close by. And we had some really high-value players there, for example, for our sports games. And we want to make sure that they get the right bandwidth.

But now, we actually can move those workloads to the cloud. So, for the most part, it’s not a problem. For some regions, it can still be a problem. But then it’s also because of localized fees and bandwidth constraints in those particular geos. And in that case, we sometimes have to do something specialized. But for the most part, I think the cloud has solved that problem for us. And having the regional data centers has really solved that problem for us.

And what we have done is build our games so that they are modular and componentized. And you can actually go and deploy in any particular container. So once you containerize a game, it’s very easy to actually go and deploy anywhere. And that’s a shift we made. Otherwise, before, it was really being packaged in a way that it would just run on this bespoke on-prem machine.

So I think we’ll still have some performance issues. We have rubberbanding issues. Sometimes, it’s because there’s just a lot of online traffic. I think last year we definitely were put to the test there. Everybody was online, even if not everybody was playing games. And so that has put us in a better position to actually be able to anticipate, and do something with, the additional workloads.

MICHAEL KRIGSMAN: It sounds like as the cloud providers have gotten better and continue to improve, it makes life easier for you. Gives you a great deal of flexibility that in the past you just didn’t have.

FARAH ALI: Absolutely, because we need to be always on. We need to. With the subscription model and the live services model, we have to be always on. And that raw compute, that access to even more raw compute than was possible before, and the fact that we could just horizontally scale out as we need, has really helped us with the time to market, with that scale.

And I think what has really worked well for us is a lot of these cloud providers are great partners. When we see a use case that doesn’t exist, we can actually go to them and explain to them. And then, that helps the community because there are others who could benefit from that particular SKU.

And so, I think if that didn’t happen, if you could only use one provider and there wasn’t that sort of competition in the market, it would be very hard. So the fact that the providers each have to provide better service than the other for us to use them makes it better for us as a company, because we’re always getting better service. We’re always getting access to great customer service, and access to architects who want to go and build the things that we would use.

MICHAEL KRIGSMAN: Farah, you’ve mentioned several times the importance of stability and performance. How do you balance the cost against the level of elasticity, availability, stability, and performance? Because look, if you spend enough money, you can have basically 100% uptime. But you may not have a business anymore because you can’t afford to maintain it. So how do you draw that balance?

FARAH ALI: Yeah, I mean, I guess the example I gave about game server density taps into that a little bit. Because we are forecasting what we think the traffic is going to look like. Because we are testing that. Because we are testing at 2x and 3x, we have a better idea of rightsizing. And then we are building our infrastructure such that we can horizontally scale.

So when we start out, let’s say it’s an early access beta launch. I’m out there. I thought I’d get 5 million users. I actually got 20 million. OK, it doesn’t freak me out. I just go spin up more servers. Then you realize that, oh, the reason we got 20 and not 5 is because marketing also had this campaign where every Coke bottle would give somebody a free pass to sign up. And so we actually expect that traffic to drop back down to 1 in two or three days, because that’s when the promo ends.

And so being able to monitor that, and then being able to spin back, and being able to scale back, helps us maintain that. And the pay-per-use model, which allows us to be charged for what we’re actually using, really helps us. And so even if we’re off by a big magnitude, from 1 to 20, we can still course correct if we are monitoring. So the key there is to monitor. The key there is, when you monitor, when you have a data point, to actually know what it means.
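A minimal sketch of that monitor-and-react loop, assuming a simple headroom factor: scale out when observed usage runs well above forecast, plan a scale-back if a known promo explains the spike, and scale in when usage runs well below forecast. Names, thresholds, and numbers are illustrative.

```python
def scaling_decision(observed_users: int, forecast_users: int,
                     promo_active: bool, headroom: float = 1.2) -> str:
    """Decide whether to add servers, hold, or plan a scale-back."""
    if observed_users > forecast_users * headroom:
        # Over forecast: spin up more servers now; if a promo explains the spike,
        # schedule a scale-back for when it ends instead of re-forecasting.
        return "scale_out_then_scale_back" if promo_active else "scale_out_and_reforecast"
    if observed_users < forecast_users / headroom:
        return "scale_in"  # rightsize so we are not paying for idle capacity
    return "hold"

# Usage: 20 million observed against a 5 million forecast during an active promo.
print(scaling_decision(observed_users=20_000_000, forecast_users=5_000_000, promo_active=True))
```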

Sometimes marketing will run a campaign, and there really won’t be close communication. Maybe the game team doesn’t know that there is this campaign going on. So there’s time wasted trying to figure that out. So those are the kinds of things that actually are more important: that when you’re launching, you know everything that’s going on around that launch. Then, you can plan for it accordingly.

But the ability to test, to even spin up cloud services in a test environment, in a preproduction environment, actually lets us test that scale. And that matches more accurately the configuration that production would have. Not overprovisioning is key to actually not spending more than you have to.

MICHAEL KRIGSMAN: And of course, you have this large body of highly-accurate historical data, which then enables you to operate this aspect of the business, namely this prediction and planning aspect.

FARAH ALI: Exactly. I think it’s having the historical data, but then also having had multiple launches where we’ve used it, where we’ve been wrong, and we’ve tweaked, and we’ve course corrected. So we can actually feel fairly confident about our predictions.

I think it’s when we have a completely new IP, or we’re targeting a completely new demographic, or we’re trying something completely new. That’s when it’s hard. But then, I suspect, that’s hard for everyone. It’s not unique. But then there are ways to do testing offline. There are ways to collect data offline, where we have done that with early play tests, and with using these player cohorts that we actually talk to for feedback.

MICHAEL KRIGSMAN: What does innovation look like when it comes to real-time data? Where is the future of using real-time data?

FARAH ALI: I still think AI and ML is where we can extract the most value when it comes to data. There’s a lot of conversation around NFTs, and blockchain, and the metaverse, and all that within the gaming community. But I still think that if you look at the raw potential of a technology, and where we can extract the most value for gaming, it’s with artificial intelligence and ML.

I was talking about procedural generation. Think about the value we can extract there, but then think about being able to really completely understand every single player in your ecosystem. And then being able to build machine learning models that can learn, and deep learning models that can learn through osmosis from this data that we have about the player, all kinds of things about them. And then think about the value creation that enables. So we have all of this now; how can we add more value? We already have them as a player.

And then the second piece of it where I think a lot of innovation can happen is understanding Gen Z. These 19 to, I guess, 30-year-olds who are going to be playing the games. What are they looking for? What are they interested in? What is their behavior online? They might be interacting with some of our games, but not others. Why not? And when they interact with different games. If they play Sims, but they don’t play FIFA, but they are interested in Madden for some particular time period. Why is that?

So can we actually look at the data to do the storytelling. And then, use that storytelling to actually help us gain those actionable insights that help us with business decisions. I think that’s where it is. And when I think of business leaders and executives, they’re great at making connections. They’re not great at what machines are good at, which is looking at lots and lots of data and making sense of it. But once you can make some sense of it, you can present it with context to the right people. Then, they can have great ideas that come from that.

And so it helps them make better executive decisions. It helps them with their storytelling. And it helps them understand their players and their customers better, which is the key here. What are customers doing right now? What are Gen Z or new demographics doing?

And then if we want to enter these new business segments, if we want to enter these new markets, if you want to create this new IP that didn’t exist before, how do we understand who is looking for it? What do they want out of it? What value can we extract from it for them? And I think that’s still where data, and AI, and ML has a big part to play.

MICHAEL KRIGSMAN: What I find fascinating is many companies struggle to collect the data, to figure out the kinds of data, and to gather that data. In your case, you have such a large body of very high-quality data. So the challenge then becomes the creativity, the innovation of what can we do with that data? Now we’ve got it. What do we do with it?

FARAH ALI: Yes, exactly. And I think, to your point also, how do we make sure it’s not noise? Sometimes you get very well-meaning leaders who are like, yes, we must collect everything. But in reality, if you look at dashboards in companies, they’re the most underused utility there is. People don’t like looking at graphs. They don’t like looking at dashboards. They want to understand data in the context of what they’re doing. And so I think the key there is, how do we build the thing on top of the data that actually provides those interesting actionable insights? And then the action on top of it, depending on your role and your function, that’s what you take with it.

But I think reducing the noise and reducing this idea that you have to collect everything. You can collect everything, but you’re probably never going to use it. And so be smart about the data you’re collecting. Be smart about your data strategy, and be smart about what you plan to use it for. That’s when you can really have clarity around how to actually use it for the best future-fit technology strategy.

MICHAEL KRIGSMAN: OK. Farah Ali, thank you so much for sharing your experience and expertise with us. It’s been a fascinating conversation.

FARAH ALI: Thank you, Michael. Thanks for having me. I really enjoyed it.
