HOSTED BY MICHAEL KRIGSMAN
Shy Chalakudi, Head of Enterprise Data Analytics & Digital Technology / HPE
“Data is front and center because you’ve got to know your customer and what solution they are looking for in order to provide a customized, personalized service.”
Shy Chalakudi is Head of Enterprise Data Analytics & Digital Technology at HPE, a leading edge-to-cloud company spanning over 150 countries. Shy and her team leverage data to identify deep insights and drive competitive advantage. Prior to HPE, she held VP roles at Wells Fargo, Charles Schwab, and American Express.
In this episode, Shy shares her perspective on the role of data in a modern business model and how it drives digital business and innovative customer experiences. She explains how companies can create a single source of data to deeply understand their end-to-end user journey and better serve customers.
MICHAEL KRIGSMAN: Today we’re speaking with Shy Chalakudi. She is the head of enterprise data analytics and digital technology at HPE.
HPE is an enormous company and today we’re going to talk about the lifecycle of data, the role that data plays in digital transformation and organizational transformation more broadly. Hey Shy. How are you today?
SHY CHALAKUDI: Wonderful, Michael. How about you, how are you doing?
MICHAEL KRIGSMAN: I’m doing very well, Shy. Before we even start, how was the past quarter, your most recent quarter?
SHY CHALAKUDI: Oh, Michael, we had a fantastic quarter and a remarkable 2021. We exceeded expectations across all of our metrics. Our order growth went up 9% sequentially and 28% from the prior-year period.
Our net revenue was 7.4 billion, up 7% sequentially. And if you talk about our as-a-service ARR, it was 796 million, up 36% year on year. Honestly, it has been a fantastic year coming out of COVID, and coming out of this major change that we are all adjusting ourselves to. I’m very proud of what HPE has done in 2021.
MICHAEL KRIGSMAN: Shy, congratulations on these great results. Now, you are head of enterprise data analytics and digital technology. What does that role mean and what does it encompass?
SHY CHALAKUDI: It really makes me feel very, very happy to be in that enterprise role. Think about it, Michael: it is basically internalizing the business model I just explained to you.
It is ensuring that I can drive a data-first modernization for the company. Antonio’s biggest charter for me is to enable data to lead our digital transformation, and that is what my team and I do as part of my role.
The biggest vision for me is to create a unified data source, not necessarily a physical data source but a virtualized, unified data source that is accessible to each and every one of my stakeholders, so that my internal customers and external customers are able to get information from a single source of truth.
Well, the vision is easier said than done. If you think about what Gartner is telling us, companies spend 80% of their time doing data handling, and honestly we are no different.
The amount of time it takes to ensure that your data is managed effectively, that you get the metadata from your data, your data lineage, your data profiling, your data quality, I can go on and on about the governance required to manage the data, and that is primarily the mission of my organization.
My team members live and breathe to ensure that we create a sophisticated, automated, simplified view of data that can be consumed by both internal and external customers.
And that is the pivot for our digital transformation, because data is going to drive the insights that help the business drive the digital transformation.
MICHAEL KRIGSMAN: Shy, you said something very important: that using data to support digital transformation is an initiative driven by your CEO.
SHY CHALAKUDI: Absolutely, Michael. If you look at our business model, Antonio has been very clear about us pivoting into an as-a-service subscription model.
And today we’re probably all very used to it. If you look at your phone usage, or even the electrical usage at your home, you pay for what you use, and that is becoming the demand from the customer.
So it is one of our biggest transformations that Antonio is driving. But the beauty of it is that we are not just changing our billing pattern from a monthly or yearly subscription to a usage basis; when we’re doing that, we want to ensure we provide our customers solutions for their problems,
not just products or services as we have been doing all along. So that is one of the biggest pivots and changes we are making to our business model. And data is front and center because you’ve got to know your customer, you’ve got to know what solution they are looking for in order to provide a customized, personalized service.
And therefore our entire new business model is driven by a data-first mindset, or data leading the digital transformation, which is what makes it even more exciting.
MICHAEL KRIGSMAN: So data is at the heart of digital transformation, which is really foundational right now for HPE. Can you give us some examples of use cases in which you’re using data very intensively to support the goals you were just describing?
SHY CHALAKUDI: Absolutely. I can give a more generic use case that all of us can relate to. Of course, being in the technology industry, we do have specific use cases we can talk about if time permits. But the most generic one that all of us can relate to is knowing your customer.
The industry more commonly calls it customer 360: basically understanding the full spectrum of your customer. And if you look at that from an HPE perspective, like I said, we provide solutions for your server needs, storage needs for the enterprise, networking, software, consulting, on and on.
So it’s being able to understand the customer across each of these different metrics, each of these personas and profiles, and also understanding their entire journey.
I want to be able to understand every interaction with my customer, from the minute they start with a website inquiry, or a sales call they make, or a support call they have with my support staff. Encapsulating all of that information into one single pane provides immense value to all of our businesses, including business functions like our service team members, our marketing team members, and our sales team members.
I think that has been one of the important initiatives we are maniacally focusing on, and it goes with our brand value as well. Michael, if you think about it, the HPE brand is customer service, customer engagement, treating customers right. So I think it comes naturally to us. But that is one of the biggest data-driven initiatives we are focusing on to enable the digital transformation.
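The customer-360 aggregation described above can be sketched in simplified form: interaction records from several channels are merged on a shared customer ID into one chronological "single pane". All field names and channel data here are hypothetical illustrations, not HPE's actual schema:

```python
from collections import defaultdict

def build_customer_360(*channels):
    """Merge interaction records from many channels into one
    per-customer timeline, keyed on a shared customer ID."""
    view = defaultdict(list)
    for channel in channels:
        for record in channel:
            view[record["customer_id"]].append(record)
    # Sort each customer's interactions chronologically: one single pane.
    for interactions in view.values():
        interactions.sort(key=lambda r: r["timestamp"])
    return dict(view)

# Hypothetical feeds from three touchpoints.
web = [{"customer_id": "C1", "timestamp": 1, "channel": "web", "event": "inquiry"}]
sales = [{"customer_id": "C1", "timestamp": 2, "channel": "sales", "event": "call"}]
support = [{"customer_id": "C1", "timestamp": 3, "channel": "support", "event": "ticket"}]

view = build_customer_360(web, sales, support)
print([r["channel"] for r in view["C1"]])  # ['web', 'sales', 'support']
```

In a real system the merge key, deduplication, and identity resolution across channels are the hard parts; this sketch only shows the shape of the consolidated view.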
MICHAEL KRIGSMAN: So when you talk about customer engagement and using data to better understand and to better service the customer, what kinds of data are you referring to?
SHY CHALAKUDI: Interesting. Every sort of data is the generic answer. But if you peel back the onion, it is about looking at the master data and the transaction data.
We follow the customer through their entire journey, to be honest, even before they become a potential customer for us. So the data starts from a pre-sales perspective all the way to the sale, and then post-sales.
Like I said, our brand focuses on the customer after the sale as well. We build long-lasting relationships with our customers, so the data flows through from an end-to-end perspective, Michael.
And most of it is a mixture of data. It could be more structured data when it comes to sales. But when it comes to pre-sales, we look at the industry trends that are happening; we pick up on social data.
What is the current need of this customer? What are they talking about in terms of their biggest problem? We identify that through the social data, which is mostly unstructured data that we also have to cater to.
Then we also look at customer sentiment and customer analysis that comes through the different listening channels we have, which is potentially unstructured data.
And then when it comes to the sales part, think of it simply as quote-to-cash: the ability for us to generate transactions. This is where the heart of our data is, a highly secured and highly regulated set of data that we manage here to enable the customer journey through the sales process.
And then we come to post-sales, where you look at a mixture of data again, because when it comes to the service channels, it’s mostly voice data.
So we are looking at converting voice to structured data. Or when you look at images, because we trace images of our products that we are implementing in our customer locations, that is again unstructured data, where we look at converting images to structured data. So the data is a mixed bag, but we predominantly focus on transactional data that can help us serve our customers better.
MICHAEL KRIGSMAN: And that transactional data covers the entire customer journey, basically all of the interactions a customer may have with HPE, I would assume.
SHY CHALAKUDI: Absolutely. And because we are a technology company and have the luxury of using our own products, we are very, very open to collecting as much data as possible.
I strongly believe, as a data leader, you’ve got to pick your data; of course you then cleanse it and pick the right information you need to service your customer, but you never cut short your ability to collect as much data as possible.
To be honest, we are a little bit greedy there. We collect every piece of data possible and then use what we need, but we never come up short on collecting all the data we need from a customer perspective.
MICHAEL KRIGSMAN: One important question is, how do you link that body of data that you’re collecting to the business decisions that support the customer through the lifecycle? That seems to be the missing link that a lot of organizations have trouble with.
SHY CHALAKUDI: Absolutely. Again, if I put it right, there is a myth that you can just use AI/ML to create this personalization, or respond with a better experience for the customer.
But you’re spot on, Michael. If you are not able to clearly articulate and tie the dependencies, that is where your data governance comes into perspective. That is what my team is maniacally focusing on. We do have data stewards within our business partners whom we call upon to help us connect the business language of the data.
But I think the automation of tools and processes that helps us bring in the data profiling, the data lineage, and the connecting of key aspects of the data all through the journey is where most of our time is being spent.
And that comes with, I would say, both tools and processes; both have to work hand in hand. And usually I tease my team saying it’s a little bit of carrot and stick: you’ve got to have governance policies, you’ve got to have strict policies on how you use this data and how you are connecting the data. And there is no exception to it.
And I’m a very big stickler for that, and I don’t hesitate to use my stick if need be. But then you also have to give the carrot in terms of your customers’ usage.
So we do have a lot of federated data sources where we allow the business to go and experience the data themselves, to go and discover new elements of the data, which keeps them engaged and keeps reinforcing why it’s important for them to invest up front in defining the data governance policy. So it’s a yin and yang that works together for us.
MICHAEL KRIGSMAN: Shy, you’ve been discussing, shall we say, the business data, the transactional data that helps you understand your customers better and serve them better. What about telemetry data that comes from the software and the technologies that you’ve deployed with your customers?
SHY CHALAKUDI: Oh gosh, you hit the heart of what we do here, Michael. Yes, customer-obsessed we are, and customer data is important for us, but oh boy, the telemetry data is a lot of fun for us.
If you really think about telemetry, it comes from Greek words: tele and metron. Very simple: tele means remote and metron means measure. And this is not a new science or a new field; telemetry has been around for a long time, and machines have long been used to generate data.
But I think what is important now, in this day and age, is the amount of data that these machines create. Machines are getting more and more sophisticated; especially our machines, as we deploy them into client spaces, are highly smart machines.
Not only do they generate data and give us indications and clues of how they behave, but they can also self-heal. They also have AI/ML embedded in them to anticipate what is going to happen and to predict and react to it.
So it’s important for us to understand how these machines behave, and we collect tons and tons of telemetry data. And I can tell you, ask anyone about telemetry and the first big challenge they’re going to tell you about is storage, because, boy, machines generate more data than we do, and we are continually getting information from them.
To be honest, my team and I spent quite a bit of time learning to manage that effectively: how to slice the data, how to get the trend from the data, how to ensure that we do not carry the burden of storing all of this data exclusively,
but continually work with the data to understand and improve on what we can build. It is definitely a big focus for us and a fun place, and, oh boy, we are learning as we discover more and more uses of telemetry.
MICHAEL KRIGSMAN: Well, in a minute I do want to talk about the infrastructure you have for collecting and managing all this data. But first, can you give us a couple of examples of the kinds of data that comprise the telemetry you’re discussing?
SHY CHALAKUDI: Absolutely. Maybe I can give an example that might connect much better. If you think about our network devices, think of a modem like the one available in your household; hopefully it’s an Aruba modem that we have supplied.
This modem continually generates data for us, and we monitor this data through our servers, and it tells us the device’s behavior.
So let us say an organization has deployed 70, 80, or even 800 of these modems across their entire office space. What we can really do with this data is understand the behavioral patterns of that ecosystem.
The recent example I have is an organization where we supplied about 800 of these modems. And we found that about 30% of those devices were rarely being used, because they were in remote areas where people never go and access the network.
So what we did was define an optimization plan for this organization to reuse or relocate these modems in the right configuration to maximize their investment with us.
And that was a win-win, because not only were we able to provide a better solution for our customer, but the customer was also able to save through the overall optimization efficiency of their infrastructure.
So that is the big advantage we have with telemetry. And when you talk about AI/ML and all of that, it helps us understand the self-healing pattern of a device, and how it controls the temperature around its ecosystem.
That kind of data can be used entirely to understand our product catalog and how we improve our products. If a device, for example, fails four times but self-heals itself, from the customer’s perspective there is no disruption, but for us it means a lot.
It may mean we do a recall of that product so that we can fix it and prevent those kinds of failures. So I would look at it both ways: we first focus on serving and optimizing for external customers using telemetry data, but more importantly, we learn from these devices about the capabilities of our products, and we can really improve them in order to make better products for our customers.
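The underused-modem analysis in Shy's example can be sketched as a toy calculation: flag devices whose share of total traffic falls below a threshold. The device names, counts, and the 10% cutoff here are illustrative assumptions, not real telemetry:

```python
def underused_devices(usage_counts, threshold=0.1):
    """Return device IDs whose share of total traffic is below the
    threshold: a crude stand-in for the utilization analysis above."""
    total = sum(usage_counts.values())
    if total == 0:
        return sorted(usage_counts)  # no traffic at all: everything is idle
    return sorted(dev for dev, count in usage_counts.items()
                  if count / total < threshold)

# Hypothetical per-modem request counts collected via telemetry.
counts = {"ap-01": 900, "ap-02": 850, "ap-03": 5, "ap-04": 2}
print(underused_devices(counts))  # ['ap-03', 'ap-04']
```

A production version would work over time windows and many more signals, but the optimization plan Shy describes starts with exactly this kind of ranking.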
MICHAEL KRIGSMAN: That’s so interesting. So you’re collecting this large volume of telemetry data, you’re running that data through your machine learning models, and that then gives you action items, we could say, whether it’s to feed information back to product design or to deploy those products at a particular customer in some different way. But it all ends up being very actionable because of that data source, namely the telemetry.
SHY CHALAKUDI: Absolutely. I think getting deep, actionable insights from my data is where the rubber hits the road, to your point, Michael. Don’t get me wrong, not all of the data has been successful.
We did have quite a few fail-fast approaches, but what excites us is the possibility of the actions we can take with this data and, more importantly, how they benefit our customers. I think that is what keeps us focused on collecting this data and serving our customers better.
MICHAEL KRIGSMAN: You mentioned that this data also gets fed back upstream into product design and product development. Can you tell us a little more about that?
SHY CHALAKUDI: Absolutely. As we learn about this data, we have R&D teams within every product lifecycle. We take this data and give it to the business leads, who investigate the data, identify actionable insights from it, get them prioritized into the investment lifecycle, and use them according to what the data tells them.
So it is a continuous cycle of understanding the data and trying to make some meaningful sense out of it. My data scientists help them understand and connect the feature sets to it, and then we try to produce actionable insights that go back into the product lifecycle for future product features.
MICHAEL KRIGSMAN: It sounds like, as time goes on, this telemetry data is becoming an increasingly important source of information for you to refine the products and services that you’re selling.
SHY CHALAKUDI: Absolutely. Absolutely, because data speaks to us. And the more we try to interpret and understand the language of data, I think the better off we are going to be.
And like I said, I don’t want to say that we are completely literate yet; we’re still learning and evolving. But I think that’s where the fun starts for us: being able to understand, interpret, and make some sense of the data.
MICHAEL KRIGSMAN: Now, let’s talk about the infrastructure that you mentioned briefly a few minutes ago. First, are you working with real-time data or non-real-time data? What kind of data do you work with for the most part?
SHY CHALAKUDI: It’s really a mixed bag, and I would say, Michael, it’s more use-case driven. From an architecture and infrastructure perspective, I can tell you that we are quite sophisticated and can handle both real-time and non-real-time data.
When I say it’s use-case driven, let me give you an example. A customer comes in and places an order with us, and then they want to immediately understand: Where is that order? Where in its lifecycle is the order? What does the shipment date look like? They want to make an immediate change to the order.
Those have to be handled in real time. So we treat them with real-time data; we bring it into our infrastructure and follow the data from source to target.
And as it goes from the source, just think of it as a pipeline: we are able to plug and play and get the data at the right time, the way we need it, for operational purposes.
But for things like the telemetry data or customer data we talked about, we collect that data into our data lake and then create a lot of aggregations on it. We bring several sorts of data together, the social data, the telemetry data, the customer data, aggregate it, and create a consumption layer, and that is usually not done in real time.
Because you need time: you need to understand the data, you need to be able to interpret the meaning of the data, and that requires a lot of computational power and a lot of processing on the data.
So that usually happens on the back end as non-real-time data, where we can go and do AI/ML on it, but most of our customer experiences and customer needs are served through real-time data.
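The batch side of that split, aggregating several source feeds from the data lake into a consumption layer, can be sketched roughly like this. The source names and record shapes are hypothetical, not HPE's pipeline:

```python
def aggregate_consumption_layer(sources):
    """Batch-style aggregation: combine records from several source
    feeds (social, telemetry, customer) into per-customer rollups,
    a toy stand-in for the data-lake consumption layer."""
    rollup = {}
    for source_name, records in sources.items():
        for rec in records:
            cust = rollup.setdefault(rec["customer_id"],
                                     {"events": 0, "sources": set()})
            cust["events"] += 1
            cust["sources"].add(source_name)
    return rollup

# Hypothetical feeds landed in the data lake.
sources = {
    "social": [{"customer_id": "C1"}],
    "telemetry": [{"customer_id": "C1"}, {"customer_id": "C2"}],
    "customer": [{"customer_id": "C1"}],
}
layer = aggregate_consumption_layer(sources)
print(layer["C1"]["events"])  # 3
```

In practice this job would run on Spark or a similar engine over far larger data; the point is only that the aggregation happens offline, separate from the real-time operational path.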
MICHAEL KRIGSMAN: So your infrastructure is flexible enough to handle all of these different kinds of data sources, bring them together so that you can do whatever analysis is appropriate?
SHY CHALAKUDI: Michael, I wouldn’t say we have a fully fledged architecture that supports every business case. As a technologist and a data architect, I don’t believe you can have an architecture that is going to support each and every business case.
As we advise our end customers, one of the key principles I ask my team to adopt is to ensure that we optimize our infrastructure. But I can assure you of this: we do have a fast-data architecture and well-defined data principles that we embed and live by,
so that as a business use case comes to us, we look at the use case, understand its data needs, understand its consumption needs, and ensure that we can embed it into our existing architecture in a plug-and-play fashion.
MICHAEL KRIGSMAN: So you begin with the business use case and then you optimize the path from beginning to end based on that specific use case?
SHY CHALAKUDI: Absolutely. For every business use case, we have a very strict principle of understanding the value proposition it is going to bring.
Because, as much as we say storage is very cheap, especially me being at HPE, where most of our infrastructure is what we call H-on-H, HPE on HPE, nonetheless, by principle, what we are ensuring is that we optimize.
And one of the big advantages I have, or you can call it a responsibility, is being customer zero for our products. Call it bringing my own champagne.
So I often go talk to clients about my existing data architecture, how I optimized it, and how I’m reusing various components of it, as a real use-case example for them to implement in their infrastructure.
So it’s very, very important for us to ensure that anything we build within HPE for internal use is something we could confidently recommend to our end users as well. So it’s quite optimized and quite principle-driven, depending on the use case we’re supporting.
MICHAEL KRIGSMAN: So when it comes to data, you’re customer zero, meaning you’re the test.
SHY CHALAKUDI: I am the test. I don’t want to say eat your own dog food; I would rather say drink your own champagne. That’s how I look at it. I am the first line of defense for my product teams.
Most of the infrastructure within our existing data estate runs on GreenLake, the product that we maniacally support and encourage our customers to use.
And I am the first test bed, a live test bed, for my product teams. It all gels very well, and it has been quite an interesting experience. As we use it, we are able to relate to our customers, to see through the customer lens, and to identify real problems before we go to our customers. So it’s a big win for us, being the first customer for our products.
MICHAEL KRIGSMAN: How much data are you working with? How fast do you need that data? Can you give us a sense of the scale and context of the data itself?
SHY CHALAKUDI: Absolutely. We talked a lot about telemetry data. When you talk about telemetry, you are looking at zettabytes of data, so I am being very careful about that.
There, my approach, like I said, is to collect the data over a period of time, analyze it offline, and then identify a trend. For use cases dealing with zettabytes of data, you’re not looking at a real-time, fast-response nature.
It’s more about analytics, understanding, and creating solutions based on what the data tells you. But when you look at the quote-to-cash we talked about, which is the bread and butter of HPE, the ERP systems, there you’re talking about probably petabytes of data.
So you’re still talking about large volumes of data, but not the humongous volumes of telemetry data. So our approach is more about contained architecture: managing within our own infrastructure, managing storage to the needs we have, and making effective use of the data for real-time needs.
MICHAEL KRIGSMAN: And what kind of infrastructure are you using to collect, to manage, to analyze, all of this data? It’s a lot of data.
SHY CHALAKUDI: From an infrastructure perspective, I am very fortunate working for HPE because, like I said, I’m customer zero. Most of my infrastructure is H-on-H, HPE on HPE, but of course we also rely on third-party products and third-party open source that we use extensively within our premises.
Again, my aim is to create an infrastructure guideline, or infrastructure blueprint, that matches my end customers’. That has been the premise of my approach; obviously we have third parties embedded within.
If you think about any common data lake, you’re looking at probably a Hadoop infrastructure that can process multiple MapReduce jobs, Spark, Spark Streaming, Kafka streaming, you name it. Within our enterprise we have that infrastructure, a Delta Lake,
which is very important for our customers and for me as well. It helps us combine the traditional RDBMS with the latest and greatest flexible-schema architecture, and it is very much part of my infrastructure.
And then if you look at microservices and FastAPI, one of the biggest things we are pushing is to ensure that we can decouple all of these individual blocks.
The more we decouple, the better off we are, because we can manage a plug-and-play architecture more effectively and efficiently. So FastAPI and a microservices architecture are very much an embedded part of my infrastructure as well.
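The decoupling idea can be illustrated without any particular framework: each stage of a data pipeline exposes one narrow records-in/records-out contract, so stages can be swapped or reordered plug-and-play. This is a generic sketch, not HPE's implementation; in practice each stage might be its own microservice behind a FastAPI endpoint:

```python
from typing import Callable, Iterable, List

# A stage is an independent block with one narrow contract:
# records in, records out. Any stage can be replaced without
# touching the rest of the pipeline.
Stage = Callable[[Iterable[dict]], List[dict]]

def pipeline(*stages: Stage) -> Stage:
    """Compose stages into one callable, preserving their order."""
    def run(records):
        for stage in stages:
            records = stage(records)
        return records
    return run

def drop_empty(records):
    # Filter out records with no payload.
    return [r for r in records if r.get("value") is not None]

def tag_source(records):
    # Ensure every record carries a source label.
    return [{**r, "source": r.get("source", "unknown")} for r in records]

etl = pipeline(drop_empty, tag_source)
out = etl([{"value": 1}, {"value": None}])
print(out)  # [{'value': 1, 'source': 'unknown'}]
```

Because each stage only depends on the record contract, swapping `tag_source` for a different enrichment step, or moving a stage into a separate service, changes nothing else in the pipeline, which is the plug-and-play property described above.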
MICHAEL KRIGSMAN: And how much of this infrastructure is cloud based versus on premises?
SHY CHALAKUDI: I would say, again, we have the luxury of a private cloud, so most of it is cloud-based in the sense of being in a private cloud.
But we do have quite a bit of legacy infrastructure, so we are no stranger to legacy infrastructure and technical debt, just like most of our customers.
So a lot of our data is on-prem as well. I use my [INAUDIBLE] product, which combines all of this data seamlessly for me into one data fabric. Being able to switch from on-prem to cloud seamlessly is one of the biggest advantages I have, and I use it effectively within my infrastructure.
MICHAEL KRIGSMAN: How do you make the decision about whether to place data on premises or in the cloud?
SHY CHALAKUDI: To be honest, I think ultimately the question is more about the volume of data, because the default is cloud-based, and for us that means the private cloud. That is how most of our defaults start; that’s the guiding principle.
But like you talked about earlier, it also depends on the volume of data. It’s important that your data stays very close to your business processes. Where you have a real-time need for data, where you want to respond to your customers very quickly, where you are looking at the ability to manage it effectively and react to it, that kind of data usually stays on-prem for us,
because that is where you can act quickly and make the decisions you need to make. But things like the telemetry data we talked about, or the Salesforce data, usually reside in the cloud,
because in those cases you have a trade-off of time versus storage. If time comes first, the data stays on-prem; if you have the luxury of time, you can wait and work through it in a private cloud situation.
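Reduced to its simplest form, the placement heuristic described above, default to the (private) cloud but keep latency-sensitive data close to the business process, could be written as follows. This is a deliberately crude sketch of the stated rule, not an actual HPE policy:

```python
def placement(latency_sensitive: bool) -> str:
    """Default to (private) cloud; data with a real-time need
    stays on-prem, close to the business process."""
    return "on-prem" if latency_sensitive else "cloud"

# Quote-to-cash order updates need an immediate response.
print(placement(latency_sensitive=True))   # on-prem
# Bulk telemetry analyzed offline tolerates latency.
print(placement(latency_sensitive=False))  # cloud
```

A real policy would also weigh data volume, regulation, and cost, but latency sensitivity is the first branch of the decision as Shy describes it.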
MICHAEL KRIGSMAN: So you’re making explicit decisions about what makes sense based on the particular use case really?
SHY CHALAKUDI: Absolutely. I think it goes back to what I said before. Most of our infrastructure and architecture decisions are driven by use cases.
Of course, we do have an architecture framework that is applicable to most use cases, but you are spot on, Michael. I think it’s important that we look at the use case and design to it, because each of them is unique and has a unique need we have to meet.
MICHAEL KRIGSMAN: Shy, earlier you used the term data culture, tell us about the data culture at HPE, and what are you trying to do with data culture?
SHY CHALAKUDI: As the industry moves more and more toward these technology trends, data is becoming a common language. I think we all have to become data literate in order to survive today, whatever industry you are in.
So one of the biggest things for us is focusing on embedding that culture within HPE today. The good news for me, again, is that our business strategy, as we discussed earlier, is built around helping enterprises manage data effectively.
So translating that business language into an internal data language, and embedding it as part of our culture, is a little bit easier for me. We have established a data operating model, we are calling it data operating model 2.0, in the spirit of supporting Antonio’s as-a-service transformation business strategy. And that enables us to instill the data literacy and the data focus we need across all of our business partners.
MICHAEL KRIGSMAN: And what is the composition of your team? How do you enable this kind of data literacy inside the organization?
SHY CHALAKUDI: Again, great question one way I look at it I’d probably advise most of the data leaders to look at is, I think when I look at the team when you talk about it Michael, I look at it as an extended organization.
Because every consumer of data has to be data literate in order for it to be successful. So what we have done is, part of a data operating model one big change that we have emphasized is, every business organization has a data leader.
And the data leader is a very senior leader of the organization. And if you have to quote our chief operating officer, we basically see say that the data leader has to have two questions that they need to look at.
One is, what data do I need to run my business? That’s the primary goal. There looking into that all of that business process and say, what data do I need to run my business and do I have the right data?
But most importantly, I think the role that we have asked them to do is, what data am I missing, that I don’t have, that I need in order to be pushing my business further?
So I think that role is very, very significant that has been coming from top down support that enables us bring the data literacy to each of the business units.
We also have data stewards and data custodians across the enterprise. Stewards are typically in the business unit and their job is to be the guardian of the data. They own the data they have to ensure that the policies and principles around the data are established clearly.
And the custodians are part of my organization. And what their job is to do as part of the stewards to automate as much as possible around data. So the combination of a technical custodian working with a business owner with a steward, who owns and manages the data, I think that combination helps us to manage the data wrangling that very much we spend quite a bit of time on.
Then from my team's perspective, we are a very small team because I consider the entire company to be my extended organization. My team is focused on data analysts, data engineers, and machine learning engineers who automate and create the pipelines for end users to take value out of the data.
MICHAEL KRIGSMAN: Shy, you’ve described a data transformation, digital transformation, organizational transformation, and you’ve said several times that this is being driven by your CEO. So can you describe the impact of data across the organization at HPE?
SHY CHALAKUDI: Absolutely. Think about our as-a-service business model, where, as I emphasized, we are not just looking to shift our billing pattern to more usage-based billing, but to provide customized solutions for our business partners.
So one of the big use cases we have is understanding our customer's needs. Let me give you an example. We look at telemetry data across our multiple customers, connect that data with the existing subscriptions they have with us, and provide a personalized solution for their future needs.
When we sit down and talk to our customer, we do not just look at their past data; we also look at what their future business needs are. We get that future trend from the industry data and the social data that we have, and we are able to create a pattern for them.
So data helps us not only understand our customer from their lens, but also bring in the social and surrounding data to give them a projection of what we think they need.
And in most cases, Michael, we have found that the customer may not even have thought about those needs. But us bringing that to the table, based on experience, has made a big difference to the bottom line from a business perspective.
The other example I can give, as I said, is being customer zero. As we experience all of these changes, we're trying to balance between personalization and standardization.
Of course, anyone in the industry knows that the more you personalize, yes, you get closer to your customer, but you also add a lot of overhead to your operations. It's important to find a balance between the two.
So by being customer zero and understanding things from an end user's perspective, we try to integrate as much of the customer's needs into the standard offering as we can.
I think it's somewhat public information that one of the recommendations we made most recently was to integrate Presto into this model, for example.
So as we understand the customer's needs by being customer zero, we are also able to give our product teams insights into the best product roadmap to adopt, getting closer to standardization even as we personalize for customer needs.
MICHAEL KRIGSMAN: Shy, given everything you’ve been describing, I have to ask what advice do you have for business leaders who want to follow the example you’ve described of embedding data into the DNA, the fabric of the organization?
SHY CHALAKUDI: A great question. Great question, Michael. I've been a data leader for almost 15 years of my career, and I can tell you the common myth is around AI/ML.
Most of us think AI/ML is the new buzzword; in the industry it was big data about eight to ten years ago. And we think that machine learning and artificial intelligence are going to solve all of our problems.
Trust me, I am a doctoral student with a thesis on AI/ML, and I value it very much, but it's important to understand that it is garbage in, garbage out.
So you need to focus on your foundation. You need to ensure that you spend the time, and it's not the fun part of the job, to be honest, it's a hard job, ensuring that you're able to manage your data effectively.
That is where I would ask you to start your focus: on the foundation. Ensure that it is very strong, that you have a good data governance policy, and that you are able to institute a standard data operating model with repeatable systems and processes.
The next one I would look at is a balanced approach, what I call the yin and the yang, or officially, in my strategy, you would see it as a defense strategy and an active strategy.
The defense side is, as I said, ensuring that you're abiding by governance policies and focusing on security. The last thing you want is a headline about a security breach in your data, and we've seen this over and over again with the cyber threats happening around us.
And as you create more and more, as you build that common data fabric, you have to remember that you are creating the crown jewels of the company.
So the last thing you want is to hand the keys to someone who shouldn't have them. Focusing on defense, focusing on security, focusing on data policies, and being the tough partner you have to be is very important for any data leader, especially in an enterprise role, where having a standard policy matters just as much.
Then on the active strategy side, what I have usually done is follow what I call the n equals 1 strategy. Meaning, start small: start with a use case, understand the business value that use case can drive, and implement it to show the business value to your customers.
The more you show value, the more easily you can then augment your solutions, rather than boiling the ocean by solving for the entire end-to-end strategy at once.
I've always found that balancing defense with an active strategy has helped me be successful in all of my previous roles, and it's no different here.
The last thing I would say to my peers is, don't try to do it alone. This is an ocean, continually evolving; data technology has seen more change in the last 10 years than ever before, and more changes are to come.
Seek help. If I can put in a plug here, Michael, I would say HPE is a company that thrives on helping enterprises with data solutions. So look for help from external partners and external products, and enable a successful end-to-end strategy.
MICHAEL KRIGSMAN: Very practical advice, important advice. Shy, where is all of this going? What is the future of data?
SHY CHALAKUDI: I think data is going to be everywhere. Think about what has happened over the past decade: if I'm right, I think we created about two zettabytes of data globally in 2010.
Today, if you can imagine, Michael, in 2020, ten years later, we are creating about 41 zettabytes of data. Look at that volume of growth, and we have just started our journey.
The more machines we introduce into our day-to-day operations, the more Bluetooth devices we have, and the more we move to edge computing, the more this volume of data is going to increase.
And like I said earlier, with automation becoming the norm for us, not even a luxury anymore, and robotics becoming part of our everyday life, data operations, DataOps, and further automation are becoming big things we need to continue to focus on.
5G is becoming the new wireless standard. Enterprises will leverage edge analytics and start making decisions in real time. I mean, it won't be long before we're able to do space travel and come back in no time.
So I think if you look at it from a data perspective, data is going to be the biggest asset a company has to focus on. I know we've been hearing about this in the industry, but I truly believe that data is going to become an asset on the balance sheet of every company.
And I think the digital natives will take the lead and show us how it has to be done, and data is going to become a universal language. I know we talk about the world becoming flat; I think data is going to be the thing that lets us communicate with each other, and with machines, in one common language. So it's important that we all become literate around data.
MICHAEL KRIGSMAN: Data as part of the balance sheet. I love that, I’ve not heard that before. Shy Chalakudi, thank you so much for taking the time to speak with us today.
SHY CHALAKUDI: Well, thank you, Michael, for having me. I equally enjoyed the conversation. Data is a great passion of mine, and thanks for giving me a platform to share my perspective with your audience. Thank you.
MICHAEL KRIGSMAN: Thank you so much to Redis for making this podcast possible. Thank you Redis.