Tech Talk: Harnessing generative AI for enhanced productivity

Hello and welcome to today's session. I'm going to spend a little bit of time talking about our experiences as we talk to customers and watch their journeys as they adopt AI in the enterprise. This is a technical session, and with me we have an expert from NVIDIA (I'll ask him to stand up really quickly) who will be happy to take questions or have any technical discussions after I'm done talking on stage here. To begin with, let me set the context for why there is so much hype and excitement about AI. If you take a step back and look at the market opportunity, what we've started observing is that generative AI is one of those landmark moments in the industry: we think it's going to be a technology-driven inflection point that will add trillions of dollars to the economy. More interestingly, if you just look at the adoption of the technology in the technology sector alone, there is the potential to add anywhere from $250 billion to almost $500 billion in the next 3 to 5 years. This is a huge phase shift, if you think about it.

Because if you think about the amount of money that has been spent on technology, in the history of technology nobody has ever seen such a big transformation. And we think the transformation here is not just about infrastructure spend; we truly believe there is a top-line improvement, which means the world is going to get better: the global economy is going to expand significantly because of the improvement in productivity and the unlocking of the potential to do things that were not possible before. If you take a step back and look at our customer conversations, we started on an AI journey several years ago, but over the last 12 to 18 months we've spent a lot of time with enterprise customers, talking to them about their applications of AI. And one of the big themes for us over the last 6 to 9 months has really been the fact that we are starting to see repeatability in these conversations. There are about three or four common use cases that come up in almost every first conversation we have with a large enterprise. The most common and most frequent use cases we hear about are typically related to customer care, customer engagement, or sales improvement and sales efficiency; almost 60-70% of our conversations start there. But if you look at some of the other use cases we've heard about in the industry, we are seeing a lot of interest in folks trying to adopt AI and ML techniques to improve observability in their systems.

If you look at the amount of data being generated in your network and in your IT operations, be it for device lifecycle management or for cybersecurity, the volume is almost superhuman, and this is where cutting-edge AI technologies actually make it easy for you, as a human being, to synthesize data quickly. So it's really augmenting human capability. And finally, I used to think of this as an extension of the customer care use case, but we are seeing a lot of interest in improving support systems, both customer-facing support and internal support engines. What's interesting here is that when you think about these use cases, and about the conversations that actually lead to meaningful outcomes as well as successful sales, what we see is that conversations with a clear business value tend to make it further along in the technology adoption lifecycle. More interestingly, let me just look at the customer care and sales efficiency use cases.

The typical business value being created by AI is not something unusual. It's usually things like: AI can help you improve engagement with customers; it can help you plan campaigns better; it can help you personalize products to better meet the needs of customers. Similarly, it can help you track and automatically detect patterns in your organization's operational flows. So the idea here is, again, there's nothing out of the blue here; nothing has been invented that is not expected. It's just that this technology is now coming in and helping our end users and customers quantify the impact of the ability to extract value from the data they are collecting today. And if you think about the value AI is generating in large enterprises today, I would say it's a two-sided coin. On one side, there are very provable productivity improvements. For example, if you just go into the customer care use case (and we are actually seeing this internally in my business unit at HPE), almost 50% of customer conversations can actually be automated, because if you think about L1/L2-type requests, it's a very monotonous process where you're looking at a bunch of forms and a bunch of workflows and following the pattern.

Again, an AI bot can very easily automate almost 50 to 70% of the interactions we have with customers. What this does is not only make it easier for the support function to work with the customer, but it also frees up time, so they can actually work on things like improving customer satisfaction and making sure the customer feels connected. So the customer experience can actually be improved, instead of the focus being on finding data and reporting out numbers at a macro level.

On the other side of the coin, a lot of people talk about AI coming in and displacing jobs. We actually don't think that's going to be the case. When you think about the impact of AI on labor, at least in large enterprises, the way we see it is that AI will initially target repetitive work. Think about monotonous processes, scanning documents, extracting tables from documents, et cetera. This is all, I would say, high-labor, low-value work, but these are exactly the kinds of problems that can be solved with an automated technique. What's more interesting, and what we are starting to see within HPE itself, is that by automating some of the repetitive work, you actually free up your knowledge workers to spend time focusing on the knowledge and analytic aspects of their day jobs. More interestingly, take the same use case and apply it to developers. I spend a lot of time with engineers, and we are easily seeing a 20 to 30% increase in developer productivity: you're able to generate more code, and you're able to generate code with fewer bugs, net-net.

If you look at the bigger scale of things, this leads to significant improvements in overall delivery times in an enterprise. That said, while there's a lot of hype and excitement about AI, building generative AI applications is actually not easy. To explain in a little more detail where the complexities are, I'm going to talk about it in two parts. The first part focuses on how the architecture has changed with generative AI coming into play. What you're seeing on the screen here is a very simple natural-language-processing-based chatbot, which was very popular a decade ago, or even five or six years ago (again, if you think about AI and ML techniques, many of these have been around since the eighties). Typically, when you designed and built a natural language processing engine, you would have a model, and the model would be scaffolded with a bunch of rules or a dialogue engine that made it more conversational.

And then there was a knowledge base of facts that could be used to pull data through the model. If you take this architecture and look at what a modern, really simple chatbot looks like today, it's a completely different architecture. It's not as simple as: you have a model, you have some data you train it on, you feed it input, and you get an output. It's a much more complex ecosystem. If you simply think about the chatbot use case, there's a very popular technique called RAG (retrieval-augmented generation), which requires us to take data sets, break those data sets into smaller chunks, and compute something called embeddings. You take these chunks and store them in a technology called a vector database, which makes it easy for you to retrieve the chunks when needed.

Then there is a model, then there is an actual language interpretation engine, et cetera. So the point I want to make here is that when you're designing a simple chatbot, even though the user experience will be exactly the same as in the old-school architectures, behind the scenes there's a lot of technology being used, and it's a much more complex story. Now, along with architectural complexity come more questions: what type of architecture are you going to use? What type of infrastructure are you going to land this on? What type of software are you going to use across the various components we have here?
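To make those moving parts concrete, here is a minimal, illustrative sketch of the RAG retrieval flow just described: chunk the documents, compute embeddings, store them, and retrieve the closest chunk for a query. A toy bag-of-words counter stands in for a real embedding model, and a plain Python list stands in for the vector database; all names and sample documents are made up for illustration.

```python
# Minimal RAG retrieval sketch: chunk -> embed -> store -> retrieve.
# A real system would use an embedding model and a vector database;
# here a word-count vector and a list stand in for both.
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words count vector."""
    return Counter(t.strip("?,.!").lower() for t in text.split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": chunks stored alongside their embeddings.
docs = [
    "GreenLake provides a cloud control plane for lifecycle management",
    "Private Cloud AI ships preloaded with AI software and models",
]
store = [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# The retrieved chunk(s) would then be prepended to the prompt sent to the model.
print(retrieve("which software comes preloaded?"))
# → ['Private Cloud AI ships preloaded with AI software']
```

In production the same shape holds, but each stand-in becomes a real component with its own versioning and lifecycle, which is exactly where the operational complexity discussed next comes from.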

Are we sure that all of this is going to work together? Again, if you look at almost 80-90% of AI projects today, they are based on some form of open-source technology, and this technology is evolving very rapidly. So one of the big challenges here really is lifecycle management: how do we know that the various moving parts we've taken so much effort to assemble will continue to operate when there's a version upgrade, or when new frameworks become available? And finally, testing. On the left-hand side, it's a fairly simple testing process: you have a predefined scope and a domain, and you're able to test the type of responses you would get. But if you look at the modern architectures, there are so many moving parts and so many more components that unit testing and functional testing, and testing in general, are a much more complicated beast through and through. Moving away from the architecture and thinking about the data lifecycle side of the story: in the good old days, data was always an issue, but managing data was, in some sense, a well-known, well-understood pattern. You would typically focus on things like data quality and experimentation, and you would focus on bringing on skills such as data engineering and pipeline building that could help you build the data engines you need. And then there were all these APIs that you could use for ML integration with your model-building technologies. Today, if you just look at the chatbot world, there's a whole bunch of complexity being added on the data side, but more interestingly, there are new skills that are required.

For example, there are all these issues around model governance. Very often, when we think about GenAI chatbots, we think of data privacy safeguards: making sure we don't have malicious use or malicious responses, et cetera. This is a completely new domain of work, and we have to actually train our workforce to understand these concepts, as well as equip them with skills so they can operate and create such pipelines. So, shifting gears: I've hopefully now laid out the fact that while there's a lot of interest in generative AI, building these applications and frameworks is actually quite challenging. And this is where HPE and NVIDIA did something landmark several months ago. We decided to join forces and very deliberately target the enterprise AI segment. Specifically, our approach is unique and the first of its kind in the market, in that we took a holistic view of how enterprises are adopting AI, starting with people, all the way from the skill sets required to the business problems that various functions in an organization may want to solve, through to, for example: what kind of platforms do you need? What kind of models do you need? What kind of data and platform guardrails do you need?

We took a very comprehensive view of all the technology available in the marketplace. And then, finally, we put a business angle on this, where we said it's not just about people and technology; it's also about making business sense for an enterprise to adopt these technologies. So we've in some sense tried to create a trifecta, where the combination of HPE and NVIDIA thinks about generative AI adoption in enterprises as a holistic, end-to-end project. One of the first landmark products coming out of HPE's collaboration with NVIDIA is a product line called HPE Private Cloud AI, which brings together best-of-breed technologies from NVIDIA along with best-of-breed technologies and the GreenLake experience from HPE. The key value proposition to end customers was: what if we make it simple for our customers to adopt AI? Remember the architectural pieces, remember the data pipeline pieces. What if you could imagine a world where a combination of HPE and NVIDIA provides a technology platform (a combination of infrastructure, software, and services) that makes it easy for enterprises to not have to rebuild the entire stack from scratch? So when you think about HPE's Private Cloud AI solution, it is a first-of-its-kind, fully engineered, turnkey solution available in the market today, and we have really optimized it for time to value.

From a customer perspective, our big value proposition, our big differentiator, is that in three clicks you can get started with AI. This is typically not heard of in the IT space today, because the typical IT journey really consists of picking a bunch of server configurations, picking a bunch of software, waiting for them to arrive, assembling the pieces together, and then spending a bunch of time trying to make sure the whole system works. Based on our conversations with customers, this process takes anywhere from several weeks to several months; on average, we've seen it take 4 to 7 months for customers to get started with AI from when they place the order. In contrast, with Private Cloud AI we've reduced this to the point where you place an order and the machine ships in a couple of weeks; today you're looking at delivery times of 2 to 3 weeks at most. But more importantly, once the machine ships, there is no more configuration or customization you need to do. Once the machine arrives, either at your colo or in your data center, you simply plug it in.

And it's literally a three-step process to get started with AI: you plug it in, and the machine asks you for some network configuration. It asks for network configuration because we realize every enterprise has its own network architecture, so we wanted to give enterprises the ability to bring in the technology while adapting it to their network topologies. Once you provide the network configuration, it dials home to the GreenLake platform, authenticates itself, and then you are up and running.

It is literally a matter of minutes to get started with HPE's Private Cloud AI. We spent a lot of time on Private Cloud AI thinking not just about the start-up experience; it also comes preloaded with all the software you may need. When I say "you may need," I'm talking about the various personas in an enterprise, all the way from data engineers to data scientists, to administrators and IT admins who may want to observe the infrastructure, provision machines, and do entitlements for managing cloud spend, for example. One of the big things that differentiates our solution from the many reference architectures available in the market today is that everything in the stack is built and designed for enterprise-grade confidence. Specifically, what this means is that we spent many years perfecting our private cloud stack, and we are now bringing a lot of those learnings into the HPE Private Cloud AI offering. More importantly, as an enterprise buyer, you don't have to worry about an IT checklist.

You don't have to worry about data governance, safety, security, et cetera. We've done a lot of thinking and design work to make sure these elements are already built into the platform before it ships to you. And finally, one of the most interesting and distinguishing characteristics of our offering is that it provides a very cloud-like experience. What this means is that we connect to the GreenLake cloud platform, and the cloud platform provides a control plane that handles things like lifecycle management for all the software loaded onto the platform. So you don't need to worry about doing upgrades, patches, security fixes, et cetera; all of these things automatically appear in the box once you start using it.

So the big thing for us with Private Cloud AI was really to figure out how we can simplify and accelerate time to value as enterprises get onto their AI journey. More interestingly, going one level deeper: when you think about Private Cloud AI, I would encourage you to think about it as a very simple layer cake. In this layer cake, first and foremost, we started off with the various use cases we saw in industry. Depending on these use cases, what we realized is that there are different price points, or budgets, that enterprises are willing to spend on AI; similarly, for different types of problems, different configurations make sense for you.

So we started off by identifying four different configurations, ranging from small and medium to large and extra-large, that customers can use to fit the different types of use cases and price points they may be able to spend in adopting AI. Within the system, we spent a bunch of time curating and looking for best-of-breed infrastructure; we use a combination of NVIDIA GPUs and networking along with HPE storage and AI compute servers. We brought in best-of-breed AI software, which is a combination of NVIDIA's latest and greatest AI Enterprise platform, including some of the most popular models available in the industry today.

This is combined with an offering called HPE AI Essentials, which is, in some sense, packaged, managed open-source technology that complements the NVIDIA AI stack but, more importantly, addresses various personas, all the way from data engineering to data science. We take the most popular open-source tools that are mature and have enterprise-grade adoption, and we offer them in a packaged, managed manner in AI Essentials.

We are also spending a lot of time building out a unique distributed data lakehouse. The idea here is that when we think about our software stack, it's not just about the tools and frameworks you use to experiment with AI; we think about it holistically, all the way from the tools you're going to use to train, test, and monitor, to how you store and stage your data. When you think about models, we've been doing a lot of work in deep collaboration with NVIDIA, where the product teams are working closely together to build out roadmaps, et cetera. We have access to, in some sense, some of the best models that have been optimized for the latest and greatest NVIDIA GPUs.

More interestingly, we also allow you, as an end user, to bring in custom models; or, if you're working with SIs, many of them have their own domain-specific models, which we call partner ecosystem models, and we allow you to bring all of these models in. The way we are designing the system really is as an open ecosystem where we come in with an opinionated point of view. This opinionated point of view helps us provide an evergreen experience to the customer, but we treat this as an open ecosystem where you, as an end user, can bring in more tools and more platform capabilities if you want, so you can make sure it matches your requirements. Finally, when we think about HPE Private Cloud AI, it is not just about infrastructure and software.

We've gone the extra mile to also include services, because what we realize is that adopting AI is not just about adopting technology; there is a change in mindset, behaviors, and processes typically required in an enterprise. So when you think about Private Cloud AI, we bundle a set of services that help you get off the ground. That said, depending on your specific needs, you have the ability to work with either HPE professional services or a number of partner organizations that can help you customize and configure your AI-based solutions on the Private Cloud AI ecosystem.

Now, one of the things I really want to call out here is that while everything on the left-hand side of the screen sounds like a lot of interesting AI-specific technology, our secret sauce, something that is really distinctive for us in the market today, is our private cloud control plane. Think of the private cloud control plane as the piece of technology that helps us provide a cloud-like experience to our customers, irrespective of where they deploy these platforms. It could be in a colo, it could be on-prem, it could also be in dark sites.

What we do through the private cloud control plane is provide monitoring and metering capabilities, as well as the capabilities for you to do workload orchestration: think about creating clusters and placing applications on clusters. We also provide end-to-end observability and security capabilities. So, in some sense, from an administrator's point of view, the private cloud control plane gives you a single pane of glass to manage, monitor, and improve all of your assets, without having to go to different tools and look at the solution as bits and pieces. So again, let me take a step back: what are we really trying to do with Private Cloud AI? We realized building AI is going to be challenging. We also realized AI is a huge market opportunity today.

What we've tried to do is really simplify day-zero to day-N operations. Specifically, if you think about the procurement journey, it starts with you placing an order with HPE. HPE has made it really easy to deliver these boxes to you, and the installation process is as simple as plugging it into the wall: three steps and you're off to the races.

You don't need to architect solutions. You don't need to create custom configurations. All of that labor-intensive, hard work has been taken care of for you. The box that ships to you comes preloaded with all the software and tools you may need. We think of this as an ecosystem in the sense that it has a base set of tools: think of it as about a dozen open-source frameworks as well as about half a dozen models.

Some of the most popular models available in the industry come preloaded into the platform, backed by an HPE-managed software layer and state-of-the-art data lakehouse capabilities built into the platform. However, we also have a marketplace capability: should you need more software, you can always transact through the HPE marketplace. A couple of the things we really spent a lot of time on to make this truly enterprise-grade are how we think about data management and security. With Private Cloud AI, one of the things we are very proud of is that when we designed the system, we designed it bottom-up for an enterprise user. So a lot of capabilities around access control, isolating data, and creating things like multiple users and the concept of tenancy are all prebuilt into the system.

Similarly, we were two steps ahead of the market in designing the entire software stack, all the way from the control plane through to the application layer you see in the private cloud platform, using zero-trust principles. What this means is that it is really, really hard for any external malicious agent to break into the system. So that's it: what are we trying to do with Private Cloud AI? The big theme for us here is that we want to streamline everything from infrastructure operations all the way through to data science. We want to make it easy for enterprises to adopt AI, in the sense that it should be as simple as placing an order for a box. When you want to pick a box, you simply look at the use cases you want to solve for and the budget you have in mind; if you go to the HPE website, we will recommend a set of configurations.

These configurations typically have some variation in the GPUs and the storage that are offered, but by and large they adhere to a use case rather than a technology-driven decision journey. Similarly, once you place an order for the system and it arrives at your data center or your colo, you just plug it in.

We've made it really, really easy for you to get started. One of the things we are very excited about, and that is truly landmark, is that we take pride in providing a curated AI ecosystem with the best of the NVIDIA and HPE tools. But more importantly, we've thought ahead, and what we've realized is that it's not just about building these systems and getting started; it's also about the day-2, day-3, and day-N operations: keeping the systems alive, keeping them evergreen, keeping them up to date. There's a lot of magic that happens behind the scenes, and we've tried to really simplify the experience for you as an end user, so a lot of it happens transparently.

And finally, something we're spending a lot of time on, and we can talk more about this offline, is looking at common enterprise use cases. For the common use cases I talked about before, we are building out patterns that make it easy for enterprises to build their own chatbots, for example, or their own customer service bots, et cetera. There are different ways we go about doing this. HPE provides something called a solution accelerator; think of it as a thought-through, end-to-end pipeline that can be used to showcase any of these use cases. NVIDIA also has the concept of blueprints, which are far greater in number; the idea of a blueprint is really a reference architecture for building out one of these GenAI solutions. And most of the blueprints NVIDIA is publishing are also being validated with Private Cloud AI.

So in the future, you will have a rich ecosystem of use cases that can simply be picked and deployed onto one of these solutions. I hope that over the last 20 minutes I've been able to at least pique your interest in enterprise AI and some of the challenges in adopting it. I'm happy to take any questions offline, and please also grab time with our NVIDIA expert if there's anything you want to talk about.

Thank you so much.

2024-11-29 01:40
