Keynote by Antonio Neri – Unlock the future of AI, Hybrid Cloud, and Networking (with ISL)

Every day the doors open to fans passionate about their favorite sport at the most innovative stadiums and arenas of today and the future. Right in this city, FC Barcelona is transforming its historic home into a new global destination, reimagining and redefining how a match can feel. That feeling, it's the standard we've set. At the racetrack, along the goal line, even in the virtual world of gaming, streaming in to see their favorite player, track the fastest lap, buy the latest merch.

To have an experience that feels exciting, personalized and safe. Every moment, these dynamic arenas open to infinite possibilities for coaches to analyze film, engineers to monitor weather patterns, for the teams behind the scenes to optimize operations, all towards an experience that's entertaining and seamless. This is what we unlock when intelligence has no limits.

With HPE GreenLake and AI, scores of data points are collected and analyzed across the entire enterprise, in real time. So the fan experience evolves with the game, keeping them connected on and off the court. So where you see more scanners at Gate 5, six hot dogs for seat 22, section 4B, in other words, a perfect day at the park, we see a fully immersive fan experience. Unlock excitement, determination, thrills.

HPE, Unlock ambition. Please welcome HPE President and Chief Executive Officer Antonio Neri. Buenas tardes y bienvenidos! [Good afternoon and welcome!] Wow, I'm looking at this thing and this thing doesn't end. What an amazing venue! Our Discover Barcelona event has a special place in my heart. I hope you had fun with that entrance. It is my second home actually. I have a place here and I come very often to visit my friends and spend time in this amazing city and an amazing country, but I always look forward to coming here to share our news and engage with all of you, our talented and innovative customers and partners.

If you are a sports fan, there's a good chance your experience at your favorite stadium was enhanced by HPE technology. And as we just saw, some of the best loved teams in the world count on HPE to supercharge their fan experience. I cannot think of a better way to begin today than by celebrating a team that's so beloved by this city, one of the greatest clubs in the history of sports.

A team that earlier this year made HPE its official technology partner, the legendary FC Barcelona. We are super excited about our partnership with the club, which has HPE providing solutions for the new state-of-the-art Spotify Camp Nou and the Espai Barça mega complex. When completed, it will be one of the largest and most advanced sports and entertainment centers in Europe. Now, there is a reason the chairs in the room have FC Barcelona scarves on them. You know what to do with them. You saw some of the young people here on stage.

Please welcome to the stage FC Barcelona Chief Information Officer Miriam Ferrando. Hola, Miriam, com estas? [Hello, Miriam, how are you?] Someone has been practicing his Catalan. -I love to see it! -I have to speak Catalan. I have to speak Catalan, you know? Thank you, Antonio, for having me.

Es un autentic plaer [It's a true pleasure] tenir-te aqui amb nosaltres. [to have you here with us.] La teva presencia aqui es molt apreciada. [Your presence here is greatly appreciated.] Gracies. [Thank you.] Now we need to speak English. So as you know I'm a big football fan.

In the US, we call it soccer. But I love it; obviously, I come from Argentina. I still play, believe it or not. I'm almost 60 and I still play, although I'm not as fast as I used to be. But that said, I'm still there.

But I'm really thrilled about the opportunity to work with you to help create stronger connections with your fans and the club. What excites you about this partnership? Why did you select HPE? Antonio, this is a very ambitious project with very high stakes, as we aim to revolutionize the spectator experience in our facilities. Technology is key, enabling us to provide immersive, personalized services tailored to our fans. To achieve this, we needed a world-class technology environment, and HPE has what it takes for us to reach the next level. So can you elaborate a little bit more about what experience you are aspiring to provide? Really, the possibilities are endless.

Okay, so at Spotify Camp Nou, we are building an advanced, high-speed, secure wireless network that will offer fans exciting features like gamification and virtual reality. We can also use the large amount of data that we will capture to bring advanced analytics to optimize game performance and strategy. Our vision is to fully immerse fans during their entire visit and deliver a fast, stable, and secure customized experience. I know we have a video, so let's play the video.

Soccer, the world's game. It's more than just a sport. It's a global phenomenon that unites billions.

Every roar, every chant, every goal, they are more than just moments. Behind every moment, a story. Behind all the data, a fan. Now, FC Barcelona is teaming up with HPE for a world-class cloud platform and networking solution that sets a new standard. FC Barcelona and HPE.

Together, we're crafting an immersive experience where fans aren't just spectators but part of the game. Together, we're building more than just a stadium, we're building the future of football. Espai Barça, powered by innovation, fueled by passion. Unlock pride, Unlock community, Unlock destiny. HPE Unlock ambition.

Now we have a couple of things that we want to give to the audience. Maybe you just throw one to them. Who wants the ball? Right there. I'm not liable for hitting you. Here it comes. There you go. I've dislocated my hip, but that's OK.

We are delighted to be working with you to bring that vision that we just saw in the video to life, so that you can enjoy the experience. I know there is a lot of work to be done, but we're just starting this journey and we are really proud to be part of this journey with you. Me too. So one more thing, Antonio,

when it comes to football, I know you're a weekend warrior, so I brought some gear for you. I hope it elevates your game on the pitch. OK, well, I need it. Thank you.

On the other side, maybe. There you go. They're going to sign me up this afternoon. My contract is ready. I will play with Lewandowski, Pedri, and many others. So thank you very much. I'm really proud; this is something I'm going to keep here in my house in Barcelona.

Thank you very much. Please thank Miriam. Alright, thank you. -And thank you again, Miriam for coming today. -Thank you.

Alright, let's give another round of applause. So FC Barcelona has been around for 125 years. In fact, this month it is hosting a huge celebration for its anniversary. The investment it is making in its technology, as you see, is essential to maintain deep connections with customers and fans. I personally see HPE in a very similar way: an organization with a storied history, engineering roots, and deep technical expertise, and with the inspiration to transform our capabilities and focus, a focus that reflects not where the world has been, but where it is going. And where the world is going is clearly AI, Artificial Intelligence, not with little steps, but with giant leaps.

Change is coming at us fast. Think about some of the groundbreaking technologies that came before AI in our personal lives over the last few decades. Technology that changed everything.

For example, the telephone took 75 years to reach 50 million users. The mobile phone reached 50 million users in just 12 years. The internet got there in just 4 years with Web 1.0. Generative AI, it reached 50 million users less than one month after its launch.

This is not just rapid growth. It is a major paradigm shift, a profound transformation unlike anything we have ever seen. AI is not just enhancing technology. It is enabling new worlds of interaction and capabilities, redefining what is possible in our lifetimes.

It is transforming every sector, every line of business, and creating opportunities we couldn't even imagine 18 months ago. In short, the AI future is here and it is calling us to action. Its arrival has also created a tremendous challenge for every sector to adapt and accelerate to take advantage of its potential. Businesses that quickly bring AI into their enterprise will not just stay competitive, they will set the standard. Those that do not will struggle to keep pace. HPE is here to help your AI journey.

And we are well aligned to serve the values and unique challenges faced by customers here in Europe. Our purpose remains unwavering: to advance the way people live and work. HPE values ethical business practices, is concerned for the well-being of the planet, and is committed to data protection. We also see ourselves as stewards of AI.

In fact, in 2019, Hewlett Packard Labs established five key AI principles. The first one is Privacy. Second, we want AI to be Human focused.

Third, AI needs to be Inclusive, ensuring access for everyone. We also want AI to be Responsible. Last, AI must be Robust, quality-tested continuously. From our purpose to our products, HPE is uniquely positioned to be your technology partner in Europe.

Here at HPE Discover is where you have the opportunity to learn about our new technology, products, and services. And these offerings will unlock limitless potential for your enterprise, whether it's your stadium, your store, your manufacturing line, or your hospital. You can see these solutions for yourselves here at the Discover showcase. Our technical experts stand ready to guide you and answer your toughest questions.

While you are at the showcase, I encourage you to also speak with our experts from HPE Financial Services, who can help you remove financial barriers and enable your business to scale and innovate quickly. We also provide a unique capability to help manage your IT assets. You can renew, recycle, repurpose, or resell your technology. This helps extend the life cycle of your IT equipment while reducing waste and the impact on the environment. For those who have been following HPE's journey, you have witnessed a company that is transforming before your eyes. In fact, through strategic innovation and organic investments, whether it's investing ourselves or acquiring companies, but also through decades of experience solving our customers' biggest challenges, we have reimagined HPE from the ground up.

This transformation has been essential in preparing us to meet the moment of AI so that you can, too. Since I became CEO almost 7 years ago, I have said that the enterprise of the future will be edge centric, cloud enabled, and data driven. The future is here. It is being driven by the 3 key building blocks that form a unified technology experience that helps customers accelerate innovation.

Those 3 key building blocks are networking, hybrid cloud, and AI. Today I will show you how HPE combines these essential components to put AI to work for you right away. That includes illustrating the power of our decades of experience in designing, manufacturing, operating, and servicing El Capitan class systems at scale, leveraging 100% fanless direct liquid cooling. I will also share details about how our imminent AI-powered networking capabilities and our hybrid cloud expertise, unified by HPE GreenLake, our cloud, make us a reliable, trusted partner.

These building blocks are foundational to meeting the needs of AI for enterprises. And today I will showcase how no one is better positioned than HPE to deliver a truly comprehensive cloud-native and AI-native portfolio to help you capitalize on the AI industrial revolution. We shared many updates on our transformation at Discover Las Vegas in June, at the Sphere. In case you missed it, here are some of the key highlights. Let's play the video. Big moments require big venues.

One of the crown jewels of HPE GreenLake is our acquisition of OpsRamp, and today we announced three major updates. OpsRamp now supports full-stack observability from AI infrastructure to workloads. We introduced an AIOps co-pilot feature, and finally a new integration with CrowdStrike APIs.

As you can see, we are not just keeping pace with the future of AI operations, we are leading the industry. HPE and NVIDIA have a proven track record of delivering innovation. Today we are taking our partnership further. It is with great excitement that we announce NVIDIA AI Computing by HPE to accelerate the Generative AI industrial revolution. Today, it is a pleasure to introduce what we call the first-of-a-kind turnkey Private Cloud AI solution co-developed with NVIDIA.

We call it HPE Private Cloud AI. When I say turnkey, I mean the simplest experience today for deploying and operating the NVIDIA AI software stack in the industry. The era of generative AI is here.

You must engage the single most consequential technology in history. We are going to, because of this, make it possible for the first time to bring Generative AI to every single company in the world. Another aspect of our HPE GreenLake differentiation is our cloud-native approach, providing you with flexibility and choice of runtime environments, including bare metal, containers, and virtualization. And today we announce a new HPE-developed virtualization capability for our Private Cloud portfolio. As we stand on the cusp of an AI revolution, our ambition knows no bounds. It drives us to reach for the stars, to solve the unsolvable, and to transform ideas into reality.

Presenting the HPE vision in that magnificent space was really humbling, and I have to tell you, this is very humbling too when I look out that far. It's just remarkable how far we have come, and I think about the future with optimism. The Sphere was a marvel of technology, and if you haven't been yet, I hope you will enjoy it one day. As you saw in the video, in June we announced a major partnership with NVIDIA to provide turnkey AI enterprise solutions, NVIDIA AI Computing by HPE, to accelerate the Generative AI industrial revolution. The partnership includes our flagship enterprise offering, HPE Private Cloud AI. HPE Private Cloud AI is a fully turnkey, private cloud integrated system that makes it easy for enterprises of all sizes to develop and deploy Generative AI applications.

With 3 clicks and less than 30 seconds to deploy, HPE Private Cloud AI integrates NVIDIA accelerated computing (the silicon), networking, and software with HPE servers, storage, and cloud services, all delivered through a unified experience under the HPE GreenLake cloud. Following those transformational announcements, we have kept the innovation engine churning. We never stop, and just a few days ago at SC24, the Supercomputing 2024 event, the largest annual supercomputing conference, we announced a new milestone in exascale computing with the world's largest, fastest supercomputer, called El Capitan.

The supercomputer was built by HPE in collaboration with the United States Department of Energy, the National Nuclear Security Administration, and Lawrence Livermore National Laboratory. Beyond its speed, it is also among the top 20 most energy-efficient systems. HPE delivers the world's only 3 exascale systems, and now it delivers the top 3 fastest. To me, in the analogy of football, that's a hat trick. El Capitan illustrates HPE's continued leadership in building and running the largest-scale computer systems on the planet. This is an area where HPE demonstrates its engineering prowess time and again.

In performance and sustainability leadership, our supercomputers are marvels of engineering. Since 2018, we have manufactured and deployed more than 200,000 direct liquid-cooled server nodes. And since 2020 we have done the same for nearly 22,000 direct liquid-cooled networking switches, which, when you do the math, translates to more than 1 million ports globally with our HPE Slingshot networking fabric.
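
As a rough sanity check on that figure, here is the arithmetic, assuming a 64-port radix per Slingshot switch; the radix is an assumption for illustration, not a number quoted in the keynote.

```python
# Rough check of the "more than 1 million ports" figure.
# 64 ports per switch is an assumed Slingshot switch radix, not a keynote number.
switches = 22_000
ports_per_switch = 64

total_ports = switches * ports_per_switch
print(f"{total_ports:,} ports")  # 1,408,000, comfortably above 1 million
```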

These nodes, switches, and ports are what enable supercomputing and Generative AI. As an example, here in Europe, in the United Kingdom, the AI Research Resource program funded by the UK government is providing extremely large machines such as Isambard-AI at the University of Bristol to ensure sovereign AI capability and ease of access for both research and industry. These systems represent the ingenuity that is rooted in HPE's DNA and enable us to help our customers and partners unlock their ambitions. I would like to recognize our outstanding engineering team for continuing to push the boundaries of what's possible. As an engineer myself, or at least as I used to be, I'm reminded that engineering is not just a function within HPE or any company, but the foundation of our ability to innovate and deliver value to customers. Engineers are problem solvers, and we have solved some of the biggest challenges that customers put in front of us, including some of the biggest societal challenges.

Being able to deliver exascale computing is a testament to our deep engineering expertise and heritage, which makes me, as the CEO of this company, incredibly proud. Today's announcements again reflect HPE's relentless pursuit of advancing our capabilities and offerings for you. Our first news builds on what we shared in June with our HPE Private Cloud AI announcement: an expanded collaboration with Deloitte. Deloitte is teaming with HPE to bring AI solutions to market quickly, implementing HPE Private Cloud AI with Deloitte's existing AI capabilities.

Deloitte's industry experience with the co-developed Private Cloud for AI by HPE and NVIDIA provides businesses of all sizes with a ready-to-deploy AI solution tailored for their industry-specific use cases. HPE Private Cloud AI can be deployed across Deloitte's NVIDIA-powered solutions, including AI factory as-a-Service, hybrid by design, C-suite AI, and the Chord AI Suite. In addition to our collaboration with Deloitte, in September we introduced the Unleash AI Partner program to grow our ecosystem and expand customers' ability to address more AI use cases with HPE Private Cloud AI.

Unleash AI now connects AI customers with access to cutting-edge software vendors for RAG applications, AI-powered software development, video analytics, and solutions for building safe and secure Generative AI systems. But this is just the beginning. We will continue to add even more partners and AI use cases to deploy with our HPE Private Cloud AI. One of the reasons our Private Cloud for AI is so unique is because of the experience we deliver through HPE GreenLake cloud. Today, more than 37,000 customers use HPE GreenLake cloud for their hybrid cloud needs. In the last 24 months, we have thoughtfully enhanced the capabilities of hybrid by design inside our HPE GreenLake cloud, adding two critical cloud-native services.

The first is ITOps and the second DevOps, through the acquisitions of OpsRamp and Morpheus Data. OpsRamp's AI-driven incident response management functions enable autonomous IT operations. It takes a massive amount of inbound telemetry, then uses that information along with AI to find the signal in the noise, identify issues, and propose or even automate resolution. With deep integration into dozens of common tools and multiple clouds, Morpheus can fully automate the end-to-end provisioning and life cycle management of traditional and cloud-native workloads. This means more speed and agility for your business, as well as the ability to control where workloads are placed. Together, they expand the capabilities of HPE GreenLake across multiple clouds and multiple IT vendors to help customers accelerate cloud operating model adoption.
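
To make the "find the signal in the noise" idea concrete, here is a minimal, generic sketch of AI-assisted incident detection over a telemetry stream. It is not OpsRamp's actual implementation; the metric values, thresholds, and proposed action are hypothetical, and a simple rolling z-score stands in for the models a real AIOps platform would apply.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, z_threshold=3.0):
    """Flag telemetry samples that deviate sharply from recent history.

    A rolling z-score stands in here for the ML models a real AIOps
    platform would use; the window and threshold are hypothetical.
    """
    history = deque(maxlen=window)
    incidents = []
    for ts, value in samples:
        if len(history) >= window and stdev(history) > 0:
            z = (value - mean(history)) / stdev(history)
            if abs(z) > z_threshold:
                incidents.append({
                    "timestamp": ts,
                    "value": value,
                    "proposed_action": "restart service / scale out",  # placeholder remediation
                })
        history.append(value)
    return incidents

# Hypothetical latency telemetry: steady around 20 ms, with one spike at the end.
telemetry = [(t, 20.0 + (t % 3)) for t in range(60)] + [(60, 250.0)]
for incident in detect_anomalies(telemetry):
    print(incident)
```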

HPE will get you there faster with a unified approach from Day 0 and beyond. Our recent acquisition of Morpheus is not only being integrated across our portfolio, it is the true foundation of an exciting new solution I'm announcing here today in Europe. While AI obviously is a mega trend, there is another subject that comes up with almost every customer I speak to, and I speak to customers a lot, more than 50% of my time I spend with customers and partners, and what they tell me is that the virtualization landscape is shifting. Due to recent changes in various contracts and license terms, many customers tell me their costs have increased 3 to 5 times. For that reason, many customers are re-evaluating their options and looking to create the right mix of runtimes across their hybrid cloud operating model. We have heard from many of you that you need more flexibility, freedom from lock-in, and more value.

HPE stands ready to help with a solution that puts you, our customers, in control. So introducing HPE VM Essentials. Utilizing Morpheus software, HPE VM Essentials provides a unified VM management experience, which means you can manage existing VMware workloads or the new HPE VM Essentials hypervisor with a simplified experience across both stacks. Not only does HPE VM Essentials offer more choice and flexibility, it is also an on-ramp to the full Morpheus hybrid cloud management solution. The interest we saw early in the process for beta testing has been enormous, with more than 100 requests. And the feedback has been consistent around three key themes.

Theme number one: customers are impressed with how intuitive and simple the experience is. Second, customers are pleased with how mature and complete the solution is. And really, that is a credit to the Morpheus team, who built a tremendous solution that we will now make available to a much broader audience globally. And third, the feedback has been very positive on the per-socket pricing. These three ingredients combined mean enterprises of any size can accelerate their strategy for the future with HPE GreenLake cloud and HPE VM Essentials. And guess what? You can expect up to five times lower TCO.
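
To illustrate why per-socket pricing can change the TCO picture, here is a toy comparison. Every figure below (host count, core counts, prices) is a hypothetical placeholder, not HPE or any vendor's pricing; the actual ratio depends heavily on core density and contract terms, and the keynote cites up to five times.

```python
# Toy TCO comparison: per-core vs. per-socket virtualization licensing.
# All figures are hypothetical placeholders, not vendor pricing.

hosts = 20
sockets_per_host = 2
cores_per_socket = 32

price_per_core_per_year = 350       # hypothetical per-core licensing model
price_per_socket_per_year = 2_500   # hypothetical per-socket licensing model
years = 3

per_core_tco = hosts * sockets_per_host * cores_per_socket * price_per_core_per_year * years
per_socket_tco = hosts * sockets_per_host * price_per_socket_per_year * years

print(f"per-core licensing:   ${per_core_tco:,}")
print(f"per-socket licensing: ${per_socket_tco:,}")
print(f"ratio: {per_core_tco / per_socket_tco:.1f}x")  # ~4.5x with these assumptions
```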

And the good news is you don't have to wait long, because I'm pleased to announce that HPE VM Essentials will be available next month as a standalone software solution, so you can try it and do beta testing, and then within our Private Cloud portfolio for virtualization in the spring of 2025. As you can see, we continue to expand innovation with our HPE GreenLake cloud experience to make it simpler and more flexible to address more of your multi-vendor and multi-cloud estates.

Another area of innovation we're expanding today addresses data security and sovereignty needs, which here in Europe, obviously, are very high. Given the governance requirements and security concerns in this region, many of you have asked us for the same HPE GreenLake cloud-native experience, but not connected to the internet or an outside cloud. Data sovereignty is no longer just a regulatory requirement. It is a business imperative for regulated industries, such as government, health care providers, and defense entities.

And in response to that, today we are introducing disconnected management for HPE Private Cloud solutions, a fully air-gapped management option for you. Further building on our disconnected management capability, today we're also announcing that HPE is enabling authorized HPE Partner Ready Vantage partners within specific geographies across Europe and regulated industries to deliver sovereign Private Cloud services powered by HPE GreenLake. Partners can create sovereign cloud capabilities that address local, regional, and industry-specific regulations.

These partners can earn the HPE sovereignty competency, which demonstrates expertise using HPE GreenLake cloud to provide secure private clouds within hosted environments or customers' data centers. HPE is delivering cloud models that address the highest security and sovereignty standards, ensuring that your data remains protected, compliant, and within your control at every step. Now I would like to unveil an announcement that addresses another critical topic: data. We know data is the lifeblood of any company and its most precious asset. It is also what fuels the rapidly expanding AI opportunities. This will require a more intelligent and autonomous infrastructure, higher performance, and much more data security, governance, and control to ensure your success.

As a leader in advanced storage solutions, HPE continues to make great strides to address today's AI inflection point as part of the HPE GreenLake cloud with our HPE Alletra Storage MP strategy, which is guided by a bold vision: to seamlessly run any application without compromise from edge to cloud, with a cloud experience for every workload. That's why we have designed HPE Alletra Storage, a cloud-native, AI-driven data operations platform that maximizes the value of your data wherever it resides. It delivers a consistent cloud operational experience across edge, core, and cloud. This enables you to manage your data seamlessly and accelerate time to value for all your workloads.

In terms of AI for storage, HPE Alletra transforms IT operations. Predictive analytics prevent disruptions before they happen, ensuring your applications run smoothly and efficiently and freeing your IT teams to focus on innovation, not troubleshooting. Together, these innovations position HPE Alletra as the cornerstone of your data-first modernization strategy. It is more than storage. It is a catalyst for your digital transformation.

For environments with the highest security needs, today we're also pleased to introduce an on-prem, fully disconnected block storage experience: HPE Alletra Storage MP Disconnected. This new solution provides customers in highly regulated environments the ability to meet stringent privacy and security requirements for the most mission-critical databases and workloads. With the HPE Alletra Storage MP Disconnected platform, HPE now offers private cloud and network management available on-premises and offline, and is the only vendor that provides a control plane running in a disconnected or sovereign mode. With this, HPE delivers industry-first capabilities allowing customers to run the HPE GreenLake control plane that we all like on-premises.

So this is more than just storage. It enables on-premises capability for storage, private cloud, networking, compute (our servers), and other services. But today, we are taking our vision even further as we introduce the X10000, a new member of the HPE Alletra Storage MP family. The X10000 is new. It is an all-flash object storage platform designed for exabyte scale and optimized for high-speed data lakes with rapid restore for backup and recovery. It will offer up to six times faster object storage performance compared to our competitors.

And guess what? The S3-compatible object storage interface and deduplication support up to 20 times data reduction and streamline integration with any backup solution of your choice. And because it is built on our unique HPE Alletra MP disaggregated, shared-everything storage architecture, you can easily scale performance and capacity independently to meet future needs. You can also manage your entire storage fleet, including block, file, and object protocols, with one cloud experience based on a single architecture. I call this radical simplicity. And as you can see, we continue to accelerate our hybrid cloud strategy and drive industry-leading, bold innovations.
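
Because the interface is S3-compatible, standard S3 tooling should work against it. Below is a generic sketch using boto3; the endpoint URL, bucket name, and credentials are hypothetical placeholders for illustration, not actual X10000 configuration.

```python
import boto3

# Hypothetical endpoint and credentials for an S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal:9000",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

bucket = "backup-repo"
s3.create_bucket(Bucket=bucket)

# Write and read an object exactly as you would against any S3 endpoint.
s3.put_object(Bucket=bucket, Key="db/backup-2024-11-24.dump", Body=b"example payload")
obj = s3.get_object(Bucket=bucket, Key="db/backup-2024-11-24.dump")
print(obj["ContentLength"], "bytes restored from object store")
```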

So I know I talked a lot. I gave you a lot of news. But in summary, everything we are doing today is accelerating our innovation. That includes our expanded partnership with Deloitte, with more partnerships and AI use cases to come through our HPE Private Cloud AI. The acquisitions, the curated approach we have taken, advance our already incredible capabilities inside HPE GreenLake through DevOps and ITOps. Our new HPE VM Essentials software gives you more choice and flexibility.

And also, we help you reduce the cost of virtualization. And here in Europe, because of your unique requirements, a Disconnected Private Cloud solution helps customers like you meet the compliance and security requirements that will enable full control of your data. And last but not least, we continue to expand our HPE Alletra MP portfolio with solutions around disconnected approaches, as well as the fast object store, which brings tremendous performance and incredible data compression. But in addition to the innovation we are driving, it is imperative we also think about what comes next. In my view, networking is one of the core tenets that's going to enable and advance AI. And that's not lost on us.

AI requires a modern networking foundation from client to cloud to connect data. And this foundation will be every bit as important as the silicon in unlocking the power and value AI holds. As the world transitions to this type of accelerated computing, a high-performance networking fabric is essential. And we are taking our networking position to a new level, one that will disrupt the industry and extend our network and AI expertise by leaps and bounds.

Since its founding, Juniper Networks has kept the internet running. In fact, the top 20 global service providers and the top 30 cloud providers are Juniper customers, using its high-performing routing and switching infrastructure. Juniper also took an early lead in AI-native operations. Its Mist AI technology brings simplicity, reliability, and scale to network operations, which is the backbone for AI. Separately, HPE and Juniper were each recognized by Gartner as a Leader in the 2024 Magic Quadrant for enterprise wired and wireless LAN infrastructure.

And when we become one, we expect the network of the future will take a giant leap forward. The Juniper deal will be an essential piece of the puzzle because together, we expect to have a line-up of secure AI-native network solutions to deliver exceptional user experiences across all segments – enterprise, cloud, and service providers. Networking is the biggest enabler of AI and hybrid cloud. It is the core foundation.

So, whether it is building the right network for AI or using AI to optimize network operations, the HPE and Juniper combination will bring what we expect to be an incredible solution to meet the needs of today's modern applications and workloads. HPE is a leading partner to help enterprises harness the power of AI as the technology continues to radically change the current landscape and unlock countless possibilities for many organizations. In the automotive industry, those possibilities involve transforming transportation to meet the needs of the future. This includes big topics like electrification, digitalization, and sustainability. BMW is using data in new ways to redefine how they develop the next generation of vehicles. And HPE is helping to empower their journey through AI and advanced analytics, with our HPE GreenLake cloud at the center of it all.

So, let's take a look. We're at a new threshold where human expertise is harnessing the unprecedented power of AI, opening the door to infinite possibilities for changing our lives. Enter BMW, reimagining sheer driving pleasure with a groundbreaking approach to electric vehicle design that transforms automotive manufacturing and is powered by data. To advance performance and safety, BMW's Global Engineering team tests the EVs in diverse driving scenarios, from long distance to extreme weather.

Using HPE GreenLake, data is processed from each vehicle on an edge computing platform, assisting the team in leveraging analytics and machine learning, engaging AI to deliver vital insights. All allowing for seamless collaboration, development, and a greater speed to market. This is what we unlock when intelligence has no limits. You drive with better performance, more confidence, more sustainably.

We drive a data foundation, advanced connectivity, faster innovation. You drive with joy. We drive towards an industry transformed. Unlock adventure, collaboration, joy, HPE Unlock ambition. What an amazing look at tomorrow's driving experience.

I'm looking forward to that. But before we begin our next segment, I would like to thank our sponsors for making Discover Barcelona possible. Special thanks to Diamond level sponsor NVIDIA and our Emerald sponsor Intel. I would also like to thank our Platinum sponsors Infosys, Kioxia, and Microsoft, as well as all our valued sponsors at every level.

Your partnership makes HPE Discover Barcelona a reality. So as I mentioned earlier, HPE's portfolio now has all the critical building blocks, the expertise, the ecosystem of partners, and the edge to cloud technology to deliver on the promise of both hybrid cloud and AI. Let's get some perspective from two leaders that I'm very proud to have as a part of my team who are driving the innovation at the nexus of these two key trends. First is Fidelma Russo, Executive Vice President and General Manager for Hybrid Cloud and also our Chief Technology Officer.

And Neil MacDonald, Executive Vice President for Servers and AI at scale. All right. Okay. What do you think? -Pretty good. That was a pretty good kick. -Pretty much. It didn't look like this yesterday. You've done a lot of work overnight. Yeah, I know. It's full of people. That's the difference.

So first of all, thank you. As I said earlier, being on stage and being able to announce this amazing innovation is a testament to what you and the team really do every day. So thank you for all the hard work that you put in every day to meet the needs of our customers in ways I believe no one else is doing.

Now, as we think about where we are today, I'm also excited to help show our customers how they can take advantage of AI and this revolution that we all see. So first for you, Neil.

Let's step back a little bit, because I know you love history and you like to think about how things really change. Some people might think AI is something new. And you have an interesting background; if you want, you can share it. But AI has been around for a long time.

What is your view of AI and how it has evolved to today? Well, that's right, Antonio. The AI of today is vastly different from the AI of years past. AI has been around as a rich field for a very long time. In fact, it's the subject I studied at university many years ago. In traditional terms, artificial intelligence didn't rely on massive amounts of data and computation.

It was about handcrafted rules and expert knowledge being encoded. Since then, we saw a more modern subset of AI emerge with machine learning, and then later deep learning, which allowed computers to make predictions without being explicitly programmed with that knowledge. These neural network technologies are intrinsic to machine learning and deep learning. What they do is take labeled data and train models in order to make predictions about new data.

So really it's all about learning patterns, and that's the classical AI that's been around for so long. So generative AI is a key shift because it's no longer about labeling things, it's about generating things.

How we think about this content, and how it's evolving, is radically different. That's what's groundbreaking about generative AI: now AI is generating new content that resembles human-generated content.

That might be text, it might be images, it might be code, it might be any number of things that humans have been creating. And because of that, generative AI has this enormous potential to fundamentally transform human productivity. In fact, according to a 2024 study on the future of work by McKinsey, up to 30% of the hours worked today will be automated by 2030, fully automated. So whether that prediction is right on the mark or off by a little bit, you can see that generative AI has an enormous potential to radically change how we live and work. And that's quite a bit different from the AI I studied 34 years ago.

Now, because of all that potential, let's say a 30% productivity improvement, big investments, huge investments, are being poured into AI. And everyone wants to know when it will pay off. There are different schools of thought. Fidelma, I think the answer to that question depends on the questions you ask. For example, what are the use cases? How fast can you deploy? You have spent a lot of time with customers, on the enterprise specifically. So what are your thoughts on this? So as you said, it's all about the use cases and speed to deployment.

And really, for all of us, in order to get this Gen AI investment to pay off, we need it adopted successfully across the enterprise. Not just the model builders, not just the sovereigns, not just the Fortune 50, the big guys, but broadly across the enterprise and the public sector market. If you think about it, we really need to increase adoption by 1000X. So what does that mean? A 1000X? A 1000X, yes. So, for everybody who's adopting it today, we need a hundred more of them adopting it.

And every enterprise that adopts it needs to find at least 10X the number of applications and business processes they're changing, in order to make sure that this revolutionary technology turns into a real boost for human beings and for everybody. Now, you told me a while back, as Fidelma, Neil, and I obviously spend a lot of time with our sales force and customers and collect data almost in real time, that only one in 10 AI proofs of concept ever makes it to production.
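
The 1000X target Fidelma describes is just the product of two factors, which a trivial back-of-the-envelope restatement makes explicit:

```python
# Back-of-the-envelope restatement of the "1000X adoption" target.
more_adopting_enterprises = 100   # ~100x more organizations adopting Gen AI
more_use_cases_each = 10          # ~10x more applications/processes per adopter

print(more_adopting_enterprises * more_use_cases_each, "x overall adoption")  # 1000 x
```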

By the way, here at HPE, we have hundreds of proofs of concept, and we have already deployed almost 30% of them in production. But I think that is because, as we're learning, the AI lifecycle is very complex at every level, from training to tuning to inferencing. It's also because AI is by nature a hybrid workload. There are many roadblocks, and customers trying to piece it all together face a lot of complexity.

So Neil, what's your view on that? When we talk to customers, we hear three themes about common barriers that many, many enterprises are dealing with. The first is the one you just raised about the time to value of making these investments. But also the challenge of managing data, because fundamentally it's about data when you're applying generative AI. And then perhaps most importantly of all, how do you manage the risk and the compliance issues that are brought up, both about protecting the data but also protecting your brand as an enterprise.

So until now, there wasn't really a solution to easily address those three barriers, each of which could come with quite a lot of complexity and very long planning and development cycles. So our approach has really been very simple. We want to meet our customers where they are and make generative AI dramatically simpler for enterprises of all sizes so that they can quickly realize this promise of generative AI leveraging the game-changing technology without being consumed with complexity.

So the theme for most of the customers here is about simplicity and time to value. So early on, as you guys know, I talked about what we did with NVIDIA, NVIDIA AI Computing by HPE, and our first offering, HPE Private Cloud AI, which addresses exactly the points you make, Neil: a fully integrated turnkey system for generative AI. At the front of that was Fidelma working with Neil, and Fidelma spearheaded this value proposition. Can you talk about the value prop and the features that this product has? Let's first compare it to what's out there and what people were struggling with. There are a number of competitors offering GPUs in servers with custom solution integrations, and what happens is this takes a lot of consulting hours; it takes months and months of work to even get it to the point where the infrastructure is tuned.

And so if an enterprise wants to get to value, they don't want to spend their time on that. And so we took a radically different approach. We don't just take a whole bunch of servers, compute and networking, hand them to you and say good luck.

Instead, we took the time to give our customers a fully assembled, integrated, and tested product that's ready to go. So with Private Cloud AI, as a result, you can get your AI up and running in seconds with three clicks, and you can compare that to on-prem projects which can take up to six months to deploy. It comes in four T-shirt sizes, so we make it incredibly simple.

You buy a small, medium, large, or extra large, and each of those has a curated recipe of GPUs, networking, switches, servers, storage, and all of the software that you need to get up and running. That's the value. And so, again, simplicity and integration. So the enterprise can then invest and grow the capability at scale over time, starting with one of these four sizes.

They can pick one and scale from there. Yes, so you can start and then you can scale out from there. And we take the guesswork out of that with a sizer, a very intuitive sizer where you put in your workload and it pops out your answer. So, Antonio, one important thing is, as we look at AI, it's not just about the technology. It's about re-engineering your business processes. And you announced earlier our expanded collaboration with Deloitte, as well as our Unleash AI Partner Program.

And so the reason that we have these alliances is, for instance, with Deloitte, we've had over a 25-year history of Deloitte working with us to deliver solutions around hybrid cloud, edge, and IoT, and now we've expanded that to AI, so that you get an end-to-end implementation with Private Cloud AI, whether it's with Deloitte or with the software packages we integrate from a number of the partners in Unleash AI. So basically, you know, we take all the guesswork out of it. Now, I think the audience will appreciate how tough it really is to bring all this together. By the way, Neil, you talk about technology and software and the like, but one of the key elements is people. It's really hard to find people that have these skill sets.

And I will argue that in this room, whatever your job is, everybody will have to have a minor. So you have a major, whatever you are doing, and a minor in something. That minor in the future is going to be AI. You need to understand how it works today. When I think about this, talk a little bit more about how hard this is from a talent and technology standpoint. So an organization that's embracing generative AI the hard way has to go modernize the infrastructure and the software, but it also has to be connected to data.

And when you talk about everyone needing a minor in AI, it should be a minor in how to use AI to transform your business processes and your customer experience, not a minor in all of the underlying technology, which is changing incredibly quickly and could consume an enormous amount of effort to curate internally within your organizations. Generative AI is fundamentally about data, and taking a generic generative AI model and just trying to apply it is not going to work. It is not going to be effective in making you more competitive. You have to infuse it with your enterprise knowledge, your enterprise data.

And for most organizations, that data is the crown jewels and it's on-prem. But if you want to take that data and embody it into models, you have to fine-tune the models, or you have to use technologies like retrieval augmented generation with your own data in the enterprise. And putting that all together requires a lot of expertise in multiple domains that most organizations don't have and that is very difficult to compete for in the marketplace. So we're excited with the work that Fidelma and the team have done, that we're solving that problem of how to access the data, how to bring it to generative AI, and how you can use that to compete in your markets, so competitors don't leave you at a big disadvantage in the productivity gains from generative AI because you're spending all of your time on plumbing. It lets you focus on the next big step and the application of your data with generative AI to enhancing your business. So what Fidelma has done in HPE Private Cloud AI is remove some of these infrastructure roadblocks you talked about, Neil and Fidelma.

Enterprises can focus on getting the value because you provide the curation of all this infrastructure and software. So Fidelma, when you think about all of that, how do you think about what the focus for enterprises should be? So I think it is all about time to value, as fast as possible. And part of this is that instead of spending your time on the upfront integration of the models, where new revisions are coming out at shorter and shorter intervals, and putting together all of your solution accelerators yourself, you really want something that's ready to run. So with Private Cloud AI, for instance, you can deploy a chatbot in a few clicks.
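
For context on what a "chatbot in a few clicks" packages up behind the scenes, here is a deliberately minimal retrieval-augmented generation (RAG) sketch. It is not HPE's or NVIDIA's implementation; the document set is invented, the retrieval is naive keyword overlap, and the generate() call is a placeholder where a real deployment would invoke a hosted LLM.

```python
# Minimal RAG sketch: retrieve relevant enterprise snippets, then ground
# the model's answer in them. Purely illustrative; not HPE's implementation.

DOCUMENTS = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN is required for all access to internal code repositories.",
    "Production database backups run nightly at 02:00 UTC.",
]

def retrieve(question, docs, top_k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def generate(prompt):
    """Placeholder for a call to a hosted LLM endpoint."""
    return f"[LLM answer grounded in:\n{prompt}]"

def answer(question):
    context = "\n".join(retrieve(question, DOCUMENTS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."
    return generate(prompt)

print(answer("When do the database backups run?"))
```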

And so back to what Neil was saying, AI is fundamentally a data hungry application. And we've all spent a lot of time harnessing data, grooming data, wrangling data. If you think about the verbs, they're very hard. What we believe is you need to bring the AI to the data, not the data to the AI.

And that's why we've integrated our data fabric solution in there, so you can unify disparate sources across your enterprise, whether that's up in the public cloud, in a co-lo, or on the edge, and put all of that together. And so these are all of the pieces in the experience that's different when you get a Private Cloud AI versus buying the individual piece parts, putting together all of the hardware and all of the software, managing the whole lifecycle, and then trying to deploy it as fast as possible. So in the end, the objective is very clear: first and foremost, to make it simple and seamless for enterprise customers to really access and extract value from their data right away. And that's why we believe that HPE Private Cloud AI is such an innovative solution for any number of use cases.

In June, Jensen joined me on stage at Discover Las Vegas, and he reminded us how every layer of the computing stack has been transformed. And to me, that's a big deal. Do you agree with that, if you don't mind? Oh, no, it's one of the biggest transformations I've seen. And what it means is that all of us who are used to enterprise applications have to rethink our strategies.

It's a new architecture. It's completely different from everything we've known to date. It impacts your data storage, it impacts your networking. And your networking now goes from connecting GPUs to connecting nodes to connecting things on-prem to connecting things at the edge. And so our approach is: let's simplify all of this. Curate it, integrate it, and make sure that everything is ready to go.

And so you can see we are thinking about every aspect, including workflows and all the things that come with it. Now, when you take a turnkey approach, it's really important for customers to learn it quickly, because obviously your job is changing. So maybe, Neil, you want to elaborate a little bit on how they should think about this.

I think for the majority of companies, there are a few key things to consider in order to get to the benefits of generative AI. The first is that the technology stack here is completely alien to what has traditionally been deployed in enterprise IT. It doesn't look anything like even what well-experienced enterprises have built and deployed before. So that's a whole new learning area. Then, as you bring these technologies together in order to apply generative AI, you have to figure out what use case or class of use cases you're going to deploy the technology against. And if you're trying to do that from scratch, on top of all of the piece-part infrastructure, you're dealing with different servers, different fabrics, different data, all of the stacks in the software space that are different, and then piece-part assembling all of that, which is a massive learning curve.

And organizations that we've seen try to do that from the component parts end up spending a lot of energy on a whole new set of talent and skills acquisition that most organizations just don't have the time or the bandwidth to deal with. And at the end of the day, if you believe that generative AI is about transforming your business and its productivity and how you engage customers, it's all about you being able to do that faster than your competitors. And having the approach of an integrated turnkey solution lets you focus your energies on getting those benefits, not on dealing with this very rapidly changing complexity. So think about this for a moment.

For enterprise customers, before we shift gears here: brand new technology moving at light speed. The stack is totally different. Traditional thinking doesn't work here. And fundamentally, when you're trying to make an investment in AI, which is expensive, you have to shift your mindset from focusing on the technology to using the technology to deliver value faster.

Because ultimately that's what it is: improving business productivity and decision making. And we believe HPE has built, engineered, and now provides an integrated offer that addresses all those needs. Now, let's shift gears for a moment.

We continue to see increasing concerns around the massive amounts of power and cooling required. Because in addition to being expensive, AI is also a power hog. And so whether it's power or cooling, everything that we know has to change. And we, HPE, have a lot of experience transforming that type of infrastructure. You are leading this with the team; can you elaborate a little bit? So as we push the boundaries of computing power in this generative AI era, the traditional methods of cooling systems just don't cut it.

These Gen AI systems consume massive amounts of power and are delivering incredible computational capabilities. But doing that produces a huge amount of heat. And so the heat that's dissipated in these systems is increasing very, very rapidly. Managing that is critical to delivering performance, efficiency, and reliability. By making the shift to using a liquid medium to draw heat away from the accelerators, processors, and other components in these systems, we can get to a far greater efficiency than can be achieved by using air.

And that results, for everybody, in lower energy consumption, Antonio, and a more sustainable data center footprint. Neil taught me a lesson using a simple analogy. If you burn your finger, what do you do? Do you blow on it, or do you put it under water? You're probably going to put it under water to cool it down faster. This is why the transformation we see today is amazing, because we have to think about the entire data center infrastructure totally differently. HPE is a leader in direct liquid cooling across both servers and networking, because, and I don't think the audience understands this, when we cool systems it's not just the server or the silicon.

We actually cool the entire infrastructure. And we have more than 300 patents in this space. So maybe you can elaborate a little bit more on how we're thinking about that at this inflection point, and on some of the amazing announcements we made not long ago. Well, as you say, Antonio, when you burn your finger, you don't blow on it to cool it. That seems like a ridiculous idea. You don't stick your hand in the fridge or the freezer.

You use liquid, you run it under a cold tap. And why do you do that? You do that because the liquid transfers heat much more effectively than air does. And it works just the same in a data center environment. That liquid cooling draws the heat away much more efficiently.

And it also simplifies the rest of your cooling equipment in your data center. That becomes absolutely critical when you're dealing with the thermal demands of generative AI workloads and these ever more powerful technologies that are going to demand that cooling. And it's also the cooling technology that's helped us achieve seven of the top 10 most powerful computers in the world on the Top500 list. By approaching this with 100% fanless direct liquid cooling, not a little bit of liquid cooling with some fans and air cooling still in place, but 100% fanless direct liquid cooling, we can eliminate the need for fans and the energy they consume, using cold plates throughout the entire system, including, as you say, Antonio, the switching but also some of the local storage.

When you think about the switching, it's more than just being able to cool it with liquid that matters. You also make choices to design that fabric for energy efficiency. And by using copper connections across our Slingshot network architecture, we can reduce the power required for running the network by at least 50% compared with fabrics that use optical networking connections. Why do you care, why does this matter? Because it means that more of the energy that you're consuming from your utility is actually doing useful work.

You're wasting much less energy cooling your infrastructure. You're wasting less energy running your infrastructure, so the energy you're consuming in your business is translating much more efficiently and much more directly into the work you're doing and giving you a much lower environmental footprint. But it's not just the opportunity to reduce energy consumption by applying liquid cooling. It's also the ability to reduce space.

Because when you go to this type of architecture, we can prove we can reduce space by 50%. So in general, it's not just the technology benefit, but also the carbon footprint benefit. So from our vantage point, eventually this needs to be deployed at scale to meet sustainability targets. I think this is such an important point, Antonio. It behooves all of us to maximize the efficiency with which we're using energy. If you add up all the data centers in the world and the energy they're already consuming, grouped together they're bigger than many, many countries.

And so there's going to be increasing pressure on energy supply, and it's going to behoove everybody to make the most efficient use of it that we can. And 100% fanless direct liquid cooling is the best way to do that. If you compare that approach to air cooling a system, you can reduce the wasted energy in cooling by up to 90%. If you compare it even to a hybrid liquid-cooled environment, such as some of our competitors are providing, you can reduce it by 37% when you go to 100% fanless DLC. That reduces your utility costs for the energy.
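
To put those percentages in concrete terms, here is a toy calculation using the reductions Neil cites (up to 90% versus air cooling and about 37% versus hybrid liquid cooling); the baseline cooling energy is a hypothetical number chosen purely for illustration.

```python
# Toy illustration of cooling-energy savings. The baseline is hypothetical;
# the 90% and 37% reduction figures are the ones cited in the keynote.
baseline_air_cooling_mwh_per_year = 1_000   # hypothetical cooling energy for an air-cooled hall

fanless_dlc = baseline_air_cooling_mwh_per_year * (1 - 0.90)
hybrid_dlc = fanless_dlc / (1 - 0.37)       # fanless DLC is ~37% below hybrid DLC

print(f"air cooled:  {baseline_air_cooling_mwh_per_year:>7,.0f} MWh/yr")
print(f"hybrid DLC:  {hybrid_dlc:>7,.0f} MWh/yr")
print(f"fanless DLC: {fanless_dlc:>7,.0f} MWh/yr")
```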

It obviously reduces the carbon footprint for the generation of that energy. And it's super important for driving forward in generative AI as sustainably as possible. Neil, while we're on the subject of sustainability goals and maximizing efficiency, of which here in Europe, obviously, there is a heightened awareness, will you please share some of the thinking about data center capacity and how we optimize the space? So I talk to a lot of customers who have extended and extended the lifecycle of computing equipment in the data center. That was something that many organizations did during the pandemic, and it's been an ongoing trend. And people do that because they believe that's the most responsible thing to do, run that infrastructure longer. But it actually turns out that older infrastructure is incredibly inefficient.

Uptime did a whole set of research where they found that 40% of all servers are now six years old or more. But they're consuming 66% of the power while doing 7% of the work. And when you look at that inefficiency, and you look not just at the economics on that energy consumption, but on the carbon footprint associated with its generation in many places, there's a real opportunity to refresh that infrastructure.

You reduce the energy consumption in an energy-constrained world, improve the economics, and, even taking into account the carbon footprint of the new equipment, still come out ahead relative to the carbon footprint of the energy that was being consumed before. So think about our latest ProLiant Gen11 portfolio, which delivers record-breaking performance across many enterprise workloads and has been engineered for this hybrid world that we're all living in. One ProLiant Gen11 server can replace 11 Gen8 servers, those six-year-old or older servers. It gives you all the performance that you had before, but can reduce your power consumption by 90%. And it's not just about the sustainability benefits of doing this, it's also the other benefits that you get when you move away from older systems. Those older systems, from anybody in the industry, are almost certainly not delivering the most advanced security features, and they generally don't have the latest manageability. So ultimately, across your organization, even beyond the energy and sustainability benefits, there are efficiency gains to be had by refreshing that equipment and unlocking capacity in the data center to deploy for other purposes, like some of this energy-hungry generative AI infrastructure that has to be deployed where the data is, which is in the data center.
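
Using only the figures Neil quotes, an 11:1 consolidation and roughly 90% lower power, here is a quick worked example of the refresh math; the per-server wattage and energy price are hypothetical assumptions.

```python
# Refresh arithmetic using the 11:1 consolidation and ~90% power reduction
# cited in the keynote. Wattage and energy price are hypothetical.
old_servers = 11
old_watts_each = 500            # hypothetical average draw of a six-year-old server
price_per_kwh = 0.25            # hypothetical energy price (EUR/kWh)
hours_per_year = 24 * 365

old_kwh = old_servers * old_watts_each * hours_per_year / 1_000
new_kwh = old_kwh * (1 - 0.90)  # one Gen11 server replacing all eleven

print(f"before refresh: {old_kwh:,.0f} kWh/yr")
print(f"after refresh:  {new_kwh:,.0f} kWh/yr")
print(f"annual savings: ~EUR {(old_kwh - new_kwh) * price_per_kwh:,.0f}")
```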

This is quite interesting because traditionally we look at CapEx deployments depreciated over a long period of time, and as the demands of the workloads change, some workloads move out and some workloads come in, but those older systems are not energy efficient. So there is an opportunity to reduce the entire bill, get more sustainable, and free up space for new accelerated computers that will need more power. There is a recipe for that, and I think our Gen11 does that job extremely well, in addition to being managed in a hybrid environment with our GreenLake cloud. So now, Fidelma, we know AI is a data-intensive workload, we have spoken about that, and enterprises often struggle with where to process that data, where that data needs to gravitate and how it needs to be processed.

Why is a hybrid approach such a critical area for managing AI workloads, and then ultimately how networking
