Dell Tech World Day 2 Keynote: Embracing New Potential with Jeff Clarke and guests
Please welcome Jeff Clarke. Good morning everyone, and welcome to day two of Dell Technologies World. It's good to see everybody again. And I look handsome. Well,
clearly ChatDTW was given some poor parameters on what handsome is. But I have hope that we can tune the model for next year so it can do this for me and I don't have to be here. I know the marketing team is now cringing at that, but let's start over. Welcome to day two, our day for the techies. We're gonna have fun this morning.
We're gonna be a little futuristic. We're gonna be a little technical. We're gonna dabble in some architecture. Hopefully I'll have a surprise or two, and then we're gonna tie it all together by making it real over the next 60 to 65 minutes. And when I think about what's happened since we were last together in the workshop, it's been a heck of a year, a rollercoaster in many, many ways.
I think about a year ago: shortages then, excess supply now. I think about rising interest rates and rising inflation rates. We were in a high-growth market no more than 365 days ago, and now we have a slowing economy globally, escalating geopolitical tensions, remote work to now hybrid work. Through it all, there was one constant, and that constant was innovation.
And innovation has thrived at Dell. Quite possibly - let me strike that - this has been our greatest year ever innovating in this company. Period. With more than 120 new products launched in the last year, 30 infrastructure products launched in a 13-week period, Project Alpine, Project Frontier. You heard yesterday about our expansion of our APEX portfolio with Compute, Managed Device Services, Cyber Recovery, storage for public cloud, and many, many others. Over 2,000 new features across our storage portfolio, the increased capabilities and security with Cyber Vaults in all three hyperscalers, Zero Trust advisory services, and PowerProtect Data Manager, and first to market with the next-generation AMD and Intel CPUs in our 16th-generation servers, which include a purpose-built portfolio around artificial intelligence with products like the XE9680. This past January at CES, we had this wonderful event where we won over 60 awards, many best-ofs, with our Alienware M16 and M18 gaming notebooks, and best-in-class distinction with our UltraSharp 32-inch monitor. An exciting time.
And some personal pride: OptiPlex - I was part of the original team - celebrated its 30th birthday, and Precision, another point of personal pride, celebrated its 25th. And as you saw yesterday, our Latitude products have never been better. Last year was a breakout for our telecom business with the release of infrastructure blocks and a series of telecom servers with the XR8000, the 7620, and the 5610.
And we've been leaders in the environmental area for a long time. We take this stewardship very, very seriously. Last year alone, we diverted more than 500,000 pounds of ocean-bound plastics that made their way into our products and packaging.
If I think about what we've been able to do, with 2,400-ish patents last year alone: our company was granted 2,445 patents in calendar year 2022. To put that in perspective, that's just behind Intel and ahead of Apple, Microsoft, and Amazon. Worldwide,
our portfolio includes 28,738 patents as of today. And then lastly, another point of personal pride this last year has been our supply chain, which has done an incredible job - particularly through this pandemic era. And we're gonna make it better. We're investing to digitize it. We're investing to build more resiliency. And we'll do that by building greater geographic diversification across our global supply chain. Simply put, unmatched breadth of innovation in our industry.
No one can come close to what we've done in the last year, hence our greatest year ever. And I hope you can tell a little bit of gleam and smile. I couldn't be more proud of our team and the work they've done.
It was an extraordinary year, and we should give them all a round of applause. And for you, our customers, you can count on us to innovate, lead, and help you on your digital journey. We're here for the long run, as you can actually see. Another way to think about this - and I wanna bridge into what you heard yesterday. Yesterday we talked about clouds, and we talked about clouds everywhere: public clouds, on-prem clouds, edge clouds, clouds in co-location centers. And there was one recurring theme: how do you get the right workload in the right cloud? How do you optimize that? We characterize this as the multicloud world, and for us and for you, our customers, we think the multicloud world has to have four distinct characteristics for you to operate in the way that you want. It's gotta be agile,
it's gotta be elastic, it's gotta be dynamic, and it's gotta be easily consumed. In other words, it requires a software-defined infrastructure orchestrated to behave as a single distributed system, empowering developers, users, data and applications. Today, we're gonna talk about multicloud by design, and we're gonna dive into the architecture we've developed to deliver a consistent experience across many different clouds, with enterprise-grade capabilities wherever your applications and data are, at a predictable and best cost. The reality is, from our vantage point, to make this multicloud world work, it all starts with your data.
We believe that if you get your handling of the data right, everything else - from access, to tools, to choice of cloud, to your ability to control and manage your costs - becomes far more achievable. As a result - you heard Chuck and Michael talk about this a little bit yesterday - our multicloud architecture is built around a common storage layer, a substrate, if you will: this connective fabric across clouds that's going to bring this cloud world together. So, it connects clouds, it connects workloads, it connects data in a seamless and efficient way, bringing you data mobility between your on-prem assets and your public cloud storage in a way that you don't have to retrain your staff, and that minimizes the refactoring of applications. Think about that. We've built a connective tissue from public cloud to on-prem clouds. Chuck talked about this ground-to-cloud, cloud-to-ground, and this substrate is allowing us to do that.
This substrate is managed by a modern SaaS control plane, which enables centralized Dell storage software access, deployment, monitoring, discovery and lifecycle management across multiple clouds - be it block, be it file, be it object or protection storage - with consistent orchestration, telemetry and serviceability capabilities - tongue twister, sorry - for data, no matter where it's located. And in the future, we'll bring native AI insights to help you optimize workload placement, optimize cost, and actually bring a new capability called proactive service disruption avoidance - a mouthful - to make sure this doesn't fail. And if we
combine this with what we believe is happening in the world today - modern applications and data pipelines that are distributed - our architecture increases developer and operator productivity by uniting data and applications where it makes the most sense. This common storage layer is the foundation of what you heard about yesterday, APEX Storage for Public Cloud. This connective substrate is the foundation that we built everything you heard about yesterday on. So, we take our best-in-industry storage software assets - PowerScale, PowerFlex, PowerProtect - and we enable you to run each of those assets in the public cloud, managed by a consistent set of tools.
Managed by a consistent set of tools. So, bringing you the consistency that you've asked for, and not having to manage these different environments each in their own way. Developers get our enterprise-grade data services with their performance, flexibility, scaling, security and compliance capabilities for a richer, better public cloud experience. The SaaS control plane that Chuck talked about yesterday, APEX Navigator, enables deployment and automates provisioning and lifecycle management of your Dell storage software in the public cloud. It uses the substrate that we built to orchestrate mobility between your on-prem and public cloud environments, providing you the insights, the monitoring and the health across your entire Dell storage estate.
Across your entire Dell storage estate, no matter where it is. And then you heard us extend that yesterday, working with Microsoft, Red Hat and VMware, where we've built APEX Cloud Platforms, a series of fully integrated turnkey solutions that extend their public cloud experiences on-prem on our industry-leading PowerEdge servers. Guess what it's built on? The same common storage layer that I've been talking about is the foundation of our APEX Cloud Platforms. So whichever stack your developers prefer, the integration of this common storage layer in each one of those APEX Cloud Platforms makes it possible for you to place your workloads in an optimal location based on your performance requirements, freeing your developers from ever having to think about infrastructure or location. This is powerful,
really enabling developers the freedom they need to do what we want them to do, which is develop, and not have to worry about where it is or how to get it. This is multicloud by design. It's Project Alpine that we talked about a year ago, made real. I couldn't be more excited about the architecture we put in place, which we showcased yesterday, and our ability now to extend it and help developers across our customer base be even more powerful. Pretty exciting stuff. Maybe now
- you can't be on a stage in front of this type of audience and not talk about the most talked-about technology in our industry today: generative AI, powered by large language models. This isn't new news. Natural language processing has been around for a very,
very long time. However, recent advancements have these models passing an old benchmark - a 70-year-old benchmark - the Turing test. And what's happening now is that these models are game-changing and highly disruptive. What they can do is incredible and powerful.
And customers today, whether it's at the edge or in their on-prem data centers, are looking for ways to easily, easily deploy the infrastructure that meets their computational and data requirements, so they can actually use their own data and business context to maximize insight and outcomes - gathering what they have about their domains and their specific processes, and running it in a way that gains greater insight and drives better business outcomes for their respective businesses. I had the opportunity recently to explore this. I sat down with an old friend of mine that I first met 27 years ago, when we used to geek out talking about what were, at least back in that day in our careers, exciting things - polygons, fill rates, shaders, rendering - back when Dell and Nvidia were actually disrupting the proprietary RISC UNIX workstation marketplace. And I had a chance to sit down with Jensen. Please roll the video. Hi Jensen, thanks for joining me today. I really appreciate the time you've taken out of your busy schedule to sit down with me and talk about one of, if not the most, disruptive technologies that we've seen in our four decades. Think about that - four decades together. Generative AI.
Pretty amazing stuff, isn't it? Generative AI is a very, very big deal. Finally, for the very first time really since we've been talking about artificial intelligence, every company will be able to put AI to use to revolutionize their products and their company. So I'm very excited about it. Given what you and I do for a living, it has a chance to revolutionize the way we're gonna innovate going forward.
You and I, we've seen the PC revolution, the internet, cloud and mobile, but generative AI is, is a much, much bigger deal for several reasons. It's almost three things in one, really. The application is just miraculous.
It's easy to use and it does amazing things, as we've all experienced. And the second thing is that, as an API, because it's able to understand almost any language and any form of input, you can connect it to just about anything. The third thing is that it's a whole new computing platform, in the sense that it has a universal programming language - called human - and is incredibly flexible, with these emergent capabilities such that it's able to perform tasks either without training or with a little bit of fine-tuning. I think about our latest generation of 16G servers that we purpose-built for this AI world, with a range of offers that meets these needs, all powered by Nvidia GPUs.
Enterprises are working to deploy these models and workloads on-prem to make full use of their data. If you think about how they drive better insight and better outcomes for their business, they're saying: I need the context of my business, with this data and these models and tools, to really drive faster insights for my business. I assume you're seeing the same thing. Yeah, they have to do it on-prem because that's where their data is. They have to do it close to the edge because that's, you know, closest to the speed of light, and you want it to respond instantaneously. You also want it to be at the edge because, in the future, you want to have information from multiple modalities. The more contextual information we get, the better grounded the inference that we can make.
And so the ability to make those decisions as close to the edge as possible where the action is, where all of the data is and where the responsiveness could be as high as possible, is really essential. Couldn't agree more. I think it's spot on. And I think as customers work to deploy these models and these workloads locally on-prem, they're gonna face a new series of challenges. Whether that be issues around how to secure the data, how to make sure the data keeps its integrity so it drives high-value results.
Customers are gonna have to figure out, in the world that we live in, privacy and the regulations around the use of their data. This is the essence of all companies - domain expertise and proprietary data. And nobody could upload that into the cloud. Nobody could discover that on the internet.
It is your domain; it is your domain assets. We've been such great partners. I think about the work we announced not too long ago with VMware around virtualized GPUs for the AI environment. You and Michael sat down in January and launched our new generation, our 16th generation, of PowerEdge servers. Most recently we just launched our PowerEdge XE9680 with eight-way H100s from you. It is incredible. And on top of that, Jeff,
we have state-of-the-art AI foundation models that enable all enterprises to build their own custom models. So between our two companies, we have the infrastructure, the compute nodes, the networking, the computing fabric, the AI operating system, and the large language models that customers can adapt to their own domain, adapt to their own data, and teach to operate in the domains that they care about. The part that I'm really excited about is that we reinvented enterprise computing together. The last wave was about data centers for file storage and managing all of the employees' information.
The next wave is really about harnessing our domain expertise, harnessing our proprietary data and transforming it into intelligence. The next generation is probably not gonna be called data centers. You know, you should probably call it AI factories. And all of our companies are built on intelligence. We're doing something really, really important together.
I couldn't agree more, and that kind of takes us to why we're here today, why we are doing this video. You were a great straight man. We're here to announce a new collaboration between our two companies, what we call between us Project Helix. This is a full-stack solution that enables enterprises to create and run custom AI models, built with their knowledge of their business, on our powerful infrastructure and software and Nvidia's accelerators, AI software and expertise. This is unmatched, second to none in the industry. Yeah, Jeff, we've worked on a whole lot of stuff together over the years. Over the last 30 years, we've built a lot of things.
Nothing is nearly this amazing, nearly this impactful. Every company is at its core about intelligence and for the very first time together, Project Helix, the work that our two companies are collaborating and working so hard on will help every company be an AI factory and be able to produce their intelligence, their domain-specific intelligence, their expertise, and then do it at light speed and do it at scale. And so I'm super excited about our work together. The new data centers, AI factories. I love it. Jensen, thank you so much. You're the best. I'm really excited about what we're doing with Project Helix, but equally important, maybe even more important is where we can go together with this.
It's an exciting domain. We've just scratched the surface. There's so much more we could do together. Again, thank you for our nearly three-decade partnership.
It's been great and I always love geeking out with you. You're the best. Thanks again. Thanks Jeff. There you go. Dell and Nvidia,
partnering again to change the game with Project Helix: our trusted high-performance infrastructure, like our PowerEdge XE9680 with eight Nvidia H100 GPUs, all built into Dell Validated Designs for any scale, wrapped with AI consulting services. Game-changing. Game on - time to make it real. And to do so, please help me welcome Kari Briski, VP of Software Product Management at Nvidia and an expert on generative AI. Welcome, Kari. Thanks for joining me today. Jeff, thank you for having me. Jensen sends his best to all of you. He wishes he could be here, but I am here and I am excited to talk to you about Project Helix.
Generative AI is exciting. You've heard the buzz, you can feel the buzz, but it might be a challenge for every enterprise to build a large foundation language model from scratch. Still, many enterprises need a hyper-personalized, large language model, or LLM, specific to their domain, trained on their data and reflecting their brand voice. Project Helix will give enterprises that jumpstart to incorporating generative AI into their business applications. For example, Jeff, I hear you like golf. I do.
Let's say there is a golf retailer called Bunkers and Hazards. Bunkers and Hazards. That's a pretty good name. Kinda like my golf game. Right? If it was like my golf game, it would be called the Beer Cart.
Beer Cart <laugh>. But in all seriousness, let's take a quick look at a before and after scenario of a general-purpose large language model versus a customized large language model. Say you ask a virtual assistant powered by a general-purpose large language model to recommend a new golf club. Jeff, why don't you give it a try?
Sure. I need a new driver. The Sledgehammer 10 driver has good reviews. See, the general-purpose large language model gives answers that are just fine on the surface, but it's not an expert on golf. It's not even recommending a Bunkers and Hazards product, and it's not personalized to your needs.
We can create a virtual assistant to better serve Bunkers and Hazards' customers, leveraging an Nvidia AI Foundations pre-trained model and the NeMo framework, along with the Dell PowerEdge XE9680 and PowerScale storage - Project Helix. Using that, we can condition a large language model to be a subject matter expert in the world of golf and in the Bunkers and Hazards brand voice, through a virtual assistant that we will call CaddyChat. CaddyChat. You gotta like that. But it really is gonna use Bunkers and Hazards' data to inform and train its model, isn't it? That's right. Virtual assistants - in this case, CaddyChat - powered by generative AI can do a lot. That's the beauty of foundation models.
They can serve a multitude of use cases across every single industry vertical. For example, it can help with marketing tasks like generating a catalog of products with descriptions and images, creating website and social media copy, and even automating emails. When you connect your large language model to your knowledge base, it can fetch up-to-date information and summarize customer service engagements such as purchase history, issues and returns. Combining customization and factual data with a pre-trained foundation model results in a better experience and higher ROI. In fact,
even the voices of these virtual assistants are generated by an Nvidia pre-trained model. So now let's try it with CaddyChat. Sure. I need a new driver. Of course. Jeff, I remember you purchased a PowerStrike ZR-5. I bet we can find one that will be even better.
What are you looking for specifically? That'd be funnier if it was Bill Murray's voice. But nevertheless, how about: I need a new driver with an eight-and-a-half-degree loft for a swing speed of 110 miles an hour, with a spin rate less than 2,000 RPM. Thank you for the information, Jeff. Considering your preferences and the characteristics of the PowerStrike ZR-5, I have a couple of recommendations for you. See, we are using a large language model that is customized with the golf domain, the store's data, and your personal preferences. The suggestion is so much more personalized and accurate, and you get a more natural interaction. So let's hear those recommendations. I suggest the Helix Thunder X1000 driver with 8.5 degrees of loft
and a stiff flex shaft. This driver features a lightweight design with low torque for faster swing speeds and greater forgiveness due to its larger sweet spot. Nice, that rolls off the tip of your tongue. I guess I'll try the Helix Thunder X1000. Your custom Helix Thunder is ready.
Wow. Only in Vegas would they hand somebody like me a stick on a stage. Uh, but if you think about it, what we tried to demonstrate here is that adding your data and business context makes generative AI a powerful tool for your organization. In fact, highly disruptive.
Absolutely. Jeff, let's drive this home. Project Helix will give businesses the tools they need for generative AI, turning their data centers into AI factories, while maintaining control over their sensitive, high-value data. Thanks for showing us how generative AI can help and work and really change the game for our customers.
You're welcome. Thanks for inviting me. Thanks for having me. I'll leave this with you. I'm sure they don't want me to have that. Everyone, Kari Briski from Nvidia.
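The CaddyChat demo combines a pre-trained foundation model with retrieval from the retailer's own knowledge base and the customer's purchase history. A minimal sketch of that retrieval-grounded pattern might look like the following; all product data and function names here are hypothetical, the retrieval is a naive keyword overlap standing in for real vector search, and the model call itself is left out.

```python
# Minimal retrieval-augmented sketch of the CaddyChat idea. Data and names
# are hypothetical; a real system would use an actual LLM and a vector store.

KNOWLEDGE_BASE = [
    "Helix Thunder X1000 driver: 8.5 degree loft, stiff flex, low torque.",
    "PowerStrike ZR-5 driver: 10.5 degree loft, regular flex.",
    "Return policy: clubs may be returned within 30 days.",
]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]


def build_prompt(query: str, history: str) -> str:
    """Ground the model in store data plus the customer's purchase history."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return (f"Store data:\n{context}\n"
            f"Customer history: {history}\n"
            f"Question: {query}")


prompt = build_prompt("I need a new driver with 8.5 degree loft",
                      "previously bought a PowerStrike ZR-5")
print(prompt)  # the customized model would answer from this grounded prompt
```

The "before and after" difference in the demo is exactly this grounding step: the general-purpose model answers from the question alone, while the customized assistant answers from a prompt that carries the store's data and the customer's history.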
Now we're gonna move to the edge. Infrastructure - if you think about it, compute and storage resources - is being pulled to where the data's being created, to minimize latency with a predictable quality of service and to fuel new workloads and AI at the edge. And given that data is very expensive and complicated to move, the result is that we have massively, massively distributed architectures that have been difficult to manage, automate, and provision. Then you add,
kind of what I call the 'Triple Lindy' test here: edge use cases are growing exponentially, devices are deployed in a variety of environments, many without the IT resources to install them; add some unreliable connectivity, and to finish the list off, one's gotta secure the data. Last October, you might remember, we announced Project Frontier to solve these challenges, and today we're gonna close out Project Frontier - and I couldn't be more pleased that we're making it real - with the launch of Dell NativeEdge, a first-of-its-kind edge operations software platform built to simplify edge operations, optimize investments, and secure your data like never before. Fundamentally, Dell NativeEdge accomplishes two things. First, it enables centralized deployment and lifecycle management in a zero-touch manner with a Zero Trust infrastructure. And two, the software platform enables centralized deployment of containerized and virtualized applications across your entire edge estate.
Dell NativeEdge is multicloud by design, enabling you to deploy your applications in new and existing environments. Let's take a look at how one industry is taking the edge to deliver a better experience for its customers. Shelves. Shelves that know what taste buds want. Shelves smart enough to see, sense, react, restock. So Caramel Swirl is always there for the taking.
To tell us more about Dell NativeEdge, please help me welcome Gil Shneorson, our senior vice president of Edge Solutions. Welcome, Gil. Before we get started, I need to tell you a little bit about Gil.
He's an interesting character, an engineer by trade. He's worked on many of our groundbreaking projects, programs, and new businesses where we needed someone that was maybe a little unorthodox. What do you think? Oh, I know - maybe a little of the right stuff with the right attitude.
That's the guy we've asked to build Dell NativeEdge. You know... Did I surprise you? Did you? Yes, you did. <laugh>.
Lost for words. If you know Gil, this is a first. <Laugh> You know, I had to create this whole video just to get some free ice cream. Yeah, shocker on that one. And what that video really shows is the fact that retail has progressed a long way. It's advanced in many ways and is far more integrated than people imagine. That's true. These technologies deliver a far better experience for their,
their customers. The data that they're now able to capture enables them to predict exactly which items need to be stocked at any given time, allowing them to optimize their supply chain. Now, you are Mister Supply Chain, Jeff, right? So you can see how this allows them to carry minimal inventory, reduce spoilage, and ensure that every customer gets what they want, when they want it. Put together, solutions like this allow stores to dramatically reduce cost while optimizing the customer experience, increasing both profitability and customer loyalty.
Right. If you think about it, how does our Dell NativeEdge platform help our customers deliver better business outcomes? Well, everything we saw in that video was being driven by workloads purpose-built for and deployed at edge locations. Also, the customer experience you saw was in one of hundreds of stores in a chain that all need to deliver a very consistent experience. From the cow... From the cow to the factory. From a ship connected via Starlink to the distribution center. And finally, from a 5G-connected truck to the hands of that employee who puts that ice cream on the shelf.
Every point in this chain benefits from sensors, cameras, compute, connectivity, and edge-native applications, both on-premises and in the cloud, that all have to be deployed, lifecycle-managed, and secured. This is not your typical IT environment. Can you imagine how challenging it is to deploy and manage all of that? Well, this is where the real world of operations technology intersects, or runs into, the digital world of information technology. Now add thousands of devices in many, many locations, and for extra credit, add how we need to be responsive and fast. How does NativeEdge become an enabler for our customers? Well, the sheer number of devices, workloads, and locations requires that our solution to this problem be affordable, really simple to deploy and manage, highly secure, and able to be deployed and managed across hundreds or even thousands of locations. NativeEdge is the first platform that allows anyone at an edge location - even you - to simply plug in network and power and just walk away.
This device will securely onboard itself, apply any updates, and provision the applications necessary to do its job. In this retail example, we chose a combination of devices based on workloads. In any given store, we may require a ruggedized PowerEdge XR4000 in the office to do the processing for analyzing the data and the data streams in the store.
We need an industrial PC like OptiPlex for point of sale, and we need gateways to connect to the Smart Shelf application. And there are devices like this in every industrial setting, aren't there? Absolutely. Now, after you select the device, or the right hardware, we assemble and digitally sign it in the factory, and then we drop-ship it directly to any location. Now this is the interesting part: while it's in transit, IT is able to claim entitlement to that device using a Dell-provided digital voucher and configure a blueprint to predetermine what will be deployed on this device. To accomplish this, by the way, we had to be the first in the industry to commercially offer secure device onboarding at scale. That's awesome. If we double-click on that,
one of the opportunities is: how do we help our customers ensure their configurations are updated and their applications are provisioned correctly? Great question. Take a look at this NativeEdge dashboard - everything is centrally managed and way more automated than before. NativeEdge uses blueprints to deliver applications to any device, at any location, at scale. We connected a Gateway 3200 to the ice cream freezer. You can see it's pre-onboarded; it's gonna stay like that until it's powered on for the first time. At first power-on, the device is validated - certified that it has not been tampered with - and then the operating environment is installed.
You can see it's now provisioning the device. And when done, this device is provisioned. What this means is that it's serving a dial tone of containers and VMs. Right. From there, NativeEdge automatically pushes infrastructure updates and the application blueprints that you have predetermined should run on this device, including any supporting applications that may be running on other edge devices, your data center, or the cloud of your choice.
So we can deploy and lifecycle-manage a complete outcome. This would happen with any new device. And as you can see, this device is now online with both applications deployed and running.
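The zero-touch flow in this demo - a factory-signed device that is validated at first power-on, provisioned to serve containers and VMs, then brought online by a blueprint - can be sketched as a tiny state machine. Everything below is illustrative; none of these class, method, or application names come from the actual NativeEdge software.

```python
# Illustrative state machine for the zero-touch onboarding flow described
# in the demo. States mirror the narrative; this is not real NativeEdge code.

class EdgeDevice:
    def __init__(self, model: str):
        self.model = model
        self.state = "pre-onboarded"   # factory-signed, waiting for power-on
        self.apps: list[str] = []

    def power_on(self) -> None:
        """First power-on: attest the device has not been tampered with,
        then install the operating environment."""
        assert self.state == "pre-onboarded", "already onboarded"
        self.state = "validated"       # attestation passed
        self.state = "provisioned"     # now serving containers and VMs

    def apply_blueprint(self, apps: list[str]) -> None:
        """Push the applications predetermined for this device."""
        assert self.state == "provisioned", "device not ready for workloads"
        self.apps.extend(apps)
        self.state = "online"


# Hypothetical example mirroring the freezer gateway in the demo.
freezer_gw = EdgeDevice("Gateway 3200")
freezer_gw.power_on()
freezer_gw.apply_blueprint(["smart-shelf", "freezer-telemetry"])
print(freezer_gw.state, freezer_gw.apps)
```

The point of the ordering constraints is the "just walk away" property: nothing at the site requires manual steps between plugging the device in and the blueprinted applications running.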
And we can also take a more detailed view of the device and the estate. But we need all of this to connect to and monitor the ice cream freezer, so take a look at this live dashboard that we just deployed as part of this blueprint. So let me see if I summarize correctly: a single IT administrator can manage a fleet of devices in multiple locations, leverage blueprints to deploy and manage all of this hardware and, more importantly, these edge-native applications, remotely, all from a single pane of glass? Absolutely. With NativeEdge, we simplify the deployment and day-two operations of infrastructure and workloads across the entire edge estate.
Through tagging and workload definition, we can create batch operations that allow us to work on multiple devices and locations in bulk, and we ensure secure operation with Zero Trust security capabilities. Now, we talked about a cold chain example - we deployed an application to manage the temperature of the ice cream. The next thing you might wanna do is machine-based loss prevention. And so with NativeEdge, you can just deploy another application using blueprints on the same infrastructure, or on existing infrastructure, to optimize your investment. So it's as simple as deploying a blueprint to hundreds, well, thousands of locations. And by the way, though we discussed this as a cold chain, the same steps are applicable for energy, or manufacturing, or transportation,
and many more, across a wide variety of environments and connectivity scenarios. To conclude: Michael Dell talked yesterday about how an idea forming in one's head may spawn an ecosystem of innovation. So maybe while I'm talking, some of you were thinking, well, what does it take today for me to go from an idea all the way to deploying a complete outcome at my edge? And so hopefully, if you compare that to the experience I just showed you, you can start thinking and seeing what we see, and why we are so excited to bring our own innovation to all of you. Awesome. I just love it, Gil. When you see our technology in action, turning from a concept into a real product offering, it's just so exciting. Thanks for joining me today to talk about your baby.
Jeff, can I go off script for a moment? Do I have a choice? It's just that we have such a great audience, and I know many of the Dell team members are watching us live. I just want to thank, first of all, our design partners and design customers that are sitting here with us today, helping us design and bring NativeEdge to market. So thank you very much. And all of the Dell team members, hundreds of them, who have been working for months and months, tirelessly, to bring this to all of you. So thank you very much, and thanks for including me. Great call-out, Gil. Thanks for joining me. Hey,
and go easy on the ice cream <laugh>. So next up, Zero Trust. I've mentioned this several times already. It was recently pointed out to me that Zero Trust is an awful marketing name. And I guess you can tell us engineers named it: an architecture that trusts no user and no device, authenticates everything, and gives permissions only as specified, and the way we summarized that was Zero Trust. I guess we call 'em like we see 'em, or we see 'em like we call 'em. That really doesn't make a difference. And clearly,
I would have had a short-lived marketing career, so I guess it's pretty good that I stayed as an engineer. But to help me really talk about our plans around Zero Trust, I've invited a new colleague of ours, Herb Kelsey, to join me on stage. Herb, I don't even know what his title is, Industry CTO - Government. Here's what I call him - Chief smart guy of all things Zero Trust. He's gonna
join me on the stage. This is Herb's coming-out party, so to speak, at Dell. He has spent the last 30 years building architectures and large-scale information systems at the highest levels of security defined by the United States government. He has supported the intelligence community designing secure clouds and infrastructure, and now he's on our team. And Herb, like me, is a truck guy. And to be honest, when I saw Herb yesterday, I got a little truck envy. He drives a big truck; by comparison, I drive an F-250, and Herb drives a really, really big truck. Here he comes now...
I had to. Love it. Welcome Herb. Thanks for joining me today. Absolutely. You gotta tell the audience what in the hell is in that truck? So that truck is our 5G connected Zero Trust protected Mobile Operations Center.
That's a mouthful. Well, it represents our commitment to have Zero Trust in any environment that we need from a data center down to that tactical edge deployment. And I just mentioned making fun of Zero Trust. But when we talk about Zero Trust architecturally inside our engineering meetings, what do we mean by that? So, what we mean is that previous security paradigms had us protecting the edge of our environment, our perimeter, but not really inside. And what we found is that the only way that we could actually protect our data was to not trust that every user that was inside, or every application that was inside or device that was connected should be accessing that data. So we verify very,
very often as we go through the use cases within that enterprise. And if we think about Zero Trust and that definition, what are we at Dell doing about it? Well, what you've heard over the last day is that we're embedding Zero Trust into several of our applications, products, services, and solutions, like NativeEdge. But to actually get to an end-to-end Zero Trust capability, that's a complex task for most of our customers. There are a lot of products to integrate; there's a lot to take care of. So it takes a new architectural approach, and we have to find a way to get people through that full Zero Trust journey. So today I'm happy and excited to announce Project Fort Zero, which is a Dell industry initiative with more than 30 partners that we're using to accelerate the path to advanced Zero Trust.
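The verify-everything model Herb describes, with no implicit trust for users, devices, or applications already inside the perimeter, can be sketched as a deny-by-default policy check. This is a minimal illustrative sketch in Python; the names (`Principal`, `authorize`) are hypothetical and not part of any Dell product.

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    user_id: str
    authenticated: bool = False
    device_verified: bool = False
    permissions: set = field(default_factory=set)

def authorize(principal: Principal, resource: str, action: str) -> bool:
    """Deny by default: every request re-verifies identity, device
    posture, and an explicit grant for this exact resource and action."""
    if not principal.authenticated:     # no implicit trust for insiders
        return False
    if not principal.device_verified:   # the connecting device is checked too
        return False
    # Permission must have been granted explicitly, per resource and action.
    return (resource, action) in principal.permissions

alice = Principal("alice", authenticated=True, device_verified=True,
                  permissions={("sales-db", "read")})
print(authorize(alice, "sales-db", "read"))    # True: explicit grant exists
print(authorize(alice, "sales-db", "write"))   # False: write was never granted
```

The point of the sketch is the default: nothing passes unless every check passes, which is the opposite of the old protect-the-perimeter model.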
And we're doing that by absorbing the integration burden into Dell. Within the year, we're gonna have a full end-to-end Zero Trust solution and it's gonna be validated by the U.S. Department of Defense. Awesome. You and I regularly meet on this. How far along are we with Project Fort Zero? So this journey started about a year ago.
We took senior executives in to meet the DoD CIO; he had his staff there, and he spoke about his highest priorities, Zero Trust being the highest. So once we understood that, we committed to help them accelerate their adoption of Zero Trust. We started at a Center of Excellence in Maryland called DreamPort.
DreamPort is U.S. Cyber Command's integration facility that is open to industry to come in and innovate. So, once we established ourselves there, we also had to establish an ecosystem of partners, of technology partners.
We have over 30 partners that are involved with us to build this end-to-end Zero Trust system. So we are going to go through the 45 capabilities that the Department of Defense requires of us. We're gonna meet the 152 activities that we have to perform in order to be considered Zero Trust. And then we're going to go out and defeat the adversaries that are attacking our systems.
Awesome. Toughest question I'm gonna ask all day: who's leading this? Oh, softball. We are! We are committed to actually taking the test that the government's gonna provide so that we can be certified as an advanced Zero Trust system. Awesome. And what comes next, as we get through that first phase and certification? The opposite of the softball question. We're gonna be evaluated by the NSA, which is gonna do penetration testing and validate that all of our use cases are functioning properly.
And we're gonna get that stamp of approval from the Department of Defense that says we are advanced Zero Trust. That's awesome news. I appreciate you joining me today. I've got the keys to the truck. Where can the audience find the truck? Ah, the truck. The truck. See, I knew you might get 'em. I'm not gonna get those back. So,
the truck you'll be able to find downstairs in the solution center. You can see here we've got it lit up. We've got multiple use cases. We've got it being followed by a drone. We've got our security operations center. So if you come down, you'll be able to see as much about Project Fort Zero as you like. And about half of those 30 partners are downstairs and happy to interact with each of you.
I have one last question. Can that thing help me track elk? Absolutely. Thanks for joining me, Herb.
I appreciate it. Thank you very much. So let's see. Innovation, check. Multicloud and common storage layer, check. Generative AI, check. Dell NativeEdge, check. Project Fort Zero, check. Like last year, it's time to make this real and demonstrate that this stuff really works. And for that, I need our ultimate practitioner to join me. One of you,
our Chief Digital Officer and CIO, Jen Felch. I agree, ChatDTW. Please help me welcome Jen to the stage.
I am a Jen, Jen AI and I'm already here. No, Jen Felch. You want me to change my name to Jen AI Felch? <Laugh> Damnit. Uh, we have some model tuning to do here.
No, Jen with a J. Please welcome Jen Felch to the stage. Thank you. You know, last year you crushed it being with your peeps. I'm so excited. Well, you bring this incredible, I think, perspective as a user.
You bring that end-user point of view to the stage, and I know our audience appreciates that. What do you think so far? I love it. I love it. So many things to help simplify our environment, and I'm kind of thrilled about Project Helix. Me too. So, time for me to leave and turn it over to you and your peeps. But before you go, I think you need a new t-shirt. So, let's see, let's see what we can do for you.
You know I like a good t-shirt. I'm just gonna type into Dell-E-2: create a t-shirt with some Jeff themes, like Texas, technology, edge, and the future of work. And then let's see what happens. I think you can pick it up backstage. So you can get the hell off the stage. Yeah, that's a graceful way for me to say I'm ready.
I can't wait to see it. I'll join you when you're done. Thanks. Thank you. Well, hi everyone. It is great to be here today and see all of this awesome, really customer-driven innovation.
Jeff covered a lot of ground on how we're working on amazing new technologies, and I'm excited about all of them. From the edge to the multicloud solutions and of course Gen AI. You know, as IT leaders, we're presented with a lot of great technology and a lot of options that can either work in our favor or just add to the complexity. So, we need to be able to guide our teams through those decisions and empower them in their work. So, in our IT organization, known as Dell Digital, we've standardized and automated many, many of our processes with the aim of simplicity, agility and control.
And while we want options to run workloads anywhere, we need to approach the multicloud by design so we can avoid complexity and actually enable those options. There are six factors that we always consider when we're evaluating where we want to run a workload. First, we look at the performance characteristics of the workload itself. Is it bursty? Is it steady state? Will the usage fluctuate greatly throughout its use? Second, what are the workload dependencies from the application ecosystem? Applications tend to work together, and understanding that is really important when we're considering where to put that workload. Third, do we have data residency requirements, or unique compliance or security requirements for the workload, that may dictate it needs to run in a very specific location or country? Fourth, is the workload real-time, or is it more time tolerant, so that delays won't impact the experience? Fifth, cost differences, of course. And sixth, we also look at the environment: what's the environmental impact of these decisions? These are the factors that we consider when we think about how to optimize the multicloud environment with a workload-driven approach. Developers, application developers, have an enormous influence on the infrastructure, especially when they're putting applications together.
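The six factors Jen walks through can be pictured as a simple placement function: residency acts as a hard constraint, and the remaining factors contribute to a score. This is a toy sketch with made-up field names and weights, not how Dell Digital actually implements it.

```python
def place_workload(workload, locations):
    """Choose a location using the six factors: performance profile,
    dependencies, data residency, latency tolerance, cost, and
    environmental impact. Residency is a hard constraint; the rest
    are weighted into a score."""
    best_name, best_score = None, float("-inf")
    for loc in locations:
        # Hard constraint: data residency / compliance requirements.
        if workload["residency"] and loc["region"] not in workload["residency"]:
            continue
        score = 0.0
        # Bursty workloads favor elastic capacity; steady ones favor efficiency.
        score += loc["elasticity"] if workload["bursty"] else loc["efficiency"]
        # Co-locate with the application ecosystem it depends on.
        score += 2.0 if loc["name"] in workload["dependency_sites"] else 0.0
        # Real-time workloads penalize latency.
        if workload["real_time"]:
            score -= loc["latency_ms"] / 10.0
        # Cost and environmental impact.
        score -= loc["cost_index"]
        score += loc["green_score"]
        if score > best_score:
            best_name, best_score = loc["name"], score
    return best_name

workload = {"bursty": True, "real_time": True, "residency": {"eu"},
            "dependency_sites": {"frankfurt-dc"}}
locations = [
    {"name": "frankfurt-dc", "region": "eu", "elasticity": 1.0,
     "efficiency": 2.0, "latency_ms": 5, "cost_index": 1.5, "green_score": 1.0},
    {"name": "us-cloud", "region": "us", "elasticity": 3.0,
     "efficiency": 1.0, "latency_ms": 40, "cost_index": 1.0, "green_score": 0.5},
]
print(place_workload(workload, locations))  # frankfurt-dc: residency rules out us-cloud
```

The interesting design choice is treating residency as a veto rather than a weight; compliance requirements generally cannot be traded off against cost or performance.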
Those interdependencies can often dictate where an application, where a workload, can run. So it's important to help developers not only be productive, but be informed about the decisions that they're making. One of the common challenges for developers, and it's industry-wide, is that they spend about 20% or less of their time actually writing code. The rest of their time is spent waiting, or doing manual tasks, or, as we heard yesterday, dealing with the drudgery of their jobs. Last year we shared how our internal Dell Digital cloud portal helps us provide an awesome developer experience internally, with automation and self-service capabilities. Over the last four years,
it has evolved such that our developers now spend 70 to 75% of their time on technical tasks, including writing software. And not just that. Thank you for the applause. I'm pretty proud of that too. I think it's awesome. And in just the last 12 months, since we were last here, we've produced an additional 27% increase in our user stories. That's how we measure actually producing things for our business,
the outcomes. So it has generated a lot of interest. Oftentimes when we talk about what we're doing internally, people wanna know, can we get that? And so I'm really happy to share that that environment has now been productized. So, little drum roll, little drum roll.
Our Managed Developer Cloud launched just two weeks ago. It gives developers self-service access to virtual machines or containers in an API-based cloud environment, with built-in infrastructure as code. So I'm really thrilled about it, but we're not done yet. So,
earlier we heard Jeff and Jensen and Kari showcasing the immense potential of Generative AI, which honestly has unexpectedly become a priority for all of us this year. I'm sure we weren't planning for it last fall, when we weren't yet talking about it everywhere. And developers are among those who have a great opportunity to benefit from it. So we are starting to apply Gen AI to our own processes.
And because we've spent the last few years standardizing and automating them, it makes it much easier for us to build those capabilities directly into our workflows and into our CI/CD pipelines. For example, Pipeline Builder is our low-code, no-code platform where developers can manage their CI/CD workflows. Our developers can take advantage of shared knowledge with our infrastructure-as-code templates, which can be applied to those routine tasks: the drudgery of life. We can automate the entire process of creating and managing infrastructure in minutes with just a few clicks. Lots of code, but just a few clicks, and no manual code: you can just reuse the templates.
So let's look at an example of how Gen AI can help developer productivity when you have standard pipelines. Let's say a manager requests a list of issues that were closed in the last week. Instead of having to write a script or look it up manually, a simple natural-language query in Gen AI by the developer completes the task. Once done, it's easily forwarded on to the manager, freeing up the developer's time. And I know, the manager should probably just be able to do that themselves. So we'll get there too. Some of the other benefits are we
don't have to worry as much about unit testing when we get the tests automatically from AI. It's great for helping us to find defects, offering suggestions on fixes. We're talking about really great productivity gains built on these standard processes. Also, you know,
what developer loves doing documentation? Probably most don't. But we can use Gen AI to not only create the documentation, but provide information at your fingertips from what was created earlier. So it's a pretty awesome time to be in IT. And it's not just for developers.
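Going back to the pipeline example for a moment: the "issues closed in the last week" request that the Gen AI assistant handles for the developer boils down to a date filter like this behind the scenes. The issue fields here are hypothetical, chosen just to make the filtering logic concrete.

```python
from datetime import datetime, timedelta, timezone

def closed_last_week(issues, now=None):
    """Return the ids of issues closed within the last seven days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=7)
    return [i["id"] for i in issues
            if i["state"] == "closed" and i["closed_at"] >= cutoff]

now = datetime(2023, 5, 24, tzinfo=timezone.utc)
issues = [
    {"id": 101, "state": "closed",
     "closed_at": datetime(2023, 5, 20, tzinfo=timezone.utc)},
    {"id": 102, "state": "closed",
     "closed_at": datetime(2023, 5, 1, tzinfo=timezone.utc)},
    {"id": 103, "state": "open", "closed_at": None},
]
print(closed_last_week(issues, now=now))  # [101]
```

The productivity win is that the developer never writes this filter; the natural-language query generates and runs the equivalent, and the result is simply forwarded on.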
We're making it easier for everyone in the organization to access and use Gen AI, like OpenAI and the open-source LLMs, through our Dell Digital cloud portal. Earlier I gave the six factors that we look at for where to place workloads. We do the same thing when we're deciding where we run AI. So we wanna give access to OpenAI, but we also wanna offer LLMs and training models that are on premise.
Because what's top of mind for us is, when there's IP or privacy involved, we wanna have it secured on-prem, which is just what we heard from Jeff and Jensen as they were talking about the opportunity ahead of us. So we find it's really important to centralize the cloud strategy so that we can monitor and govern the use cases across the company. And it makes an incredible difference for the organization when we're thinking about how to harness this new technology. Our goal is multicloud by design, and we enjoy being customer zero for many of these technologies.
And with our workload-driven approach, we've been able to break down barriers in this multicloud environment, keeping data and apps flowing between the clouds and the data centers. But I'm thrilled about some of the things that have been announced over the last couple days. So I'd like to bring up Magi Kapoor to show off some of the latest efforts that allow customers to achieve a better cloud experience in our APEX portfolio. So welcome, Magi.
It's a pleasure to be here. Welcome. Well, it is awesome, awesome to have you. You know, so many customers, they want to place workloads wherever they need.
So how can APEX help customers innovate faster? You know, now more than ever, we need to send workloads ground-to-cloud and cloud-to-ground easily and seamlessly so that you can innovate faster. Well, we can't wait to innovate faster, but you know, as an IT leader, I'd just love to know: how can APEX simplify my multicloud world? Let me show you how these services work together to help applications scale rapidly to new regions while maintaining agility and flexibility between all clouds and on-prem. It all starts in the APEX Console.
That's where IT admins manage their subscriptions. Let's get started. Let's get started. Let's begin by configuring some storage assets with APEX Storage for public cloud services. In the APEX Navigator view, we can choose the type of storage and the cloud provider. So, in this example, with just four clicks, we can provision either block or file in any cloud provider. So, for our application, we're gonna choose block storage for AWS. Next,
we get to choose our performance tier by selecting either a balanced or performance-optimized option. Then we can even choose the availability zone option. By selecting a Multi-AZ option, we get the same resiliency that comes built into our on-prem block storage, which can scale up to thousands of nodes. In addition, this comes embedded with storage efficiencies like thin provisioning capabilities. This can result in dramatic total cost of ownership savings for our customers. Magi, I love it. So we get powerful, familiar,
reliable storage for both block and file on multiple public clouds. That's right, Jen. So once we have sent the job to deploy the storage, it appears on the cloud deployment dashboard. This entire process is secured with RBAC, SSO, and IAM, all based on Zero Trust. It's pretty incredible.
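The provisioning flow in the demo comes down to four choices: storage type, cloud provider, performance tier, and availability option. As a rough sketch only (these names mirror the demo's choices and are not the actual APEX API), a request object capturing those choices might look like:

```python
from dataclasses import dataclass

@dataclass
class StorageRequest:
    """Hypothetical model of the four choices in the provisioning demo."""
    storage_type: str   # "block" or "file"
    provider: str       # e.g. "aws"
    tier: str           # "balanced" or "performance"
    multi_az: bool      # Multi-AZ resiliency option

    def validate(self):
        # Reject anything outside the options the console exposes.
        assert self.storage_type in {"block", "file"}
        assert self.tier in {"balanced", "performance"}
        return self

# The demo's selection: block storage on AWS, performance tier, Multi-AZ.
req = StorageRequest("block", "aws", "performance", multi_az=True).validate()
print(req.storage_type, req.provider)  # block aws
```

Collapsing the choices into one validated request is what lets "four clicks" translate into a single deployment job on the dashboard.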
It is. And now I see that the cloud deployment is in the green. And you know, where do I see my APEX assets? All within the APEX Console: you have visibility and control across your entire Dell estate, from on-prem to colo to edge and the public clouds.
Okay, I love it. I love it. So now I have storage in the public clouds. How do I get my on-premises data there? It's easy. Again, from the APEX Console, a volume mobility group is created by linking our on-prem block storage to the cloud-based storage that we just deployed. So we set up our on-prem source, choose the volumes that we wanna copy over, then we choose our target cloud deployment.
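The linkage being set up here, a source system, a target deployment, and a set of volumes to copy, can be sketched abstractly as follows. The names are hypothetical and illustrative only; this is not the APEX volume mobility API.

```python
from dataclasses import dataclass, field

@dataclass
class MobilityGroup:
    """Hypothetical sketch: pair an on-prem source with a cloud target
    and track which volumes are selected for copying."""
    source: str                          # on-prem block storage system
    target: str                          # cloud deployment created earlier
    volumes: list = field(default_factory=list)

    def copy(self, target_state: dict) -> dict:
        """Record each selected volume in the target; idempotent per volume."""
        for vol in self.volumes:
            target_state[vol] = self.source  # note where the data came from
        return target_state

group = MobilityGroup("onprem-block", "apex-block-aws", ["vol-01", "vol-02"])
cloud = group.copy({})
print(sorted(cloud))  # ['vol-01', 'vol-02']
```

The takeaway matches the demo: once the group links source and target, moving data is a selection, not a migration project.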
It's as easy as that to put workloads wherever they're needed, whether on-prem or in the cloud. I love it. I mean, that is a lot easier than the complexity Jeff was talking about earlier that we face as IT organizations. So I love what we're doing for workloads. And with our modernization efforts within Dell Digital, probably 50 to 60% of our apps are containerized, such that they're cloud-agnostic. How can these APEX services help in the containerized world that we live in? They absolutely can.
We know more and more mission critical applications are becoming containerized and that requires persistent storage. So the mobility service that we just saw in APEX Navigator enables us to move more than just volumes. After we onboard a Kubernetes cluster, we can subscribe to any of the Dell container service modules that you see over here. Ooh. I see replication,
observability, and application and associated data mobility. This gives us a lot of optionality. You know, the decisions that we make today have a lasting impact, right? Like where we decide to move our workloads. So we need to be able to continuously evaluate our application framework and ensure optimal placement.
Exactly. APEX services combine to help optimize app placement to and from any location. So here we'll use our application mobility module to move applications, along with all of their underlying data, to the new regions. This way our developers can focus on developing. Just lift and shift. These are amazing.
I can see a trend where system performance is measured across both on-prem and public cloud. Absolutely. Dell APEX provides more than just multicloud apps. With our common storage layer, we have now seamlessly connected our high-performance on-prem infrastructure to the public clouds, bidirectionally. So now, Jen, that we have seen how APEX simplifies our ground-to-cloud complexity, let's take a look at how APEX can bring the cloud experience on-prem with our APEX Cloud Platforms, which are optimized for Microsoft Azure, Red Hat OpenShift, or VMware. So let's take a look.
An app can be created and deployed natively in Azure, or users can decide to deploy them on-prem for cost savings or maybe a performance boost with our APEX Cloud Platforms for Azure. This flexibility really helps our teams be more productive without having to step into unfamiliar territory. How so? Let's see. So let's imagine there is an issue and if a user wants to investigate and resolve that issue, they can do so by dropping directly into the Dell APEX Cloud Platform extension from within Microsoft's Windows Admin Center. Isn't that cool? It is very cool.
All of these things are amazing, but now I'm gonna have another challenge. The demand for AI, specifically for training models, is unbelievable across Dell, and probably across all of our companies. So we're exploring opportunities for how this could be applied to our customers, as are others.
But performance and data governance are both key to enterprise AI apps. What do we have? Yeah, I totally understand that, Jen. And you know, placing AI workloads in public clouds may not be an option for some of the workloads, right? So Dell APEX Compute provides high-performance bare-metal nodes delivered to data centers that, combined with other APEX services, can support all operating systems and all container orchestrators. So here we are, looking to set up our APEX Compute, where you can choose the number of GPUs, the node type, and then select which performance tier you need.
Bare metal delivered in a hassle-free process. So I can see setting some of these up, loading up the GPUs, putting Project Helix on 'em, and we're off to the races. Exactly. The Dell APEX portfolio, combined with a common storage layer, really bridges the gap between all these different environments.
It allows us to deliver an incredible multicloud experience, really enabling innovation at every step. I love it, and I'm gonna tell you, for many reasons. Even after we've decided where to run something, now we have options to move those workloads based on what we're observing or any other criteria. We all like optionality. In fact, we want it.
Whether it's for performance, or cost, or resiliency, it gives us a lot of flexibility. Exactly. And you know, Jen, everything that we have talked about today and you've seen can be renewed as a subscription or can be decommissioned as needed. No more long-term commitments, no more costly upfront investments.
APEX truly helps businesses focus on innovation instead of financial roadblocks. It's true. It's awesome. Magi, thank you very much. I really appreciate it. Thank you, Jen, for having me. I love what you've built. I love it. I think it is time to get Jeff back on stage and see what his shirt looks like. Great job, Jen.
Thank you. You guys like my new t-shirt? I love it. It looks great. It looks great. Well, you made it over there so... I know. I like it. I like it.
She did a great job making it real today, didn't she? Thank you. Thanks, Jen. We're gonna move to the close. I know we're a little over time; I appreciate your patience this morning. I'm gonna close quickly. So, I hope you enjoyed this session today. For you, the techies,
I'm hopeful you'll leave with four distinct impressions. One, more informed on our architectural direction. Two, I hope we surprised you with a few project and product announcements. Three, that you walk away with the impression that we continue to demonstrate turning these visions into real products and real offers in the marketplace. And lastly, and most importantly,
that we're addressing your needs by making it real in how we operate in this multicloud environment. So with that, ChatDTW, take it home please. Special call-outs to Jensen, Kari, Gil, Herb, Jen, and Magi for your help today. Thank you. Could not have done it without y'all. And be sure to make it over to the expo and see our technology, products, offers, and solutions up close and personal.
Thank you, ChatDTW. You're welcome. Now say goodbye, Jeff. Goodbye Jeff. Goodbye everyone. Thank you for joining me this morning.
Have a great Dell Technologies World.