Is Your Infrastructure Ready for the Age of AI?

>> Welcome to Dell Technologies' Modern Data Center event. Today we're exploring the future of IT infrastructure in the AI era. A modern, future-ready data center is no longer optional; it's essential. By focusing on energy efficiency, operational simplicity, and cyber resilience, scalable infrastructure provides the foundation for innovation and ensures your organization stays competitive. Modernizing a data center can feel overwhelming, but it doesn't have to be. On-premises data centers are more important than ever as they evolve into key components of hybrid multi-cloud strategies.

When optimized, they enhance efficiency, flexibility, and long-term growth. With Dell Technologies' best-in-class portfolio, professional services, and decades of expertise, we make transforming your data center seamless. This event is tailored to help you refine or redefine your data center strategy as we move into tomorrow's AI-driven future. Now let's welcome our host, Dave Vellante, co-founder and co-CEO of SiliconANGLE and theCUBE. >> Hi, everybody.

We're here at the Experience Center at Dell Round Rock II. I'm Dave Vellante. We're here with Arthur Lewis, who's the president of Dell's ISG Group. Arthur, good to see you again. >> Good to see you. How are you? >> Thanks. I'm well, thanks.

Not much going on out there, is there? >> Yeah, it's a little busy. - So we've been spending a lot of time with customers trying to help them through their strategies and thinking about their infrastructure. I mean, the data center for a while was kind of a boring place. It's not boring anymore, is it? I mean, they're trying to modernize. There's a lot of action going on. What do you see from your perspective? >> A couple of things, Dave.

Number one, there's no question that data center capacity is growing. You just have to follow the investor dollars to see that. There's no question that customers are very focused on power and cooling as part of modernizing their data center.

But to me what's really interesting is, why are we seeing this in the marketplace? And it's largely driven by AI. And for years, customers have been on a digital transformation journey and the underpinning of that has been the data. It's always about the data. And so as we move into the world of AI, access and visibility to data becomes incredibly important.

Silos of the past will be dismantled, and infrastructure will be connected. Algorithmic innovation is going to drive smaller, domain-specific models. So you can envision modern data centers with a multitude of models, and data is going to be the fuel that drives those models. So in a world today where you have many silos and the majority of the data sitting in cold archive or backup, you can envision a world where the majority of that data moves into hot and warm tiers, constantly in circulation, feeding these AI engines that not only sit in the data center but span to the edge and onto the PC.

It's an incredibly exciting time. >> And the data evolution has really been quite remarkable to see. I mean, you had the data center, and then you had the cloud, a bunch of stuff went into the cloud, and now you've got multiple clouds.

All these things are connected; you've got the edge and you've got data everywhere. And we brought data together to do some analytics, but now people are trying to build essentially digital representations of their business in real time, and they're rethinking their processes. In order to achieve that, they've got to have a modern infrastructure. Like you said, they need the power and the cooling; that seems to be a big constraint here. But can you explain how your customers are thinking about modernizing infrastructure and how that supports their AI initiatives? >> Well, when you think about the value that artificial intelligence brings, again, it's all about taking that data and driving business value out of it. It all kind of comes down to: do I have access, visibility, and clean data that can run the AI? And part of that is modernizing the operations.

So when we chat with customers, and this is a very new thing to them, they always have five questions that are top of mind. The biggest question that customers struggle with today is ROI and use case: "Hey, what's the best use case to go after? How do I think about ROI?" Then we get into a conversation around model selection: "Hey, which are the best models to run all the different use cases?" Then we get into a data prep conversation. And only after those three questions are answered do we get into an architecture and infrastructure conversation. And again, what's really interesting is that even though those are just three questions, they typically yield many, many more questions, like, "Hey, I really have to think about modernizing my processes, because all of this is about data and I need really clean, good data in order to drive optimal AI."

So this is not an infrastructure conversation. This is a, how do I change the strategy of my company and modernize to take full value of what is becoming the world's most valuable asset, which is data. It's truly remarkable. >> I like how you keep coming back to the data. And the thing that customers are telling us is they're doing a lot of experimentation in the cloud, but their data lives on prem, especially their high-value data, and it's been there and they've put a brick wall around it and protected it and secured it. But now they want to bring intelligence to that data, whether it's AI and agents. They're not going to move that data into the cloud. It's too expensive and it's too risky from their standpoint. It's working, but they want to enhance it.

And so they've got to rethink how they approach that, part of it is certainly infrastructure, but it's also, as you say, they have to rethink their processes because that's what's really going to drive business value. I'm interested, you've mentioned use cases. I mean certainly Code Assist is a big one, but there are others.

What are you seeing in terms of the use cases that customers have experimented with and now want to scale on-prem? >> Yeah, I'm going to answer the question, but let me start with something that's very fundamental and important for customers to understand, and something that Jeff Clarke did very early in the days of AI at Dell, which is to organize the company and strategy to be very focused on a select set of use cases, to really understand how to get to optimal value from AI. Because what we see with a lot of customers is everybody kind of wants to roll their own in a company, and that's extremely suboptimal. When we started, Jeff took a roll call of how many AI projects we had in the company; we had over 900 projects running. We quickly got that organized and broke it down: we were going to focus on what we thought were the more valuable use cases for Dell. Number one, content creation.

We have thousands of people that are working on content for customers and for internal presentations, and it has incredibly streamlined the ability to generate content. Number two, sales chat. We have hundreds of thousands, if not millions, of conversations with customers on a weekly basis, and so sales chat. Services and service chat: we talked about next best action, and we've seen a significant reduction in the time it takes to close a case with a customer, because we're able to pull from a variety of data sources, understand what the problem is, and quickly get to the rep what's the next best action that they should take with the customer. And we've seen anywhere from a 15 to 20% reduction just at the start.

And then Code Assist. Code Assist, obviously very relevant in my world. Even in the early days, we see close to 40 to 50% of the code generated being generated by AI. Now, you have to get into how much of that is adopted and actually getting into the source tree and into the code base, and it's about half of that, but a lot of really green shoots. So sales chat, customer chat, Code Assist, content creation, top four use cases, and we just added a fifth, which is in our supply chain.

But taking advantage of this really required the company to organize itself around this initiative, because the worst thing that could happen here... you've heard of shadow IT; shadow AI is exponentially worse, and CIOs and CEOs have to rein this in, modernize their operations, and get it under control, and they will see the value in it. >> And it all started with getting your data house in order, I have no doubt about that. We talked earlier about some of the power constraints that customers faced.

Obviously technology keeps... We talk about inflation; the beautiful thing about technology is that it's deflationary, we just keep driving efficiency. So what are the things that you guys are doing to drive efficiency for customers? >> Well, I mean, obviously density is incredibly important. So we can kind of go round-robin across the portfolio.

So on the server side, we started with the 9680, which was the leading AI server for Dell, the fastest product to a billion dollars. We had a couple of design points on the 9680: silicon diversity, network diversity, density, and power efficiency. And we led the industry in all of those, which is why we were so successful with the 9680.

The follow-on product, still with the same silicon diversity and network diversity, is 33% more dense in a 4U chassis and two and a half times more energy efficient. And then we take those servers and we build super-efficient rack scale systems with 64, 72, 96, growing to 144 GPUs, driving a ton of density. We take that same mindset into the storage portfolio, where we launched PowerStore Prime with an industry-leading five-to-one guarantee. And we talked about data and the importance of protecting it; we have the world's leading target appliance with Data Domain, which drives a 55-to-1 data reduction guarantee. And then we have PowerScale, which we believe is the most performant, dense file system in the industry.
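As a rough illustration of what those data-reduction guarantees imply for physical capacity, here is a minimal sketch; the 5:1 and 55:1 ratios are the figures quoted above, while the 100 TB logical dataset is purely a hypothetical input.

```python
# Back-of-envelope effect of data-reduction guarantees on physical capacity.
# The 5:1 (PowerStore Prime) and 55:1 (Data Domain) ratios are the guarantees
# quoted in the conversation; the 100 TB logical dataset is illustrative only.

def physical_capacity_tb(logical_tb: float, reduction_ratio: float) -> float:
    """Physical storage needed for a logical dataset at a given reduction ratio."""
    return logical_tb / reduction_ratio

logical_tb = 100.0  # hypothetical logical dataset size

primary = physical_capacity_tb(logical_tb, 5.0)   # primary storage at 5:1
backup = physical_capacity_tb(logical_tb, 55.0)   # backup target at 55:1

print(f"Primary storage at 5:1 -> {primary:.1f} TB physical")   # 20.0 TB
print(f"Backup target at 55:1  -> {backup:.2f} TB physical")    # 1.82 TB
```

The point of the sketch is simply that the same logical dataset shrinks by more than an order of magnitude between primary and backup tiers under those guarantees.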

So you take all of those components together and we're able to provide a lot of density for customers that are limited by space. And then we also introduce liquid into the servers to provide for even more performance. So now we have an individual server that's liquid cooled, we have the rack of GPUs that's liquid cooled, and very impressively, we launched the M7725, which is a multi-node CPU rack based on AMD's Turin platform: 27,000 cores in a single rack. Wrap your head around that. >> Okay, so this is critical, because as we've talked about many times, data center consumption of electricity has been probably about, I don't know, 2 to 3% for years, and forecasts are it's going to go into the double digits, certainly by the end of the decade, and so that's got a lot of people concerned. But let's talk beyond the technology.

We were talking a little bit before about processes. What have you guys learned from some of your internal work, beyond the tech, that is necessary to modernize your infrastructure and prepare for this new wave that we're on? >> Yeah, it's a pretty significant effort, and we have this mantra: "Streamline, standardize, automate, AI." Those are the three steps to get to AI. And what that means is, Dell's a 40-year enterprise and so we have a lot of processes in the company.

We ship a lot of different things to a lot of different customers. And so we had to take a step back and look at every single process to ensure that all of our processes were streamlined, that they were standardized across the company, and that they were automated where applicable. Only then, if we could get through those three steps, would we say it's "AI-able." Because obviously it's all about the data, and if you have data everywhere and in different components, not clean, not visible, <inaudible>, it's a suboptimal AI experience. So AI is really about modernizing operations, modernizing the data center. It's a completely different way of thinking about how you run the business.

Data centers used to be cost centers; now they're value centers. >> I'd love to get your thoughts on this, my final question area. We wrote a piece a couple of months ago arguing that Jamie Dimon is Sam Altman's biggest competitor. And the point was that you're never going to take JPMC... Jamie Dimon is not going to take his data and put it out on the internet so that LLMs can train on it; rather, they're going to bring that AI into the data center. I'm sure you guys are seeing this as well at Dell; you've got a lot of your own proprietary data.

And so that seems to be the big opportunity, that it's a huge tailwind for... It's not repatriation, it's just investment in on-prem. >> Here's a misconception.

Well, first of all, I'll say, when we started down this path four years ago, we wrote down several hypotheses that we had about the industry and the market, and one of them, the leading hypothesis, was that AI is going to follow the data. Everything that we've seen in every customer conversation we have says that is absolutely true. And in fact, we see and hear more and more about customers needing to repatriate AI workloads that they want to run on-prem.

But the misconception is, we have this dichotomy of training and inferencing, and implicit in that is that training is a point in time. Training never stops. Fine-tuning is training. Training is just getting the model ready to go into production. But the reality is, the majority, 90%, of the world's data sits on-prem, and the majority of the data hasn't even been created yet. The evolution of fine-tuning is the continued training that allows inferencing to be optimal. We're moving into reasoning and thinking models now; the fine-tuning that you're doing is going to be way more important than the initial training of the model, which is why it's so important for the data to be on-prem: it's more performant, it's cost-effective, and it's also more secure.

>> So it's not either-or, but I'm inferring from your comments that inferencing is like a spring coil ready to take off, and that's what we're seeing in the marketplace. Would you agree? >> I 100% agree. It's always interesting, because it's eye-opening for customers when they sort of think about, "Hey, well, training is one thing and inferencing is another."

And it's like, that's really not a binary thing. It's, training is forever. The way I think about it is, as humans, we go to school, we go to college, some of us, and then we live our life.

Training is kind of like that college experience, but the real experience comes after college, when you get into the world and you get a job and you start to learn, "Oh, this is what they meant in school." It's the fine-tuning, because a trained model by itself, without data, without fine-tuning, is suboptimal. You need the data in your business to fine-tune that model to get it to derive the type of value that you would expect. >> I think the other misconception too, Arthur, is that people... You hear in the news about these hundred-thousand-GPU clusters. You don't necessarily need that for a lot of these on-prem data centers; these midsize and even large companies can do a lot with a really efficient, targeted infrastructure.

What are you seeing in that regard? >> Oh yeah, there's no question. Look, folks that are deploying hundreds of thousands of GPUs are in pursuit of AGI and ASI and those kinds of things. >> We wish them the best. - And hey, it's great. I love it. And we're a huge part of every single one of those endeavors, and we learn something every day when we meet with those customers. But for a typical enterprise customer, even a large enterprise customer, the ROI that we're seeing is on the order of 25 or 30 to 1.

It's incredible what you can do with a couple of racks of infrastructure. But again, it really comes down to: where's my data? Is it visible? Do I have access to it? Is it clean? Is it AI-able? And our very strong value proposition is that we architect the compute, the network, and the storage all for AI under one roof. So we are the integrator of the system for the customer; the customer is not their own integrator. >> I feel like we've been preparing for this moment for 40-plus years in technology.

It's really an exciting time. Arthur, thanks so much for spending some time with us. I really appreciate it. >> Thank you for having me. - You bet.

Okay, keep it right there. We'll be back. We're going to dig deeper into some of the product areas and the new innovations that Dell is announcing. We'll be right back after the short break. Welcome back to the Experience Lounge here in Round Rock, Texas. My name is Dave Vellante, and we're talking about the trends and changes in data centers and data center modernization. I'm very excited to have the people behind the key components of infrastructure: we've got the servers, the storage, and the networking pieces all together.

The brass is here, Arun Narayanan, who's the senior vice president for server and networking products, and Travis Vigil, who's senior VP of product management at Dell Technologies. Gents, good to see you again. >> Great to be here. - Nice to see you.

>> Great to be here. >> Yeah, we saw each other last year at the Supercomputing show; that was super exciting. And wow, the momentum has just continued, Arun, so it's good to have you here again. >> It's relentless. The AI momentum is relentless.

>> It is relentless. And Travis, we were talking to Arthur earlier today about some of the trends that are going on in the data center, and you and I have talked about, when we go back to the converged infrastructure days, it was like you had servers and storage and networking. We kind of bolted them together, and then hyper-converged came along and simplified that, but now we're entering a new era. Why don't you give us your perspective on the three-tier architecture and where we're headed? >> Yeah, Dave, it is a super exciting time. I mean, I've never been involved with so much change at once. If you look at what our customers are having to deal with, it is a combination of: what am I going to do about generative AI? What am I going to do about private cloud? What am I going to do about cybersecurity? And how am I going to get an optimal configuration in my data center? What we've learned over the course of the last 10 years is that while hyper-converged was really great, people were locked into a singular ecosystem. With the need to optimize the performance of your storage separately from the capacity of your storage, you need to move to a disaggregated architecture. That means if you want to be ready for generative AI, if you want to set yourself up to have multiple hypervisors in your environment, and frankly, if you just want to get better TCO in your data center, moving to an architecture that looks a lot like what three-tier was, but is different in a couple of key ways, is the right investment to make.

And those key ways are: it needs to be automated, and getting all of the components from a single provider like Dell helps with service and support. It's a trend that almost every customer we're talking to, whether they're looking at what to do about generative AI, private cloud, or cybersecurity, is moving toward. >> When you talk about a single ecosystem, you're talking about, you're either a VMware shop or you're a Hyper-V shop, or you've rolled your own with OpenStack or KVM, and you're locked in there. >> A hundred percent. - Good, okay, got it. And Arun, I wonder if we could talk about data center modernization.

The thing that we're seeing is, like you said, AI momentum is relentless. We're seeing a lot of experimentation going on in the cloud, but every enterprise that we talk to is saying, "I have data on-prem, and that data has gravity. I don't want to do this in the cloud because I don't want to move the data and it's too expensive, so I'm going to build my own capabilities on-prem," meaning, I have to modernize my on-prem infrastructure, which I haven't really been focused on doing for the last several years, other than a basic refresh cycle. So what are you seeing in terms of that? >> Yeah, this is a great point. So as you've said, every enterprise, if they need to start doing AI, they need to create space for AI and they need to create power for AI.

I mean, data centers are like 10 years old, 15 years old; nobody has modernized them. The average power per rack in the average data center is less than 15 kilowatts, and an average AI rack is 60 kilowatts. How are you going to create the power envelope to do that? Now, the biggest advantage we have is that most of these data centers have assets that are aging: five, six, seven years old.

Now, you go back to a server you bought five or six years ago versus a new server you buy today, and you can do a seven-to-one consolidation ratio. So think of it: if you have a hundred servers, you can take out those hundred servers and replace them with 14 or 15 servers. That is what data center modernization is all about: reduce the footprint, reduce your blast radius, and then create both power budgets and space budgets to introduce AI into the data center, so that you can use the same existing data center to do AI workloads.
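The consolidation arithmetic described here can be sketched as follows. Only the 7:1 ratio and the hundred-server fleet come from the conversation; the per-server power draws are illustrative assumptions, just to show how consolidation frees up a power budget for AI.

```python
# Sketch of the server-consolidation math described above. The 7:1 ratio and
# the 100-server fleet are from the conversation; power draws are assumptions.
import math

def consolidate(old_servers: int, ratio: float) -> int:
    """Number of new servers needed to replace a fleet at a consolidation ratio."""
    return math.ceil(old_servers / ratio)

old_fleet = 100
new_fleet = consolidate(old_fleet, 7.0)   # 100 / 7, rounded up -> 15 servers

# Hypothetical power draws per server (assumptions, not quoted figures).
old_watts_per_server = 500.0   # aging server, assumed
new_watts_per_server = 800.0   # newer server draws more each, but far fewer of them
freed_kw = (old_fleet * old_watts_per_server
            - new_fleet * new_watts_per_server) / 1000.0

print(f"{old_fleet} old servers -> {new_fleet} new servers")
print(f"Approx. power freed for AI: {freed_kw:.1f} kW")  # 38.0 kW under these assumptions
```

Even with newer servers drawing more power each, the net effect under these assumptions is a substantial power and space budget freed up in the same data center.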

And then you can use the existing data that is in the same data center for training those AI models or inferencing from those AI models. That is what I see as the biggest trend happening in enterprises right now. >> And Travis, I don't know if you remember, I think it was at Dell Tech World last year, we had riffed a little bit around life cycles, and I had put forth the premise that, because of AI and because of the demands for power, life cycles were going to shorten. And it certainly wasn't definitive, but you've noticed some of the cloud vendors who had been elongating their depreciation schedules are now squeezing them down in their income statements, specifically citing AI.

And so that's just an example of some of the changes that we're seeing and I think there's more to come. I want to ask you about this notion of disaggregated infrastructure. And so let's think beyond HCI. So HCI was great, it simplified things, but like you said, you were within a stack, you were stuck in that stack and you didn't really have the ability to scale.

So why does disaggregated infrastructure solve that problem? How does it address that problem? >> HCI provided a lot of operational simplicity for folks, and it was the right solution at the right time. But the need for choice and flexibility around which hypervisor you choose (a VMware environment, a Nutanix environment, an OpenShift environment), and what we've seen customers do with HCI environments over the years, has led us to a new conclusion: in order to get the most out of your infrastructure, and to make sure you're deploying something that makes space for generative AI, like Arun was talking about, you can't have a solution where the cores on your HCI are 20 or 30% utilized. And that's what we've seen in practice. So by moving to a disaggregated architecture, you can get higher utilization of your servers, meaning you need fewer servers. And then you can also take advantage of key functionality, ease of use, and cost efficiencies that we've built into the storage arrays, things like five-to-one deduplication rates. So you can get an environment that uses fewer servers, uses less storage, is easier to run, and saves you money.

>> I want to come back to the energy footprint that you were talking about before. I had the pleasure of touring some of your advanced labs back during the Analyst Summit in November, saw some really cool stuff, much of which I can't talk about yet, but I'm excited about it. A real main focus was on reducing that energy footprint.

So what can you tell us about how Dell, and specifically Dell servers, have advanced energy efficiency and are supporting this whole AI adoption wave? >> Yeah, I'll put it into two parts. I mean, there are the traditional air-cooled servers, which we've known for a long time. What we've seen is that, across the last eight years, we've improved every piece of the technology, be it the airflow in the servers, the inlet temperature that goes in, all the materials being used, or the design of the <inaudible> themselves.

So we've made a lot of these changes, and that has allowed us to increase energy efficiency by 83% over the last eight years: significant design changes in airflow, materials being used, and fan speeds. That's on the air-cooled servers. The second part, and I think the biggest trend we're seeing right now, is liquid cooling. I know you saw some of it in the lab there, but direct-to-chip liquid cooling is going to be a pretty key capability for us. Especially in the AI servers, when you have to cool a 200-kilowatt rack, there's no amount of air cooling that can do that; you have to cool that temperature and keep it room neutral. So we are innovating on cold plate technology and the material sciences we are going to use there.

We're innovating on CDU technology, on flow rates, on how you do that. So we are working through the entire cold plate ecosystem and the entire CDU ecosystem to make sure that we have the latest technologies to be very, very energy efficient. That's part of what we will innovate on over the next few years, but those are going to be pretty critical capabilities that I think are differentiated in the marketplace. >> We've talked a lot about servers; they're the big power consumer, and of course, when we reduced the amount of spinning disk, that helped too. But you guys are making a bunch of announcements: PowerStore, PowerProtect, your whole Power line. What's exciting you in the storage space? >> Well, I'll tell you, one of the most exciting things in the storage space actually has to do with generative AI.

We're having conversations constantly with customers about how to get started with generative AI. And outside of the conversation about compute and power and all of that, the number one conversation we're having is about data. What we've seen, especially for enterprises, is that in order to make a RAG deployment or an inferencing deployment work for an enterprise, it's got to utilize what I call the intelligence of the enterprise, and that intelligence resides in PDFs, it resides in support repositories, it resides in email. Being able to curate the right data for your generative AI deployment is probably the number one conversation we have with customers. The great news about working with Dell is that we're on the cutting edge of that. We can talk about speeds and feeds.

I know you had folks talking earlier about how we're coming out with the latest and greatest 122-terabyte drive, and we have ObjectScale with all flash, and we have high performance with PowerScale, and that's all necessary. But we also have things like the Dell Data Lakehouse, which helps customers curate and find the right data and metadata so that their generative AI RAG deployments or their generative AI inferencing deployments are actually utilizing the right data. >> How about the data protection piece? Where does that fit? >> Data protection is critical for customers. I like to say the three big things that I talk with customers about are: what do I do about private cloud? That ends up coming to a disaggregated story. What do I do about generative AI? That ends up coming to a PowerScale or an ObjectScale or a Dell Data Lakehouse discussion; we just talked about that. And then the third thing is,

what do I do about cybersecurity? And the one thing that customers are always asking us for, obviously, is great TCO. With our PowerProtect Data Domain, we get dedupe rates of 55 to 1, and that's great, but we're also building intelligence into the systems. We're building integration into the systems that really makes them differentiated. For example, we've built an integration between our PowerStore product and our PowerProtect Data Domain products such that you can do backups at 4X the rate versus alternative methods, number one. And we're adding all-flash capabilities to PowerProtect Data Domain, so that if you need to do fast restores, you can do them from flash versus spinning media.

And something I'm really, really excited about is that we've built anomaly detection into PowerProtect Data Manager. And so anomaly detection gives our customers an early warning if something's going awry so that if a cyber event is happening, we can sense it and the customer can take action to mitigate it or to restore from it. >> And that becomes increasingly important because AI is just going to create more seams, more ways for the bad guys to get in. So as they advance, you have to advance as well. Arun, I want to come back to you and we've talked about the efficiency piece.

Workloads are also changing. I mean, the entire stack from silicon all the way up to applications is changing. So what are the things that you pay attention to from the compute and networking standpoint that are changing, as you said, to prepare for the next five years? >> Yeah, I mean, I'll go to the networking piece now.

I think AI is building out an entire new fabric. I mean, there's a whole new fabric getting built out. These GPU clusters don't work without the network; they have to have the most modern networks, the fastest networks. So what we are seeing is a massive new opportunity of an entire network build-out. >> And that's a network for GPU-to-GPU- >> For GPU-to-GPU connectivity <inaudible>- >> And GPU to storage- - And GPU to storage. So we have-

>> <inaudible>. >> I mean, if you think of any big GPU deployment, for every dollar you spend on a server, you're spending 20 to 25 cents on networking. And of that 20 to 25 cents, 80% is GPU-to-GPU connectivity, and the remaining 20% is storage connectivity. You need both. You need high-speed storage connectivity,

and you need very high-speed GPU-to-GPU connectivity. So we are seeing massive new technologies being built out: the 400-gig networks, the 800-gig networks. Now, all of this has to be managed, and so SmartFabric management, to make sure that the NIC on the server and the switch are all working together in an effective way, is also an emerging new technology. And what's just scratching the surface today is thinking of the <inaudible> and the switch as one ecosystem.
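Arun's rule of thumb about networking spend can be sketched numerically. The 20-to-25-cents-per-dollar figure and the 80/20 split between GPU-to-GPU and storage connectivity are the numbers he quotes; the $1M server spend is a hypothetical input for illustration.

```python
# Sketch of the networking-spend rule of thumb quoted above: for every dollar
# of server spend, roughly 20-25 cents goes to networking, of which about 80%
# is GPU-to-GPU fabric and 20% is storage connectivity. The $1M server spend
# is a hypothetical input.

def network_spend(server_spend: float, net_fraction: float = 0.25,
                  gpu_share: float = 0.80) -> dict:
    """Split an AI cluster's networking budget out of its server spend."""
    net = server_spend * net_fraction
    return {
        "networking_total": net,
        "gpu_to_gpu": net * gpu_share,
        "storage": net * (1.0 - gpu_share),
    }

spend = network_spend(1_000_000.0)   # hypothetical $1M server spend
print(f"Networking total:    ${spend['networking_total']:,.0f}")
print(f"  GPU-to-GPU fabric: ${spend['gpu_to_gpu']:,.0f}")
print(f"  Storage fabric:    ${spend['storage']:,.0f}")
```

Under these quoted ratios, a $1M server spend implies roughly $250K of networking, of which about $200K is GPU-to-GPU fabric and $50K is storage connectivity.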

Historically, those have been two separate ecosystems. Of course, NVIDIA has done some innovation on Spectrum and InfiniBand there, but I think that's just the beginning. Now everybody's going to do that.

Fabric management, and fabric management at scale, is going to see a lot of innovation in the next three to five years from a software technology standpoint. >> So the entire balance of the system has changed, hasn't it? I mean, when we went from spinning disk to flash, that was obviously a sea change, but then you open up the floodgates with much greater bandwidth. How have you been able to maintain that balance, and how do you see it shifting in the future? >> I mean, what we've tried to do is, as we see the market shaping up, our goal is to be the best compute provider out there, because we are a compute-led technology company, and of course storage is the number one marketplace, so get the best storage technologies. Networking, so far, we've not focused so much on enterprise networking; that has been a Cisco/Arista shop.

But with AI networking, which is a compute-led sale, we want to build out the entire AI network fabric. So what we're trying to do is build the capabilities for this AI ecosystem: be the best compute layer, be the best storage layer, and be the best network layer. That's how we're trying to strike the balance, with a focus on the AI workload, which is the workload of the future.

>> Good. We'll come back to data. Ever since I've been in this business, the amount of unstructured data has been greater than 90%. I mean, it's just- >> It's the avalanche that never stops. - ... unbelievable. And then of course, forget about video.

I mean, that just makes it 99.99%. But how are you handling unstructured data? What's your story there? Give us a little color. >> If you look at our unstructured story, it's really scale-out file with PowerScale and scale-out object with ObjectScale. We've released some really interesting enhancements there, in terms of all-flash support for that product line.

And then it goes back to the data. How do you make sense of all the data? How do you curate the data, especially for generative AI? And that's where the Dell Data Lakehouse comes in. >> Let's wrap with some advice for customers. So as we said earlier, there's a lot of experimentation going on in the cloud. Every enterprise knows it needs to build some kind of on-prem AI capability, but they're still nervous, they're not really sure...

The applications are kind of chatty, ChatGPT-like, they're stealing from other budgets to fund it. It's not like the CFOs are opening their checkbooks. They're a little bit nervous about, "Well, this water-cooled stuff.

I got to move in that direction." Help me, Dell. Give me confidence that I can invest in the future. You're going to be there as a partner. What's your advice to these customers? >> Yeah, so I think about it like this. I mean, the conversation has moved on; for enterprises, Dell has to play a role in showing them the proof of value.

>> I mean, it's gone beyond "these are the use cases." Use cases are important, but you have to go to the CFO or the CEO and tell them: this is the value from generative AI. What are we seeing? What are the productivity savings, or what are the revenue capabilities you can generate, depending on your business? And I think Dell can play a role there. Because we've worked across multiple customers now, we've seen what best looks like, and we've learned from that.

We are trying to build use cases and proof of value for these customers and show them. Then once you land that story, it becomes: okay, what is the infrastructure conversation you want to have? How can Dell provide you the best infrastructure? What is the best storage conversation you want to have? What is the best network conversation you want to have? And the most important thing to me is that these are not simple deployments. Even in the smallest of cases, this is a pretty complex and sophisticated deployment. So building that, and making sure the customer feels confident that they have something that's deployed well and gets the best performance out of it, is a pretty important thing that Dell can help with. So start with a proof of value, because we can show that to those customers, and then talk about the infrastructure.

I don't think it's an infrastructure-first conversation; it's a proof-of-value-first conversation. >> Yeah, that's right, value back... Work backwards from value. All right, Travis, your advice? I'm guessing it's going to relate to data, but over to you.

>> Well, actually I think I'm going to go in a little bit of a different direction. >> Oh, please. - I'm going to build off of what Arun said. I mean, if you look at Dell, we do have that broad experience with interacting with so many different customers. And I think we've also been out in front in adopting generative AI within Dell. We've identified the use cases. We're well into deployment for things like code generation, services assistance, sales assistance, marketing content generation, and we're up and running.

And the great thing about having all of that experience is that we're able to package it up in services that we can then use to help our customers. Because to Arun's point, it's not a simple conversation. And so we can do things like assessments, we can do things like deployments.

We can do things like managed services to help our customers move into this new world that is a combination of modern infrastructure, modern private cloud, and generative AI. >> And that value that you talked about, I think it does start with understanding your data, and then understanding how you're going to leverage that data for competitive advantage. Jeff Clarke at the Analyst Summit very eloquently talked about all the data that lives on-prem that's never going to get onto the internet, that's never going to get trained into these LLMs. >> 84% of customer critical data resides on-premises. And as we've talked about many times, Dave, data has gravity. And so I really think that AI has to follow the data.

>> Well guys, thanks so much for sharing your perspectives on what you're doing to help customers modernize. And we'll be watching. We'll see you at Dell Tech World. Super excited for that. >> You'll be there and you'll be there. >> All right, good deal. Okay, and thank you for watching, this is Dave Vellante.

>> Plankton produce about 50% of the air that we breathe on this planet, and Dell Technologies helps us understand how plankton changes the world we live in. At Oregon State University, we look at plankton to help us understand climate change and ocean health. Before artificial intelligence existed, we were using nets to actually go out there and collect the plankton, and a human would have to hand-identify all the plankton, so this was a very slow, arduous process. With AI, using edge devices that we can actually put on the ship, we're able to use Dell PowerEdge servers and PowerScale systems to process billions and billions of these plankton in semi-real-time or real-time. So instead of manual labor that would cost us months and months, we're able to now get through this data in days, if not hours.

The In-situ Ichthyoplankton Imaging System is a camera system that we developed to image plankton and then run the AI on that captured image. And by the end of a trip, we will have collected a hundred terabytes of data every 10 days at sea. >> Our work is addressing a lot of societally relevant issues: climate change, fisheries, pollution, and they all have direct impact. So we take the approach of not only understanding it, but trying to find solutions.

And if we can do that, then at the end of the day, we're having a much bigger impact than we would just as individual scientists. >> If we don't actually process the data in a meaningful time, then it's useless to help us understand climate change and how we're affecting the planet. And it's useless for us to be able to go out there and make changes that will be meaningful. AI allows us to move at the pace of the changing world that we live in and stay ahead of it.
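As an editorial aside, the collection figure mentioned above, a hundred terabytes every 10 days at sea, implies a substantial sustained ingest rate. The quick calculation below is an illustration of that arithmetic (decimal terabytes assumed), not a figure stated in the segment:

```python
# Rough sustained ingest rate implied by the plankton imaging figure:
# 100 TB collected every 10 days of imaging at sea.
# Assumes decimal terabytes (10^12 bytes), purely for illustration.

TB = 10 ** 12  # decimal terabyte, in bytes

def sustained_rate_mb_s(total_tb, days):
    """Average ingest rate in MB/s for a given haul of data."""
    seconds = days * 24 * 3600
    return (total_tb * TB) / seconds / 10 ** 6

rate = sustained_rate_mb_s(100, 10)
print(f"~{rate:.0f} MB/s sustained")  # ~116 MB/s sustained
```

Roughly 116 MB/s around the clock, which is why the segment stresses processing the data in a meaningful time rather than shipping it somewhere else first.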

>> We're entering a 10-year data center super cycle, in our view. A lot of people don't like that term, but the fact is that the data center market, the all-in data center market, when you include power and cooling and servers and storage and networking, is going to surpass a trillion dollars by 2032, according to theCUBE Research estimates. Now, things have changed dramatically. The data center market used to be a low-single-digit growth market over the past 10 years, but it's now at a 15% 10-year CAGR. It's absolutely exploding, and there's a massive shift toward AI.
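The two figures quoted above, a 15% 10-year CAGR and a market surpassing a trillion dollars by 2032, together imply a starting market size that compound-growth arithmetic recovers. The 2022 baseline in this sketch is derived from those two numbers, not stated in the segment:

```python
# Compound growth sketch of the data center TAM figures cited above:
# ~$1T by 2032 at a 15% CAGR over 10 years. The implied 2022 starting
# value is computed, not quoted, so treat it as an illustration only.

def implied_start(end_value, cagr, years):
    """Starting market size implied by an end value, CAGR, and horizon."""
    return end_value / (1 + cagr) ** years

start_2022 = implied_start(1_000_000_000_000, 0.15, 10)
print(f"Implied 2022 market: ${start_2022 / 1e9:,.0f}B")  # ~$247B
```

A roughly $247B market compounding at 15% quadruples in a decade, which is what "super cycle" is gesturing at versus the old low-single-digit growth.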

So the entire stack, from silicon all the way up to applications, is changing and becoming AI-optimized, if you will. Now, here's the thing: organizations that we talk to are doing a lot of experimentation in the cloud, but most of their data lives on-prem, especially the high-value data. So they're building out infrastructure capabilities on-prem, and they're investigating the best ways to do that. While the experimentation is going on in the cloud, because of data gravity, the data that lives on-prem, and frankly cloud costs, they're building out their own centers of excellence now and refactoring and modernizing their on-premises infrastructure. But the market's bifurcated: there are some large customers that are retooling for liquid cooling at very large scale, but there's a big set of customers that are just happy with air-cooled, which has been the predominant methodology over the past 10 or 15 years.

So it's kind of a split market, if you will. And you've got a number of customers that don't want to do that, and many, many customers that are thinking about doing so, which requires a much bigger capital investment. So we've been here all day in Round Rock, at Round Rock II in the Experience Lounge, talking to executives and partners about the trends in these markets and what Dell's role is. And frankly, Dell's role is really to support this wide range of configurations, from small to mid-sized to very, very large, and a variety of workloads: the traditional legacy workloads that are getting injected with intelligence, as well as new emerging workloads like RAG-based chatbots, coding assistants, and the like, summarizing text and so forth in support of marketing and sales.

So companies like Dell have to support that wide range, and they have to be a foundational provider for enterprise AI, which we see as a big, big trend going forward. So we want to thank Dell and its partners, AMD and Intel, for bringing us here and supporting us in helping to report to you and discuss some of the trends that are going on in the marketplace, and hopefully inspire you to take action in your own data centers. Thanks for watching. >> Thanks for joining us at Dell Technologies' Modern Data Center event.

We hope you're leaving inspired and equipped to transform your IT infrastructure, but this conversation doesn't end here. Join us at Dell Technologies World where you'll explore cutting edge solutions, connect with industry experts, and unlock the full potential of your data center. Visit dell.com/dtw to learn more and register today.

Secure your spot, and take the next step into the AI-driven future. We look forward to meeting you there.

2025-04-12 10:02
