Jeff Clarke, Dell | Dell Technologies World 2025


>> Welcome back everyone to theCUBE's live coverage here at Dell Technologies World in Las Vegas. I'm John Furrier with Dave Vellante. Dave, our 16th year covering Dell Tech World. Jeff Clarke, chief operating officer and vice chairman of Dell Technologies. Jeff, great to see you. You've been busy the past couple of days, 48 hours.

>> Oh yes, sir. I've been- - You were in Taiwan for the big speech out there. Now, here, you did the keynote. Michael Dell was yesterday. How are you feeling?

>> Well, I've been a little busy, covered a bit of ground, but it's always great to be here at Dell Technologies World with our customers and with my friends. We're here. >> Taiwan, I've only been to Taiwan once. I was there in like 1988, and I'm looking at the videos that were shown at Computex. Wow. It's a Singapore-like explosion. I mean, what's the vibe over there? >> Well, I had a chance to spend, I think, 40 hours or so on the ground.

I had a chance to see a bunch of partners, folks that I've grown up with. I first went to Taiwan, I was reminiscing, I think 35 years ago as a young engineer, and I grew up with many of the tech giants that are running the ODMs, the technology companies. I met Morris many, many years ago at TSMC. And the vibe is excitement. Technology matters. Technology is core to what we're doing today, and you don't have to look any further than the momentum around AI and the engineering innovation and manufacturing presence that exists in Taiwan, how they help build and fill out the ecosystem, particularly for us. The successes we've had are partly due to what they do. So it's hopping. It was exciting to be there.

>> Yeah, it's a super exciting market right now. I mean, you talked about speed in the keynote. The cadence of the supply chain right now is massive. You guys are masters at road mapping.

You guys know the cadence of getting the hardware out, the systems out. These are large, complicated systems: a lot of subsystems, a lot of interconnects, a lot of things going on. And then on the market side, with agents, the demand for tokens (you've got your token shirt on, which I love, by the way) is driving cluster sizes, configurations, systems, and a rethinking inside enterprises of where all the value is.

So what's your perspective on that? Because you guys are moving very fast. Look at the lineup you have now. Go back a year, go back two years, you've had the progression going, but in the past 24 months, massive change. Past 12, big change.

>> Well, to be honest, we retooled our engineering capability. AI and this race for technology and deployment, building out the clusters that are training these foundational models, meant we had to go back and reinvent ourselves. We had to go back and rethink engineering workflows, because the rate of technology advancement no longer fit the way we traditionally built data center products.

In fact, I'd argue these aren't traditional data center products. These are products specifically designed for the modern computing architecture driven by AI workloads. So we decomposed our engineering flows and rebuilt them to be responsive to these needs, to be able to take an idea from interacting with a customer and, less than six months later, deploy a large quantity of GPUs at scale to their design. Most people would not have said Dell is a custom design house.

Bullshit, we absolutely are. We're building custom clusters for some of the largest trainers of foundational models in the world, and we've been able to respond to that. Then we take those learnings, that innovation, that engineering, and scale it to the enterprise. So it's been exciting. It's been a rebuild. I think we're doing quite well in the marketplace. First to market with the GB200 NVL72. I think we'll have more market firsts.

We were the first on the Hopper line as well. The commitment we've made is to engineering that can solve these massive solution problems, because we're not building a single computer; we're hooking up lots of computers to act as a single computer and to interact with one another. It's remarkable, these systems and what they're capable of. >> It's real engineering. Yeah. Congratulations. Awesome. >> Well, thank you. We think it is real engineering.

>> I mean, it's hard too. >> I'm trying to keep up with your keynote. It's hard. It's like keeping up with NVIDIA, but you said 35,000 trillion tokens generated by 2028.

>> Right. I think I said in 2024, it was 25 trillion tokens, and that number will be 35,000 trillion by 2028. >> So last year, you were blowing my mind with energy requirements. Those things kind of go together. So I wonder if you could talk about how you're attacking that problem. >> Well, if you think about it, tokens ultimately drive computational intensity, or capability.

Take reasoning engines. We as an industry now think that's at least 100x what it was a year ago. You now have tokens, and tokens are everywhere. I joked a little bit today that we're building token factories. That drives another 10x.

So it's at least a 1,000 times increase in capability. Oh my gosh. >> Yeah. Huge. - So how we're building for that is clearly getting this new technology into the marketplace in record time, being able to fulfill the needs of our customers, and bringing down time to first token. And then, as we discussed with Brian at CoreWeave today, it's more than just getting one done.

The scale of these machines is racks, what I call rows of racks, and then data centers of racks, all interconnected as this single big computer. That's what we've been working on. That's the challenge at hand. I don't see that slowing down for the foreseeable future.

The technology keeps coming at at least the same rate. We're going to go to 800-gig, 1,600-gig, 3,200-gig fabrics. I mean, we're connecting these things at greater rates, and more of them together, to solve a larger computational problem.
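(A quick back-of-the-envelope on the token figures quoted above; this is just arithmetic on the numbers in the conversation, not Dell's model, sketched in Python.)

```python
# Arithmetic on the token figures from the conversation:
# ~25 trillion tokens in 2024, ~35,000 trillion by 2028.
tokens_2024 = 25e12
tokens_2028 = 35_000e12

total_growth = tokens_2028 / tokens_2024    # growth over four years
yearly_rate = total_growth ** (1 / 4)       # implied compound rate per year

print(f"total growth 2024 -> 2028: {total_growth:,.0f}x")      # ~1,400x
print(f"implied compound rate: ~{yearly_rate:.1f}x per year")  # ~6.1x

# Stacking the multipliers cited for computational intensity:
reasoning = 100        # reasoning engines: "at least 100x" vs. a year ago
token_factories = 10   # tokens everywhere / token factories: "another 10x"
print(f"compounded intensity: {reasoning * token_factories:,}x")  # 1,000x
```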

>> There are two things I want to get your thoughts on. One is just the complexity of the data; unlocking that enterprise data is a big opportunity. And then tokens with reasoning are creating a massive increase in token volumes. So the demand for tokens is rising, and that's impacting the AI factory scope, the system scope. You've got this dynamic where reasoning is kicking off a whole new demand curve for tokens, for apps, for other agents.

>> Oh, that's right. - There's going to be massive demand for tokens. So the factory will be pumping out tokens. I think that's fair to say. How does that shape the customer's adoption and deployment of the factories? Are there thoughts around that, or is it mathematical? >> Well, of course, we probably have a thought or two around that.

>> Share your thoughts. - I think what often gets lost in the discussion is the scale of things, the idea that it all has to be done with these massive clusters. The truth is most data is created out in the wild: out in a smart factory, out in a smart hospital, out in a smart city, in your sneakers. That's where data is actually generated. What we see and continue to believe is that AI migrates to where the data is created. In this case, we're going to be running smarter applications, modern workloads, AI applications that are going to process and synthesize the data to drive an outcome.

Generative AI has been the first part of that. We believe agentic systems will be built on top of that, and then ultimately we'll see physical AI evolve on top of that. So I see this as one size does not fit all; there's going to be a scaling of solutions all the way out to the edge.

We talked about our PC driven by the GB10, a 1-petaflop machine, and another that's going to be driven by the B300, a 20-petaflop machine, all the way to massive deployments of hundreds of thousands of GPUs connected together in the data center. You're going to see this continuum, and we've designed solutions and systems to meet those needs. That's how token processing, if you will, will be done: small factories at the edge, bigger factories at the core, and everything in between.
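(To make that continuum concrete, a toy sketch using only the petaflop figures mentioned above; the tier names, the data-center placeholder number, and the example requirements are hypothetical.)

```python
# Edge-to-core continuum: pick the smallest tier that covers a workload.
# Petaflop figures for the first two tiers come from the conversation;
# the data-center figure is a placeholder for "hundreds of thousands of GPUs".
CONTINUUM = [
    ("GB10-class AI PC (edge)", 1.0),             # ~1 petaflop
    ("B300-class machine", 20.0),                 # ~20 petaflops
    ("Data-center cluster (core)", 1_000_000.0),  # placeholder scale
]

def smallest_tier(required_pflops: float) -> str:
    """Return the first (smallest) tier whose compute covers the need."""
    for name, pflops in CONTINUUM:
        if pflops >= required_pflops:
            return name
    return CONTINUUM[-1][0]  # fall back to the biggest factory

print(smallest_tier(0.5))   # small language model at the edge -> AI PC
print(smallest_tier(15.0))  # heavier inference -> B300-class machine
print(smallest_tier(5e4))   # training-scale work -> data-center cluster
```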

>> It's funny, Michael Dell led with the Edge in his keynote yesterday. >> But shocker- - He sees our Edge. Right? >> Well, Dave, we've been at this a while.

We're putting it... We've been chasing accelerated computing for years. If you go back to my day, a math coprocessor, you remember those? >> Sure. - What was that?

That was a separate processor doing floating-point math problems. Then look at what we did with graphics, what we did with RAID, SmartNICs, DPUs; we've been adding accelerated computing around the host. Now, evolving further, we have an NPU alongside the host CPU in a PC. That NPU is a dedicated processor to take on the very workloads that Michael spoke about yesterday and I spoke about today.

These things are capable of running a trillion-parameter model. They're able to process hundreds and hundreds of thousands of tokens. It's going to be an incredible creativity machine, a development machine, for the advancement of models, these smaller language models. Oh my gosh. And we've just started. >> And by the way, I haven't had to recharge my new laptop all week. >> I love it. You're running a Dell Pro.

>> Yeah. And I got this baby too. So this actually is my favorite. Okay, so this is the Evo. Okay, this is the vPro. This guy.

>> Yeah. - All right, so this is the higher end. >> You love that? - I love them both. >> That's got a Lunar Lake in it. >> And this is a touchscreen. This one's not. So I like this one better.

>> Oh, you got to put that down and get that one. >> I know, but all my stuff's up here. >> Oh, come on. - All my notes from your keynote are here. >> You guys are tech guys. - I know. >> Here, I'll take care of that.

>> Wait, wait, wait. I got to ask you. So we basically heard... I got this part: "Shitty data equals shitty AI." So I got that part, but you used the term data- >> I think what I said was- - Go ahead. - ... adding AI to a shitty process just gets you a shitty answer faster.

>> Yeah, I'm sort of paraphrasing here. Okay. >> I'm sure it'll be misquoted all week. >> Let's go. Okay, but you guys have a lot of experience applying AI internally. >> Yes. - Almost a $100 billion company.

So you've got a lot of street cred there. You used the term "data mesh" several times, bringing together all this data, harmonizing that data so that it's not crap. What is the data mesh? What is that all about? >> Think of it as a substrate we created to connect what I call data islands. The example I gave on stage today was our services example, where we have repair data, dispatch data, telemetry data, knowledge base data, and call log data.

All of that was in five disparate tools and at least five different data locations. Hint, hint, wink, wink. And when you wanted to connect it to get the power of AI, you couldn't connect the data. So we built a data mesh, think of it as a substrate that connected those data islands, so we could move across them with our AI tools.

With RAG and prompt engineering, with some of the deep learning and machine learning techniques we put in place, we brought that data together so we could actually benefit from it simultaneously. So I think of it as a data substrate that connected these islands. And then ultimately we built a new data architecture, which will be built the right way, connected from the ground up. But we're a legacy company that's been around for 41 years now, and we had data in a bunch of places. It had to come together to get the value, and that's what we built.
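(As an illustration of that substrate idea, a minimal sketch: per-island adapters normalize records into one shape so downstream AI tooling can query across islands. The island names echo the services example above; the schemas and field names are hypothetical.)

```python
# Data-mesh-as-substrate sketch: each island keeps its own store and schema;
# adapters map records into a common shape for cross-island AI tools.
from dataclasses import dataclass

@dataclass
class MeshRecord:
    source: str  # which island the record came from
    key: str     # join key, e.g., a service tag
    text: str    # content for downstream search / RAG

def from_repair(row: dict) -> MeshRecord:
    return MeshRecord("repair", row["service_tag"], row["summary"])

def from_telemetry(row: dict) -> MeshRecord:
    return MeshRecord("telemetry", row["service_tag"], row["alert"])

islands = {
    "repair":    [{"service_tag": "ABC123", "summary": "Fan replaced"}],
    "telemetry": [{"service_tag": "ABC123", "alert": "Thermal warning"}],
}
adapters = {"repair": from_repair, "telemetry": from_telemetry}

# The "substrate": one view over disparate tools, without moving each store.
mesh = [adapters[name](row) for name, rows in islands.items() for row in rows]
print([r.text for r in mesh if r.key == "ABC123"])  # connected across islands
```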

>> So how did you harmonize that data so that you weren't sitting in a meeting arguing about the data? Was it AI that did that? Was it some other special sauce? >> Well, it's much of what I described today. We went on a very focused effort to clean it, and to make sure that once we cleaned it, we could ingest it into what we were building. And once we did that, it was pretty easy work. That work upfront, to be honest, was a little difficult. >> Heavy lift. Yeah. - Not as much as you might think, but it was a lift.

It didn't come for free, it didn't come without effort. But we were able to do that lift, clean it up, and now we don't have to worry about it getting contaminated again, because we built the data strategy going forward and that architecture brought it together. >> So you're ahead of the game, I would say, compared with most enterprises.

Given that it maybe wasn't that much of a heavy lift up front, but it was a lift, do you feel like technology will alleviate that? Maybe not eliminate it, but dramatically alleviate that upfront piece you had to go through, so that other enterprises, mainstream enterprises, small and medium-sized enterprises, can take similar advantage? >> I think to some degree the answer is yes. But in every piece of work we've done, take the content work we did, we had to cull the herd, so to speak, and take out old, outdated content, and we might have had a little bit of that. We've been known to write a lot of content with a lot of different authors, some of it not always right or completely right. So we had to curate the entire content library of our company first. And once we did that, exactly what you said is what's happening. The tools and capability are helping us.

We curate faster, we search faster, and we deliver it in a very specific way, in this case to our sales force. But the lift to get the crap out is not a trivial task. >> But then the value unlock is huge. >> Massive. Imagine, in the case of our sales content, we're putting in front of our salespeople exactly what they need when they have a question about a PowerScale or a PowerStore, what its features and capabilities are; hey, I'm up against competitor P, competitor N, how do I sell a PowerStore against that, da, da, da? 90 seconds. >> Done. Love that digital twin idea-

>> With a high degree of accuracy. >> A high degree of accuracy, because we've reined in who can author it, we've curated what was out there, and we have a process whereby only new stuff gets added. >> That's an example of productivity. >> Massive productivity. - All right, so now you're going

to bring that, you guys call it customer zero, where you're customer zero doing your own thing. How are you going to bring that to the enterprise, and how are you going to drive business growth on this? Because now you've got the chops on the motions, you've got the muscle, you did the work. Customers are sitting there going, "Okay, I've got to do the same thing."

" So from big to medium to small enterprises all are in the same challenge. They want to usher in AI, they have legacy, they have a lot of Dell too. So you know how to sell, you know how to market, you know how to service customers.

Now, what's the playbook? What's the strategy? How are you going to grow the market on the AI side of the enterprise? >> There are two things I would initially point to, and then a third I'll bring into the conversation. The first is from the infrastructure side, and again, it's probably not understood to the degree we'd like it to be, but everything we've done for the large-scale clusters in deploying this gear at scale, all of those learnings have been put into practice in what we do for the enterprise. That's one. Number two, everything I just described about what we've done internally, our professional services have been along for the ride. They've built the same capabilities we had to build. In fact, we had them do some of the initial assessments and then help us solve some of the initial problems.

So we've been training and teaching, and they've been participating in our own customer zero examples. So now they're more capable. You add a rich network of partners to help us. And then the last thing, which I'm an advocate of driving inside the company, and which I think is ultimately your question: how do we make AI simple for the enterprise? That's what we're doing. So think about AI in a box, RAG in a box, an agent in a box. Building an appliance, and you'd know this from some of our storage products in the past: how do you appliancize, if that's a word I can make up today, the infrastructure and the software stack so it ultimately shows up at a customer ready to deploy?

Reference designs, blueprints, all of that goes around it. Those are the capabilities we've been building, and we'll continue to roll them out.
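(A sketch of what a "RAG in a box" core loop might look like: retrieve, then build a grounded prompt. A real appliance would put a vector index and an LLM behind this interface; the scoring here is deliberately trivial and the documents are hypothetical.)

```python
# Minimal retrieve-then-prompt loop behind a "RAG in a box" appliance idea.
def overlap_score(query: str, doc: str) -> int:
    """Toy relevance: count shared lowercase words (stand-in for vector search)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

DOCS = [
    "PowerStore feature overview and competitive positioning",
    "PowerScale sizing guidance for AI workloads",
    "Service dispatch runbook for thermal alerts",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCS, key=lambda d: overlap_score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# An LLM (not included here) would take this grounded prompt and answer.
print(build_prompt("How do I position PowerStore against a competitor?"))
```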

>> And I think, as in the storage example, you put some semantic search close to the data. >> Yes, sir. - Smart move. >> And you guys are getting some great results. >> Well, Arthur talked about it in his keynote this morning as well: taking the Dell AI data platform and the Dell automation platform and adding these capabilities to provide more services to help our customers with the data, because that tends to be the most difficult part of attacking AI in business. >> I loved the keynote with Michael and Jensen. There was this moment, you might've seen the video, or you were obviously in the air flying back from Taiwan, where Michael and Jensen were riffing on their OG status.

That's my word; they didn't say they're OGs. They're self-declared OGs of the PC revolution. We've been through this: the PC revolution, the web, cloud, and now AI. And it was a moment that spoke to your experience, 35 years going to Taiwan, and how, as a leader now at Dell, this movie is running 100x faster. What's the scope of it?

I mean, we're in a PC-like revolution; it's going so fast, but it's bigger than all those combined. Even Jensen was kind of saying it's unbelievable. What's your view on this? Because you've seen those movies, you've lived through those waves and those transitions. Scope the market in terms of speed, velocity, and impact from your historical perspective.

>> I tend to think of it this way. If you don't like change and you don't like pace, you've picked the wrong category to be in, because that's what technology is. And if you thrive in that changing environment, you can rise to the occasion of the accelerating technology curves we're on... Remember, go back to the PC era, moving away from a terminal and a mainframe to PCs, in that case a desktop with a monochrome screen, and then to what we've built today.

How could you not love this stuff? >> That's great stuff. - You're a technologist. This is what you live for. And I actually like to think I've spent 38 years in this industry.

Much of what we've done is to build the foundation of what's happening today, and we're going to take advantage of that. And it's actually not linear; it's a nonlinear, exponential growth curve. What's not to like? >> That's awesome. The pace of play. Jeff, thanks for coming on theCUBE.

>> I have one more question. You got time? >> No, yeah, absolutely. - I always have time for you guys. >> I want to thank you. I want to ask you.

You basically explained why being in the large language model business is a really crummy business. You have a 10x increase in the cost of training models and four orders of magnitude of cost decrease per token in four years. I wrote about this, I got the chart, and I was like, wow. And you're better at math than I am, even though I was a math major, and I had it a little off, but wow, that is incredible. So basically you've got the cost of training going up 10x and the cost per token going down four orders of magnitude in four years.

>> Yeah. But token growth is five to six orders of magnitude. >> It's like being in the telco business. It may have been over the top. I mean basically there's-

>> The tsunami of tokens is coming. >> I don't know about that. The large language model business, I assume, is a good business, because think about it, we've really just scratched the surface with text. Multimodal is becoming really effective.

We have images, we have video, audio, incredible capability. We're harnessing tremendous knowledge. We're extracting that. Now, you have reasoning on top of that, which we really didn't think about holistically not too long ago.

You now have the ability, again, to bring in agentic and physical AI. You need the basis of the training of these models to be able to take advantage and do the inferencing at speed and at scale. >> Yeah. We'll put it this way: it's a business not for the faint of heart. You'd better have good CapEx chops and deep pockets to be in that business. >> Well, we're running as fast as we can here.
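(The economics in that exchange reduce to order-of-magnitude arithmetic; a sketch using only the figures quoted: cost per token down ~4 orders of magnitude, token volume up 5-6 orders, training cost up ~10x.)

```python
# Order-of-magnitude economics of the LLM business, per the figures above.
cost_per_token = 10 ** -4   # four orders of magnitude cheaper per token
volume_low     = 10 ** 5    # token growth, low end (five orders)
volume_high    = 10 ** 6    # token growth, high end (six orders)

revenue_low  = cost_per_token * volume_low    # net effect on revenue: 10x
revenue_high = cost_per_token * volume_high   # net effect on revenue: 100x
training_cost = 10                            # training cost: up ~10x

print(f"revenue effect: ~{revenue_low:.0f}x to ~{revenue_high:.0f}x")
print(f"training cost effect: ~{training_cost}x")
# High-end volume outruns training cost; low-end volume merely keeps pace,
# hence "not for the faint of heart": you need CapEx chops and deep pockets.
```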

Jeff, thank you for coming on theCUBE. Really appreciate it. Chief operating officer and vice chairman of Dell Technologies, sharing his insight. The pace of play is high. The innovation cycle is massive. Dave, I call it a supercycle. Some don't like that word. >> I like that. - We like it. Please say it.

We're moving as fast as we can. >> We're in it. Thanks for watching. >> Love supercycle. - Thank you. >> Thanks guys for having me.
