>> Welcome back. Coming to you live with theCUBE's coverage here at Dell Tech World 2025. I'm John Furrier with Dave Vellante, my co-host.
For 15 years we've been covering Dell Tech World. Been a great journey. This year marks a significant moment in time where AI infrastructure is crossing over and creating value in production. John Roese is here, global chief technology officer, which is his old title and still his current title, and the chief AI officer at Dell Technologies. John, you got three jobs now? Two jobs? >> A couple of jobs, yeah. It's good to have a few. I was getting bored.
>> Congratulations. - Driving productivity with AI. >> You bet. - Congratulations on the chief AI role, which, if AI works, should put you out of a job so you can get back to your old job being a- >> Well, as a funny story, before all of this gen AI stuff, we'd been doing AI for a long time, and at one point I had what was called a data office. And I hired a bunch of people who were experts in data, and their job, the actual mission of the data office, was to build a technology prediction machine to put me out of a job.
And so for five years we actually did that and we built very advanced systems. And in fact, one of the funny things is, before ChatGPT, I had put a new lead into that role and we were talking about evolving the architecture. I still remember to this day, he looked at me... This is a very, very smart data guy, and he goes, "Why would we not just use a large language model?" Now today, that makes sense. But before ChatGPT, the guy coming up and saying the right technology to solve this problem is a large language model-
>> Sorry, what year was this? - This was the year before ChatGPT. >> So 2021? >> 2021, yeah. And I still remember that going, "Okay. That was very insightful." And then all of a sudden we got a lot of really good large language models. Unfortunately, what happened is the CTO job changed: while all the problems that we could solve are still there, we now have new problems.
>> It's going to be interesting to see. We've been riffing on our CubePod every Friday, if you're listening, check it out. The chief AI officer is a job where, if AI works, it should be infused, like the chief digital officer. Remember back in the day?
>> Oh, yeah. - That now is everywhere. So there's no need for a chief digital officer. >> Well, I have been in a number of meetings with Michael and Jeff and customers, and it's actually a weird discussion, because the customer will say, "Oh, it's great you have a chief AI officer." They hadn't set one up. "How's that going?"
" And inadvertently we will always say, "This job is time-bounded. You shouldn't need me doing that job once the company is fully into the AI era. " And we firmly believe that. That's one
of the reasons why I'm both, because you'll need a CTO and I've got plenty to do there. But once you actually get a company fully into the AI era, which does take a while, it'll be several more years, the idea of there being a single person responsible for doing that makes no sense. So I hope that one day in the future I can go back to being the CTO. >> So that part of your title, you are an accelerant? >> I'm an accelerant. I am a catalyst to make change. And once that change occurs and becomes the stable status quo, you do not need the catalyst anymore. >> Interesting. We've talked about it.
>> You do need the catalyst. We see so many enterprises that have not really been able to navigate that change, because while everybody believes it should happen, there isn't a focal point to actually make the hard decisions and accelerate in places where people are uncomfortable. And honestly, if you can't get the change within your organization to happen, you will never really get into production at scale with AI. >> This is a great point. I want to dig into that
because you guys are going to make AI real. That's the main message. For this kind of decision, you're technical. You're not a politician. >> No. - But you kind of have to be. You're more of a benevolent dictator, I would say, but the role is really to set up the standards and get AI infused in. So you're really shepherding AI in. Talk about how you guys are doing it and how you're doing the job, because this is a big conversation.
People think it's a new CIO. No. It's a little bit different. Talk about what you're doing, your job. >> I mean, there are implications on how IT works. But the real learning for us was, look, at the end of the day, if you really want to embrace and deliver AI as part of an enterprise, which is where the magic is, because if you do that, you can materially change the performance of your business, the productivity of your business.
But if you truly want to do that, by definition... Because remember, the definition of enterprise AI is the application of AI against your most impactful processes in the most important parts of your business to improve your productivity. That's the definition. So if you want to do that, by definition, you have to be willing to change your processes.
It's not about the AI technology; unless you can adjust how you work, you can't really apply AI to anything useful. And so what we found is that most of the chief AI officer job is not an evaluation of technology or selection of technology. That's an afterthought. It's helping people navigate
to this point where they can identify the processes worth changing and then correlate that to an AI path to make that change happen. So you kind of have to deal with the psychological side of getting people to work differently, to organize differently, rather than just implementing technology. >> Okay. So you as the chief AI officer are not an expert on every process in the company.
The individuals that are closest to that process are. >> Bingo. - And they're wed sometimes to that process. >> Absolutely. - What technique did you use to get to what the right process should be? >> Let me give you a couple of learnings, and it's not a single technique, but it's an approach. The first is you have to separate the AI project from which process you're applying it to. Now, you're absolutely correct. I know very little about
how our inside sales organization works or how the supply chain really works. I am not an expert in those things. I can learn it. But what we realized is, when you engage with the business unit, the most important thing to do is not to jump directly into the technology but to start with this very clear goal, which is to find a process that is worth changing that will have a material impact on your business. At Dell, we picked these four big areas: supply chain, services, sales, engineering.
Now, we went inside of them and discovered that, when you start looking at the processes of how our salespeople work, they spend 40% of their time preparing for sales meetings. And it was mostly all content navigation. So the data was very obvious. Oh, I didn't have to understand sales.
I had to get the data about what the salespeople were doing. And then you look for these hotspots where there is an appetite and an opportunity to apply AI to dramatically change that particular metric. Once you did that, the result is now there's like 20% more time for salespeople to be in front of customers. That seems like a good outcome. I act as a facilitator to force people to say, "Look, you first have to identify where within the overall process of that function there is an opportunity to improve it by using this technology." By the way, you may have noticed we never talked about a product or a technology at that point. We talked about a process. Once you know that, then you can go and have a discussion about technology.
In fact, we have an approval process, and it's actually pretty interesting. It starts with: do you have a material ROI? And if the answer is you can't generate significant revenue, change the profit margin, take significant cost out, or reduce regulatory risk that matters, we're not interested. You are in experimentation. You stay there forever.
If you can prove that to me, the second gate is: are you applying or trying to get that ROI on a process we want going forward? And that was critical, because what we are not willing to do, and people do this all the time, is use AI to cover a bad process. I have this crappy structure that doesn't work, and let me put an AI thing on top of it and the problem will go away. We don't do that. >> No, paving the cow path.
>> Yeah, no lipstick on pigs. None of that. We do not do it. And the way we don't do it is that we ask: are we actually applying this to a process that we think is a good process? If it isn't, you go through process re-engineering. And then the third piece, which helps us kind of decouple these, is, even though we've experienced it... Everybody who comes to me with an AI idea also brings their own opinion about the tech stack.
That's a terrible thing because if we implemented all of them... We've done about 300 AI tools. So we actually decouple that. You can come with an opinion about the tech stack, but the third phase is an architectural review board, which is AI engineering and enterprise architecture sitting down with you and saying, "We appreciate that you like this startup.
We're not going to use that because we already have a tool that does this. And while we want to solve your problem, you as the business owner do not get to choose the technical architecture of the company." And by decoupling those two, you get to a point where you start getting repeatability and scale. All of those are techniques to guide people on this journey of finding things that matter and doing them in the right way.
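As an aside for readers who want the shape of that intake flow: here is a minimal sketch in Python of the three gates Roese describes (material ROI, a process worth keeping, and an architecture review). Every name, threshold, and category below is hypothetical, invented purely for illustration; this is not Dell's actual system.

```python
# Hypothetical sketch of the three-gate intake flow described above.
# All names, thresholds, and categories are invented for illustration.
from dataclasses import dataclass

@dataclass
class AIProposal:
    name: str
    annual_roi_usd: float    # projected revenue lift, cost out, or risk reduction
    process_is_keeper: bool  # is the target process one we want going forward?
    proposed_stack: str      # the sponsor's preferred tooling (an opinion, not a decision)

ROI_THRESHOLD_USD = 5_000_000  # hypothetical materiality bar
APPROVED_STACK = {"rag-platform", "coding-assistant", "agent-framework"}

def review(p: AIProposal) -> str:
    # Gate 1: material ROI, or it stays in experimentation.
    if p.annual_roi_usd < ROI_THRESHOLD_USD:
        return "stay in experimentation"
    # Gate 2: don't put AI on top of a process you don't want to keep.
    if not p.process_is_keeper:
        return "re-engineer the process first"
    # Gate 3: the architecture review board decides the stack, not the sponsor.
    if p.proposed_stack not in APPROVED_STACK:
        return "approved, on the standard stack (sponsor's tool rejected)"
    return "approved"

print(review(AIProposal("sales-chat", 20_000_000, True, "rag-platform")))
```

The ordering is the point he makes: the stack question is deliberately last.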
>> So just one follow-up on that, if I may. How scientific were you in terms of determining where the problems were, like X percent of the sales force's time? Or did you just ask them? >> No, no, it's data-driven. We gathered data, and every time the data led us there. In supply chain and services, we already understood that really well.
They're very heavily instrumented. With sales it wasn't. So we actually spent months doing a process diagnostic. We did the day in the life of 20,000 sellers. >> So time and motion studies? >> You bet. I have more data about our salespeople than they could imagine. And because we have that data, we could look at it and find the patterns of where the actual inefficiencies were.
And honestly, if you're clear-eyed and you look at the data, it doesn't matter who you are, you're going to go, "Right there. Content preparation." That's the way it should start. We actually discovered another thing. We realized that that wasn't the only problem. That was the big glaring one that we went after with Dell Sales Chat. But right behind it, there were things like visual coaching.
We realized in our contact centers and inside sales force, there's a huge amount of human energy that goes into providing feedback to our inside salespeople, to our contact center attendants. And it's done human to human. Well, it turns out there are digital humans and avatars and all kinds of tools that can make that an AI project. And if you think about our scale, if I take 10% of the effort out of running an inside sales force or a contact center, that's material.
And if I have a technology that can do it, I should go do that. But it's the data that led us there. It's not somebody's opinion. >> Did you use process mining tools? >> Yes. We already had... - Some stuff you had.
>> The reason that services, for instance, didn't have to go through this exercise is they already do that. They digitized all their processes, so we knew exactly what their processes were, we knew how they worked, we knew what the inefficiencies were. >> You had the data. - I tell people, do not jump to a technical answer. Your first problem is to figure out where to apply it, and you shouldn't make that somebody's opinion.
You should go look at the data, and the data will tell you where the inefficiencies live if you just bother to look at it. Once you do that, interestingly enough, when you apply AI to it, you have a baseline, you know what the improvement is going to be, and you can measure it. So then you know the thing is successful.
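To make that data-led diagnostic concrete, here is a minimal sketch of the hotspot-and-baseline analysis Roese describes, using a toy activity log. The columns, numbers, and activity names are hypothetical, invented for illustration, not Dell's actual seller data.

```python
# Minimal sketch of a "day in the life" process diagnostic.
# The activity log and column names are hypothetical, invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "seller_id": [1, 1, 1, 2, 2, 2],
    "activity":  ["content_prep", "customer_meeting", "crm_updates",
                  "content_prep", "customer_meeting", "crm_updates"],
    "hours":     [16, 12, 8, 18, 10, 9],
})

# Share of total seller time by activity: the "hotspot" view.
share = log.groupby("activity")["hours"].sum()
share = (share / share.sum()).sort_values(ascending=False)
print(share)  # content_prep surfaces as the dominant bucket in this toy data

# Baseline for the metric you actually care about, so the post-deployment
# improvement (e.g., more customer-facing time) is measurable against it.
baseline = log.loc[log["activity"] == "customer_meeting", "hours"].mean()
print(f"baseline customer-facing hours per seller: {baseline:.1f}")
```

The baseline is what turns "we applied AI" into a measurable claim like the 20% figure above.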
>> All right, so now let's translate this into the big opportunity, which Michael talked about in his keynote. Enterprise AI is huge. The data's all there. It's unrealized value waiting on extraction. POC prison. You talk about it all the time, many times with me. I know that for sure. The tech stack is not well understood at the AI factory level.
It's emerging. Platform engineering, Kubernetes, machine learning for fraud detection for banks. All that is in place. Things are coming into production.
How do you accelerate that AI stack for those processes, and how do people know where to add value? What are your thoughts on this? I know it's kind of emerging, but how are you framing it? >> Well, okay, so it depends on... There are two parts to that question. How do you accelerate the stack is what's happening right now.
I mean, you saw our announcements about Cohere. You have announcements about Glean. What we've realized is that there is a bias: the way you can move fast as an enterprise is to consume technology, not to invent it yourself.
Now, we are not at a point, and we will never be at a point, where enterprise AI is completely turnkey, for one simple reason: enterprise AI is applying this technology to your company, which means it will be your data, your process, your product, your people. And so at some point it does become a snowflake. But if we can move the line up and progressively make more of the decision-making and the architecture standardized to some degree, or worked out in advance, that just speeds the process. The AI factory journey, as you know, changes. We're at 2.0. But what we've done is covered more surface area, absorbed more of the complexity of infrastructure into that discussion.
But then once you have an AI factory, what do you run on it? Cohere is a great example. That's not the AI factory, that's the workload you run on the AI factory and now you can consume it kind of as an appliance. It's standardized. Coding assistants are heading in that direction. And so there's this kind of natural inertia in the industry where every week more and more of the underlying work, whether it's the infrastructure or the fundamental platforms are becoming consumable technologies.
That I think is probably the biggest accelerant. The add-value discussion depends on whether you're the industry or the customer. If you're the customer, the value you want to create is deciding what to do: linking your data, picking your process, not building Kubernetes. You can buy Kubernetes infrastructure.
You don't need to build it from scratch. And so I think from a customer perspective, their value creation is actually as the orchestrator, the organizer, the owner of where to target it. And if they spend any time on any other thing, it's not good value add. From an industry perspective, it turns out that we benefit from the fact that enterprises are incredibly diverse.
And so whether it's vertical industry or a particular use case, there's a huge opportunity for people to come in and figure out how to get a data flow to work in healthcare, or how to connect a particular data architecture, because we have lots of them, to a knowledge graph or to a vector database. And so we will never fully standardize those, because the diversity of how enterprises are implemented is so broad that there's this layer of value-added integration and value-added attachment that happens. >> I was talking with Doug at one of the analyst briefings today about services. He runs the services group. They have a hard job because you have two things going on, all the stuff you're talking about.
And then as people start using RAG, start using some of these tokens, and now with agents, the token demands are increasing. The compute is increasing, hence more AI factories. So the question to you is: as agent adoption drives more tokens, that's going to be a predictor of cluster size. So how should customers think about that in the process? Because the CapEx and the OpEx spend is going to be around the AI factory scoping? >> Yeah, so I'm the chief AI officer, and in fact Doug and I, a couple weeks ago, had to tell Jeff and Michael how much money we're going to need for the next few years.
Capital spend, which is code for how many GPUs we're going to have to put into production. That is not a completely scientific exercise today. And so the way we ran it is we did a top-down. We looked at what we knew and extrapolated it forward a year plus. And it led us to a number, and that number seems reasonable.
What we also have now begun to do is build a mathematical structure underneath it. Because what we realized is, when you try to predict demand, think about all the moving parts. There are supply and demand moving parts. The supply parts are, "Okay, how much performance do I get on the next and the next GPU cycle?" Can I use things like key-value cache optimizations like Dynamo, which change things dramatically? Can I use things like GPU-as-a-service and other capabilities to create virtualization? What level of efficiency do I think I can push into the existing GPU cluster so it's not just brute force? And there are a ton of levers to pull there. That's the kind of supply side.
The demand side is way more complex, because today, if you're just talking about a standard off-the-shelf RAG-based chatbot, it's pretty easy to characterize for non-multimodal content. Well, now you have multimodal content. Next thing you know, you have reasoning models.
They happen to have a very different behavior with respect to how many tokens they use. Then you move to agentic, but then you have counteractions to that. If I quantize the model down, if I do some smart things about distributing my agents, if I have reasoning models that are done in different ways. >> So there's probability involved in the usage patterns. >> Absolutely. And we've been saying this for ages that the actual way to think about an AI system is it is a probabilistic problem.
By the way, that's why I think there's so much affinity with quantum, because a quantum computer is a probabilistic computer. So inevitably these two things are going to come together, but right now this is not a predictable, deterministic type of information. This is probabilities. And you've got to have the ability to look at the parameters that influence the probabilities. We're doing that work. The good news, back to the AI factory discussion: as we figure it out, we will just include it in the AI factory architecture.
We will share this with our customers. You don't have to figure it out yourself, but what you will end up doing is not just having a top-down guess. You will have math that actually helps you understand based on what AI technology you're using and what infrastructure you're building, how these things intersect. >> And then you'll double it.
>> And then you might double it. Or you might halve it, because this is one of... It's S-curves. It's this crazy S-curve model where we spike. >> Oh, interesting. - Well, remember, this is a technology like every other technology that always goes through two oscillating cycles. There is the cycle of all-out innovation at any cost, and then there is a pause that reflects on that and optimizes it.
So we see things go like this and then they drop down. And then we see them go like this; net-net, they're going like that. But we have to kind of manage those cycles. If you don't do both, if you don't manage the growth and the optimization cycle, you'll run out of capacity very quickly.
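For readers who want to see what "probabilities, not predictions" can look like in practice, here is a toy Monte Carlo sketch of token-demand-driven cluster sizing. Every distribution, rate, and mix below is a made-up assumption for illustration, not Dell's model or real GPU performance data; the point is only that demand levers (workload mix, tokens per request) and supply levers (throughput, efficiency gains from things like KV-cache reuse and quantization) combine into a distribution of GPU requirements, not a single number.

```python
# Toy Monte Carlo sketch of probabilistic GPU capacity planning.
# All distributions and rates are hypothetical, invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulated planning scenarios

# Demand side: workload mix and tokens per request differ wildly by class.
daily_requests = rng.lognormal(mean=np.log(2e6), sigma=0.3, size=N)
mix_agentic = rng.uniform(0.05, 0.40, size=N)         # agentic share of traffic
tokens_chat = rng.lognormal(np.log(2_000), 0.4, N)    # RAG chatbot tokens/request
tokens_agent = rng.lognormal(np.log(50_000), 0.6, N)  # agentic/reasoning tokens/request
daily_tokens = daily_requests * ((1 - mix_agentic) * tokens_chat
                                 + mix_agentic * tokens_agent)

# Supply side: base throughput per GPU, modulated by uncertain efficiency
# levers (KV-cache reuse, quantization, smarter scheduling).
base_tps_per_gpu = 4_000                    # tokens/sec per GPU, hypothetical
efficiency = rng.uniform(1.0, 2.5, size=N)  # multiplier from the levers
gpus_needed = daily_tokens / (base_tps_per_gpu * efficiency * 86_400)

print(f"P50 GPUs: {np.percentile(gpus_needed, 50):,.0f}")
print(f"P90 GPUs: {np.percentile(gpus_needed, 90):,.0f}")
```

"Doubling it" then becomes a question of which percentile of that distribution you choose to provision for.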
>> I mean, I love that whole token discussion. And then the factory configurations. DeepSeek showed that innovation can come in- >> You bet. - ... and change the S-curve.
>> It was one of those. - So as a customer, where's the problem spot? This is the final question. I know you have another meeting to go to, but I want to get it in there. What's the blind spot: under-predicting token demand or over-provisioning the factory? Or is that even a word? >> No, I don't think you'll have any problem if you overprovision your AI factory.
Our experience... To give you an idea, look, if enterprise AI is all about applying AI to the processes of your company, a typical large global multinational enterprise might have a million processes to run their business. At Dell, we probably picked off like the top 20.
So I've got a whole lot of surface area to go after here, and if I keep moving fast, I will use that infrastructure. I'm not sure I could pre-build it fast enough to have it sit there idle forever. So that's not the problem. >> It's Jensen's law. You buy more, you make more. >> You buy more, you make more. - John, it's always great to have you on theCUBE, and I think the services side is a good tie-in there, because they go together; there's absolutely a demand and supply side.
And if you get it wrong on the token side, you could end up not buying enough factory and having to reconfigure. That's a problem too. >> That's the problem. If you didn't understand agentic demands and you built an AI factory that could not scale to support it, you can't do agentic. And that's a big deal if that's the path for your profitability and your impact on your business. >> Shout out to your sit-down with Neil today. That was really interesting.
>> That was fun. - He brought up ELIZA. >> Yes. I was thrilled about that. >> Which was amazing. - Well, here's a statistic for you. Pattie Maes, who's one of the professors at MIT Media Lab, she and I were doing a podcast, and she's very interesting. She's doing all this memory augmentation. Very cool AI stuff.
But she shared with me, she wrote the seminal paper describing AI agents 30 years ago at MIT. 30 years ago in her PhD, I think she talked about this idea of autonomous agents. And so it is fascinating. ELIZA was like 20 years ago. This is not entirely- >> Even more. In the '60s, I think.
>> Yeah, I think it was '60s actually. So this is not new, but we're in a new era where we have tools and capabilities we've never had before and it is accelerating at just an amazing pace right now. >> Yeah, it just goes to show you where AI academically was way ahead. Now we got the compute. You got the clustered systems. You got the AI factories. John, great to see you. >> Great. - Thanks for sharing the real-
time insights on theCUBE here. Action-packed content. >> Great guests as always. - I'm John Furrier with Dave Vellante.
Real-time insights here on theCUBE from the show floor, Dell Tech World 2025. Thanks for watching.