Welcome to HPC Tech Talks. Thanks for joining us. My name is Tony Ray and I'm going to be your host.
Today we're going to talk about modular data centers, and to do that, we've invited a couple of guests to speak with us. Tyler Duncan and Ty Schmidt, two guys from Dell Technologies who are totally immersed in modular data centers, are going to share a lot of information about them with us today. So guys, welcome to the show. Thanks, Tony. Thank you. So let's start out, Tyler, why don't you talk a little bit about what a modular data center is anyway? Yeah, sure, Tony.
So a modular data center is basically a data center like any other, except that we're building it in a factory and doing more processes in parallel, so we can get a more consistent delivery of the solution and deliver it in a faster, more concise manner. I would add, Tony, there's a factory-build aspect to it. It's factory integrated and factory tested. So basically you're getting the data center complete and turnkey, with all power, cooling, controls, security, accessibility, and monitoring, full of the IT gear, fully validated, all as a factory-built, factory-integrated solution. Okay. All right. And Ty, why don't you tell me: some customers we talk to think a container is one of those things you see on a truck, and I think this is something different than that.
Can you tell us the difference? Yeah, absolutely. And by the way, shipping containers are still used for modular data centers, really when mobility or disaster recovery is a requirement. But the history of modular data centers, yeah, it started with a repurposed shipping container, a 20-foot or 40-foot ISO shipping container. The first ones were literally once-used shipping containers that were converted into modular data centers. And what that meant was taking a container, cutting holes in it, cutting walls out, taking the floor out, basically disassembling it almost down to its frame and then reassembling and rebuilding it based on whatever the requirements were.
What made it nice was that it was a known shipping solution, meaning the ISO container blocks, all of the rigging gear, the things that were required to move heavy containers, were part of that design. And so you didn't have to solve for that necessarily. What made it difficult was that you were constrained by the footprint, height, width, and length of the shipping container. Dell's first modular data centers, dating back to 2008 and 2009, were in effect container data centers.
And we learned a lot about the supply chain for that. We learned a lot about the difficulties involved in converting a shipping container and, quite frankly, how difficult it was to work within those constraints. And so we learned early on that we can deliver the benefit, the value, of a containerized data center in a factory-built, purpose-designed modular data center.
And so what you see is a history where the first couple of years were all shipping containers. Many companies were testing this. The shipping container supply chain is very robust, but the modified shipping container as a data center was not. And so we had to expend a lot of resources to make that work, and we were very successful. We learned a lot from it, but it was with the next generation that we said, hey, we're going to take a different approach.
We're going to look at companies that specialize in air handling equipment, structural builds, things that can be built in a factory but not necessarily the way you would build a shipping container. And so everything from roughly late 2009, early 2010 until today is made in that sense. So we have multiple suppliers and partners that specialize in power, cooling, or integration, that we use based on our designs, and they build to our requirements. And so what you see are form factors that are wider than containers, longer than containers, sometimes taller; they come together in various pieces that form a modular data center, which is different from, call it, a single shipping container.
And so there's been a lot of evolution, and as the supply chain has matured and as the technologies that go inside of these have adapted, we're able to use that kind of flexible modular approach, not constrained by a shipping container, to provide the maximum value to our customers. Okay. All right. Thanks. Boy, a lot of sophisticated stuff and a lot of knowledge and experience has gone into building these over time. That's fantastic. Tyler, can you talk a little bit about what different kinds of modular data centers we have? We have small ones, big ones, 20-foot ones, 40-foot ones.
What kinds of data centers do we have? Yeah, so we have really large and we have really small. I know that's all very subjective, but we really focus on the IT first, so we typically state it as more of a feature set or how many racks of IT. Our really large modules will go up to about a hundred racks of IT in a single continuous space. Those do require more assembly on site, just because of the number of units we have to break that into in order to still ship it over the road. Our most common size we would call our Click MDC, and that one goes in increments of five 600-millimeter-wide racks.
And so it'll go 5, 10, 15, or 20 racks in a single deployment. You can also put larger racks in there; you just get slightly fewer total racks in the space.
And then in our Micro series we have what we call our Micro 815, which is a single-rack module, and then we have a Micro 415, which is a half rack. So we have everything from a half rack to a hundred racks. And then, as I mentioned, there are other ways we look at it too. We'll look at power.
So we go down to six kilowatts for a Micro 415, and then we can go up to two and a half or three megawatts on the larger ones, with some of those exceeding a hundred kilowatts per rack. We'll also do things like TEMPEST and ICD 705 SCIF ratings for federal customers. So lots of different feature sets really define what those modules end up being. All right.
So what about cooling, liquid or air? Can customers choose between the two, or how do you make decisions on that, and do you offer liquid and cooling? Excuse me, liquid and air? Yeah, all of the above. So I'll say one of the things we really focus on is efficiency as a top priority in our design. We've done, I'd say, around 750,000 to 800,000 servers to date in one hundred percent outside-air-cooled modules. So what we really try to do is figure out how to give every last watt that we can back to the IT.
And so that means trying not to dedicate that power to mechanical cooling. We do mechanically cooled systems, but if we don't have to, we try to utilize more free cooling methods, and that applies to both liquid and air. Liquid cooling is definitely a newer thing that we're doing, but we're doing liquid-cooled racks up to a hundred kilowatts per rack now, or I say up to, but really exceeding that by a little bit.
On the liquid cooling piece it's usually direct to chip. And then we'll do air cooling that can go up to somewhere in that 30 to 50 kilowatt range. It really depends on the total configuration what the maximum capacity is that can be supported with each technology. Okay, that's great. Now I was just thinking, you mentioned something about the power and things like that.
I've read somewhere, Ty, maybe you could answer this, something about efficiency and about the PUE difference between a data center and a modular data center. Are there differences in the efficiencies? So PUE, by the way, is an awesome metric, introduced roughly around 2005, 2006. It is basically the input power from the utility divided by the power that is delivered to the IT equipment.
Very simple, very easy to describe, call it, consume and use, but it's also probably one of the most misused metrics in the industry. And why I say that is you can't just throw around PUE values; for all data centers, including modular data centers, PUE operates on a curve. There are a lot of things that factor into it: environment, workload, the efficiency of the components themselves. We are completely sensitive to that, and we measure and have third-party companies come and validate the requirements and the performance of our modular data centers.
Typically speaking, a really good data center today is going to be somewhere between 1.25 and 1.5. I think the industry average, if there is such a thing, is around 1.5, which is significantly better than it was back in the 2006 to 2009 timeframe.
We came out of the chute in 2010 with our one hundred percent outside-air-cooled modules, and we measured and validated that we were operating in the 1.02 to 1.05 range. And that was back when data centers, if they were even measuring it, were probably close to 2 or 2.5 PUE. And so just as an example, with a 1.5 PUE data center,
if you've got six megawatts from the utility, that means four megawatts is being delivered to IT equipment. What's more interesting to me, though, is that PUE doesn't really describe or measure the effectiveness of how that IT power is being used. And so we are very focused on that. Not only are we looking at reducing the amount of overhead and increasing the efficiency of our solution, not just, call it, at the design point but across the entire operational modes of the solution, but we're also looking at how effectively that power is being utilized by the IT gear. So it's not just PUE. PUE is a very important metric, the industry uses it, we use it, but there's a lot more that measures carbon utilization, water utilization, and IT utilization effectiveness. And so all of these metrics are part of our behavior.
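To make that arithmetic concrete, here is a minimal sketch of the PUE relationship described above. It is purely illustrative, not Dell tooling; the function names are hypothetical.

```python
def pue(total_facility_power_mw: float, it_power_mw: float) -> float:
    """Power Usage Effectiveness: utility input power divided by IT power."""
    return total_facility_power_mw / it_power_mw

def it_power_from_pue(total_facility_power_mw: float, pue_value: float) -> float:
    """Given a utility feed and a PUE, the power that reaches the IT gear."""
    return total_facility_power_mw / pue_value

# The example from the discussion: a 1.5 PUE facility fed with 6 MW
# delivers 6 / 1.5 = 4 MW to the IT equipment.
print(it_power_from_pue(6.0, 1.5))            # 4.0 MW
# A 1.05 PUE outside-air-cooled module on the same 6 MW feed
# would deliver roughly 5.7 MW to IT instead.
print(round(it_power_from_pue(6.0, 1.05), 2)) # 5.71 MW
```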
And it wasn't mentioned earlier, but I will say that Tyler and myself come from a lineage of server designers, and so we are very aware of and understand what goes into server effectiveness and server performance. We're very closely related to the teams that are developing our server, storage, and networking products. And so when we're looking at a solution for a customer, we're looking at it from the utility and the environment all the way down to the component level, and looking at how we can optimize the total solution. So you could work with a customer who's concerned about sustainability and work with them on their energy requirements and things like that? Yep, absolutely.
Yeah. And kind of a selfless plug here for Tyler, but Tyler authored an awesome white paper that's on our website, dell.com, on our modular data center site, that actually lays out a case study on sustainability. And you can look at it through two different lenses. At the end of the day, if you make something more efficient, you can get more IT gear into it; you may still consume the same amount of power, but more of that power is going to IT gear, which is typically what customers want. So there's a carbon footprint aspect to sustainability, and with modular data centers optimized as a system, there's a reduction of materials and a reduction of the logistical costs associated with getting all this gear on site.
There's a measurable carbon footprint improvement or reduction, and we qualify that for customers on a case-by-case basis. And at the end of the day, if you're getting more performance per watt provided, that's a good thing as well. Most customers don't want to strand power; they want to use every ounce of power they're getting for performance. So we're very mindful of that as well. And then, with rising power prices, how we optimize not only the design point but also what happens during lower-performing or lower-workload states to reduce power as much as possible, so that you're reducing your carbon footprint and also reducing your opex, is absolutely a top priority for us. Yeah, that's a great green story.
Thanks for that. So Tyler, let me ask you something. Where can I put one of these data centers? Can I put it in my driveway? Can I put it on top of a building? Can I put it in a desert? I mean, where can we put these things? Yeah, your HOA may have a problem if you put it in the driveway, but maybe.
You're right, maybe I'll skip that. What about on the roof? We have done on the roof. So I would say that typically all of our modules are outdoor rated, so if we can, a majority of the time we will put them outside.
It prevents having a building around it that's just a secondary shell, and it avoids the complexities of trying to fit something into a space and work around pillars, support beams, and everything else. So we are often deploying outside, in a parking lot or an open field.
We have deployed on roofs, we've deployed in parking garages. And as far as locations, you mentioned the desert. We've deployed north of the Arctic Circle and in the desert in Dubai, as some of the different extremes. Wow. Yeah, I would add to that, Tony. It's very case by case, and we would ideally like to be involved with a customer at the onset, when they're scoping, first of all, what is a modular data center, what can I achieve with this, but also actually walking sites with them. They may have an adjacent parking lot, they may have a greenfield, they may be looking at their parking garage or their roof.
And so the sooner we can be involved and help them understand the trade-offs, quite frankly, of those different sites, the better. But yeah, we've deployed on top of three-story buildings and in parking garages. I would say 80 to 90 percent of them are typically out on a slab adjacent to a building or adjacent to an existing data center. Okay.
All right. So Tyler, if I want one of these modular data centers, do I just go to Dell and Dell does the whole thing? We agree on what the spec is and Dell puts it in. Is it that simple? Yeah, well, oversimplified, it is that simple. So I will say yes, and there are ways to start that process.
First off, a lot of our customers will work with their account team. You can also go to dell.com, and under enterprise infrastructure you're going to see servers, storage, networking, and modular data centers, as well as a few others. So there are a few different ways to initiate that first contact. But I will say we started out doing hyperscale deployments for really large customers.
Everything was fully custom. In the last few years, we've really changed that model. While we still do custom work, we have now taken what we've learned over the last 15 or 16 years and made more modular components, similar to what the rack-mounted server did to the mainframe. We started standardizing the components inside and making it configurable, so customers can come to us, and myself and others on my team will work directly with them on the challenges they're trying to overcome.
And we'll try to figure out how to configure the solution to meet their needs and overcome the challenges they have with deploying IT, helping them deploy more IT. That's really what we do. So there is an engagement process. We agree on the specs, and we try to provide the trade-offs, not just here are all the features you can get, but also what are the costs, what are the impacts. It's not always a dollar aspect.
What are the trade-offs? And then we try to guide them through that process and ultimately agree on what that is, and then release to manufacturing. I mean, Tyler, wouldn't you say that although it's standards driven, call it consistency driven, there's still a lot of flexibility? There are a lot of variables involved, whether it's the amount of power, the amount of cooling, or the number of racks.
And so that inherently requires a level of engagement that's beyond just point-and-click ordering of a module. I would say that the smaller the module, the fewer variables are involved, and so that process is a little bit more streamlined. But typically speaking, it's starting with your account team or visiting the Dell website and then going from there. We also, oh, I'm sorry. I was just going to say, one of the big things that
we try to do is provide a little bit of the art of what's possible. I'll say it's not uncommon for a lot of our customers to have a mental constraint based on their previous experience. They may be thinking, well, I just can't exceed a certain capacity per rack or a certain amount of airflow, because that was inherent in the design of the data centers they already have. And we've done a lot to remove a lot of those constraints. So there is an aspect of trying to understand what the challenge statements are, read between the lines on some of them, and then really help them with what is possible.
All right. Now I've got a question for you, Ty. Gen AI, it's everywhere. We're all going to be replaced by chatbots, right? So the thing is this: we're working a lot with customers, and they're running into challenges.
A lot of the data centers that they're trying to get into don't have enough power to supply the training and the inferencing for generative AI. So let's say they need something around five megawatts, 10 megawatts, maybe even more like 30 megawatts: would a modular data center be an option? For customers exploring gen AI and siting their equipment, would this be an option? Absolutely. I tend to look at it through a couple of different lenses here, just to address your question. What we're finding are customers that, although they may have available utility power and maybe even excess capacity from a cooling standpoint, what they don't have is the technology to provide the solution, whether it's within their room or within their rack. So this move from 10, 15, 20 kilowatts per rack to 50, 60, 70, 100, 150 kilowatts per rack, that's a lot of disruption.
There's a huge transformational effect, and quite frankly, it can be very disruptive to their current environment. So many times they're looking at it as, okay, we are looking for the solution.
We don't have an easy route or a lower-risk route for adopting liquid cooling or the amount of density we want to explore. Is this achievable by putting something like this adjacent to our existing data center, or maybe in a different location? That's a very important line of discussion to have. Whether you have enough power from the utility is kind of a separate discussion, and whether that power goes to a data center or to a modular data center, the IT equipment doesn't necessarily care, right? So we like to engage customers early on and up front on where that power is coming from, and help them explore and understand what can be achieved, in terms of IT critical load, under that amount of power provided. And so the short answer to your question is yes, but there's a lot involved in that. It's understanding the customer requirements, everything from day-one usage through refresh.
Is this going to be used for multiple refresh cycles? That plays into the architecture they need to select. And we walk through all of the security and accessibility requirements, and whether there are size constraints, even if it is outside; we have to understand all of this, right? But we have solutions for it; it's absolutely a solution. We've been shipping modular data centers that are in excess of a megawatt apiece and in excess of 50 kilowatts a rack for over 10 years. And so what we see ahead of us doesn't scare us. I would say what does require a real level of sensitivity is having the deep discussion with all aspects of the customer, the operations team, the site team, the IT folks, the data center folks, and really understanding all of the various sets of requirements that they're sensitive to and making sure they're being accomplished by our solution.
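To illustrate the kind of back-of-the-envelope sizing reasoning described here, below is a minimal sketch that ties a utility feed, an assumed PUE, and a target rack density together. The feed size, PUE, and densities are hypothetical, not figures from any specific Dell engagement.

```python
# Back-of-the-envelope sizing sketch: how much IT critical load fits under a
# utility feed at an assumed PUE, and how many racks that supports at different
# densities. All numbers are hypothetical and for illustration only.

def it_critical_load_kw(utility_feed_kw: float, assumed_pue: float) -> float:
    """Portion of the utility feed available to IT gear at a given PUE."""
    return utility_feed_kw / assumed_pue

def racks_supported(it_load_kw: float, kw_per_rack: float) -> int:
    """Whole racks that can be powered at a given per-rack density."""
    return int(it_load_kw // kw_per_rack)

utility_feed_kw = 5_000.0  # a hypothetical 5 MW feed for a gen AI deployment
it_kw = it_critical_load_kw(utility_feed_kw, assumed_pue=1.1)

for density_kw in (20, 50, 100, 150):  # air-cooled through direct-to-chip liquid densities
    print(f"{density_kw:>3} kW/rack -> {racks_supported(it_kw, density_kw)} racks")
```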
Yeah, so there's one thing. One really interesting thing we've talked about with a few customers that have very large training farms of GPUs: when they're looking for a site, they want to be able to put a container where the cheapest power is, and they want that flexibility, and you can't necessarily get that flexibility with brick-and-mortar data centers. Tyler, do you have a comment on that? I think it's great; to me it just makes sense for a modular data center to give them that flexibility. Yeah, I think there are a lot of things that actually drive the solutions our customers are trying to implement, and power is definitely one of them. So I think there's an aspect of, one, these locations where there's cheap power may also be areas that aren't highly developed, where maybe I don't have all the resources there, and they may also want to do this in multiple different regions.
And having standardization across that is great too. So the more I can build back in a factory, test it, create it, deliver it, and then get it operational without needing as many resources on site, the more value that is for a lot of our customers. And then again, the standardization that goes with that and being able to have the same solution in multiple locations is great. There is an aspect of transportability. I will say the modules can be picked up and they can be moved, but for the most part they're not left on a trailer that way.
We do have a few that have done that for lab purposes, but for the most part they're deployed. They could be picked up and redeployed to a new location. And so that is an option with these.
We're seeing that too, Tyler. We're seeing a growing interest in some of the small to, call it, medium single-shipment-size solutions from customers who are interested in having something they can actually move to attach to cheap power, or maybe renewable power, or maybe it's about latency or connectivity. I mean, there are a lot of things that play into the location, the optimized location, and whether that needs to be movable or mobile or not. But the smaller form factors that are comprised of one shippable unit lend themselves to that, and so we're seeing an interesting uptick in interest there. Yeah, just to add to that a little bit, I think another key thing there is the modular aspect of being able to build out more as you grow.
So you don't necessarily have to build out a really large data center all at once. Maybe you have a smaller amount of compute on day one, but your plans are to grow. You may not know, especially in a new or newer area, how fast that growth is going to be. So you can deploy a certain amount now in modules, a certain number of racks, and then you can add more modules and more rack capacity as you go. And you can change the requirements a little bit as you go.
It doesn't necessarily have to be, I'm planning for 15 to 20 years out on day one. I can actually change the feature set of the modules as I go and then shift the workloads around a little bit, so that you're best utilizing the space as you have it. Yeah. So Ty, help me out here. So a customer is interested in a modular data center, but they don't exactly know where to put it. They have some ideas and they're flexible.
Can we help them find a site? We can, we have; well, the short answer is yes, and there are multiple routes there. So one, and I think I mentioned this earlier, if a customer has a rough idea, like, hey, it needs to be within this radius of my campus, and we've got areas that are under our control or ownership, or we have leases or whatever agreements that make those known entities, but they don't know which one might be best and they're exploring that, very often we will engage them on site-level studies, right? If there's engineering required, we may have engineering folks on site; typically there's a deployment service and services offering that goes along with that. If a customer says, listen, I don't have land, I have no idea where this should go, I'm looking at multiple different locations, that's a deeper engagement.
And we have partners that help us with that. For our part, we want to make sure they have all of the requirements to understand local code, which is a huge aspect we haven't talked about today, but the local code and permitting requirements, everything down to soil sampling, water sampling, things like that, we can help assist with or own. Right. So Tyler, I've got one last question here for you.
And that is, if a customer wants to explore getting a data center, does Dell handle everything from soup to nuts? Well, it kind of depends on where you draw the box around that, right? So we do, and the team that I'm part of, we own the concept, we own the IP, we own the architecture and design, we oversee the manufacturing, and we're also there for the sustaining services as well. So all of that's there, including the deployment. What we don't typically own is the concrete pad that it sits on, and we need the power and the network to be brought to the module. We can own all the rest of it from there.
That said, there are some other aspects here: we can also provide power modules and generators and other components to support the infrastructure. So there is a lot of it that we own, and for the things that we don't own, we want to make sure that we're working very closely with the customer's engineering team or their GC on those components. So we're not just saying, hey, go pour us a pad; we're actually involved in reviewing that design and doing checks beforehand, before concrete is actually poured, to make sure things are in the right locations. We really want it to be successful, and we want our customers to treat us as an extension of their engineering team.
Well, listen, guys, thank you very much. Wow, this has been really informative. I've learned a lot about the modular data centers that Dell has, and there are a lot of choices and a lot of sophisticated expertise here. This has just been great. I really appreciate you guys joining us here today on HPC Tech Talks.
Ty, Tyler, thank you very much for joining us today. Yeah, thank you, Tony. Thank you, Tony. All right, we'll see you all.
Thank you for joining us at HPC Tech Talks. Please hit the subscribe button so we can update you on future episodes. We've got a lot of cool stuff coming, and we're looking forward to seeing you soon.
2023-12-12