My name is Bob Wagner, and I'm going to take you through an overview of the data center market and how it's being affected by AI. We'll talk a little bit about trends, and then we'll see what's really changing with the data center market, actually with the data centers themselves. If you take a look at the box on the slide, you're going to see that a lot of things are changing drastically, and when we say the AI revolution is here, we mean the whole data center market is being flipped upside down. That's because the requirements for AI are much different than they've ever been. You have enterprise and hyperscale, and now you have AI: it's even more of what the hyperscalers are doing, and it's also different. We're going to talk about the growth in GPUs. From 2023 to 2028 they're going to quadruple, and frankly that might be a non-aggressive way to look at it; it might be even more than that. That means AI GPU servers could be upwards of maybe 50% of all servers going into data centers, which is a tremendous change from just two or three years ago. So what comes with that change? Power, and that's probably going to be the biggest thing. A rack right now is around 10 kilowatts; we're talking about well over 100 kilowatts for the latest NVIDIA Blackwell systems. That's a 10x or maybe even 12x delta, and that's over a matter of just five years. You're also going to see a lot more cables. When I talk about 4x more fiber cables in the network, again, I think I'm not being aggressive at all; it could be much, much higher than that. You're going to see more liquid cooling needed; in fact, you're going to see more of pretty much everything. So when you think of AI, think "I'm going to need a lot more of everything I'm doing today." And here's the growth chart, which is unbelievable: in 2019 the GPU market, just sales of GPUs, was $6 billion, projected to be $14 billion in just six more years from now.
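To put numbers on those trends, here is a quick back-of-the-envelope sketch of the implied annual growth rates. The 4x GPU-server figure and the roughly 10 kW to 100 kW rack jump are from the talk; the CAGR derivation is just arithmetic.

```python
# Back-of-the-envelope growth math for the trends above.
# The 4x GPU-server growth (2023-2028) and the ~10x per-rack power jump
# are figures quoted in the talk; everything else is plain arithmetic.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start/end pair."""
    return (end / start) ** (1 / years) - 1

gpu_growth = cagr(1, 4, 5)        # 4x GPU servers over 5 years
power_growth = cagr(10, 100, 5)   # ~10 kW -> ~100 kW racks over ~5 years

print(f"GPU servers: {gpu_growth:.1%}/yr")   # about 32.0%/yr
print(f"Rack power:  {power_growth:.1%}/yr") # about 58.5%/yr
```

Sustained growth rates in the 30-60% per year range are what make "more of everything" the operative planning assumption.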
If you look at the NVIDIA roadmap, and this is just the last several years, every two years they come out with a new GPU system, a new server, and as you can see on the far right, the power goes up. In 2024 they introduced the Blackwell system, and that's at 1,000 watts per GPU. That's really high, but at the same time, that's around 15x more compute performance than the Ampere, which came out just four years ago. Now, don't be too frightened by the Blackwell at 2,700 watts. The reason that's so high is that there are two GPUs and one CPU on the same substrate, so it's kind of doubling up. What's coming out in 2026 is called Rubin, and we don't know a lot about it, it's still pretty early, but we do know that NVIDIA is going to be focusing on trying to reduce the power. Remember, the computational performance is way outstripping the rate at which the power is going up. Even so, as Justin will talk about a little later on, that's going to put a lot of stress on the data center physical environment. All right, so when you're thinking about putting AI in, what do you need to do? First of all, you need to understand that there are two networks, training and inference. They're separate networks, they're different, and they're used differently. Training is basically collecting all the assets you want; it's teaching the system to learn, collecting the information and processing it. But when you want to use that data to produce new content or predict what's going to happen in the trends, that's where inferencing comes in. You could have the training and inferencing portions of your network together in the same building, but they don't have to be. A lot of times you're going to see inferencing out closer to the customer, and training in a data center where it can get the maximum amount of power as cheaply as it possibly can; it does not need to be colocated.
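Bob's point earlier on this slide, that compute performance is outstripping the power increase, can be made concrete with a rough ratio. The 700 W and 1,000 W per-GPU figures and the roughly 15x uplift are from the talk; the 400 W Ampere baseline and the 4x Hopper uplift are illustrative assumptions, not datasheet values.

```python
# Hedged sketch: why rising GPU wattage still means better efficiency.
# Hopper (700 W) and Blackwell (1000 W) per-GPU watts and the ~15x
# Blackwell-vs-Ampere compute uplift are the talk's approximate numbers;
# the 400 W Ampere baseline and 4x Hopper uplift are assumptions.

generations = {
    # name: (relative compute vs Ampere, watts per GPU)
    "Ampere":    (1.0,  400.0),
    "Hopper":    (4.0,  700.0),    # assumed intermediate uplift
    "Blackwell": (15.0, 1000.0),
}

for name, (compute, watts) in generations.items():
    # Performance per watt, normalized so Ampere = 1.0x
    perf_per_watt = compute / (watts / 400.0)
    print(f"{name:9s} {compute:5.1f}x compute, {perf_per_watt:4.1f}x perf/W")
```

Under these assumptions, Blackwell delivers roughly 6x the performance per watt of Ampere even though each GPU draws 2.5x the power.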
Here's just an example of the many, many millions of different ways AI can help you out. It can do predictive work; it can create new content. We really won't get into it all, but the benefits are real; this is not going to be a bubble. People are seeing big time savings, and I think this is going to be a growth market for the next decade. What's really interesting is that right now, if you look at it, it's mostly the hyperscalers, the Amazons and Googles. They're the early adopters on everything, and they're doing the same thing here with AI. But we know it's going to hit the on-premise enterprise, and it's going to hit the edge; it's going to be wherever it needs to be, and everybody will be able to use it. So it's going to be, as Baron says, some very exciting times. The growth is unbelievable right now, and Panduit can help you with that growth. So, just to recap the data center trends a little: power, cooling, and fiber densities are going to grow beyond anything we have today. In fact, most data centers built even two years ago may not be able to handle AI, mostly because of the power and the liquid cooling they may need. So expect a lot of new data center builds, and expect them to be built in areas that don't necessarily have data centers today, because operators are going to be looking to meet the specific needs of the AI systems, especially the power and cooling. The power consumption is also going to drive data center operators to potentially become suppliers of their own power; the grid is not going to be able to respond as quickly as needed, and you're going to see a lot of renewable energy: wind, solar, even nuclear being added on expressly for the data center. Most large AI is going to be rack-and-stack, meaning an integrator is going to take a cabinet, load it up with all the compute, the switching, even the cabling, and just roll it right into the data center. That's being done so you
can do the installations very, very quickly. On the enterprise side, there are going to be large clusters, but enterprise customers are probably only going to have one to two GPU servers per rack, and that's because of the power. If you have more than two of these, you really are going to be running out of power, and you're going to have a hell of a time trying to cool them. You're probably not going to have more than, say, 10 of these servers per building, because frankly the usage isn't going to be that large, and again, the power is going to be a limiting factor. Direct-to-chip cooling is going to be a must. The Blackwell GB200 came out and for that it is a must; it's the gold standard, though there's an air-cooled option as well. A few people, maybe in supercomputing and cryptocurrency, are looking at immersion cooling, but not that many, so right now it looks like direct-to-chip cooling is going to be the main way to cool these big AI systems. And fiber is going to be used for all rack-to-rack connections; you'll still see copper DAC cables used within the rack, but outside of that it's going to be fiber. To talk a little more about how the fiber network and architecture are changing, I'm going to go into some slides here that show the enterprise of many years ago, which used a three-tier approach; then we moved into spine-leaf for the hyperscalers; and now, what does AI need? AI wants all the GPUs talking together seamlessly, as quickly and with as low latency as possible, so that all those GPUs, even if there are thousands of them, are basically treated as one giant brain. Seamless communication, very quick and very agile. Hyperscalers do that to good effect, but the new rail-optimized networks that AI requires do it even better, and that's with the NVLink switches that basically take those GPUs and let them talk together very quickly. What you'll find is a classic leaf-spine architecture, but that leaf-spine is going to be a little bit different.
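Stepping back to the power point above: the "one to two GPU servers per rack" guidance for enterprises follows from simple budget math. A hedged sketch, where the roughly 10 kW per DGX-class server figure appears later in the talk, and the rack power budgets and 1 kW overhead allowance are assumptions:

```python
# Hedged sketch of the rack-budget math behind "one to two GPU servers
# per rack" for enterprises. The ~10 kW per server figure is quoted
# later in the talk; rack budgets and overhead are assumed values.

import math

def servers_per_rack(rack_budget_kw: float, server_kw: float = 10.0,
                     overhead_kw: float = 1.0) -> int:
    """How many GPU servers fit a rack's power budget, after reserving
    a little headroom for switching and other gear."""
    return max(0, math.floor((rack_budget_kw - overhead_kw) / server_kw))

for budget in (12, 17, 24, 34):   # typical enterprise rack feeds (assumed)
    print(f"{budget} kW rack -> {servers_per_rack(budget)} GPU servers")
```

With typical enterprise feeds in the 12-24 kW range, the budget lands at one or two servers per rack, exactly as described.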
You'll see that the speeds will be a little bit lower on AI for now, because InfiniBand has not come out with 800 gig yet. That'll be changing very quickly, but for right now, if you're going to use InfiniBand, it'll be at 400 gig. And don't worry: even if your front-end network is using Ethernet, you'll be able to take the InfiniBand and the Ethernet and seamlessly put them together, because the industry has made sure that works out. Most of the clusters will be shorter reach, so they'll use multimode, and you can see the data centers themselves: the megawatts are dramatically larger for an AI system. This is what I meant when I talked about rail optimization. That top slide is an NVIDIA H100: each of those boxes at the bottom is a server, each server has eight GPUs, and each GPU attaches to just one leaf; there's no oversubscription. As you can see, when you go from the leaf up to the spine you get that classic meshing, but what's different here is the MPO cables. You're not taking two fibers out of an MPO cable and distributing them; all eight fibers from every MPO go from leaf to spine to GPU. So it's a little different from classic spine-leaf; otherwise, once you get out to leaf and spine, it's very similar. This is what an NVIDIA DGX B200 SuperPOD looks like. You'll have 16 server racks and two network racks, for a total of 18. You'll notice that in each of these racks, those brown boxes are the servers with eight GPUs each, and they only have two per rack because they're trying to keep this at an air-cooled level. If you want to add more and go to liquid cooling, you can do that, but this is the simpler system, easier for most people to adopt, because putting liquid cooling into a new data center is easy; putting it into an existing data center, not so much. These pods need 588 MPO cables, so you can see that as you add pods, the fiber goes crazy; you have a lot of it. And this represents exactly how that'll look: the bottom row are the servers, the middle row are the leaf switches, and the top row are the spine switches.
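The SuperPOD numbers above can be sanity-checked with a little counting. A sketch, assuming the rack, server, and GPU counts just quoted and the standard eight-rail layout; cable counting is simplified to one link per GPU per tier:

```python
# Hedged sketch of the rail-optimized fabric math for the SuperPOD
# described above. Rack/server/GPU counts come from the talk; the
# eight-leaf "rail" layout is the standard NVIDIA pattern, and the
# cable count is simplified to one link per GPU.

SERVER_RACKS = 16
SERVERS_PER_RACK = 2      # kept low to stay air-cooled, per the talk
GPUS_PER_SERVER = 8
RAILS = 8                 # one leaf switch per GPU "rail"

servers = SERVER_RACKS * SERVERS_PER_RACK     # total servers in the pod
gpus = servers * GPUS_PER_SERVER              # total GPUs
server_to_leaf_links = gpus                   # 1:1, no oversubscription
links_per_leaf = gpus // RAILS                # downlinks on each leaf

print(f"{servers} servers, {gpus} GPUs")
print(f"{server_to_leaf_links} server-to-leaf links, {links_per_leaf} per leaf")
```

Even before counting the leaf-to-spine mesh, a single pod generates hundreds of fiber links, which is why the cabling "goes crazy" as pods are added.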
That beautiful geometric pattern shows you all of that fiber cabling, and you can imagine it's hard to install. A physical representation of what those switches look like is this, and this is where Panduit can really help. We're cable management experts and we've got some great systems, because all that fiber is going to be a problem. This is what you can do with cable management; the pictures are on the right. It gives you better airflow; it gives you a better ability to fix things if they're broken or to make changes; it protects the circuits; it maintains proper bend radius. Panduit is there to help you as you migrate into AI solutions with good cable management. And RapidID comes free with all of our fiber cables: we put unique barcode labels on them, you can get a barcode reader anywhere, you download our free software, you scan all those barcodes, and it immediately shows you on your system exactly where each cable is coming from and going to. It's a seamless way to quickly know exactly where all your cables are, and it's free. Another way that Panduit can help is that we have the very best multimode cable on the market. It's called Signature Core; we developed it 15 years ago, and it's still by far the best multimode cable out there. It gives you the longest reach. If you take a look on the right side, you have three different parameters, and in each one the black line represents Signature Core, the green line is OM5, and the red line is OM4, so you can see the performance advantages of using Signature Core. When the racks you're putting in are going to cost $500,000, maybe even a million dollars each, I think using the very best cable and components you can get is very, very important. Now I'm going to turn it over to Justin to talk about AI and rack power.

Thanks, Bob, for the introduction. As Bob said, my name's Justin Blumling. I'm a solutions architect with Panduit, and I'm going to handle the rack power portion of today's AI presentation. We'll
progress to my first slide. I look at the AI revolution, or the situation we have today, through the same lens I've looked at the data center industry through my whole career, and that lens is the same three variables: space, power, and cooling. As time has progressed and the rate of change in hardware has accelerated, we've thrown different designs and different considerations at that space, power, and cooling dynamic. In the mid-2000s we had, of course, the big trends of virtualization, blades, and kind of the birth of high density. Once upon a time, everybody thought every rack would turn into a furnace with blade servers. I've certainly been in some hot aisles in my time that both felt like a furnace and sounded like an aircraft engine, so that certainly happened to some users. In the late 2000s and early 2010s we saw the genesis of the sustainability movement, which dovetailed off of those higher energy possibilities for data center hardware, and we also saw the birth of these giant cloud factories, those of AWS, Microsoft, and others. They became that term we use so frequently today: hyperscale. And then, from the late 2010s through now, we've seen aggressive AI developments, and "aggressive" may not be too strong a term to describe what's happening today. Through that time, we've had to change our dialogue, our language, about how we talk about power within data centers. Before the birth of the high-density blade server, we typically talked about power in data centers using a real estate term: watts per square foot. When density started to increase, we realized that term was no longer accurate enough, so we started talking about kW per rack. That was good for talking about the differences between a compute rack versus a network rack versus a storage rack, and it allowed us to have granularity about where power is consumed across the data center,
and we still use that metric today. But now, in the late 2010s and 2020s, it's becoming more apparent that we should maybe talk about kW per RU or kW per chassis, rather than just the total sum of the rack without considering what the individual chassis or the individual U space might consume. That's especially relevant for the cooling topic and conversation we'll have later in this presentation. If we think about some of the GPU generations that Bob talked about before, he gave an overview of how NVIDIA's generations have progressed over time. Here's a little bar chart with kind of a polynomial curve, which shows that the trend line really is different and outsized compared to what something as simple as Moore's law described through even just yesteryear. So there's a very pronounced increase in GPU power. You can contrast that with the visual over here on the right, where Omdia, a very well-known research organization, tries to illustrate or estimate the typical rack densities at some of the largest operators out there. You can see they peg Meta between 12.5 and 15.8 kilowatts, they peg AWS around 17, and they peg Microsoft around 24. Of course there are variations and ranges there; that's why you see the little tilde, as it's an approximation. But as Bob said, given that these folks are probably going to be the heavy adopters of the GPUs available from NVIDIA and others, and looking at what those power profiles look like, there's ample opportunity for these power values to shift dramatically. Then it'll be interesting to see the cascading effect on others afterward. Because these are the early-adopter guys who go all in, it'll be interesting to see how that cascades and trickles down to the other users who are out there.
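The shift Justin describes, from watts per square foot to kW per rack, is easy to see with a rough conversion. A sketch, where the roughly 30 square feet per rack (the rack plus its share of aisle space) is purely an assumed figure:

```python
# Hedged sketch of why "watts per square foot" stopped being a useful
# metric. The ~30 sq ft allocated per rack (cabinet footprint plus its
# share of aisle) is an assumption; rack kW values echo the talk.

RACK_AREA_SQFT = 30.0  # assumed rack-plus-aisle allocation

def watts_per_sqft(rack_kw: float, area_sqft: float = RACK_AREA_SQFT) -> float:
    """Convert a per-rack power figure into the old real-estate metric."""
    return rack_kw * 1000.0 / area_sqft

for rack_kw in (5, 12, 40, 120):  # legacy, today's average, H100, GB200-class
    print(f"{rack_kw:5.0f} kW/rack ~= {watts_per_sqft(rack_kw):7.0f} W/sqft")
```

A single metric that spans 150 to 4,000 W per square foot across racks in the same hall no longer tells you anything useful, which is why the conversation moved to kW per rack and now kW per chassis.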
So I'm going to take that same graph and contextualize it a little bit, and I'm going to use my trusty Panduit FlexFusion cabinet here as the source of my bar graph. On the x-axis I've got three generations of GPUs, Ampere, Hopper, and Grace Blackwell, and I take a look at what that looks like from a rack perspective, potential rack power I should say. If I outfitted a rack with the maximum possible servers it could support in each generation, you see 26 kW, 40 kW, and then 120 kW. I kind of feel like I'm being misleading with this first entry at 26 kilowatts only occupying maybe a quarter of the rack, because that is already a pronounced uplift from where rack densities are today as an industry whole. As you can see here, using that same data from Omdia, which comes from their rack PDU report, where they take rack PDU data and extract the average rack density across the data center industry, they peg that at about 12 kilowatts into the current calendar year. So we go from doubling it, to over tripling it, to an order-of-magnitude difference for what this particular platform looks like. We're definitely talking about potential, with "potential" being the operative word, rack power that is beyond what we're used to today at an industry level. If we were to take that 26-kilowatt example, move it over here, and think about what rack PDU possibilities we could use to address a 26-kilowatt load, some of the more popular varieties, at least in North America, are shown here. These are all three-phase: you've got 208 V/60 A, which gets you about 17 kilowatts of power per PDU chassis; about 28.8 kilowatts at 208 V/100 A; and you get 34.5 kilowatts with a 415 V/60 A installation. So you've got those options available to you if this was the platform, or the rack density, you were to choose.
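Those per-PDU figures fall out of the standard three-phase formula. A quick sketch: the sqrt(3) factor is line-to-line three-phase math, and the 0.8 derating is the North American continuous-load rule; the voltage and amperage combinations are the ones named in the talk.

```python
# Three-phase PDU capacity math behind the figures above. The sqrt(3)
# line-to-line factor and the 0.8 continuous-load derating are standard
# North American practice; plug/voltage combos are from the talk.

import math

def pdu_kw(volts_ll: float, amps: float, derate: float = 0.8) -> float:
    """Usable kW for a three-phase PDU at a given line-to-line voltage."""
    return math.sqrt(3) * volts_ll * amps * derate / 1000.0

for v, a in ((208, 60), (208, 100), (415, 60)):
    print(f"{v} V / {a} A three-phase -> {pdu_kw(v, a):.1f} kW")
```

Running the numbers gives roughly 17.3, 28.8, and 34.5 kW, matching the chassis ratings quoted on the slide.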
Certainly, 26-kilowatt cabinets and racks existed before AI, so there are operators out there who have practice addressing these types of loads. What's interesting, though, since we talked about space, power, and cooling: on the topic of space, take a look at this particular plug type here. The image doesn't really do it justice. This is an example of what a 208-volt, 100-amp plug could look like, and if we think about space, the height of this particular beast is 5.5 inches according to the spec sheet, and the depth of the housing is 12 inches. So when you're thinking about space, it's not just the macro level of how much square footage these pods and clusters would occupy, but how do I get a rack that's deep enough to accommodate multiple of these circuits, and how do I sneak them through the top of the rack and connect them to either the busway above or a whip? The space considerations become very micro, down to the cabinet. Then put that in the context of other things you may need, like the liquid cooling manifolds Bob talked about, or deeper servers, and the space paradigm is going to become increasingly focused at the rack level. If we take a look at what another generation looks like, again potential rack power, all of this information comes from the NVIDIA reference architecture for the H100. At maximum you've got four of these chassis in a cabinet, with a max power of 10 kilowatts each, so 40 kilowatts, as in bar number two. They've got three rack PDUs, which you can see here pulled out, and you can see how they're designed to support the power supplies in any given chassis. These are three 34-kilowatt PDUs, so 415-volt, 60-amp models in a horizontal form factor, and you can see that each one of them is designed to attach to a different C19 outlet, of
course, for load balancing and other considerations. According to the specs, four of these power supplies have to be energized to support the unit's operation. That's why they've got three PDUs here: if I just had two PDUs, with three of these power supplies connected to this one and three connected to that one, and I lose one side, I don't have four power supplies available, I only have three. So even though this is fed by a single source, UPS B in this example, you can see it's divided among three PDUs to provide that support. That really challenges the paradigm of two PDUs per cabinet with an A/B dual-fed type of installation. We have to think a little differently about how to power these, and about what the load balancing looks like on the three-phase side, to make sure we're utilizing all of those phases efficiently and don't have any stranded power. The math becomes more complicated, to put it mildly. So I'm going to wrap this all up with a quick summary. This graphic here on the right shows the common themes we hear in the world of rack PDUs, regardless of whether it's an AI application or a traditional compute application. Of course you've got power capacity; you've got outlet density; you've got the intelligence you need at the PDU level for outlet energy measurements and other visibility. And there's not a day that passes without hearing about some kind of cybersecurity incident where information is compromised, so having cybersecurity rooted at the PDU level, which is powering all of these servers that process billions of dollars of transactions or hold other mission-critical data, is obviously important. And of course, given the rate of change and the rate at which these deployments are happening, the ability for a manufacturer to get product to market quickly is absolutely essential.
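The three-PDU logic Justin walks through can be sketched as a redundancy check. The "four supplies must stay energized" rule is from the talk; the six-supply count per chassis is an assumption here (it matches the common DGX H100 configuration), as is the even split of supplies across PDUs.

```python
# Sketch of the PDU-failure math described above. "4 of N supplies must
# stay energized" is the rule cited in the talk; the 6-supply count per
# chassis and the even split across PDUs are assumptions.

SUPPLIES = 6    # power supplies per chassis (assumed)
REQUIRED = 4    # minimum energized supplies per the cited spec

def survives_pdu_loss(num_pdus: int) -> bool:
    """True if losing any single PDU still leaves REQUIRED supplies up,
    assuming supplies are spread evenly across the PDUs."""
    per_pdu = SUPPLIES / num_pdus       # supplies fed by each PDU
    return SUPPLIES - per_pdu >= REQUIRED

print("2 PDUs:", survives_pdu_loss(2))  # classic A/B feed fails the rule
print("3 PDUs:", survives_pdu_loss(3))  # hence three PDUs per rack
```

With two PDUs, losing one leaves only three supplies, below the minimum; with three, losing one still leaves four, which is exactly why the reference design breaks from the traditional A/B pair.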
But in terms of AI, I think the most important bullet here is the top one: your mileage may vary. Bob pointed out that in some cases we're talking about the full deployment potential of what some of these power values could be. But is everyone going to deploy four of those chassis per cabinet? No. Bob pointed out an example where someone may deploy two per rack, or maybe you're an enterprise that doesn't need to build a large language model that's read every bit of code and every word of text that's ever been written; it's more tailored to what your business use case is. So don't be alarmed by the power forecasts; depending on the nature of your business, your mileage may vary. I think we've articulated that space, power, and cooling are definitely going to have a more micro focus, given all the things that have to fit in the rack and attach to infrastructure above: manifolds, cable tray, busway. It can become very crowded up there, so we've got to think a little differently about how we do that. I also think 415 volts will become increasingly necessary for new builds. We know that here in North America, 415 has a foothold among some of the hyperscale players and large data center operators, but there are still a lot of 208-volt-only data centers out there, especially among those who aren't in that type of operator space we talked about. So as new facilities are investigated, 415 is a decision that merits investigation. I'll wrap up my section here talking a little bit, from a product side, about what Panduit has in the rack PDU space to address some of these applications. If you think about high-density PDUs like we talked about, we have a line within our umbrella catalog of PDUs that we call High Density, which are three-phase, 30 A at a minimum, so 8.6 kilowatts, going all the way up to 34.5 or
44.5 kilowatts in a single chassis, depending on whether it's for North America or the rest of the world, with 42- or 48-outlet counts to accommodate those high-density deployments. There's also the Universal PDU, which leverages the same combination outlets shown on that High Density PDU, C13/C15/C19/C21-compatible outlets along the bank, and then gives you the ability to mate a variety of different cords to that PDU, depending on your use case or on where you're deploying PDUs globally. So if we think about growing into an installation, maybe we have 208 V/60 A to start but we need to go to 415 V/32 A later on, the same chassis can accommodate those power capacities with a simple swap of the universal cable. So there's a lot of potential that the Universal PDU will provide. I thank everybody for your time; now I'm going to pass it over to Ann to talk a little about AI in the cooling world.

Good afternoon, everybody, and thank you for the introduction, Justin. My name's Ann Casaron; I'm a development manager in the data center practice here at Panduit, and today we're going to cover data center cooling. Data center cooling has been a hot topic for all of the customers we've been engaged with, and I'm sure it's top of mind for you as well. Looking at the data center cooling market, the overall CAGR is 15.63%. Looking at the different technologies and methods for data center cooling, you can see that in 2023, in-row and in-rack cooling was at 58% market share, followed by direct-to-chip at 29%, followed by rear-door heat exchangers, and then there's a little 6% slice of others. Now, if we fast-forward to 2032, you're going to see a complete reversal of in-row and in-rack cooling methods versus direct-to-chip, a complete flip from 58% to 57% come 2032. The reason is that AI is really taking hold in all of our data centers.
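For scale, the quoted 15.63% CAGR compounds to roughly a 3.7x larger market over 2023-2032. Only the rate comes from the talk; the multiple is derived from it:

```python
# Quick arithmetic on the cooling-market CAGR quoted above. Only the
# 15.63% rate is from the talk; the multiple is derived from it.

CAGR = 0.1563

def growth_multiple(rate: float, years: int) -> float:
    """Total growth factor from compounding a rate over several years."""
    return (1 + rate) ** years

print(f"2023 -> 2032: {growth_multiple(CAGR, 9):.1f}x market size")
```

Compounding is the point: a mid-teens annual rate sounds modest, but over nine years it nearly quadruples the market.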
The needs for cooling in these environments are dramatically increasing to help support all those applications. Now, there are challenges, and you might be faced with some of those listed here. As we discussed, rising rack power densities are making traditional data center cooling very challenging in all environments, including AI and supercomputing. You might ask, what are these industry-wide challenges? Well: insufficient cool air, insufficient airflow through the racks, equipment failing, and system downtime. And this is key to take a look at: manufacturers are not meeting the output sizes and requirements stated on their product specification sheets, and that's because every environment is different in terms of water pressure and water temperature. We're going to share some tips today on how to determine exactly what that output requirement and output are going to be in your environment. And lastly, as we know, energy costs are rising every day. Some of the strategies to look at when building your data center roadmap, as well as your cooling strategy: you'd want to complete a thorough infrastructure assessment; then you'd want to complete a compatibility and integration analysis to fully understand what you have in your environment, what you'll have in terms of requirements and capacity, and what the integration is going to look like. You'd also want to take a look at an energy-efficiency analysis to fully understand how you can reduce your fixed costs with the different cooling methods when implementing these types of technologies in your environment. You'd also want to look at your lifecycle: where you are today, where you're headed, and what that roadmap looks like to provide the utmost output to keep your data center cool and support your AI environment, thus resulting in the utmost ROI you're looking to achieve. And lastly, future-proofing your
environment: you might understand where you are today, but it's very critical to take a look at your three-to-five-year plan moving forward, and here at Panduit we can definitely help your organization with the roadmap planning of your infrastructure needs, now and in the future. This slide provides a topology, actually a landscape, for understanding where the appropriate technologies are for your rack density levels. Historically, if you look at the 10-kilowatt range on the bottom left, that's more of a legacy environment. But as we move up into AI, this is where you're exceeding 50 to 55 kilowatts, and that's where the active rear-door heat exchanger comes into play, as well as direct-to-chip, to support your environment up to 200-plus kilowatts. These are the various data center cooling technologies in the marketplace: universal aisle containment, hot- and cold-aisle containment, air cooling, direct liquid cooling, and air-assisted liquid cooling, and these are the pictures of those technologies. To the left you have your universal aisle containment, hot and cold; you have your in-row cooler, which goes in line with all your cabinets in the row; then to the right you have the rear-door heat exchanger, which is built with a frame on the back of each of your cabinets and provides the cooling to your servers and everything located in the cabinet. Then you have direct-to-chip, which is what we talked about in the market analysis and forecast, where AI is leading the direct-to-chip initiative; this type of cooling methodology and technology will cool each of your servers safely up to 200 kilowatts-plus. And lastly, Bob previously talked about immersion cooling; that's more along the lines of supercomputing liquid cooling for all your servers, and we're not seeing much of that for AI today. Looking at the marketplace from where we are in 2024 out to 2032, we're seeing a very highly forecasted increase in rear-door heat exchangers as well as direct-to-chip.
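The density landscape Ann describes can be summarized as a rough selector. All thresholds here are approximations taken from the slide discussion (roughly 10 kW legacy air, 50-55 kW for active rear-door, 200-plus kW for direct-to-chip), not prescriptive limits:

```python
# Hedged mapping of rack density to cooling approach, using the rough
# thresholds from the slide discussion above. Boundaries are
# approximate illustrations, not vendor or standards limits.

def cooling_method(rack_kw: float) -> str:
    """Suggest a cooling approach for a given rack density (approximate)."""
    if rack_kw <= 10:
        return "traditional air / aisle containment"
    if rack_kw <= 55:
        return "in-row cooling or passive rear-door heat exchanger"
    if rack_kw <= 200:
        return "active rear-door heat exchanger / direct-to-chip"
    return "direct-to-chip (and possibly immersion)"

for kw in (5, 30, 120, 250):
    print(f"{kw:3d} kW -> {cooling_method(kw)}")
```

A 120 kW GB200-class rack lands squarely in the direct-to-chip tier, which matches the liquid-cooled example discussed next.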
This is just an example of the NVIDIA GB200 NVL72, which Bob discussed earlier: there are 18 servers, 36 Grace CPUs, and 72 Blackwell GPUs, and it's all liquid-cooled. The reason is that it exceeds that threshold; these have a capacity of 120 kilowatts per rack. In addition, there are cooling pumps located on the bottom, as you see to the right, and the liquid cooling manifold connections are on the rack, facing inward toward the servers. I want to thank you very much for your time, and I'd like to pass it back to Bob.

Thanks, Ann. Now I'm going to wrap it up, and we're going to talk about what Panduit can do for you. Our goal is to be a trusted partner; if you succeed, we succeed, and that's what we've been trying to do for the past 50 years. We have experts: an R&D team, engineers, product managers, and business development people. We've been doing a lot of research, and we've learned so much about AI that we can now sit down with you and help you, so you don't have to go do all of this yourself. And remember, we've always been that trusted partner. We have a whole ecosystem, and we'll help you at every step, from initial development all the way to the final product. That's what we want to be going forward: your partner. I'm sure you probably all know this, but we'll just kind of wrap it up and talk about it. For AI systems, we have a total infrastructure solution. You saw the power, you saw the fiber, we're going to be coming out with cooling systems, and we've got the pathways and all the other data center requirements. It's a total package; you won't need to go anywhere else. And while we're learning more and more about AI, and AI is quickly changing, we'll be changing with it, bringing out more innovative products beyond the ones we've just talked about today. Again, we have the expertise and we'll give you the support; we're there for you. And
how are we there for you? We've got a global partner program, and we've got certified installers who have gone through in-depth training on all of our products. What that gives you, when it's all said and done, is a quality system you can rely on for your very critical AI data center projects. And here's the global partner program: we have distributors, we have system integrators, and as you know we have salespeople, warehouses, and plants in every region of the world, there to help you out whenever you need us. So we have three different sections of a data center, whether it be traditional or, as we've been talking about, AI. You've got your gray space, which goes in when the building goes up early; that's where you're going to be putting in your power and your cooling. All of our electrical products, like VeriSafe, the cable cleats, the grounding and bonding, and the safety systems, are there to keep your employees safe, make sure the system runs as smoothly as possible, and get this new building ready for when you start rolling in all the electronics. The first thing you're probably going to do is build the meet-me room or entrance facility. On the left you'll see our ODF; we brought that out about two years ago. As you can see from the picture, it gives you really good cable management in a very easy, passive environment where you can make changes and move things around. This is where you're feeding the outside plant cable coming into your building and distributing it to the entire data hall, so it's a very critical portion. Our high-density ODF, called FlexCore, is as high-density as you're going to find, and it really gives you the cable management that a lot of people absolutely have to have in these critical environments. We also have high-volume wall-mount splice enclosures that go up to 6,912 fibers; you can see the dark OSP cables coming in, and then we splice them into the indoor fiber going out. We
have all the fiber cabling and cable assemblies, whether plug-and-play or splice. And what's really important is that we're the leader in pathways; that's the yellow FiberRunner as well as the wire basket. When you're talking about AI systems, you're going to want to go to the maximum-size pathway, so you can go to a 24-inch FiberRunner, and you can even go all the way up to a 6-inch-deep by 36-inch-wide wire basket tray for maximum cable capacity. Then, when you get into the data center itself, that's where the AI compute is, and that's where all of our different fiber solutions and those high-capacity pathways protect your fiber and manage it. You have the PDUs that Justin talked about, and then obviously we have the cabinets and containment that Ann was talking about: basically a total package. And how does that help you? All that plug-and-play fiber, and all these systems that we can work out with system integrators, give you something that rolls in and plugs in very quickly, which will reduce your workload in the data center by %. We also have quick-ship programs where our most common products can ship within sometimes a matter of days, and no more than a couple of weeks. We can't always guarantee this, but we stock those products at distributors and ourselves to make sure we can always try to ship the most common products very quickly. And then there's reduced risk, or circuit protection: Panduit always puts a lot of design into our products so that, no matter what they do, they last a long time and give you exactly what you need, a good long-term quality product. So when you bundle all that together and add the support, I think you'll find Panduit is a great partner to work with on your AI journey. Thank you for your time, and now we're willing to take some questions. So, a couple of questions came in that, Bob, I think you'd be great at answering. You mentioned that enterprise customers are likely to have one to two GPU servers per rack;
is that enough to have an AI system? Absolutely. I think you heard Justin talk about how it scales depending on what you need. A good example is a law firm I recently talked to. They're going to be using their intranet for their training; they just have a lot of briefs, a lot of documents, and they're keeping it private, so they really only need around four cabinets, each with just one GPU server, and by the way, that's a lot of compute power. What that's going to do for them is they'll pull up all the briefs and say, "Hey, based on this subject material, pull these briefs up and give me a synopsis of what it means," and in a matter of minutes, or maybe a few hours at the very most, what used to take weeks of having law assistants pull this data up will pop up and be summarized for you. And you can do that with four or maybe even fewer AI servers. So, as Justin said, you don't need massive clusters; maybe just a few will work. So absolutely, for enterprise applications: they don't have the power to fully load four or so servers per rack, so they'll probably put one in, and if they need multiple racks, they may situate those racks in different areas, not next to each other, so they can keep their cooling under control and not have a hot spot in their data center environment. Great, thank you. Here's another one: how are AI clusters connected to the internet? Right, I didn't really go into a lot of detail, so that's a good question. Everybody has their network system now. If you look at a hyperscaler, they've got the normal network, which is the front end; the back end is the AI network. As we talked about in the networking segment, you have the compute going up to leaves, going up to spines, and that is your back-end system. When the spines of the back-end system talk to the spines of the front-end system, that's when you can be switching from InfiniBand to Ethernet. That's how you lay out your whole total architecture: you use your existing network that runs all the
networking that you're doing today out to the internet, and then you just tag that AI system into the back end and make sure you design it for seamless applications. Perfect, thank you. Last one here: with the high number of fibers used with AI, do you need any specialized pathways? No. As I talked about just a few minutes ago, the FiberRunner and the wire basket we discussed should meet all your needs. We have seen several of the really large customers going with multiple tiers, maybe a tier of wire basket and a couple of tiers of FiberRunner, or vice versa; that really gives you a lot of capacity. But you'll probably be sticking with a 24-inch-wide FiberRunner and a 24-inch-wide wire basket. We're also offering up to 36 inches, but you're not going to be at the 12- and 16-inch-wide pathways that most people are at today. Sounds good, thank you, Bob, appreciate it. So, Justin, I think we got a couple of questions in that are within your wheelhouse. Which variety of PDUs do you think will be the most necessary to handle AI power densities? If I had to guess, reading the tea leaves or the magic eight ball or whatever you want to call it, I would say the 415-volt, 60-amp variety will probably come out in the lead, just due to the fact that it's in the NVIDIA reference architecture. So that seems to me a good guess for the one that will have pole position. We're still seeing a lot of facilities in North America, because of the way we do power in North America, that have 208 volts, and they're locked into that 208-volt architecture and can't get to 415 without a very invasive, exhaustive project that may not be suitable for an operational data center. So you will see a need for 208-volt, 100-amp as well, which gives you that 28 kilowatts of capacity. And then there's even talk about exploring higher voltages; 480 volt is a predominant distribution
strategy here in North America, commonly stepped down to 415 or 208, but will we see more deployments that accommodate that directly? If I were a betting man, I'd say yes, but I think it's probably too early to say. Great, thank you. And one more here: how could a 120-kilowatt rack be powered? Yeah, that goes back to the last one. If we look at the example that Ann and Bob had in their slides, that was the 120-kilowatt liquid-cooled rack. If you look at some of the pictures that are available online, that's actually done in an OCP-style cabinet with a direct-current bus bar: they bring power shelves into the rack, they bring high-voltage AC into the rack, and then there's hardware in the rack that converts that to DC and supplies power to a bus bar in the rear of the cabinet. So that's how that particular design is powered. Of course, that's a really progressive way of doing it; it'll be interesting to see if it takes hold in the market and becomes an architecture that a lot of people choose to deploy. Good deal, thank you, appreciate it. Hey Ann, we've got a few questions that you could answer for us. What are the five best practices for optimizing data center cooling? That's a great question, thank you for asking. I would say the first one is that it's a necessity to plan technical capacity, the reason being that any plan for a data center must account for how much data can be warehoused and processed. Moreover, you want to be sure to account for possible future growth: even if you don't initially invest in all the hardware you'll eventually need, your plan must allow for growth without creating an awkward layout, because that makes things very cramped and messy. Secondly, you must account for thermal needs. This is very critical, because all of your data center facilities are going to generate a tremendous amount of heat, which requires careful pre-planning
and resources to dissipate it. Cooling malfunctions happen all the time, often because of improper computations, and they can cause components to quickly overheat and destroy the sensitive electronics in your data center. Popular layout solutions include the hot- and cold-aisle layouts we discussed earlier, as well as rear-door heat exchangers and direct-to-chip liquid cooling techniques; these give you the best possible airflow to pull heat away from the network equipment. Thirdly, I would say apply the most efficient power consumption in the environment by doing all of the assessments and analysis we talked about earlier today. Fourth, you definitely want to collaborate with a trusted manufacturer, a trusted partner. And the fifth would be: collaborate with Panduit on this. Great, thank you, that's a lot of good information. Another one here: what cooling methods are AI data centers using today and in the future? Another great question. It's a combination of air and liquid cooling, and we talked about this in the presentation earlier. Where AI is heading, today and in the future, in terms of the techniques and technologies we're seeing in the marketplace, is definitely a rear-door heat exchanger behind every cabinet, as well as direct-to-chip cooling, and also CDU manifolds that will be added to the environment. We're seeing that all day and all night in AI environments and in supercomputing environments as well. Great, thank you. So, what are five tips for future-proofing your data center? Five tips, that's a great question. The first tip I would give is installing more cabinets and infrastructure in your environment, whether it be brownfield or greenfield, to help support the growth. Second, I would say work with our Panduit team to acquire the proper racks, cabinets, and power needed for your environment. Third, I would say managing and monitoring your environment to see what's actually
happening today and what changes and alterations need to be made as your requirements in terms of kilowatts grow. And lastly, work with a cooling expert at a trusted manufacturer: work with Panduit. So that's what I would like to say.
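The PDU figures quoted in the Q&A above (208 V / 100 A giving about 28 kW, the 415 V / 60 A variety from the NVIDIA reference architecture, and the 120 kW GB200 NVL72-class rack) can be sanity-checked with basic three-phase power math. The sketch below is for illustration only: the 80% continuous-load derating and the feed-count calculation are common rules of thumb assumed here, not Panduit or NVIDIA specifications.

```python
import math

def rack_capacity_kw(volts: float, amps: float, derate: float = 0.8) -> float:
    """Usable power (kW) from one three-phase PDU feed.

    P = V * I * sqrt(3), reduced by a continuous-load derating
    (80% is assumed here as a typical rule of thumb).
    """
    return volts * amps * math.sqrt(3) * derate / 1000

# 208 V / 100 A three-phase: the ~28 kW capacity mentioned in the Q&A
na_feed = rack_capacity_kw(208, 100)    # ~28.8 kW

# 415 V / 60 A three-phase: the NVIDIA-reference-architecture variety
ref_feed = rack_capacity_kw(415, 60)    # ~34.5 kW

# Feeds needed for a 120 kW rack (ignoring redundancy and power-shelf losses)
feeds_needed = math.ceil(120 / ref_feed)

print(f"208V/100A feed: {na_feed:.1f} kW")
print(f"415V/60A feed:  {ref_feed:.1f} kW")
print(f"Feeds for a 120 kW rack: {feeds_needed}")
```

Running this shows why a single conventional PDU feed cannot power a 120 kW rack on its own, and why the OCP-style power-shelf-and-bus-bar approach described above (or multiple high-capacity feeds) comes into play at these densities.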