Welcome, everyone, to today's webinar, From Fiber to Intelligence. As we navigate the era of AI, the rapid evolution of data centers has become undeniable. At the heart of this transformation lies fiber connectivity, a critical backbone supporting the high-density, high-performance demands of AI-driven operations. In this session, Panduit will guide you through how advanced fiber infrastructure plays a vital role in optimizing data centers for AI applications. This webinar offers actionable insights into structured cabling and connectivity solutions tailored for the AI era.

Today's presenter is Mike Compton. Mike is a senior solutions architect who has spent over 25 years in the technology industry with a focus on the data center marketplace. In his 12 years at Panduit, he has produced architecture guides around key data center technologies to enable successful customer deployments, from connectivity installers to Fortune 100 companies. Mike's focus is on value-added solutions that solve customer problems related to technologies such as converged infrastructure, high-performance computing, and AI applications. Mike, I'll hand the call over to you now.

Great, thank you, Kelly, and good morning, all. As Kelly said, I'm Mike Compton, a senior solutions architect within our Panduit data center fiber group, and I'd like to speak with you today about some AI trends and insights we're seeing in the marketplace. Our agenda for today is, first of all, who is Panduit; then we'll give you some current and future AI trends that we're seeing; also, what does an AI cabling plant look like once all of these trends have worked through the marketplace; and then lastly, how can Panduit help you with your AI journey.

Panduit approaches our business differently because we're a unique company. As a privately held company, Panduit has the freedom to make decisions that meet our customers' needs. Our business model is focused on earning customer preference, and our commitment is to cultivating long-term customer partnerships that deliver business value. We offer the most comprehensive product line in the industry, and we leverage and apply the latest technologies to manufacture solutions that create competitive advantages for our customers. Here are a few things I'd like to highlight about Panduit. Like I mentioned, we are a privately held company, which means we're not tied to shareholder returns; with that in mind, we place a large emphasis on research and development to ensure that we bring the latest and best solutions to the market. We're also a top-three connectivity provider, and you will see our name on many customer specs as an approved vendor. And lastly, Panduit products and solutions are in 90% of the Fortune 100 companies. As a trusted manufacturer of data center components, these high-profile customers trust Panduit to run their multi-billion-dollar companies on our solutions.

Now we're going to talk a little bit about trends that we're seeing in the AI marketplace itself. Let's first define some terminology in artificial intelligence. AI systems can be defined in two parts, which I'll discuss and then illustrate with some examples so you can understand how they relate to products you've heard of. To start, GPU racks will be purpose-built for either training or inferencing. You'll need both to complete an AI system, and training must occur before inference can be implemented. What training means is that a developer feeds the model a curated data set so it can learn everything it needs to know about the type of data it's going to analyze. And then you also have inferencing: inferencing is when the model creates output or predictions based on live data to produce actionable results.
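To make the training-versus-inferencing distinction concrete, here is a minimal sketch in Python, assuming a scikit-learn environment; the tiny dataset, labels, and model are made-up stand-ins for illustration, not the GPU-scale workloads discussed in the webinar.

```python
# Minimal training-then-inferencing sketch (toy data; illustrative only).
from sklearn.linear_model import LogisticRegression

# "Training": a developer feeds the model a curated, labeled data set.
curated_samples = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
curated_labels = ["anomalous", "anomalous", "normal", "normal"]
model = LogisticRegression()
model.fit(curated_samples, curated_labels)

# "Inferencing": the trained model produces predictions on new, live data.
live_sample = [[0.85, 0.15]]
print(model.predict(live_sample))  # e.g. ['normal'] -> actionable output
```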
There are two terms you'll hear in regards to AI. One is called gen AI, or generative AI, which creates new content; some examples can be ChatGPT or design generation. You'll mostly see this with chatbots: many corporations are training customer-service-facing bots to be able to respond as first-level customer contact points. Secondly, there's predictive AI, which uses trends to predict future outcomes in industries such as manufacturing, finance, and technology. An example of that in technology is firewalls or IDS applications seeing new types of hacking attacks and then using AI to proactively disable the threat or alert analysts to the attack in progress.

So some of the trends we're seeing in the marketplace you've probably heard of before. Power, cooling, and fiber density are beyond the capacity of most existing data centers; most recently, Google and Microsoft announced deals for small modular reactors. Data centers that weren't built for AI are quickly adapting their delivery methods to suit customer deployments by having to add cooling and power to smaller locations, or pods; we've even seen new data centers with their own solar farms. You can actually expect most new data center builds to be designed specifically for AI. Power demand is driving the need for additional off-grid sources such as wind, solar, and SMR nuclear, which, as I mentioned, Google and Microsoft are looking at right now. One issue is that retrofit is difficult, as it requires large investments in power, cooling, and structured cabling in small areas like pods. Most large AI installs will be rack-and-stack work done by integrators; these pods are not easy to build given all the constraints around cooling, power, and cabling, and many are being purpose-built by these integrators. Enterprise customers are likely to only use one or two GPU servers per rack due to the power and cooling constraints that I just mentioned: enterprise-sized customers are putting fewer GPUs per AI system just to make sure that they have the power and cooling to drive these racks. Smaller AI pods today are air cooled, and we're seeing a trend of moving to direct-to-chip cooling as density increases. Direct-to-chip cooling brings additional requirements to the customer's environment that they might not be able to support today, which is why GPU densities are smaller in some of these environments. And lastly, fiber is used for all rack-to-rack and most intra-rack connections. Factors such as cable diameter, distance limitations, and cable management make direct attach copper, active copper, and AEC cables an unattractive option; you'll see fiber predominantly used in AI connectivity, and we're seeing customers using direct attach options having issues with cable management due to cable rigidity and the diameter of the cabling itself.

So the AI revolution is causing disruption in the data center marketplace. Starting in 2023 and going all the way to 2028, we're seeing a four-times spend on GPUs in the marketplace. This is creating a demand for GPUs that is outpacing the supply; when AI systems started becoming popular just a year or two ago, the spike in GPUs being sold to serve the technology went through the roof. We're seeing 10x more power needed per rack, and as the density grows, so does the cooling need. Data center providers were used to seeing their average server cabinet in the 5 to 14 kilowatt range; as you'll see later, 100 kilowatts per cabinet is not out of the question with these dense AI systems.
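As a rough illustration of that power gap, the back-of-the-envelope calculation below uses the figures quoted above (a 5 to 14 kW traditional cabinet versus roughly 100 kW for a dense AI cabinet); the 16-cabinet pod size is an assumed example for illustration, not a figure from the talk.

```python
# Back-of-the-envelope rack power comparison (per-cabinet figures from the talk; pod size assumed).
TRADITIONAL_KW_PER_CABINET = 14   # high end of the 5-14 kW range cited
AI_KW_PER_CABINET = 100           # dense AI cabinet figure cited
CABINETS_PER_POD = 16             # assumed pod size, for illustration only

traditional_pod_kw = TRADITIONAL_KW_PER_CABINET * CABINETS_PER_POD
ai_pod_kw = AI_KW_PER_CABINET * CABINETS_PER_POD

print(f"Traditional pod: {traditional_pod_kw} kW")                 # 224 kW
print(f"AI pod:          {ai_pod_kw} kW")                          # 1600 kW
print(f"Increase:        {ai_pod_kw / traditional_pod_kw:.1f}x")   # ~7.1x
```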
And finally, we're seeing four times the cable density per rack, meaning your cable management is going to be paramount in making sure your installations are usable. This density brings cabling issues, and as you'll see a little bit later, transceiver trends are causing cabling issues in AI as well. The Uptime Institute has actually predicted that 10% of all data center power in 2025 is going to be used specifically for AI, and that's going to continue.

So just to give you a quick comparison of the data center types out there — there are multiple, and I'll give you a generalization — we have enterprise, hyperscale, and AI data centers. I'd like to focus on a few of the categories. Server connectivity: traditionally, all the endpoints in hyperscale and enterprise data centers were terminated in duplex LC, as you're seeing here; with AI, we're seeing a lot more multi-fiber, MPO-based connections at the end device itself. Also, if you gave me a bill of materials a year ago with a multimode angled polish connector on it, it would look like an outlier application; now I rarely see a bill of materials without one. Let's move down to protocol. Ethernet has been the networking standard for 50-plus years, but with the requirement for lossless connections and high bandwidth, InfiniBand is popular in AI applications. Obviously, as AI grows, the large Ethernet switch vendors like Cisco and Arista have created a consortium that touts the use of Ethernet in AI networks so as not to be left behind. We'll move down to power and rack considerations: GPUs are power hungry, and a 100-kilowatt cabinet is not unheard of in AI applications. As you see in the enterprise space here, 5 to 15 kilowatts is usually your average, while if you move over to the AI marketplace, we're seeing upwards of 100 kilowatts per cabinet. Then we'll move down to cooling. Most data centers were air cooled and only liquid cooled in a very limited fashion; with AI, the smaller, less dense implementations are air cooled, but we're seeing a trend toward density, so liquid cooling will become the norm going forward. And lastly, let's move down to location — this one is actually pretty interesting. Often you'll find these data centers near large cities, which provide easy access to power and internet interconnection. With AI, we're seeing a trend of getting away from these populous regions because operators are trying to tap large areas of available power supply; we're even seeing new data centers buy offline power plants and turn them back on to suit power needs.

Now that we've defined AI and gone through some trends, let's discuss how this changes your cabling plant. NVIDIA, the market leader in AI, has published a 55-page guide on how to connect all of these components in an AI pod. The issue with this is that everything is listed as point-to-point, or direct-connect, cabling. Seeing that some of these pods can be 10 to 20 cabinets, what do you think cable management is going to look like when you're home-running 30-meter cables like they're suggesting in their guide? Direct connect would only make sense for smaller enterprise installs where there isn't a larger pod — maybe one to three cabinets.
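To give a feel for why 30-meter home runs become hard to manage, here is a small order-of-magnitude sketch; the roughly 800-cables-per-pod figure comes from later in this talk and the 30-meter length from the point-to-point guide discussed above, so treat the total as illustrative only.

```python
# Rough point-to-point cabling estimate for one AI pod (order-of-magnitude only).
CABLES_PER_POD = 800      # "almost 800 fiber cables" per pod, per the talk
HOME_RUN_LENGTH_M = 30    # home-run length suggested in the point-to-point guide

total_fiber_m = CABLES_PER_POD * HOME_RUN_LENGTH_M
print(f"Fiber to route and manage: {total_fiber_m} m (~{total_fiber_m / 1000:.0f} km)")
# -> 24000 m (~24 km) of point-to-point cable for a single pod, before growth
```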
These are the four fiber cable types that NVIDIA suggests in their guide. Notice also that everything has a green connector housing: that designates that these cables are terminated in the angled polish connectors I discussed earlier. At these speeds, ensuring that you have less reflection, with angled mating versus air-gap connections, can help with better performance. Notice the Y cables as well; these are for connecting 400-gig to 2x200-gig applications, for those of you that might have seen these two-to-one breakout connections before.

What I'm showing here is a layout of an SU, or scalable unit, rail-optimized AI pod. This design is using 12 InfiniBand switches: four spines and eight leaves. The rail-optimized design shown here is a one-to-all mesh, which is different from legacy primary/failover networking; rail-optimized is the term that NVIDIA uses to indicate maximizing performance while reducing any network interference. The key here is looking at the GPU server at the bottom: each GPU has links to every leaf in the middle, and every leaf has an uplink to every spine up top — a full mesh. This diagram is just showing logical links, too. A GPU NIC usually contains two fiber connections per 800-gig link using dual MPO-8 400-gig transceivers, so as you can see, this is going to start causing some cable congestion in your network.

As I mentioned on the previous slide, here is that cable congestion. If you look at the transceiver in the bottom left, this is the most popular 800-gig transceiver in AI implementations; it uses two MPO-8 cables running at 400 gig each, and the transceiver merges the lanes to deliver that 800-gig throughput. What the picture up top is showing is a typical 64-port AI switch, or up to 128 MPO cables; this example is actually showing a Mellanox QM9700, just for reference. Remember, each scalable unit has 12 switches, 128 cables per switch, going from GPU to leaf or from leaf switch to spine switch. The goal of AI is density, so what happens when you go from one pod to four? These counts start ballooning: with just one pod you can expect almost 800 fiber cables, and remember, that doesn't include out-of-band copper, liquid cooling lines, or power cables. Are you starting to get a good feel for what these AI cabinets are going to look like?

Hopefully what I've been able to show you is that AI comes with a lot of challenges for customers and data centers alike. Being a trusted partner with world-class solutions, Panduit can help you with your or your customer's AI installs, and we can do that by applying the benefits of structured cabling. Panduit is known in the connectivity space, among other things, as a trusted structured cabling provider; many recognize us as a copper cabling company, but we've also been in the fiber marketplace for decades. How does structured cabling benefit you in AI installs? One, AI is a connector-dense, high-dollar-value cabling environment: as I mentioned, density is maximized, so there is no room for additional slack in cabling, and slack may prohibit cooling of expensive infrastructure. Structured cabling also makes it easier to add a pod for growth. So now let's discuss the benefits of structured cabling. If we look at high cable density, I think I've shown that. Network longevity: at 800 gig you're not using OM3 patch cords; you're going to want OM4 or greater for all your horizontal links. We can skip down to the installation section here: AI brings you multiple points of connectivity — think GPU to leaf switch to spine switch — and that sounds like a pretty good case for structured cabling, if you ask me.
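Putting the cable-count walkthrough above into numbers, the sketch below uses the figures from the slides (64 switch ports, two MPO-8 cables per 800-gig port, 12 switches per scalable unit); it is a simple count, not a validated bill of materials, and the unique cable count is lower than the raw termination count since switch-to-switch links land on two switches.

```python
# Cable-count sketch for a rail-optimized scalable unit (figures from the slides).
PORTS_PER_SWITCH = 64     # typical 64-port AI switch (e.g. the QM9700 shown)
MPO_PER_PORT = 2          # dual MPO-8 cables per 800-gig transceiver
SWITCHES_PER_SU = 12      # 4 spine + 8 leaf switches per scalable unit

cables_per_switch = PORTS_PER_SWITCH * MPO_PER_PORT             # 128, as quoted
switch_side_terminations = cables_per_switch * SWITCHES_PER_SU  # 1536

print(f"MPO cables landing on each switch: {cables_per_switch}")
print(f"Switch-side MPO terminations per scalable unit: {switch_side_terminations}")
# Unique cables are fewer, since leaf-to-spine links terminate on two switches;
# the talk quotes almost 800 fiber cables for a single pod, and four pods
# multiply that again -- before counting out-of-band copper, cooling, and power.
```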
So what does structured cabling look like in AI applications? Panduit has published a guide on how to complete structured cabling in AI environments. This guide is being used by installers and is posted to nvidia.com, showing Panduit as an approved structured cabling vendor. Here's what the guide showcases: we've defined the essential application types in an AI pod — switch to switch, GPU, and so on — and highlighted structured cabling options for every connection. There are multiple offerings based on the application, and it's also up to customer preference which one you would want to use. To house the fiber itself, we have multiple options: HD Flex, as you'll see above, is an award-winning enclosure offering maximum density; SFQ QuickNet offers you ease of plug-and-play; and our Opticom offering provides slide-out or tilt trays for ease of install. The connectivity can be completed with trunks or interconnects based on distance or customer need; typically, if you're not pulling through a fiber pathway and can lay the fiber in place in the cabinet, you can complete these installs with no fiber jumpers, or interconnects as we call them.

So let's take a look at the guide. Each application has a wireframe image, as shown up top, so you can visualize what the link itself looks like. Showcased here is an 800-gig switch to 800-gig switch link: each transceiver uses dual MPO interconnects to a fiber adapter panel, or FAP, with a structured connection of another interconnect or trunk, then the process repeats on the far end to link to that transceiver. Each application shows the transceivers used; the panel, enclosure, and cassette options; and the fiber interconnectivity, to ensure you have a fully supported AI network link. This guide highlights nine applications and the Panduit infrastructure you would actually need to cable each one successfully in your environment.

Lastly, let's discuss some products and applications that make Panduit a differentiator in the AI marketplace. First, Rapid ID: all of our interconnects and patch cords come pre-labeled with Rapid ID labels. What this means for you is that at time of install you can scan the labels on both ends with a Bluetooth scanner into our software, give it the location of the to and from connection, and it will keep your connectivity map for you — so you can feel free to ditch your Excel spreadsheets. Next, Signature Core fiber: our patented multimode OM4+ fiber that enables extended distances in multimode applications, and multimode is much of what you're actually going to see in AI. Panduit is the only company on the market that will give you those extended distances beyond OM4 or OM5, often 25 to 30% additional reach. Our PanMPO connector is another award-winning Panduit product that allows for field gender and polarity changes without a separate tool; the tool is integrated into the connector housing and aids in the gender change. Did you order the wrong part, or was it supposed to be male and you specified female? You can extend the pins in the field. If you ordered polarity A and you actually needed polarity B, that is also field changeable: just remove the connector housing, rotate it 180 degrees, and you're good to go. PanMPO is also the same price as a standard MPO connector, so there's no reason not to specify it in your builds. Finally, HD Flex helps with dense applications; the nice thing about HD Flex is its split-tray design, allowing you to minimize circuit risk by only sliding half a tray for any moves, adds, and changes.
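Circling back to Rapid ID for a moment: as a toy illustration of the kind of connectivity map that scan-both-ends workflow produces (this is not Panduit's actual software, and the label IDs and locations are invented), a minimal sketch might look like this.

```python
# Toy connectivity map keyed by scanned label IDs (illustrative only; not Rapid ID itself).
connectivity_map: dict[str, dict[str, str]] = {}

def record_link(label_a: str, location_a: str, label_b: str, location_b: str) -> None:
    """Store both ends of a scanned patch cord so either label can be looked up."""
    connectivity_map[label_a] = {"far_end": label_b, "location": location_a}
    connectivity_map[label_b] = {"far_end": label_a, "location": location_b}

# Example scan: one end at a leaf switch, the other at a GPU server NIC.
record_link("CBL-0001-A", "Rack 3, leaf switch port 12",
            "CBL-0001-B", "Rack 7, GPU server 2, NIC 1")
print(connectivity_map["CBL-0001-B"]["far_end"])  # -> CBL-0001-A
```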
So these are some of the products that we offer in the AI marketplace, and what you might be seeing when you start to spec some of the jobs for your AI customers or for yourself. Do we have any questions today, Kelly?

Okay, thank you very much, Mike. It does look like we have a couple of questions in the Q&A already. First one: why do you think 8-fiber angled polish connectors are more popular in AI than 16-fiber connectors, considering the high-bandwidth nature of the application?

Ah, good question. A few reasons, in my opinion, to be honest with you. For one, the better availability of 8-fiber connectors versus 16-fiber in any form factor. 16-fiber isn't plug-and-play with existing 8-fiber or 12-fiber infrastructure either, without some type of media conversion. There also isn't a lot of optical transceiver support right now for 16-fiber connectors. The APC connector also helps with back reflection and signal loss, which is critical at these high speeds we're seeing in AI. But as the speed and lane requirements increase, I think 16-fiber connectors will begin to become more popular in the marketplace.

Okay, thank you. Second question: can't you just direct-connect everything with DACs and AOCs in AI?

So, yes and no. DACs work well at short distances with lower-bandwidth applications, I would say 100 gig or lower. With AI, most applications are 400 gig or 800 gig; the distance limitations you're going to see, as well as the physical size and inflexibility, make them a poor choice for cable management in these connectivity-dense applications like AI. Also, having to pull AOCs from cabinet to cabinet for this application may break the cable, which would negate any benefit you're actually getting from the AOCs.

Right. Next question: what is the connectivity trend you're seeing for the next generation of AI?

Okay, so what we're seeing is that some of the GPU manufacturers would suggest that simplifying your deployment with AEC cables, or active electrical cables, was