Artificial Intelligence and Machine Learning on Cisco UCS

Rob: Recently I had a chance to interview a couple of technology leaders from the NBA, and the research for that led me to a company they work with for analytics. They were doing big data and machine learning, and applying it in ways I didn't think could be done. They could tell a coach, and this is just one small example, "these are your players who are good players but take bad shots, and here are some relatively bad players who can make very good shots." That's a level of data I wouldn't have thought could be measured that way, and it reminded me, to the point of today's show, that anything can be analyzed, tweaked, and improved. Artificial intelligence is a top priority for every organization, and machine learning is showing up at some level within every company I get a chance to meet with. Most of you are probably familiar with the challenges this has brought to IT: all kinds of new software flooding us with new types of data. Much of it isn't happening in the data center, but it still needs to be supported and it still needs to be under our control. We're here to help at TechWiseTV, because we've got a new GPU-optimized UCS server to show today; in fact, it's right next to me. We've also got a guest engineer from NVIDIA who's going to explain the Tesla V100 Tensor Core GPU, and then a real live data scientist we dug out of Cisco who's going to share his point of view. But first, let's kick it off with our good friend KD.

KD, thank you so much for coming in again. We obviously have some big things to talk about today, but our topic, artificial intelligence and machine learning, feels like hype out in the market. You're saying it's not.

KD: It's not hype, and it's really the firestorm of a few things coming together. The first piece is the fuel, which is the data: there are enormous amounts of data being generated in the world, not just by users but also by IoT devices. Add to that the fire, which is that companies are realizing that if they don't participate, they're going to get left out. Analysts say that companies that invest in these AI/ML/DL techniques will generate $1.2 trillion more value than companies that don't, so it's an imperative.

Rob: So there's very much a bandwagon effect going on here: you've got to be doing it, because there are numbers behind it that support it in multiple ways. And you're saying these ingredients are combining.

KD: Those two are combining with a third ingredient, which is: can you do it? That's the democratization of the techniques. It's not just the purview of a large cloud provider anymore. There are frameworks for AI/ML/DL that are readily available today, often open source. There are processing elements, CPUs, GPUs, accelerators, that make this possible. And finally, companies have invested in big data techniques and created data lakes that can be leveraged to feed these techniques. All of these things coming together is really creating that firestorm.

Rob: I love it when technology becomes more accessible, because then you get all the smart people you never would have spoken to generating ideas we never would have thought of, and big things come out of that. I love the democratization piece.

KD: That's right. Sometimes it just takes time, but once all the ingredients are there, it takes off like wildfire.

Rob: Let's back up for a second, from a broad perspective, for anybody who may not be completely familiar with these terms or thinks they're interchangeable. How do you describe what's important to know here?

KD: There are a lot of terms, and AI and ML are also a new paradigm, so let's back up a little. Traditional programming was all about the developer: the developer had the logic in his head, he coded that logic into a program, and he fed that set of instructions to a computer, which carried them out. You really depended on the developer's knowledge to impart the structure and the information; the computer just did it faster, but someone still had to know the logic. The difference now is that the machine teaches itself. A machine learning system, an AI/ML/DL system, is loosely modeled on the human brain. You show it data, and in effect it says, "I've noticed that that four-legged creature looks like a cat; I'm going to train myself to recognize that it's a cat." The machine trains itself. There's no developer in the picture anymore, just a data scientist who feeds it data and tunes the parameters. That's the difference.

A couple of terms while we're here. I refer to training quite a bit: training is the process where you feed the system large amounts of data and the system tunes the parameters of its model, really fine-tunes the model. Once that's done, the model is sent out into the world to work on real data and draw what we call inferences from it; that's the inference part. Inference doesn't require quite as much horsepower as the training machines do, but it does require the ability to manage large numbers of inference nodes remotely, and potentially centrally.

Rob: "Remote" is a magical word at Cisco; we like to connect remote things and make stuff happen at scale. What kind of use cases would you highlight?

KD: The use cases exist in every vertical, but they're a little different in each one. In retail you're talking recommendation engines: if you bought this, you might like that. In security you're talking crowd analytics and facial analytics. FinTech is all about fraud detection, credit card fraud and so forth. Let me give two examples we've been dealing with, one with a direct customer and one with Cisco IT. The FinTech example was a customer of ours: we helped them create a data lake based on existing techniques, they prepared and cleansed the data, and they attached an AI infrastructure that is now driving much more personalized interfaces with their customers, a much more personal touch, as opposed to a pathway that is automated and machine-like.

The Cisco IT example comes from our services organization, the support department. Cisco has world-leading support in TAC, and TAC has a ton of data. They've been running AI techniques on that data to predict where failures will occur, a proactive kind of service, so that when you call in, we're on top of things faster. We promise a certain SLA to our customers: if something goes wrong, we'll be there within a certain amount of time and fix it. What we're doing now is predicting those failures and pre-positioning spares in different parts of the world along our supply chain, to maintain those SLAs and maybe even improve on them.

Rob: How does all this data and what we're doing with it reflect on what we care about as IT?

KD: This is a new space, and there are lots of challenges for the different people operating in it. First of all, these projects really come about from the top. The CxO, the CEO or CIO, hears the buzz, or doesn't want to get left out, and says, "we want to launch an initiative around AI," or launches a new project in this space. That involves two pieces of the puzzle. One is that the lines of business hire data scientists, and the data scientists have the challenge of dealing with a lot of data. They also have the challenge of dealing with an infrastructure and ecosystem that is fairly immature, so they're having to piece together infrastructure.
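KD's training-versus-inference distinction can be sketched in a few lines of toy Python. This is purely illustrative: the "ear-pointiness" feature, the numbers, and the one-parameter model are invented for the example, and real deep learning training runs in frameworks such as TensorFlow or PyTorch on systems like the one shown later in this episode.

```python
# Toy illustration of KD's two phases: "training" tunes model parameters
# from labeled data; "inference" applies the trained model to new data.
# (All features and numbers here are made up for the sketch.)

def train(samples):
    """Training: fit one parameter (a threshold) from labeled data.

    samples: list of (feature, label) pairs, where label is "cat" or
    "not-cat". Returns the midpoint between the two class means.
    """
    cats = [f for f, lbl in samples if lbl == "cat"]
    others = [f for f, lbl in samples if lbl == "not-cat"]
    return (sum(cats) / len(cats) + sum(others) / len(others)) / 2

def infer(model, feature):
    """Inference: apply the trained parameter to unseen data."""
    return "cat" if feature >= model else "not-cat"

# "Four-legged creature" features, e.g. a hypothetical ear-pointiness score.
training_data = [(0.9, "cat"), (0.8, "cat"), (0.2, "not-cat"), (0.1, "not-cat")]
model = train(training_data)   # the heavy lifting happens here
print(infer(model, 0.85))      # -> cat
print(infer(model, 0.15))      # -> not-cat
```

The expensive part in practice is `train()`: at deep learning scale it iterates over millions of examples, which is where the GPU horsepower discussed in this episode comes in, while `infer()` runs on the lighter inference nodes KD describes.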

Rather than working on the task at hand, they're assembling plumbing. And then finally, they need to scale their starter "hello world" models to real-world production, to train models and operationalize everything, and that's not trivial.

Rob: Not at all. He said that so seriously; he means it's not trivial.

KD: It's not. The IT team, on the other hand, is dealing with an infrastructure that could become a silo of its own, and you can never predict how fast it will grow. You really want it to be part of your continuous infrastructure: the same manageability, the same sparing, the same operational models, not a new silo. So IT really wants something that fits into the existing infrastructure, not just for operational ease, but also because they need to connect it to the data sources they've been working with in the past. They need something that fits in, and they need something that is more curated. There's a lot of fragmentation in the software stacks, and in the hardware stacks for that matter, and just piecing it all together is a non-trivial task.

Rob: And there's risk, because we're putting more things out there that we don't know as much about. But you're saying we've made announcements now that help mediate or mitigate some of those things.

KD: It's about time to value. How do we enable the IT teams to provide for the data scientists, and how do we enable the data scientists to create time to value for the company? What we've done is build a portfolio of products. There's a set of products intended for the inference side of the market: lesser horsepower, manageable remotely using the UCS management system, and part of the same UCS family, with the same management, as every product you see from us. There's a set of products for test and dev, for those "hello world" operations. And then there's a brand-new product, built from scratch, for the deep learning training part of the market. That really requires processors, it requires GPUs, and lots of them, and it really is a complete system designed around not only the GPUs and the processing power, but also having the data on the system itself, with a large number of drives, and the fabric to connect it all together.

Rob: It looks like we're ready in the lab. I understand you've also got partners we've been working with to make all this come together. Why is that important?

KD: It's really important, because it's not just about the hardware; it's about the software ecosystem that goes with it. We've been working with a lot of software partners to get those stacks validated and optimized on our systems, and with channel partners, because they're an important part of this mix: they're the ones who stitch it all together and make it real for our customers. So we've got an ecosystem of partners, both channel and software, and then a world-leading

portfolio of infrastructure elements.

Rob: Excellent. KD, I want to get into the lab and look at some of the new hardware. Thank you so much. Guys, hang with us; Vikas is here to show us the new hardware we get to play with next.

Welcome to TechWiseTV, Vikas. I'm so excited, because you brought hardware just like you promised. Let me make sure I get this right: this is the C480 ML, not to be confused with the C480 that we already have, although it's the same chassis for the most part?

Vikas: That's quite close. It's purpose-built, the ML being machine learning: purpose-built for accelerating deep learning and machine learning in the enterprise. The two may look the same on the outside, but it's a lot different on the inside. The rear is where a lot of the change has gone; the modularity of the C480 allowed us to roll it out fairly quickly, so we share quite a bit in the front, but the rear is where all the magic resides.

Rob: Let's start with the magic, then. What makes this different?

Vikas: First of all, we're really excited to welcome this new platform into the same incredible UCS compute line of products. This one has four key components, Rob: one is the GPU subsystem, second is the network fabric subsystem, third is the CPU subsystem, and fourth is the storage.

Rob: Let's talk about the GPUs, which are at the heart of the acceleration. Let's get a close-up here to make sure no one misses anything. Point out what's happening in here that's different, that went into the design.

Vikas: If you look carefully at this section, Rob, you'll see there are eight NVIDIA V100 SXM2 GPUs sitting in there, four in the front and four in the back. Each of these GPUs has over 5,000 CUDA cores, so in total we're looking at over 40,000 cores.

Rob: Oh my gosh. All the gamers right now are looking at this going, "how do I get this thing running what I want to run?" But you've obviously purpose-built this for something else. This stuff is powerful, so it must generate a lot of heat. Is that why these heat sinks sit at different heights?

Vikas: Yes. Running at peak, it can generate tons of heat, and when it generates heat you need the right mechanisms in place to cool it down so the performance isn't artificially throttled. That's where the mechanical engineering brilliance comes in: we've implemented and optimized end-to-end airflow, and you can see this interesting differential heat sink design, one low and one high, so any airflow hindrance caused by the front heat sink is compensated by the elevated heat sink behind it.

Rob: So the air pulls from the cold aisle, goes over these, and exhausts out the back into the hot aisle, and the heat sinks are staggered so the rear ones aren't hidden behind the front ones, simply sucking in the heat from the one in front.

Vikas: That's correct, Rob, and it lets the system run at peak without throttling the performance down, either at the GPUs or at the CPUs in the front. The system will protect itself: it won't die, but it will stop processing as many bits, so to speak, and you don't want that to happen.

Rob: As we work our way forward, is it time to talk about the fans? These look like hot-swappable fans.

Vikas: Absolutely, these are hot-swappable fans, and as you can hear, they do make a lot of noise, but they're the key to keeping the system cool.

Rob: Turn them off, because I'd never be able to hear you! OK, as we move forward from the fans, this part should look a lot like what we're familiar with on a C480; it's got the same modularity.

Vikas: The same modularity. In the front is the CPU complex, with the latest generation of Intel Xeon Scalable processors and up to 3 terabytes of DRAM. You also have 24 drives in the front, and we pack in up to 182 terabytes of raw capacity. There are LSI controllers to give you a choice of RAID configurations, so you can keep the storage local and have your machine learning jobs run locally. But most of the time you'll go out on the network, because that's where your data lake resides, that's where your data lives. So we made sure you have the Cisco fabric, so you can bring in the data fast enough to keep your deep learning job running smoothly and fast. And one final point: we also have up to six NVMe drives.

Rob: If you hadn't brought that up, I was going to, because NVMe seems to be extremely popular, and necessary for what a lot of people are trying to achieve. NVMe is where people want to go right now.

Vikas: That's right, and it helps us clear the performance pipeline from the drives to the DRAM and all the way to the GPU complex. You really don't want a bottleneck anywhere; the performance has to carry all the way through.

Rob: Speaking of that, this is part of UCS, and you mentioned that being part of the family is a big deal. What do we need to understand about why this is different from a run-of-the-mill server I could get from Fry's?

Vikas: That's an important point. The first thing is exactly that: it's part of the UCS family, so as a result you get the same level of manageability and ease of use that you've heard is synonymous with UCS Manager and Intersight.

Rob: Perfect: all the easy management. Plus, if I'm already running UCS somewhere else, this isn't an island of processing.
Rob: It actually brings it into the family, so if anybody was starting to go off on their own as they began crunching data, this is an easy way to keep them happy.

Vikas: That's exactly it. If you're already using UCS, you just slide it in. There's no need to learn anything new; the manageability is the same thing you're used to.

Rob: What else?

Vikas: The second thing is performance. We have a big focus on performance: we're talking about accelerating deep learning and machine learning, so we've put a lot of emphasis on making sure the entire pipeline, from data intake, to loading the data into the DRAM, all the way to the GPU complex, is unclogged. And then, of course, NVIDIA does the magic of making sure the NVLink and the SXM2 GPUs in there are optimized for doing the deep learning processing fast.

Rob: I want to bring NVIDIA in here in just a second, but I think there was one other thing you wanted to cover.

Vikas: That's right: the third important vector of the UCS franchise is solutions. When we design a system, we always keep the customer and the customer workload as our North Star. So, for example, for the big data customers who have shown the initial propensity to go for deployments of AI/ML/DL workloads, we will have solutions ready so they can bring in this box and get started without much hassle.

Rob: The idea being that you're not testing applications and then finding out there's some mismatch of some sort; we've already gone to the trouble of making sure this all works seamlessly, as you'd expect.

Vikas: That's exactly it. There's a whole roadmap of solutions we're going to be qualifying it for.

Rob: Perfect. Vikas, thank you. It looks like Paresh is ready to come in here, so Paresh, please come on in. Thank you so much for coming.

Paresh: Thank you so much for having me.

Rob: Let me switch the slide over here, because I want to ask you: you guys are well renowned for having the most amazing GPUs, but I have to admit I only understand it at a basic level: GPUs are about parallel processing, versus CPUs being, what do you call it, serial processing, working sequentially.
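The parallel-versus-serial distinction raised here can be pictured in plain Python. The core counts are the ones quoted in the lab segment; the chunking below only simulates the idea, since real GPU parallelism happens in hardware.

```python
# A rough picture of "parallel vs. serial". A CPU core walks through the
# work sequentially; a GPU spreads the same work across thousands of
# simple cores. Per the lab segment, each V100 has over 5,000 CUDA cores
# (5,120, to be exact) and the C480 ML holds eight of them.

CUDA_CORES_PER_V100 = 5_120
GPUS_PER_C480_ML = 8
total_cores = CUDA_CORES_PER_V100 * GPUS_PER_C480_ML
print(total_cores)  # -> 40960, the "over 40,000 cores" quoted in the demo

# Serial: one worker squares every element in order.
data = list(range(1_000))
serial_result = [x * x for x in data]

# "Parallel": the same work split into independent chunks that could run
# concurrently on separate cores; the result is identical, only the
# execution strategy differs.
def chunk(seq, n):
    step = len(seq) // n
    return [seq[i * step:(i + 1) * step] for i in range(n)]

parallel_result = [x * x for part in chunk(data, 8) for x in part]
assert parallel_result == serial_result
```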
Register our weeks, two hours and when it comes to running these complex, network also known as inference, it, makes it practical, to use these complex, networks, for, real-time AI inferences. For, really powerful, experiences, that businesses, can offer to their customers, so we're constantly trying to obviously get as. Much power into a small space as we can because customers. Who have realized that there's so much return to be had based on their ability to gather, insights, and and plumb, that data so, to speak they are throwing more compute, power like this at it and you, guys are helping us do that absolutely. So, UCS. 480, ml comes. With eight of our V 100. Sensors. Called GPUs, and. The. Great thing about v110. So-called GP uses we've really. Innovated, completely. On the design, on the processor, as well as on the software stack and, built unique, tensor code technology into these GPUs, each.

V100. Provides. 125. Teraflops. Of performance. And eight, of these in, UCS. 480 connected together via Nvidia Envy link provides. One better flops of computing, so, that's, 125. Total, and that's what you guys yes what you're trying to tell me earlier it's, a hundred twenty-five per GPU, exactly. Yes whatever a teraflop is it sounds huge it sounds u21, petaflop yeah, it's basically a, million, billion, operations, every. Single second, so, it's massive. The computing, capabilities, here yeah. Okay, so that's exactly, what everybody is looking for when it comes to this you guys have built this but there's more to it I don't know if it's if it's going to the next thing here I'll go ahead and bring this up but, I wanted to talk about K you talked about the reduction in time here but before we talk about that you were talking to me about envy link because, I thought this was fascinating because it's not just about the GPU but it's your communication, between the GP, it's absolutely. It's not just about the GPUs it's about the end-to-end design, all, the way to servers, with, our partners, like Cisco, to. Make sure they are deployed differently, in the enterprise's, what, envy link allows is. It allows eight, of these, GPUs, to talk, to each other at ten times the, speed of a PCIe. Basic. We don't have that as a bottleneck then we don't have that as a bottling, basically, eight GPUs, behave, as one single one. Petaflop accelerator, if you will and are, available transparently. For the developers, as a single, giant processor, oh that is excellent and obviously the benefit. How, much of a reduction in time you get which this is really this is the kind of thing that equates to money when, you talk about what, you guys are able to do and I appreciate your partnership on this because I've never seen this much of a focused. Beast if you will is there anything else that we need to understand about the GPU side and NVIDIA partnership, before we go yeah. 
I think I think we talked about the important thing so we work with all the development, frameworks, for AI so, when developers, develop, AI they. Are working on frameworks, like tensorflow, or cafe, or flight or and. All, of these frameworks are. Accelerated. On on. Our GPU, computing, platform, so all of this computing, power is available, seamlessly. To developers, so they can easily use all of this to, develop and and get benefitted uses the tools that they're already using right now now they could do a scale now they can do it at scale and it's, a modern Network like resonate 50 which is what we're showing here it. It, would take almost. A month to train that Network on. A CPU only server with. With. A server like UCS 480 that, comes down to just four hours and, you can connect a lot, of these 480 servers, to. Even bring down at time to a few minutes and that, basically, translates to, companies, solving. New classes of problems because it's now practical, anything.
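The round numbers from this exchange can be sanity-checked with simple arithmetic. These are the figures quoted in the interview, not independent benchmarks.

```python
# Sanity-checking the interview's numbers: 125 teraflops per V100 times
# eight GPUs gives the "one petaflop" figure, and a month-to-four-hours
# ResNet-50 training run is roughly a 180x speedup. Round numbers only.

TFLOPS_PER_V100 = 125          # tensor-core peak, per the interview
GPUS = 8
system_tflops = TFLOPS_PER_V100 * GPUS
print(system_tflops)           # -> 1000 TFLOPS, i.e. one petaflop

cpu_only_hours = 30 * 24       # "almost a month" on a CPU-only server
gpu_hours = 4                  # "just four hours" on the C480 ML
speedup = cpu_only_hours / gpu_hours
print(speedup)                 # -> 180.0
```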

Anything that can be done in a few days is a practical amount of time for solving important challenges. It also accelerates their time to solution as they bring advanced AI services to market, and it optimizes the productivity of their data science teams.

Rob: Let me translate: I believe what you're saying is that Cisco is NVIDIA's favorite vendor. I think we can just go on the record that you prefer working with Cisco over everyone else; you don't need to say anything more.

Paresh: I appreciate that. It's good working with you guys.

Rob: I love what we've done here, because it's not just slapping GPUs in; there's a whole lot of thought that goes all the way through this. It's basically the best physical instantiation of "time equals money" that I've ever seen. Thank you so much. Guys, sit tight, because we're going to talk to our data scientist next, and he's going to give us his point of view.

All right, Debo, I'm so excited we finally got to this part of the show, because we've been promising we'd meet a data scientist, and we have them here at Cisco. I've never been allowed to visit you; my badge doesn't work anywhere near your office, and I don't think that's an accident. Tell me, what does a data scientist do?

Debo: There's a lot of myth and buzz around data science. A data scientist is essentially a person who manages to look at data, clean it up, analyze it, and then extract some useful insight that could be valuable to the business. For example, you have logs and telemetry from all your servers and infrastructure. A data scientist would look at those data sets and say, "hmm, that's an anomaly; something's going to fail soon," before it actually fails.

Rob: I get this feeling, and I'm going to go out on a limb here. Corporations are looking to hire data scientists because it's a hot area, and there's so much money tied to who gets this stuff first, who comes up with the unique insights the quickest; that's why the processing part we've been talking about is important. But imagine for a moment I'm a CEO looking to expand. I want to get this AI/ML stuff into my company and hire a data scientist, but I don't know what I'd tell you to do. It sounds like part of your job must also be advising: you say "give me the data," you dig into it without knowing where you'll end up, and then you provide direction back. I imagine that happens in a lot of situations.

Debo: Yes, many good data scientists are also very good advisors, because they look at data, they understand what the business needs are, and they try to support the business with the right insights. Also, these days data scientists get to pick and choose their infrastructure and their software tools, because they have to deal with the tools themselves.

Rob: So when we talk about IT being challenged by this growth in the data being produced and everything that's happening here, what's the reality from your perspective? Does it truly have a chain-like effect throughout an organization, both from a "how do we support you" perspective and a "what do we need to understand to make sure you're successful" perspective?

Debo: I think a good way to look at it is that IT is a mature organization in most companies, with very well-rounded policies and governance models. The data scientist's role is a new one, newish, and IT and data scientists are working together to figure out what's best for the organization today.

Right now it's very much a two-way street. From our perspective, we do both data science and engineering: we figure out what the right toolchains are, and we go and build our own toolchains, often in open source.

Rob: So not only do you have to figure out the problems you need to solve and get a direction established, sometimes you also have to create the tools to do it. Now, I understand you've actually partnered to create some things we may have heard of if we're in this space. Tell me about that.

Debo: When we looked at the problems we faced, our own pain points, we realized we were spending a lot of cycles repeatedly figuring out the DevOps, the training, and the inference, and stitching them together. Essentially, there's a gap between data science and engineering in production. So we looked at the landscape and said we should partner with a market leader, Google, and they were jump-starting a new project called Kubeflow, which is essentially a layer and toolchain to simplify machine learning and AI on Kubernetes. We invested, and we're building it in the community along with many other companies.

Rob: So you're using containers to do a lot of this, and Kubernetes is how you scale it, with everything that goes along with that as we keep going down that trail. How important is hardware like what we're doing in UCS, and is there anything unique about UCS in terms of how it helps you do this?

Debo: The hardware is what runs the software lifecycle, so that's the easy answer. But think about it: we need a hardware platform that is scalable and easy to manage. It's extremely hard to manage the lifecycle of bare metal, and as a data scientist you don't want to be touching that; it's not your thing. UCS is amazingly good at it, and it's good to hear that from our perspective: UCS just takes the pain away from us. These new servers are really powerful, and with all the hyperconvergence, it's really what we need. It helps shorten the time to train and the time to infer, which are some of the key things we data scientists worry about.

Rob: Is it the training process that uses all those GPUs we now have on this new platform?

Debo: Mostly, yes. Time to train is a really important factor.

Rob: So shrinking that literally makes a difference in how fast you come up with answers.

Debo: Exactly, and that has a business implication, because if you can come up with answers faster, you can make business decisions faster.

Rob: And isn't it true that you'll often go down some potential answers and find out they weren't right, and that's no big deal? The faster you can fail, essentially, the sooner you get to start over, fail again, and then come up with something good.

Debo: Yes, that's the nature of it; data science is not one-shot.

Rob: Very good. Is there anything else we need to understand about the importance of what's going on here? You're doing Kubeflow, you're doing it on UCS. What else do we need to know?

Debo: I think the thing to remember is that as enterprises transition into AI and ML, they need to worry about how to make their operations smooth, easy, simplified, and consistent across all their portfolios. UCS helps with managing the complexity at the bare-metal level, and it does a phenomenal job there. For managing the software lifecycle, Kubeflow does a pretty good job. It's early in its life as a lifecycle-management toolchain, but it's going to grow like crazy.

Rob: And you guys are using it.

Debo: Yes, and we make it better: if something doesn't work, we go fix it.

Rob: Excellent. Thank you for taking the time to come share that with us. I got to see a real data scientist.
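For readers wondering what "machine learning on Kubernetes" looks like concretely, here is a minimal sketch of a Kubernetes Job manifest that requests GPUs, the kind of pod template that Kubeflow's components wrap. The job and image names are hypothetical; `nvidia.com/gpu` is the extended-resource key exposed by NVIDIA's Kubernetes device plugin.

```python
# Sketch of how a containerized training job asks Kubernetes for GPUs.
# The image and job names below are made up for illustration; the
# "nvidia.com/gpu" resource key is the standard one exposed by the
# NVIDIA device plugin, and Kubeflow's CRDs wrap pod templates like this.

import json

def training_job_spec(name, image, gpus):
    """Return a minimal Kubernetes Job manifest requesting `gpus` GPUs."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": image,
                        # The scheduler places this pod on a node with
                        # enough free GPUs, e.g. an 8-GPU C480 ML.
                        "resources": {"limits": {"nvidia.com/gpu": gpus}},
                    }],
                }
            }
        },
    }

spec = training_job_spec("resnet50-train", "example.com/train:latest", 8)
print(json.dumps(spec, indent=2))
```

The point of the abstraction is the one Debo makes: the data scientist declares what the job needs, and the Kubernetes plus UCS layers underneath handle where and how it runs.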

Rob: Very, very neat. Can I hang out and ask you a few more questions afterwards? Anyway, thank you so much for that. Guys, don't forget that this goes beyond just the one server we talked about: there's a broad portfolio with GPU support and everything you need, including UCS management, which is really not just an ingredient; it is the solution, because of the simplification. It means you can pick the right server for the right workload and completely avoid even a hint of the complexity. It's about simplifying the operational model, keeping costs down, and extending the capabilities of your administrators, so we can all focus on what we know we need to be better at. Thank you so much for watching TechWiseTV. I appreciate you joining us for this one; I hope you had fun. We'll see you in the next one.

2018-10-05 03:31
