Zero Trust: It takes a village | Supercloud 3



Okay, welcome back to Supercloud 3. We're live in Palo Alto for a live event, mixing in some pre-records with simulive. I'm John Furrier with Dave Vellante. Supercloud 3 is Security Plus AI, and we have live on the remote, high degree of difficulty, John Roese, the CTO of Dell Technologies and distinguished Cube alumni. John, great to see you. Thanks for remoting in live to the Palo Alto studio, where Dave's here live too.

Hey John, great to be here. It's live, it's virtual, all fun.

We'll get this format down more and more; it's fun to get these events going. Supercloud 3 is Security Plus AI. You've been on many times talking about the trends that are important to the industry, and Dell participates in that as a big player, with number one market shares across the board in equipment from servers to the edge. You guys are the leader in this area. Where are these cloud trends going from your perspective? AI is a big part of it; how are you going to participate in that? We'll start with how you see the big market trends at the 30,000-foot level.

Yeah, we could spend all day on this; there are many trends, but the most topical ones right now sit at this intersection of AI, cloud, and security. In no particular order: obviously multi-cloud architectures are becoming much more relevant. People are realizing that there is no single cloud infrastructure that makes up the composite of their entire IT capacity, and even small companies that may start with a single cloud end up building multiple instances of their environment in that cloud, or they move from cloud to cloud. So trend number one, independent of AI or security, is that we are in a world where the composite infrastructure of any business is a collection of cloud infrastructures: public, private, on-prem, off-prem, edge, telco, whatever. It's nice to get to that point. Now the trend is really how do you make order out of that chaos? How do you not end up with a bunch of silos? How do you make your data move across them? How do you treat them as a system? That's really an early shift. Not everybody has figured it out; not all companies know how to do it. But if you look into the future and ask where you want to be in, I don't know, three years, you want to be able to take advantage of the compute diversity and capability of the collective cloud ecosystem, but you want to do it in service of your business, as a platform that is a multi-cloud. So that's number one.

Number two is we have a gigantic trend happening right now around the acceleration and adoption of AI technology, specifically around large language models and more advanced AI. I've recently been quoted saying something along the lines of: the entire cloud ecosystem was not engineered to run this stuff. We've been running AI in cloud environments, but not at this scale, not as the primary workload. So there is a trend that says even if you get your multi-cloud right, and you built it in a perfect way to run containerized code like web servers and other things, that is not the same workload as AI. An AI workload could be orders of magnitude more intensive; it could be extremely expensive to run if you put it in the wrong place. So multi-cloud discipline, knowing which parts of the AI system should run in which part of your multi-cloud, actually has profound consequences if you get it wrong.

To give you an example: if I develop an AI algorithm, I can do it anywhere. In fact, I would recommend you do it in the tool chains of any of the public clouds; they're quite good, easy to use, simple, you get started really quickly. Tick, okay, we've done the academic phase somewhere. Then we decide, okay, I have this algorithm but I need to train it. I've selected, you know, Mosaic or whatever large language model, but I want to make it an enterprise thing. Ah, well, then it gets a little more complex. Can I move my data to that environment? Is it a public instance or do I need a private instance? Do I have the money to pay for a large-scale training event, which can be very, very expensive? And more importantly, is this the only time I'm going to do it, or am I going to constantly retrain this model forever as part of my business model, which means I may not want to be paying on the drip for that; I may want to do it somewhere a little more predictable. So again, for training you'll have to make a decision. And then you get to the fun one, which is inferencing, which says now I'm going to flow data through this machine that can run at machine speed and consume resources at machine speed. Where do I put that? I would make an assertion that you'd probably want to put it close to wherever the data it's ingesting is, so that it runs really efficiently, maybe not on the other side of the internet. But on the other hand, even if you put it at an edge, you have decisions: do I put it in a co-located environment or actually in my factory? These are the same kinds of decisions you'd make in building an e-commerce application, but with AI it's just three orders of magnitude bigger. So there's lots of thinking right now, not only about building multi-clouds but about how we make multi-cloud work in the era of AI that we're about to enter. So it's complicated.
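To make that develop / train / inference decision a little more concrete, here is a minimal sketch of how such placement logic might look. The phases, rules, and placement strings are illustrative assumptions for this article only, not Dell's methodology or any particular product's behavior.

```python
# A minimal sketch of the develop / train / inference placement decision
# described above. The phases, options, and rules are illustrative
# assumptions, not any vendor's actual decision engine.

def suggest_placement(phase: str,
                      data_location: str,
                      retrains_continuously: bool = False,
                      latency_sensitive: bool = False) -> str:
    """Return a rough placement suggestion for one phase of an AI workload."""
    if phase == "develop":
        # Experimentation: public-cloud toolchains are easy to start with.
        return "public cloud (managed ML toolchain)"
    if phase == "train":
        # A one-off training event can stay on demand; continuous retraining
        # may be cheaper on predictable, dedicated capacity near the data.
        if retrains_continuously:
            return "dedicated capacity (private cloud / colo) near the training data"
        return f"on-demand GPU capacity with data staged from {data_location}"
    if phase == "inference":
        # Inference consumes data at machine speed: keep it near the source.
        if latency_sensitive:
            return f"edge or on-site, co-located with {data_location}"
        return f"region or colo closest to {data_location}"
    raise ValueError(f"unknown phase: {phase}")


if __name__ == "__main__":
    for phase in ("develop", "train", "inference"):
        print(phase, "->", suggest_placement(phase,
                                             data_location="factory data historian",
                                             retrains_continuously=True,
                                             latency_sensitive=True))
```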
John, yes, good to see you again. Thanks for coming on. Okay, so you've got multi-cloud complexity, and you bring in AI. Multi-cloud security, cross-cloud security, and cloud economics are complicated, and multi-cloud economics makes that even more complicated. So my question is: have you seen a change in patterns as to how customers are deciding where to place workloads?

Yes, in fact, and it was a reaction to one very simple thing: their budgets got blown up. Their bills were more than they expected; the utilization was higher than they anticipated. So about a year ago we started to see at least big companies get much more deliberate about the decision of where they would run their workloads. It used to be very much keep the developers happy, keep the line of business happy, if they want to do it let them do it, then try to clean up the mess. And then they realized the mess actually can blow up the economics of your company. So we did see a pretty significant shift, and it happened after people experienced a surprise bill, a budget overrun, and I think that's led to a more deliberate use of cloud. And that's a good thing. I'm not advocating for any one cloud or another; put the workload in the best place economically and technologically. But it turns out that, up until recently, a lot of people didn't really do that level of analysis. They kind of put it there and then assumed they'd figure out how to afford it or make it work. And to be candid, it hasn't really slowed down innovation at all, and companies that are doing that at least have a more predictable cloud experience. As they go into this AI era, where the bills for building AI systems are quite large compared to what they were doing before, they at least have a mindset of: I have to contemplate affordability, I have to contemplate best economics, I have to contemplate things like regulatory and compliance as part of that cloud decision about placing a workload. Not everybody's there, but you've been looking at this a long time too; three years ago people just blindly put it wherever and dealt with it later. That's not the tone anymore for people who understand the consequences.
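As a back-of-the-envelope illustration of the "paying on the drip" versus predictable-capacity trade-off behind those surprise bills, here is a tiny sketch. Every number in it is an invented assumption for illustration only, not real pricing.

```python
# Comparing on-demand retraining spend against amortized dedicated capacity.
# All figures are invented, illustrative assumptions.

ON_DEMAND_COST_PER_TRAINING_RUN = 250_000.0   # assumed $ per full retrain, pay-per-use
RETRAINS_PER_YEAR = 12                        # assumed monthly retraining cadence
DEDICATED_CLUSTER_ANNUAL_COST = 2_000_000.0   # assumed amortized hardware + power + ops

on_demand_annual = ON_DEMAND_COST_PER_TRAINING_RUN * RETRAINS_PER_YEAR

print(f"On-demand retraining: ${on_demand_annual:,.0f} / year (variable, can spike)")
print(f"Dedicated capacity:   ${DEDICATED_CLUSTER_ANNUAL_COST:,.0f} / year (fixed, predictable)")
print("Cheaper under these assumptions:",
      "on demand" if on_demand_annual < DEDICATED_CLUSTER_ANNUAL_COST else "dedicated")
```

The point of the exercise is not the numbers; it's that the answer flips depending on retraining cadence, which is why the placement decision has to be made deliberately rather than cleaned up after the bill arrives.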
Yeah, I think that's a good call-out on the old cloud model, which was a lot of portability discussions, workload management — check, check, you guys have been there, done that. On the AI side, I love your angle on these power dynamics between cost, performance, and stage of evolution: the builders come in, then the hosting and running the runtime, and then you operationalize AI. It's the same wine, new bottle. So you've got sustainability, you've got as-a-service, you've got workload portability issues, and then you've got cost, and in AI — contextual, behavioral — now you've got training and inference. So I have to ask you: do you see any legitimate player, or way customers can compete with Nvidia, for instance? There's a lot of Nvidia demand right now, shortages. You've got cloud services offering their own chips, Inferentia from Amazon for instance. You guys have relationships with Intel, Nvidia, AMD, all these guys. Are there any alternatives on the GPU side that are going to help move the needle?

Yeah, look, we're in a privileged position; we're kind of Switzerland. Ever since we don't own VMware anymore, they're just one of our partners, a good partner, but we also work with all the hyperscalers and all the semiconductor players. So we like them all. And legitimately, when you understand the AI ecosystem, what you realize very quickly is that there is an extreme diversity of AI workloads, even among large language models. Running RoBERTa as a chatbot in front of some automation function, you can do on a single server at an edge; it doesn't require a lot of capacity. Building a giant chatbot for your entire enterprise on a general-purpose LLM might need a much larger infrastructure. So we're actually feeling pretty good that there's Nvidia, which is definitely probably the best-executing company in the semiconductor space on accelerators, but you see that there are other places and other entities that can fill gaps or address other layers of the AI stack. And by the way, a lot of the gen AI and the simpler AI use cases you can run on a CPU; you don't need an accelerator. We have plenty of examples of that; in fact, you'd run them on a Precision workstation or a PC.

So we actually don't see it as there being only one semiconductor layer for this. In fact, semiconductor diversity works in our favor for two reasons — well, three reasons if we throw in supply. The third one is you might not be able to get the parts, so having supply diversity is pretty important. Number two is performance: you don't necessarily need the performance of the top-end Nvidia chip to do something that doesn't need it, and quite frankly you'd probably benefit by having it on a smaller-footprint entity. And the last is power. You mentioned it earlier; customers are constantly asking whether this AI evolution we're about to go through is going to break the planet. Is it going to create more energy demand than we can keep up with? Is it going to basically be the anti-sustainability metric? Our belief is, look, you have a fairly wide range of diverse accelerators, and they vary in terms of their effective MIPS per watt for an AI function. An Nvidia chipset is more of a general-purpose processor: pretty good power, pretty good performance, kind of good on all dimensions. But there are specialized chips emerging — neuromorphic processors, some of the 4-bit precision stuff that's out there — and even some of the hyperscalers have optimized silicon; a TPU is different than an Nvidia GPU. It turns out you can characterize them, so that for a specific workload, choosing the semiconductor may not be a function of price or even availability; it may be that it's the most energy-efficient place for you to run your code. Today that might not be the primary reason people choose an infrastructure, but over time, especially in Europe and the developing markets, sustainability is a very important criterion in selecting how you do it. So we actually do see silicon diversity. Nvidia gets big kudos for being out in front and bending the curve on performance, but there are a lot of people in that pack, and they're picking up other areas. AMD just made some nice announcements, Intel obviously has the Gaudi chipsets with Gaudi 2 coming out, the hyperscalers have different chipsets they're providing, and the TPUs continue to exist. So overall, between the accelerator ecosystem and the CPU ecosystem, we have a pretty good pool and lots of choice across these dimensions of availability, performance, and environmental impact. It's pretty sophisticated — I think Dave said it's complex — and you're going to have to pick and choose. But if you want to have a sustainable AI strategy, you're going to have to get into the detail of not just picking which cloud you run on or which IT architecture, but whether you can run this on a more efficient semiconductor that will, over time, result in much less environmental impact from your AI activity. So it's an interesting race right now, but it's definitely not a one-horse race; it's an ecosystem of semiconductors all kind of moving in the same direction.
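Here is a minimal sketch of how the supply, performance, and power dimensions Roese lists might be weighed against each other when picking silicon for a given workload. The device names, numbers, and scoring are illustrative assumptions, not benchmarks of real parts.

```python
# Scoring accelerator options on availability (supply), performance, cost,
# and power efficiency. All device names and figures are made up for
# illustration only.

from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    relative_performance: float   # normalized throughput for this workload
    watts: float                  # power draw under that workload
    available: bool               # can we actually get the parts?
    cost_per_hour: float

    @property
    def perf_per_watt(self) -> float:
        return self.relative_performance / self.watts

def pick(options, prioritize_sustainability: bool = False) -> Accelerator:
    candidates = [a for a in options if a.available]
    if not candidates:
        raise RuntimeError("no supply available; supply diversity matters")
    if prioritize_sustainability:
        return max(candidates, key=lambda a: a.perf_per_watt)
    return max(candidates, key=lambda a: a.relative_performance / a.cost_per_hour)

options = [
    Accelerator("general-purpose GPU", 1.00, 700, available=False, cost_per_hour=4.00),
    Accelerator("alternative GPU",     0.80, 600, available=True,  cost_per_hour=1.25),
    Accelerator("inference ASIC",      0.35, 75,  available=True,  cost_per_hour=0.60),
    Accelerator("server CPU only",     0.05, 250, available=True,  cost_per_hour=0.30),
]

print("best for sustainability:", pick(options, prioritize_sustainability=True).name)
print("best for price/perf:    ", pick(options).name)
```

With these invented numbers the "best" answer changes depending on whether you optimize for performance per watt or performance per dollar, which is the point: the same workload can land on different silicon for different reasons.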
John, I'd like to get your perspective on how you see customers using AI specifically as it relates to security, because we're talking about supercloud, AI, and security at this event. When you talk to customers about how they're using AI, they go right to: it's helping us write code, summarization, ideation, writing marketing copy. Obviously ChatGPT has just overwhelmed the discussion. But I'm interested in where it is in terms of understanding the use cases for AI specifically as it relates to things like zero trust.

Yep, 100 percent. Some of these really important areas, like security, kind of get left behind in the dialogue. Let me give you a couple of examples. One of the most interesting ones is that generative AI systems are really good at automating content creation. If you actually look at the behavior inside of a SOC, at somebody working as a security analyst, a big portion of their time is generating reports and content about events that happen; they publish what they do, effectively. So one of the things we're excited about, in a very general sense, is generative AI copilots to automate the content production that documents the security event and does all this rudimentary work. That might actually bend the curve in freeing up time to solve some of the human capacity issues we have in just finding people to operate security environments. So that's kind of a nuanced one.
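As a rough illustration of that SOC copilot idea, here is a hedged sketch of turning structured security events into a draft incident summary for an analyst to review. The event fields and the call_llm() helper are hypothetical placeholders, not a real product API; you would swap in whatever model endpoint your environment actually uses.

```python
# A minimal sketch of a SOC "content copilot": draft an incident report from
# structured events, with the analyst kept in the loop to review it.

import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (internal gateway, vendor SDK, etc.)."""
    return "[draft summary would be generated here]"

def draft_incident_report(events: list) -> str:
    prompt = (
        "You are drafting a SOC incident report for human review.\n"
        "Summarize the following events: what happened, affected assets, "
        "timeline, and recommended next steps. Do not invent details.\n\n"
        + json.dumps(events, indent=2)
    )
    # The analyst stays in the loop: the draft is reviewed, not auto-published.
    return call_llm(prompt)

if __name__ == "__main__":
    events = [
        {"time": "2023-08-24T02:14Z", "type": "auth_failure", "host": "vpn-gw-01", "count": 412},
        {"time": "2023-08-24T02:21Z", "type": "impossible_travel", "user": "j.doe"},
    ]
    print(draft_incident_report(events))
```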
The second point, though, is that we do have a problem right now, and, not to scare everybody, but if you look into the future, the AI application in security and IT is potentially going to lose if we don't start to think differently about the relationship between good and bad actors. What I mean by this is that today, most of the application of AI in the security space is around copilots: a human in the middle surrounded by a bunch of automation that makes them more effective. Now that's good, it does make them more effective — that's the good guys. You know what the bad guys are doing? They're taking the attack and fully automating it; there's no human in the loop. It is literally going to be a race between a machine and a person with a few machines helping them. Now you do the math: which one of those can move faster, which has more capacity? At RSA this year this was a big discussion: we know we need a human in the loop, we know we need humans to make decisions, but the bad guys don't, and if the bad guys don't do that, we're going to have a performance mismatch. So one of the biggest challenges we have right now in security and AI is that we've got to figure out ways to shift more of the full security function into automation with machine intelligence, because that's what the bad actors are doing, just to stay at parity with them. And that's a really traumatic experience for people, because how do you trust the system?

Now, you brought up zero trust, and one of the things zero trust does that allows you to do that is, instead of perpetually reacting and having a human interpret events, a zero trust environment says everything is authenticated and understood. Policies are about defining the known good behavior, not preventing the known bad behavior. And because you have very authoritative behavior that you are instructing the infrastructure to allow, you have the ability to put AIs in place to just do that: they know what the rules are and they just run, and humans don't even need to see what they're doing until they've done something. It becomes much more predictable. It's much easier to ask an AI to understand the known and enforce it than to interpret the unknown and act on it. So zero trust is a key technology, architecturally, that allows us to push AI into a more front-and-center position around automation, and the reason we have to do that is the bad guys are going to attack us with machines, not people.
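A minimal sketch of that "define the known good, deny everything else" model: a default-deny policy check that automation can enforce without having to interpret unknown behavior. The policy schema and names are illustrative assumptions, not any product's actual policy language.

```python
# Default-deny enforcement: only behavior explicitly described as known good
# is allowed; everything else is denied automatically. Schema is illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    identity: str      # authenticated workload or user identity
    action: str        # e.g. "read", "write", "connect"
    resource: str      # target the identity is allowed to touch

KNOWN_GOOD = {
    Rule("svc:billing",  "read",    "db:orders"),
    Rule("svc:billing",  "write",   "db:invoices"),
    Rule("svc:frontend", "connect", "svc:billing"),
}

def authorize(identity: str, action: str, resource: str) -> bool:
    """Allow only behavior the policy explicitly describes; the default is deny."""
    return Rule(identity, action, resource) in KNOWN_GOOD

# Anything outside the described behavior is denied at machine speed,
# with no human interpretation required.
print(authorize("svc:billing", "read", "db:orders"))    # True  (known good)
print(authorize("svc:billing", "read", "db:payroll"))   # False (never described)
```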
That's a great point. I want to ask a quick follow-up question: how does what you just said impact the customer's decision to move from public cloud to hybrid multi-cloud? Because, again, that's more surface area. Okay, I get the zero trust piece, and like you said, the bad guys are going fast. That's a pro game — the speed between college ball and pro ball is two different things, as we've been saying on theCUBE. So you're talking about pro security; the velocity is there, and it's hard to compete if you're slow. How does it impact workload placement and portability choices? What does that mean for multi-cloud choice?

Yeah, well, it turns out that in the multi-cloud dialogue, one of the decisions you have to make is which pieces of your architecture are not inseparably bound to a particular cloud. Multi-cloud is not just a collection of clouds; it's a collection of clouds plus the things that turn them into a system. And this is a debate we've been having; we have a strong opinion. We think certain storage layers ought to be horizontal, we definitely think edge ought to be a common platform, we think things like cyber recovery ought to be horizontal. But an interesting one is zero trust security — the control plane for security. If you want to do multi-cloud right, and you want to be able to do what I just talked about, to automate and control it as a machine, you cannot have a collection of security control planes in each one of your clouds. So for three very important functions, you almost have to make a conscious decision to treat them as an overlay: identity management, policy management, and threat management and detection. Those have to be an independent, authoritative control plane over any infrastructure you use. And it turns out, whether it's a hybrid environment, or even a single cloud going to the edge, or a complex multi-cloud system, if you want to do zero trust — and we're building a full-on zero trust implementation that nobody's ever built before, called Project Fort Zero — if you ever want to get there, one of the things we tell people right now is you've got to get your control plane in order. If your control plane isn't separable and authoritative across your whole collection of infrastructure, not bound to each one of them, you'll never really be able to describe the end-to-end behavior. And if you can never describe the end-to-end behavior, back to the previous point, you will never fully automate it, because all you'll be doing is automating silos and then reconciling behavior between them. So the path people need to take, to get to the point of even being able to compete, is that your control plane — identity, policy, and threat management — whether you do zero trust or not, has to be something that exists across your cloud estate, not a function of the individual clouds themselves.
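As an illustration of treating identity, policy, and threat management as an overlay rather than per-cloud functions, here is a minimal sketch: the control plane is defined once, and each cloud is treated as an enforcement point bound to it. The structure and endpoint names are assumptions for illustration and are not meant to describe Project Fort Zero's actual design.

```python
# One authoritative control plane (identity, policy, threat management),
# rendered into local enforcement config for every cloud in the estate.
# All names and endpoints are illustrative placeholders.

CONTROL_PLANE = {
    "identity": {"provider": "central-idp.example.internal", "mfa_required": True},
    "policy":   {"source": "git.example.internal/security/known-good-policies"},
    "threat":   {"detections_feed": "soc.example.internal/detections"},
}

CLOUD_ESTATE = ["public-cloud-a", "public-cloud-b", "on-prem", "edge-sites"]

def render_enforcement_config(cloud: str) -> dict:
    """Each cloud gets the same authoritative control plane, just bound locally."""
    return {
        "cloud": cloud,
        "identity_provider": CONTROL_PLANE["identity"]["provider"],
        "policy_source": CONTROL_PLANE["policy"]["source"],
        "threat_feed": CONTROL_PLANE["threat"]["detections_feed"],
    }

for cloud in CLOUD_ESTATE:
    print(render_enforcement_config(cloud))
```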
Okay, we've got one more question — sorry, I thought you had one more. I've got to ask you, John: based on what you just said — all the technical debt, all the inertia, and all the innovation on the technology side with the technologist community — ultimately, who does AI benefit more, the attackers or the defenders in the security world?

Today it benefits the attackers. We don't like to talk about it, but it allows them to move faster, at a speed and scale we've never seen before, and we're already seeing that. Defensively we've used it; we do great work on fraud detection and event correlation with AIs, and that's kept us treading water properly. But over the long term, again, if the fight is between a machine and a person with a few machines helping them, and it's a volume fight — because that's what cyber is about these days — you're going to lose. So we've got to find a path to being comfortable shifting more of the work into the machine layer. By the way, if that sounds like a broken record, that's the narrative with all AI: we've got to find a comfortable way to let it run our supply chain, run our finance systems, do customer support. That's the shift — let the AI take over in a way that we trust, rather than us just sitting in the middle of the process.

I think I saw that episode on Star Trek years ago — the AI war. Final question, real quick, we've got a minute or so left. When you talk to top CSOs and CEOs of, say, big banks, and you're in there giving them the future, and they ask you — I'm a Dell customer, thinking about long-term competitive strategy, watching you guys stay competitive over the years — where will you be in a few years? Why will you be around? What answer do you give them when they ask why you'll be around in a few years and continuing to be the Dell Technologies they know?

Yeah, well, specifically for multi-cloud or AI or security: those three domains are not built with a single product. You cannot solve an AI problem with a single box; you cannot solve it with a single technology or even a single cloud, interestingly enough. One of the things we've done strategically as a company is we turned ourselves horizontal. We said not only do we have a broad line from PCs to servers to hyperconverged to APEX offerings, but we also have an ecosystem. If you saw Dell Technologies World this year, it was like the parade of CEOs — the CEO of Microsoft, the CEO of Nvidia, the CEO of Red Hat — all showing up saying, hey, we work great with Dell. Because if you think about it from a customer perspective, your job right now isn't about picking a specific technology; it's about herding this really complex ecosystem of big, giant companies that all have their own opinions and don't necessarily work well with each other. It turns out they work well with one company, and that company is Dell. So we find ourselves in a privileged position: if you're trying to solve an AI problem or a multi-cloud problem, or even navigate zero trust, you've got to have an anchor somewhere, and we've carefully positioned ourselves to be that kind of foundation horizontally across all of these ecosystems. Hopefully we'll go on that journey with customers and help them make sure they actually build a collection of clouds working as a system to be the platform of their IT environment. I think that's playing out in real time as we speak.

John, always a pleasure, always a master class with you. Thanks for coming on theCUBE; we really appreciate you sharing the insight and the data here on Security Plus AI and Supercloud 3, our third edition. Again, thanks for remoting in, and we'll see you soon.

Yeah, great, thanks very much for having me.

Okay, Supercloud 3 — next up we've got the EVP of Cisco Security, Jeetu Patel, and Tom Gillis with Cisco, coming up next. Supercloud 3 — stay with us.

2023-08-24 14:17

