Azure CTO Talks Confidential Computing and Confidential AI | Intel Technology

- [Announcer] You are watching "In Technology," a video cast where you can get smarter about cybersecurity, sustainability, and technology. (soft, bright music) - I'm Camille Morhardt, host of the "In Technology" podcast, and today, my co-host is Anil Rao, who's VP and GM of Systems Architecture and Engineering in the Office of the CTO at Intel. We are going to have a conversation with Mark Russinovich today, Technical Fellow and Chief Technology Officer of Microsoft Azure. Welcome, Mark.

- Thanks, Camille. Thanks for having me. - So Mark, we know you as CTO of Azure. Actually, I'm gonna pause right there and ask you how you're supposed to pronounce Microsoft's cloud-computing platform, because I feel like a long time ago, when I started working in cloud, I was not sure how to pronounce it.

And so, I don't know if it was ever up for grabs, or if the entire industry knew, but people who aren't in the industry aren't so familiar. - It's kind of funny, 'cause I was just at a meeting last week where we were discussing how people are unsure how to pronounce it, and I haven't been in a discussion like that for many years. But the way to remember it is to say, "As you're going to the cloud, use Azure." - Mmhmm, okay. (Camille chuckles) - So, we know you as CTO of Azure, going to the cloud, but you're also widely known as the author of not one, or two, but three cyber thriller novels that you published between 2011 and 2014. And they've done quite well, actually.

"Zero Day," "Trojan Horse," and "Rogue Code." So, your first one, "Zero Day" is about terrorists using cyber weapons, primarily attacking Windows to take down the internet. And you have said in the past that you wrote the book partially to put out a warning to the system of the kinds of attacks that are possible.

And I realize that this was a fiction book, but a decade has passed, and I'm interested in your perspective on whether the threat level has changed in the last decade, whether it's gone up or down, and whether it's gotten easier or harder to launch these kinds of attacks. - It's kind of interesting, because it seems like security's in a never-ending escalation, both on the attacks and on the defenses. And so, I think you've just seen that over the last decade or so. The defenses have gotten a lot better.

I think the move to the cloud has made defenses stronger, in many cases, because of the API-driven nature of the cloud, the fact that you can understand your inventory of resources, use policies to secure them, and then use monitoring at scale, and because the homogeneity of the systems makes it easier to understand what's going on. At the same time, the attackers have gotten more sophisticated: there's a lot more sharing of knowledge going on in the attack community, the tooling is better, the processes are better. And so, they have also commensurately gotten more sophisticated and able to carry out attacks. In some cases, ransomware is very similar to the type of attack in "Zero Day," although aimed at financial gain rather than simply crippling systems.

And we see a lot of ransomware attacks. Actually, just in the last couple weeks, there have been some high-profile ones. - Can you also, I'm gonna just jump right into a separate topic, but can you define for us what confidential computing is? - Yeah, so confidential computing is the use of hardware to create enclaves, or computational containers, where code and data can be protected while in use. And that's in contrast to the kinds of protections we've had up to now, which are protecting data at rest with encryption, and protecting data on the wire with, for example, TLS.

Now, confidential computing brings protection to the remaining part of the data lifecycle, which is while it's being computed on. And there's another important aspect to confidential computing's definition, which is not just protecting that code and data from external access and tampering, but also being able to attest to what's inside of the container, so that some compute outside can get an authoritative claim about what's in it and then establish trust with the code inside the container. It understands exactly what it's talking to, and therefore can release secrets to it, for example, to allow it to decrypt data it wants the container to process.
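To make that attest-then-release flow concrete, here's a minimal, stdlib-only Python sketch. Everything in it is a hypothetical stand-in: the quote format, the verifier logic, and the HMAC playing the role of the hardware's real signature scheme.

```python
# Toy sketch of the attestation-then-release flow Mark describes
# (hypothetical names, not a real attestation SDK; an HMAC stands in
# for the hardware's real signature scheme).
import hashlib
import hmac
import secrets

HW_SIGNING_KEY = secrets.token_bytes(32)  # stand-in for the CPU's attestation key
TRUSTED_MEASUREMENT = hashlib.sha256(b"enclave-code-v1.2").hexdigest()

def enclave_quote(code_identity: bytes) -> dict:
    """What the hardware would produce: a signed claim about the loaded code."""
    measurement = hashlib.sha256(code_identity).hexdigest()
    signature = hmac.new(HW_SIGNING_KEY, measurement.encode(),
                         hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def release_key_if_trusted(quote: dict, data_key: bytes) -> bytes | None:
    """Relying party: verify the claim is genuine and expected, then release."""
    expected = hmac.new(HW_SIGNING_KEY, quote["measurement"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, quote["signature"]):
        return None  # claim not signed by genuine hardware
    if quote["measurement"] != TRUSTED_MEASUREMENT:
        return None  # unexpected code running inside the enclave
    return data_key  # safe to let the enclave decrypt the data

quote = enclave_quote(b"enclave-code-v1.2")
assert release_key_if_trusted(quote, secrets.token_bytes(32)) is not None
```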

- So, confidential computing's something you're also familiar with, Anil. (Camille chuckles) - Absolutely, I am. Mark, you and I have been working on confidential computing for years now, and now we're witnessing something really interesting, which is the convergence of AI, or machine learning, with confidential computing. Tell us, what are some of the novel applications of this new approach? Confidential AI is a term that I use, but is this something that you have seen and are fascinated by? - Oh, absolutely. So, confidential computing, first of all, provides an extra layer of defense in depth just for everyday workloads. For sophisticated workloads, it provides that extra protection where you can get very strong guarantees about what is accessible from the outside and what's not. And the rise of AI creates some interesting scenarios for confidential computing. One of them is just protection of the IP of models.

The models, these large foundational models, cost tens of millions of dollars, or more, to train. And so, in many cases, the IP in those weights is extremely important, and you want confidential computing to protect the weights while that AI is active. Another thing you want to protect in an AI workflow is the data that you're processing while you're performing the AI. The prompts that go into a large language model, or the images that go into a classification vision model, are also something that should be protected end-to-end. You don't want the platform provider, the AI provider of those models, to have access to that data, which is potentially extremely sensitive, and similarly with the outputs, which are also extremely sensitive.
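As a toy illustration of that end-to-end protection, the sketch below seals model weights and a prompt so they only become plaintext inside the attested TEE. The XOR keystream is a deliberately simplified stand-in for a real AEAD cipher, and it assumes the key was released via a check like the one sketched above.

```python
# Toy sketch: sealed model weights and an encrypted prompt that only
# become plaintext inside the attested TEE. The XOR keystream is a
# deliberately simplified stand-in for a real AEAD cipher.
import hashlib
import secrets

def keystream_xor(key: bytes, blob: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a SHA-256 counter keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(blob):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(blob, stream))

model_key = secrets.token_bytes(32)
sealed_weights = keystream_xor(model_key, b"<model weights>")          # at rest
encrypted_prompt = keystream_xor(model_key, b"What is my diagnosis?")  # from client

# Only inside the attested enclave: model_key was released after the
# attestation check, so plaintext never exists outside the TEE boundary.
weights = keystream_xor(model_key, sealed_weights)
prompt = keystream_xor(model_key, encrypted_prompt)
assert prompt == b"What is my diagnosis?"
```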

- I'm actually kind of interested. When you're talking about AI, we usually think of most of the processing being done in the cloud. And obviously, with Azure, you're in charge of the technical element of protecting this data as it's being processed in the cloud. What is your take on the migration, or evolution, of some amount of AI processing at the edge, or the near edge, or toward the edge? - So, the protections that the cloud provider has for data in general include physical security of our data centers, the personnel that protect the data center, the biometrics that grant access, and the systems that ensure only authorized people with a specific purpose are allowed to come into a data center. Then, we've got lots of logical controls around our security, as well. When you get out to the edge, first of all, physical security is typically much weaker.

You might not even have physical security. The server might be sitting in a closet inside of a retail store, for example. And so, yeah, there's a lock on the door, but you don't have all the systems that we've got in a data center in a hyperscaler cloud like Azure. So, that aspect is missing, and that means there's an extra degree of risk of physical access compromising data. But confidential computing actually provides protections that are currently just not there in edge computing, where, once the system is up and running, basically everything's out in the clear and accessible to anybody who gets access to those systems.

And so, confidential computing raises the bar tremendously on protection of that data while it's being computed on, or of the AI models that are sitting on the edge. - So Mark, I think it's very interesting that you're talking about edge in this particular scenario. One of the things that we know is that, with AI getting so pervasive and data flowing in so many different areas, training is going to be done, most often, in a cloud-like environment.

Not to say that inference is not happening there, but then the models get distributed, data gets distributed, and inference may happen at the edge, or even incremental training may happen in something like an edge environment. So, given this holistic scenario, what are your thoughts on a SaaS service like Intel Trust Authority, and what role does it play in providing assurance of security for those AI models that may float anywhere from cloud to edge to, potentially, even devices? - Yeah, well, a key part of confidential computing, like I mentioned, is attestation, and the verification of the claims that come from the hardware about what's inside the enclave. There's an extra degree of complexity for somebody who is asking, can I trust this thing enough to release data to it, or do I trust the answers coming back? And basically, do I trust that it's being protected by confidential computing hardware, like TDX, for example, or a confidential GPU? That attestation report carries a lot of information that's complex to reason over to come up with a valid, yes, this is something I trust. Not only that, but there could be configuration that is part of the attestation report that also needs to be looked at. And then, typically, there's some policy of, I'll trust things that are these versions and have this configuration, and I won't trust anything else.

And so, for something that is gonna establish trust in the enclave or the GPU, it actually simplifies things tremendously if you can offload the complexity of that policy evaluation, and of verifying that the hardware claims are actually valid and signed by Intel, for example, to an attestation service that does that complex processing and reasoning and policy evaluation. And that's exactly what Intel Trust Authority is: a system with an attestation service at the core of it, which takes those claims, so the relying party, somebody who wants to see whether they can trust something, can rely on the Trust Authority to say, yep, this meets the policies that you've got, and it is valid confidential computing hardware that is protecting this piece of code, so you can trust releasing secrets to it. And so, it is absolutely essential, I think, a key piece of the confidential computing foundation, to have an attestation service like Intel Trust Authority.
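A rough sketch of that division of labor, with purely hypothetical shapes rather than the real Intel Trust Authority API: the service does the heavy verification and hands back appraised claims, and the relying party's job shrinks to a simple policy check.

```python
# Hypothetical shapes, not the real Intel Trust Authority API: the
# attestation service verifies the hardware-rooted evidence and emits
# appraised claims; the relying party just checks a simple policy.
ALLOWED_TEE_TYPES = {"TDX"}
MIN_TCB_VERSION = 5

def attestation_service_appraise(evidence: dict) -> dict:
    """Conceptually what a hosted attestation service does: verify the
    hardware signature (faked here), then return vetted claims."""
    if not evidence.get("hw_signature_valid"):  # stand-in for real crypto checks
        raise ValueError("evidence not signed by genuine hardware")
    return {k: evidence[k] for k in ("tee_type", "tcb_version", "measurement")}

def relying_party_trusts(claims: dict, trusted_measurements: set) -> bool:
    """The relying party's whole job after offloading: a policy check."""
    return (claims["tee_type"] in ALLOWED_TEE_TYPES
            and claims["tcb_version"] >= MIN_TCB_VERSION
            and claims["measurement"] in trusted_measurements)

claims = attestation_service_appraise(
    {"hw_signature_valid": True, "tee_type": "TDX",
     "tcb_version": 7, "measurement": "abc123"})
assert relying_party_trusts(claims, {"abc123"})
```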

- Now, outside of some of the basic things that we're doing with simple infrastructure as a service, as in confidential computing being an infrastructure service, talk to us more about what we need to do in order to realize some of the things that you're talking about. And that'll probably include things like PaaS, SaaS, cloud native, and even distribution to the edge. - So, confidential computing, something that Intel played a large role in pioneering with SGX enclaves, has moved on from the sub-process isolation that SGX gives you, which required actually porting software, and which initially had small, very limited enclaves with high performance overheads and no accelerated device support. At that point, it was useful only for the most sensitive workloads, the ones where it was worth going through the work of porting into those constrained environments and tolerating the high performance overheads and the lack of associated device support. We've been advancing confidential computing together, Intel and Microsoft, along with the rest of the industry, to make it more mainstream by making it more like general-purpose computing, in terms of its capabilities and its performance. And we are on the verge of really removing the last caveats on confidential computing to make it ubiquitous. Microsoft's goal, with the support of Intel, is to aim for a confidential computing cloud, a confidential cloud, and that means that our PaaS services will all be confidential and have that extra layer of defense in depth, so that customers can protect their own workloads with very high degrees of policy control and assurance that their data is being protected end-to-end, regardless of what kind of computations they're gonna be performing, whether AI, ML, data analytics, or their own data processing. And so, we've been building these foundational pieces together, Intel Trust Authority being one of the key pieces in that foundation. We've got confidential virtual machines that allow us to, for example, have confidential virtual desktops in Azure, or confidential Kubernetes nodes in Azure. And we're moving to flesh out the rest of that environment to have confidential containers and confidential PaaS services, and in fact, we've announced confidential Databricks, in partnership with Databricks.

So, these foundational pieces are landing into place, and the barriers to adopting confidential computing are falling. We've got confidential GPUs now, working with you. We've got TDX-IO, and I know that's not the name anymore, TDX Connect, the new name for it, to allow complete protection between a CPU and an accelerated device like a GPU.

So, these things are landing in a place where we're about to enter the phase where the question will no longer be, why confidential computing, or why can't I do confidential computing? It will be, why am I not doing confidential computing? - What's the timeline, I guess, for this? It sounds like you're thinking this is gonna become ubiquitous at some point, that all clouds will essentially demand, or require, or have to offer confidential computing. And I still think that, walking around on the street, most people don't really know what it is, aren't so tuned in. So, what is the timeline? It sounds like a lot of barriers are coming down, but how soon, and is it going to be fast enough to keep up with AI? - Yeah, I mean, I think we're talking about within the next two years, with the roadmap from Intel and others, that the last barriers, the last performance overheads, the last gaps in accelerated device support with low performance overhead, are falling away. And at that point, then, it's just go as fast as we can.

That's not to say that we're not already moving very quickly. Like I mentioned, a bunch of Azure services already have confidential computing, and we have customers that are starting to migrate their workloads into confidential virtual machines, because their workloads are able to work within the constraints that currently exist, which are not significant and will only impact some workloads. And so, there are a lot of workloads that are just fine today. But we're removing the last vestiges, and like I said, at that point, we plan on having confidential computing be the default, because there just won't be any reason not to have it be.

- I wanna ask about some of the emerging, I'll just say, government policies that are cropping up around the world around data sovereignty and retention of personal information. I think it's exploding now because of AI, and so much processing being done with sensitive data or personal information, where some governments are asking that that data remain geographically within their scope, within their borders. And out of that, I've heard that there is this kind of emerging concept of partial processing of data in various locations, and that there would be a requirement for fairly complex orchestration of where and which processes are being run, and sort of monitoring and tracking all of that.

Can you talk to us a little bit about the emergence of this? - Well, we've seen, since I started in the cloud, the rising desire for data sovereignty, meaning the data sits inside of country borders and, more and more, under the control of the data owner. And this is part of the reason why Azure's in so many geographies today: to go meet those data sovereignty requirements. In many cases, our customers are bound by local regulation saying that, depending on whether it's financial data, healthcare data, or government data, it has to be within the country's borders. My vision for confidential computing, from when I started, was that we could remove the need for that kind of boundary and say that it doesn't really matter where the data's being computed on, because it's being protected. The idea, I think, with data sovereignty is, if it's within my country's borders, access to that data is more controlled by the laws of my country, and somebody can't go get it without going through the laws of my country. But if it's technically not possible to get access to the data, then whoever needs access to the data, for whatever legal purposes, has to come through the data owner and the legal processes that the data owner's beholden to.

So, I'd love for the day when these governments, and the various vertical sectors inside these countries whose regulations today say the data has to be within the country's borders, say, well, you know what? If it's protected by confidential computing, it doesn't necessarily have to be. If it's got this kind of configuration on it, and these kinds of controls, and you've got attestation with Intel Trust Authority, these kinds of controls on top of the data, then it can be processed anywhere. I think we're still a ways away from that. Like you observed, a lot of people aren't even aware of what confidential computing is yet. Certainly, I don't think the regulated industries and the regulators are necessarily fully aware of it yet.

So that's, I think, one of the goals, too, of things like the Confidential Computing Consortium and our work with Intel: to educate regulators about the value of confidential computing. And they can start by saying, you know what? You're gonna get this extra protection, so, in-country, protect your workloads with confidential computing. And for Azure, I'll tell you, ourselves, some of the sovereign cloud deals we've recently won have been anchored on the fact that we have confidential computing, because, even within the country, that gives the government organizations an extra level of control over the privacy of their data, the protection of their data, that they wouldn't have without confidential computing. - Yeah Mark, I think you bring up some excellent points here.

One of the things that we've been seeing when we talk to our customers is that the government is a huge adopter of confidential compute technology, and also a regulator of confidential computing technology. And in this regard, a lot of industries that are regulated and controlled, be it healthcare or finance, these are all industries and verticals that are extremely excited about confidential computing. And the fact that we have Intel Trust Authority as an operator-independent attestation solution, which will maintain records that can be audited, is also an area where we are seeing a tremendous amount of traction. And I think this very much goes to your point of having an attestation solution to go with the core elements of confidential computing that our customers can consume in clouds like Azure. It's almost like trust, but verify, and maintain these auditable records so that you can continue to verify, in the future, everything that you did, including some of the custom policies and the environment where things were happening, right? It's just gonna strengthen the holistic security posture in the industry, which is very much needed.

And like you were mentioning earlier, all of these come with very, very little hit in terms of holistic performance. So, our customers should be pleased in consuming these services in Azure. - Yeah, and you touched on something else, which is auditability, and I think that is a key part of confidential computing. I think there are some people who believe that, hey, if I control all the bits, I'll know exactly what is being done with my data. And there's a fallacy in that line of thinking that is actually highlighted by a Turing Award lecture, "Reflections on Trusting Trust," which talks about the fact that you can't even necessarily trust the compiler that generated the code. And so, unless you're trusting everything back to its base, you don't really know what your code is gonna do.

And as we know, systems have gotten much more complex, and we also recognize the fact that cloud services, the services delivered as PaaS, deliver a huge amount of value that you just can't get yourself with IaaS and bringing all the bits yourself. And so, more and more, we're gonna have to get to a point where, yeah, I want the highest level of assurance possible that my data is being protected always, but yet, I've got to allow it to be accessed by this software that I don't necessarily have control over. Not just that, but software where, even if you gave me the source code and let me spend as much time as I wanted on it, I wouldn't be able to know that there's not something malicious happening, or vulnerabilities that would expose my data. And so, what we've been working on is the idea of something called a code transparency service, which would be able to record and sign artifacts that meet certain policies regarding transparency.

Specifically, one of the policies could be: the source code is available, and there's a reproducible build that goes from the source to the binaries that were deployed into the confidential computing environment. And you'd be able to go to this code transparency service and get receipts to verify that, hey, for this thing I'm trusting, Intel Trust Authority vouches for the fact that it's sitting inside of TDX, and the code transparency service shows that I can actually go and audit the code if I suspect that something's gone wrong. And that kind of service will allow the kind of auditability you talked about, but also allow you to depend on services that are being upgraded frequently, as PaaS services and cloud services need to be, without you necessarily being able to sit there in the middle and approve every update, which is kind of a meaningless gesture when you're approving millions of lines of changes, and you need to do it immediately, 'cause there's a security vulnerability that needs to be addressed, or a performance issue that needs to be addressed. I think that's the path.
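A minimal sketch of that receipt idea, assuming a hypothetical transparency service and a toy reproducible build (stdlib only; an HMAC stands in for the service's real signature): the receipt binds the deployed binary's digest to the source it was built from, so an auditor can re-derive and check it.

```python
# Toy sketch of a code transparency receipt (hypothetical service,
# stdlib only; an HMAC stands in for the service's real signature).
import hashlib
import hmac
import secrets

CTS_KEY = secrets.token_bytes(32)  # stand-in for the service's signing key

def reproducible_build(source: bytes) -> bytes:
    """Toy deterministic build: same source always yields the same binary."""
    return b"binary-for:" + hashlib.sha256(source).digest()

def issue_receipt(source: bytes) -> dict:
    """The service rebuilds from source, then signs the binary digest."""
    digest = hashlib.sha256(reproducible_build(source)).hexdigest()
    sig = hmac.new(CTS_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"binary_digest": digest, "signature": sig}

def audit(deployed_binary: bytes, receipt: dict) -> bool:
    """An auditor re-derives the digest and checks it against the receipt."""
    digest = hashlib.sha256(deployed_binary).hexdigest()
    expected_sig = hmac.new(CTS_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == receipt["binary_digest"]
            and hmac.compare_digest(expected_sig, receipt["signature"]))

source = b"def serve(request): ..."
assert audit(reproducible_build(source), issue_receipt(source))
```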

And we talk about it as, the first step is to provide transparency for our confidential foundation: things like the attestation service, things like the firmware that sits inside of our confidential trust boundary. And then moving up from there. I think what I envision is that, just like confidential computing becomes ubiquitous and is just part of computing, code transparency becomes just part of computing, as well. But it needs to start with a small, targeted problem set that we can build on and then expand from there. - So, right now, for artificial intelligence, or machine learning, training, at least, as you were saying before, Anil, requires significant server power, and there's really only a handful of cloud service providers out there in the globe. And I wonder your perspective, Mark, on the democratization of AI, the ability for all different kinds of people and companies, small companies, big companies, to get insights from artificial intelligence.

- When you say democratization, I interpret that as a few things, possibly. One of them is access to large-scale infrastructure, to be able to train models, as well as access to models themselves, to be able to leverage them. And we at Microsoft, of course, are one of those hyperscale cloud providers that has massive infrastructure for training these foundational models, like GPT-4. We actually make that same infrastructure that we're training GPT-4 on available to our customers through our virtual machines.

Our GPU-enabled virtual machines with InfiniBand networks are exactly the same types of systems, the same systems, that GPT-4 has been trained on, and we continue to follow that principle with our creation of these next-generation AI supercomputers: what we're building bespoke for OpenAI is actually the same general mainstream infrastructure that we're making available to our customers. So, the infrastructure itself is democratized. The software actually helps democratize it, in the sense that, through things like Azure Machine Learning Studio, it makes it very easy to go set up training processes. Whereas before you needed teams of experts to go stand up your own Kubernetes clusters and install all the software and manage it, now you basically have a PaaS service that handles that for you. And then, same thing when you have a model come out the other end: you can host it on Azure, deploy it very easily with a few clicks, and now you've got serving as a PaaS service. Plus, we've got Azure OpenAI, which gives you access to OpenAI's large language models through an API interface with all our compliance and security and regional promises that give you data sovereignty.

And then, we also have open source models that we provide, like Llama 2, as a service on top of our Azure Machine Learning. So, all of those things, I think, are part of the democratization. And I haven't even gotten into integration into software, like through Visual Studio with Copilot, for example, that makes it very easy for anybody to take advantage of AI in their workflows, or inside of Dynamics 365. Basically, just like confidential computing is gonna be everywhere, AI is gonna be everywhere. - And confidential AI is gonna be everywhere, too.

- It's actually, confidential AI is gonna be everywhere. I should have said that. - I should actually ask you guys, I don't know if there's a standard or commonly-agreed definition of confidential AI.

Do you two agree on the definition of confidential AI? What would you call it? - I think we agree. In AI today, even if you have data that goes from the source to training, and even if that data is encrypted, a lot of the encryption is, sorry, encryption at rest and encryption in transit. You gotta bring in the paradigm of encryption in use: not just simple encryption, but encryption, creation of an enclave, access controls, and attesting to the elements that are going to have access to the data once it's decrypted inside of the enclave. So, from this perspective, anytime that you're going through anything with respect to AI, whether it is data prep, whether it is training, whether it is deployment, or whether it is inferencing, you wanna make sure that the models, the data, and the methodology are always encrypted, and decryption happens only when you pass through all the attestation checks that Mark was talking about. For me, anytime that you go through this entire train of thought and train of deployment, if you're running things inside of a trusted execution environment, that's what I call confidential AI.
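Anil's definition boils down to a gate like the following sketch, with hypothetical names and a toy allow-list rather than any real product's policy engine: each lifecycle stage's decryption key is released only after that stage's attestation evidence checks out.

```python
# Toy gate for Anil's definition (hypothetical names, toy allow-list):
# every lifecycle stage runs in a TEE, and a stage's decryption key is
# released only after that stage's attestation evidence checks out.
STAGES = ["data_prep", "training", "deployment", "inference"]
TRUSTED = {s: f"measurement-{s}-v1" for s in STAGES}  # per-stage allow-list
STAGE_KEYS = {s: f"key-for-{s}" for s in STAGES}

def release_stage_key(stage: str, evidence: dict) -> str:
    """Release the stage's key only if its attestation evidence passes."""
    if not evidence.get("verified_by_attestation_service"):
        raise PermissionError(f"{stage}: evidence not verified")
    if evidence.get("measurement") != TRUSTED[stage]:
        raise PermissionError(f"{stage}: untrusted code measurement")
    return STAGE_KEYS[stage]  # decryption then happens only inside the TEE

for stage in STAGES:
    evidence = {"verified_by_attestation_service": True,
                "measurement": f"measurement-{stage}-v1"}
    assert release_stage_key(stage, evidence) == f"key-for-{stage}"
```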

- So Mark, I wanna ask you this, because, my God, you're CTO of one of the largest cloud service providers out there, and I guess we know what kept you up at night 10 years ago from the three books that you wrote. Like, what do you worry about now? I mean, what's coming? I think everybody says, oh, AI, you know. But that's a pretty vague or generic thing to worry about. I guess, what are you concerned about for the world, when it comes to the internet, AI, the cloud? - Well, the things that worry me in my job are pretty much the same things that worried me 10 years ago.

Still security, reliability, scale. So that's kind of the defensive side of worry. On the offensive side of worry, it's how can we make services that make it easier for people to build cloud applications and edge-enabled applications. And more and more, AI has become the key ingredient in all of that.

So, how do we make it possible for people to build these AI-infused applications? That's kind of the worry. Now, I think what you're touching on, I mean, there's been a lot of talk about people worried about the existential risks of artificial general intelligence, AGI, reaching human-level intelligence, or superhuman intelligence, and what that means, the implications of that. I don't know, it seemed like you were touching on that, but then you kinda backed off. - Well, yeah,

I'm trying to figure it out, too. I'm not sure how to think about it, right? Because you definitely want, I'll just say everybody, right? You want everybody to be able to access powerful tools. You don't want that to just be in the hands of the few. And right now, probably one of the most powerful tools emerging on the planet is AI and what it can do. And so, making that accessible, making that easy to use, making it possible to run it even if you don't own the infrastructure yourself, that's an incredible tool that can be made available.

But, right? (Camille giggles) It's also fairly unconstrained, at this point. There aren't even that many regulations out there. I mean, we're just now seeing laws come into play around the world. And so, is there any concern that, by providing or making accessible all of these tools to anybody, anywhere, anytime, without all of the sort of policies in place, is there a risk, is there a problem? How do you feel about that, and what do we do about that? - I think Satya was just this week, or last week, in Washington, DC, meeting with Chuck Schumer and a bunch of leaders from other companies that are working in the AI space to talk about these very questions.

I think Microsoft's position is that it's likely that some regulation is necessary to ensure that AI is handled in a responsible way, and I think responsible means more than, you know, it means safety controls on top of the AI so that it's not doing harm, and making sure that these models don't have bias in them that is privileging one group over another group, those being some of the key things about it. And I think that the existential risk side of AI, that one's not an immediate, pressing concern, at least I don't believe so. And I also think there are different opinions, even in companies like Microsoft, about the degree of risk to humanity there might be, and you see this in the computer science community, where you have Turing Award-winning AI researchers on one side saying, "Risk to humanity," and on the other side saying, "Nonsense. No risk, or little risk, it's overblown." So, I think that one remains to be worked out. But I think that from our perspective, two things.

One is just making sure the basics are in place. You don't want to overregulate an industry where there's a ton of innovation happening, because what you'll do is stifle it, and you'll actually make it so that new entrants are blocked from providing innovation, and you don't wanna do that prematurely, either, if it's not necessary. So, I think there's a balance that has to be struck here. And this is why we're such huge supporters of open source models, with Meta, for example, where we partnered with them, hosting Llama 2 on Azure Machine Learning. - So Mark, if we kind of go back about 20 years and draw parallels, right? When the cloud first came into being, people were always wondering, oh, I can easily get computational resources with the swipe of a credit card, so I can get them anywhere in the world. So, maybe I can get lots of this compute everywhere, and I can create distributed denial-of-service attacks on networking and other kinds of infrastructure, right? Now, fast forward, and the industry innovated around these things to make sure that, yeah, you can continue to get computational resources with the swipe of a credit card, but it's still difficult for you to create a distributed denial-of-service kind of attack.

So, you are spot on in that we should encourage newer technologies to come into being, and putting too many regulations in place is just gonna stifle innovation, and I think all of us are gonna lose out on the wonderful automation that AI can bring. So, I think these are some fantastic things that we can do together as an industry. Possibly, is this part of your next book? Are you thinking about your next book? Is it possibly gonna be something true, or is it gonna be, like, AI-based attacking machines that we can go make into a movie? - Yeah, the natural successor to "Rogue Code" was one focused on cloud computing, and I thought, maybe that's a little too close to where I work, so yeah. But not just that, I've been more and more busy in my role at Microsoft and spending more and more of my time on things like AI, so I've put writing books on pause, for the time being.

I'm sure I'll get back to it. - I wanted to ask both of you guys, since you both come from entrepreneurial backgrounds, from very small companies. And I'm wondering now that you have big positions at medium-sized companies, just kidding, Fortune 100 companies, do you have time to do like anything scrappy or entrepreneurial anymore, or do you ever see code? Either one of you, you both have computer science degrees.

- Yeah, outside of my job driving security infrastructure at Intel, I also have systems architecture and engineering. And we, in my group, do a lot of little scrappy things. And some of these things are fast-fail scenarios, where we come up with interesting ideas, and we wanna go conquer the world with these interesting ideas. In certain cases, we work with customers like Mark. In certain other cases, we work with the government, because DARPA is one that wants to do some interesting programs of this particular nature.

And luckily, some of these things are going to make their way into product technology, either as we envisioned, or as some subset, because you go through a certain amount of learning. So yeah, I do see the code. In fact, I've reviewed certain elements of code which are part of what is Project Amber. I ask my team to give me access to all of these things, and I read all the specification documents, and the teams get nervous when I open up the specification documents, because they'll say, "We're gonna get a flurry of requests from Anil today." (Camille chuckles) So, once an engineer, always an engineer. You wanna try and improve things, try and break things, and that's part and parcel of what I do. Mark? - So, I've still managed to find time here and there to work on coding.

So, one of the things that I've continued to code on is a suite of tools called Sysinternals, which is something that I created with a colleague back in 1996 and that was acquired by Microsoft in 2006, and I just recently made some updates to a screen zooming and annotation tool called ZoomIt for Windows, through Sysinternals. So, I continue to do that, and I also just took a sabbatical this summer, and I did a lot of AI programming, and I'm continuing with the follow-up projects since I got back. So, I continue to do that, too. And I think it's fun for me, it's like my creative release, but it also keeps me grounded. And I think, also, when you're a senior leader, and you're leading through influence rather than authority, it's important to gain the trust and earn the credibility of the people that you're trying to influence.

And one way to do that is to show, hey, I'm actually still grounded. I'm not an ivory tower architect. I actually have real-world experience with the things that I'm talking about with you. - Is it true that the next programming language is English? - With Copilot Chat, in this AI project, it is just amazing the amount of productivity boost that I've gotten out of it. It's really hard for me to estimate.

Kind of interesting, it's made me lazy as a programmer, but lazy in a good way, because I'm more productive. But now, my first instinct when I want to do something is, let me just have ChatGPT or Copilot Chat do it for me. And so, I'll just give it the instructions, say I want a piece of code that takes these inputs and creates this output, and it generates it for me. And even if it's not right the first time, it's done it so fast, and I can say, you got this wrong, change it like this. I don't have to worry about offending it.

I don't have to worry about it getting tired of answering me. It's just the perfect assistant that's just gonna do what you want and really boost your productivity. And I just tweeted this a couple weeks ago: there's just no going back from this, and it's just gonna get better and better. - And with the natural language translation that all these AI engines can do, it can be any language you want.

Your next podcast can be in Python, Camille. - Yeah. - Okay. (Camille chuckles) I would like to do that one. Is there anything that we haven't covered that we should, that the world should be aware of, as we're talking about confidential computing, confidential AI, and cloud computing in general? - I mean, I think we covered pretty much everything.

I think the thing to take away is that confidential computing is on its way to becoming just computing. And it's kind of interesting, 'cause I'm looking forward to the day where people aren't saying confidential computing. Again, when you talk about computing today, it implicitly means encrypting your data at rest and encrypting it on the wire. Nobody thinks of putting up a service without it encrypting its traffic.

That's just a given, and you don't have to be explicit about it. I think we're well on our way to that with confidential computing. - I think so, I agree with Mark. We're on our way with confidential computing where it it's gonna become ubiquitous, it's gonna be computing.

I think the more interesting thing is the opportunities that it's gonna open up, in areas where people are keeping data close to the vest. How is this technology gonna open up new opportunities in collaborative computing, in data clean rooms, and in really not worrying about where you need to process data? Go process it at the most efficient location, and confidential computing is gonna take care of making sure that your data is, indeed, confidential and that you're in control of your data at all times. - Yeah, good point on the clean room scenario, the multi-party collaboration where parties can bring their data together, perform computation on it, and know that their data's not being revealed to any of the other parties in any direct way, or to whoever's hosting the computation. (soft, bright music) - Well, on behalf of Anil and myself, Mark Russinovich, Technical Fellow and CTO of Microsoft Azure. As you move to the cloud, Azure.

Azure, or Azure? - As you're moving to the cloud, use Azure. - As you're moving to the cloud. (Camille giggles) CTO of Microsoft Azure, thank you so much for joining us today.

- Yeah, thanks for having me, a great conversation. - Thanks Mark, as always, great conversation. - [Announcer] Never miss an episode of "In Technology" by following us here on YouTube or wherever you get your audio podcasts. - [Announcer] The views and opinions expressed are those of the guests and author, and do not necessarily reflect the official policy or position of Intel Corporation. (soft, bright music continues)
