Full Keynote: Satya Nadella at Microsoft Ignite 2023
One year ago, at the dawn of a new age, we took the first bold step into a world of unlimited possibilities. We’re going to get to think about how we use our imagination to solve some of the biggest problems in the world. It allows for the patient and doctor to be a patient and a doctor, like it used to be. With AI as your copilot, you are supercharging innovation.
It’s like, reading my mind. It’s magic. You built new solutions and inspired us to imagine more. In our journey of generative AI we have over 700 use cases across all divisions and functions. So, we really see a big impact.
Copilot makes day-to-day life easier, so you can spend your time thinking about how to improve the experience for people learning a language. This technology has the potential to completely reimagine the way every single student in the world learns. It enables our people to get back to what they're brilliant at. You redefined the art of collaboration. I've been using Copilot on exciting problems that are having real impacts on people's lives. I can just go home and be like, "Yeah, I did that! I'm a superhuman, I've done something amazing."
This is the age of copilots. Good morning. Good morning, and welcome to Ignite. It's great to be together in person right here in Seattle, and to all of you joining from all over the world, welcome.
Little did we know when we scheduled Ignite that we would schedule it on the same day as a Cricket World Cup semifinal. I've been up all night, but it finished five minutes ago, so I'm glad it did, and that was the short version of the game, by the way. But here we are.
So, look, it's just been a fantastic last 12 months. It's hard to believe that it's only been a year since ChatGPT first came out. A lot has been done, a lot is going on, and the pace of innovation has just been astounding. Just last week I was at OpenAI DevDay and GitHub Universe, and now, of course, Ignite.
But the interesting thing is that we're entering an exciting new phase of AI, where we're not just talking about it as new and interesting technology, but getting into the details of product making, deployment, safety, and real productivity gains: all the real-world issues. And that's the most exciting thing for all of us as builders. We're at a tipping point. This is clearly the age of copilots. From digital natives like Airbnb, Duolingo, and Shopify to the world's largest companies, whether it's BT or Bayer or Dentsu, Goodyear, or Lumen, all are deploying Microsoft Copilot.
And companies in every industry are building their own copilots, from LSEG in finance, to Epic in healthcare, to Rockwell Automation and Siemens in manufacturing. It's fantastic to see people deploy their own copilots. Today we are sharing new data that shows the real productivity gains Copilot is already driving. We're taking a very broad lens to deeply understand the impact of Copilot on both creativity and productivity, and the results are pretty striking. With Copilot, people are able to complete tasks much faster, and that's having a real cascading effect on work and workflow everywhere. People who use Copilot spend less time searching for information, hold more effective meetings, and are able to collaborate on work artifacts, whether those are Word documents, spreadsheets, or emails.
All of them have richer context about their role and their organization, so they can collaborate more and stay in flow. And of course, we're just getting started. The way to think about this is that Copilot will be the new UI that helps us gain access to the world's knowledge and your organization's knowledge.
But most importantly, it's your agent that helps you act on that knowledge. So this week at Ignite, we are introducing 100 new updates across every layer of the stack to help us realize that vision. Our end-to-end Copilot stack for every organization spans the infrastructure, the foundation models, data, tool chains, and of course the Copilot itself.
And today I'll highlight five key themes across everything we're showing you this week. So let's dive right in. It starts with the AI infrastructure layer and our approach to Azure as the world's computer.
You know, we offer the most comprehensive global infrastructure, with more than 60 datacenter regions, more than any other provider. Being the world's computer means we also need to be the world's best systems company across heterogeneous infrastructure. We work closely with partners across the industry to incorporate the best innovation, from power, to the datacenter, to the rack, to the network, to the core compute, as well as the AI accelerators. And in this new age of AI, we are redefining everything across the fleet and the datacenter.
So let's start on how we power the data center. As we build them, we are working to source renewable power. In fact, today we are one of the largest buyers of renewable energy around the globe.
We have sourced over 19 gigawatts of renewable energy since 2013. Just to put that in perspective, that's equivalent to the annual production of 10 Hoover Dams. And we're working with producers to bring new energy online from wind, solar, geothermal, and nuclear fusion as well. And as we pursue our ambition not just to be carbon free, but to erase our historical carbon emissions, I'm really excited to share that we are on track to meet our target of generating 100% of the energy we use in our datacenters from zero-carbon sources by 2025. Now let's talk about the network that connects our datacenters.
It's already one of the most advanced and extensive in the world, and to meet the demands of AI and future workloads, we are driving up the speeds. Our breakthrough hollow core fiber technology delivers a 47% improvement in speed, because photons travel through microscopic air capillaries instead of through solid glass fiber. This is really cutting-edge technology. In fact, we are manufacturing this fiber ourselves in the world's only dedicated factory for hollow core fiber production. Our first deployment is already live, connecting our datacenters in the United Kingdom.
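As a back-of-envelope check on that 47% figure: light in solid silica fiber travels at roughly c/1.47 (the refractive index of glass), while in an air-filled hollow core it travels at nearly c, so the expected speed improvement is about n − 1, or roughly 47%. This is an illustrative physics estimate, not Microsoft's measurement:

```python
# Illustrative estimate only: compare light speed in air-filled hollow core
# fiber (refractive index ~1.0) vs. solid silica fiber (~1.47).
n_glass = 1.47  # typical refractive index of silica glass
n_air = 1.0     # air in the hollow core is very close to vacuum

# Fractional speed improvement of hollow core over solid glass fiber.
speed_improvement = (n_glass - n_air) / n_air
print(f"~{speed_improvement:.0%} faster")  # roughly the 47% quoted on stage
```

The point is that the speedup comes almost entirely from the lower refractive index of air, not from any change to the photons themselves.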
We are very, very excited about this. And now let's step right into the datacenter. Today, I'm excited to announce the general availability of Azure Boost.
You know, it's fantastic to see this new system that offloads server virtualization processes onto purpose-built software and hardware. This enables massive improvements in networking, remote storage, and local storage throughput, making Azure the best cloud for high-performance workloads while strengthening security as well. Now let's go inside our servers. We're tapping into innovation across the industry, including from our partners AMD and Intel, and making it available to you. For example, organizations like Vestas use AMD on high-end compute- and memory-optimized servers in Azure to run simulations on massive amounts of weather data. And the largest SAP database deployments are powered by our new Mv3 virtual machines running fourth-generation Intel Xeon Scalable processors, supporting up to 32 terabytes of memory.
In fact, Intel is putting their own SAP instances on these machines, and it's great to see. You know, as a hyperscaler, we see workloads, we learn from them, and then, as a systems company, we get the opportunity to optimize the entirety of the stack, from the energy draw to the silicon, to maximize performance and efficiency. It's thanks to this feedback cycle that I'm thrilled to introduce our very first custom in-house CPU series, Azure Cobalt, starting with Cobalt 100. Cobalt is the first CPU designed by us specifically for the Microsoft Cloud, and this 64-bit, 128-core Arm-based chip is the fastest of any cloud provider.
It's already powering parts of Microsoft Teams, Azure Communication Services, and Azure SQL as we speak, and next year we will make it available to customers. And when it comes to AI accelerators, we're also partnering broadly across the industry to make Azure the best cloud, no questions asked, for both training and inference. It starts with our very deep partnership with NVIDIA.
We built the most powerful AI supercomputing infrastructure in the cloud using NVIDIA GPUs, and OpenAI has used this infrastructure to deliver the leading LLMs as we speak. In fact, last week Azure made the largest-ever submission to the MLPerf benchmarking consortium, with 10,000 H100 GPUs, three times more than the previous record, delivering better performance than any other cloud. And in the latest TOP500 list of the world's supercomputers, Azure was the most powerful supercomputer in the public cloud, and third overall. That made news.
What didn't make news is that we didn't submit the entirety of our supercomputer; we submitted only a fraction of it. So I'm thrilled to be number three with that. And by the way, it's the only public cloud that made the list. As we build supercomputers to train these leading large models, InfiniBand gives us a unique advantage.
And today we're going even further. We're adding NVIDIA's latest AI accelerator, the H200 GPU, to our fleet to support larger model inferencing at the same latency. As these models become much bigger and more powerful, having this new generation of accelerators is a big deal. We are also introducing the first preview of Azure confidential GPU VMs, so you can run your AI models on sensitive datasets in our cloud. We co-designed these with NVIDIA. If you're doing what is referred to as retrieval-augmented generation, or RAG, and you'll hear a lot about this throughout the conference, then running on this confidential GPU VM you can enrich your prompt with query-specific knowledge from proprietary databases and document archives, while keeping the entire process protected end to end.
And so it's very exciting to see us not just lead with GPUs, but lead with GPUs with confidential computing. Now let's talk about AMD. I'm excited to announce that AMD's flagship MI300X AI accelerator is coming to Azure to give us even more choice for AI optimized VMs.
With 192 gigabytes of high-bandwidth memory and 5.2 terabytes per second of memory bandwidth, the MI300X offers industry-leading memory speed and capacity. This means we can serve large models faster using fewer GPUs. We've already got GPT-4 up and running on MI300X, and today we are offering early access to select customers.
And we're not stopping there. We are committed to taking the entirety of our know-how from across the system and bringing you the best innovation from our partners and from us. Today we are announcing our first fully custom in-house AI accelerator, Azure Maia, starting with Maia 100. Designed to run cloud AI workloads like LLM training and inference, this chip is manufactured on a five-nanometer process and has 105 billion transistors, making it one of the largest chips that can be made with current technology. And it goes beyond the chip, though.
You know, we have designed Maia 100 as an end-to-end rack for AI, as you can see right here. AI demands infrastructure that is dramatically different from other clouds; these compute workloads require far more cooling and networking density. We have designed a cooling unit, known as the sidekick, to match the thermal profile of the chip, and added rack-level closed-loop liquid cooling for higher efficiency. This architecture allows us to take this rack and put it into existing datacenter infrastructure and facilities, rather than building new ones.
And by the way, they're also built and manufactured to meet a zero-waste commitment. So we are very, very excited about Maia. With Maia, we are combining state-of-the-art silicon packaging techniques, ultra-high-bandwidth networking design, modern cooling and power management, and algorithmic co-design of both the hardware and the software. We're already testing it with many of our own AI services, including GitHub Copilot, and we will roll out Maia accelerators across our fleet, supporting our own workloads first, then scaling to third-party workloads. This silicon diversity is what allows us to power the world's most powerful foundation models and all of our AI workloads, from Copilot to your own AI applications. So when I say systems, this is end-to-end innovation.
From glassblowing next-generation fiber-optic cables, to sourcing renewable energy, to designing new approaches to thermal distribution, to innovating in silicon, our goal is to bring you the ultimate efficiency, performance, and scale from us and our partners. Now, let's go to the next layer of the stack: the foundation models. These are only possible, of course, because of the advanced systems I've talked about. Generative AI models span from LLMs with trillions of parameters, which require the most powerful GPUs in Azure, to task-specific small language models, or SLMs, with a few billion parameters. And we offer the best selection of frontier models, which you can use to build your own AI apps while meeting your specific cost, latency, and performance needs.
And it starts with our deep, deep partnership with OpenAI. They're just doing stunning breakthrough work to advance the state of AI models and we are thrilled to be all in on this partnership together. And our promise to you is simple: As OpenAI innovates, we will deliver all of that innovation as part of Azure AI, and we are bringing the very latest on GPT-4, including GPT-4 Turbo, GPT-4 Turbo with Vision to Azure OpenAI Service.
Yeah, you can clap. [Applause] You know, GPT-4 Turbo offers lower pricing, structured JSON formatting, which is sort of my favorite, and extended prompt length; you can now fit 300 pages of text into a single prompt. GPT-4 Turbo will be available in Azure OpenAI Service this week in preview, and token pricing for the new models will be at parity with OpenAI.
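To make the JSON-formatting point concrete, here is a minimal sketch of a chat-completions request payload with JSON mode switched on. The model name is an assumption, and only the payload shape is shown; no API call is made:

```python
import json

# Sketch of a chat-completions request body using JSON mode.
# "gpt-4-turbo" is a placeholder; in Azure OpenAI you would use your
# own deployment name and endpoint.
payload = {
    "model": "gpt-4-turbo",
    # Ask the model to emit a well-formed JSON object instead of prose.
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "Summarize this 300-page contract as JSON."},
    ],
}

# The body serializes cleanly, ready to POST to a chat-completions endpoint.
print(json.dumps(payload)[:50])
```

The `response_format` field is what distinguishes JSON mode from an ordinary request; everything else is the standard messages array.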
Also, soon you'll be able to connect GPT-4 Turbo with Vision to Azure AI Vision, allowing you to prompt with video, images, and text. In fact, our customer WPP is already using this today with one of their largest clients. I mean, take a look at the video behind me.
It's pretty amazing to see video prompts as inputs with summaries coming out on the other end. It's fantastic to see. [Applause] Finally, we will be introducing fine-tuning of GPT-4 in Azure OpenAI Service as well, allowing you to bring your own data to create custom versions of GPT-4. You know, we are also all in on open source and want to bring the best selection of open-source models to Azure, and to do so responsibly. Our model catalog already has the broadest selection of models, and we are adding even more. With Stable Diffusion, you can generate beautiful, immersive images.
With Code Llama, you can generate code. With Mistral 7B, you can translate and summarize text. With NVIDIA's Nemotron-3 family of models, you can build general-purpose AI apps. All of these capabilities are deeply integrated with our safety guardrails.
And today we are taking one more big step in support of open-source models: we are adding a new "models as a service" offering in Azure. This is a big deal, because it lets you access these large open-source models as hosted APIs, without having to provision GPUs yourself, so you can focus on development, not backend operations. We are excited to be partnering with Meta on this. It starts with Llama 2 "as a service."
You can fine-tune Llama 2 with your data to help the model understand your domain better and generate more accurate predictions. We want to support models in every language and every country, so we are partnering with Mistral to bring their premium models "as a service," as well as with Group 42 to bring Jais, the world's highest-quality Arabic language model, again, just "as a service." Now, when we talk about open source, there's one more very exciting thing happening in this space, and that is SLMs. Microsoft loves SLMs. In fact, one of the best is Phi, a model built by Microsoft Research on highly specialized datasets that can rival models even 50 times bigger.
In fact, Phi 1.5 has only 1.3 billion parameters but nonetheless demonstrates state-of-the-art performance on benchmarks for things like common-sense language understanding and logical reasoning. And today I am thrilled to announce Phi 2. [Applause] You know, it's a scaled-up version of Phi 1.5
that shows even better capabilities across all of these benchmarks while staying relatively small, at 2.7 billion parameters. In fact, it's 50% better at mathematical reasoning, and Phi 2 is open source and will be coming to our catalog as well as "models as a service." Once you have these models, the next consideration is tooling. With Azure AI Studio, we offer the full-lifecycle toolchain for you to build, customize, train, evaluate, and deploy the latest next-generation models. It also includes built-in safety tooling.
Safety is the most important feature of our AI platform. It's not something we bolt on later; we are shifting left from the very beginning. With Azure AI Studio, you can detect and filter harmful user-generated and AI-generated content in your applications and services. The other thing we're doing with Azure AI Studio is extending it to any endpoint, starting with Windows. You can customize state-of-the-art SLMs and leverage our templates for common developer scenarios to integrate these models right into your applications. When you combine the power of the cloud and the edge, it unlocks super compelling scenarios. Let's say you want to build an NPC helper for a game.
You can start with an SLM like Phi as your target model on Windows. We then help you compose solutions to steer your game to do what it needs, like retrieval-augmented generation templates you can apply to your own dataset to answer questions about quests. And this can all happen locally on your Windows machine.
The NPC can guide players through their quests or even generate completely new storylines based on prompts from players. For more advanced use cases, you can adapt and fine-tune the SLM on Azure specifically for your game, using the power of frontier models like GPT-4. It's incredibly powerful to see all of this come together. And of course we're not stopping there. Earlier I mentioned our partnership with NVIDIA; together we are innovating to make Azure the best cloud for training and inference.
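The quest-helper idea boils down to the RAG pattern: retrieve the quest notes relevant to the player's question, then fold them into the prompt sent to the local SLM. Here is a toy sketch; the quest data, function names, and matching rule are all invented for illustration, and a real template would call a model like Phi with the composed prompt:

```python
# Toy RAG sketch for an in-game NPC helper (illustrative only).
# A real deployment would use embeddings and an SLM running locally.
QUEST_NOTES = {
    "dragon": "The dragon sleeps in the northern caves; bring a fire ward.",
    "ferry": "The ferry to Mistral Isle departs at dawn from the old docks.",
}

def retrieve(query: str) -> list[str]:
    """Return quest notes whose topic keyword appears in the question."""
    words = set(query.lower().split())
    return [note for topic, note in QUEST_NOTES.items() if topic in words]

def build_prompt(query: str) -> str:
    """Augment the player's question with the retrieved quest knowledge."""
    context = "\n".join(retrieve(query)) or "No notes found."
    return f"Context:\n{context}\n\nPlayer asks: {query}\nNPC answers:"

print(build_prompt("Where does the ferry leave from?"))
```

The composed prompt carries only the relevant note, which is exactly what lets a small local model answer grounded questions about the game world.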
Our collaboration extends across the entirety of the stack, including our best-in-class solutions for AI development. Today we are expanding our partnership by bringing NVIDIA's generative AI foundry service to Azure. It brings together NVIDIA's foundation models, frameworks, and tools, as well as its DGX Cloud AI supercomputing and services, to provide the best end-to-end solution for creating custom generative AI models.
To share more, I would like to invite on stage NVIDIA's founder, president, and CEO, Jensen Huang. Jensen, thank you so much for being here. You know, I talked a lot about all the things we have been doing on the systems side of this partnership. We wouldn't have been able to train the OpenAI models or make all this progress over the last few years without that unbelievable systems work.
But today we are going a step beyond, bringing in all of the software innovation that you're doing. Do you want to share a little bit about what we're doing together on the software side on Azure? I would love to. First of all, I'm so happy to be here to celebrate the amazing work of our two teams. These last 12 months, as I was just listening to you, have seen an unbelievable amount of progress for the whole computer industry, frankly. And our two teams have been super busy.
AI and accelerated computing is a full-stack challenge, and it's a datacenter-scale challenge, from computing to networking, from chips to APIs. Everything has been transformed as a result of generative AI. Over the last 12 months, our two teams have been accelerating everything we could. One of the initiatives, of course, is accelerated computing: offloading general-purpose computing and accelerating all the software we can, because it improves energy efficiency, reduces carbon footprint, reduces cost for our customers, improves their performance, and so on.
We built the world's fastest AI supercomputer together. It usually takes two or three years to plan one, and easily a year to stand one up. Our two teams built two of them, twins: one in your house, one in my house. We did it, and we stood them up in just a few months. It is now the fastest AI supercomputer in the world.
And seemingly without even trying, it's the third-fastest supercomputer on the planet. It's really quite amazing. We worked on all kinds of computing breakthroughs: confidential computing, of course, a very big deal and an invention between our two companies, all the way to deploying large language models from the cloud to the PC. The work we did together so that Windows can now be a first-class client for large language models opens up a few hundred million NVIDIA-powered PCs and workstations around the world. The largest installed base of very powerful AI machines on the edge happens to be Windows PCs with GPUs from NVIDIA.
And now, with AI Studio, unbelievable, right? Everybody can be a RAG developer.
Everybody can engage large language models. Now, there's also something I'm so proud of, which we talked about a year and a half ago. It's such a great idea, such a great vision, and you really deserve so much credit for transforming Microsoft's entire culture to be so much more collaborative and partner-oriented. You invited NVIDIA's platforms and ecosystem, all of our software stacks, to be hosted on Azure.
Today we're announcing the two largest software stacks of our company. The first is NVIDIA Omniverse. In fact, the WPP video you just saw is actually computer graphics running on Omniverse, and now you can connect Omniverse to generative AI. Omniverse is for industrial digitalization, and today we're announcing Omniverse Cloud: a stack that originally ran on-prem on large computers, now available on Azure. The second is a brand-new thing we're announcing, which you just mentioned: we're offering an AI foundry service.
Generative AI has opened up the opportunity for every enterprise in the world to engage artificial intelligence. For the very first time, it is useful, versatile, and, quite frankly, easy to use. Companies all over the world will use it in multiple ways, but here are three basic ways. One, of course, public cloud services like ChatGPT.
Second, embedded into applications like Windows. We are also, very happily, a full site-license customer of Copilot, and so we are going to be augmented by Microsoft Copilot. And if you think NVIDIA is moving fast now, we are going to be turbocharged by Copilot.
And then third, of course, customers want to build their own AIs. They want to create their own proprietary large language models and their own RAGs using their own data. So today, leveraging NVIDIA's core assets, our AI expertise, our end-to-end AI workflow in NVIDIA AI Enterprise, and our AI factories, now available on Azure as DGX Cloud, we are going to build on these three pillars to help customers build their own custom large language models. We're going to do for people who want to build their own proprietary large language models what TSMC does for us.
It's fantastic, right? And so we'll be a foundry for AI models. It's just so amazing to see us partnering on everything on the systems side and everything up the stack on the software side, whether it's Omniverse or DGX Cloud, and this AI foundry is fantastic. I love that metaphor of TSMC for AI model development.
Talking about this arc of AI, Jensen, you were at the core of this long before it became fashionable to talk about it. What's your arc of AI innovation going forward? Well, generative AI is the single most significant platform transition in computing history. You and I have both been in the computer industry a long time. In the last 40 years, nothing has been this big. It's bigger than the PC, it's bigger than mobile.
It's going to be bigger than the internet, by far. This is also the largest TAM expansion of the computer industry in history. There's a whole new type of datacenter that's now available.
Unlike the datacenters of the past, this datacenter is dedicated to one job and one job only: running AI models and generating intelligence. It's an AI factory, and you're building some of the world's most advanced. You're building the world's computer.
That computer is now going to be augmented by factories all over. The second TAM expansion: where our industry used to focus on building tools, now you have copilots that use the tools.
So in hardware, there's a brand new segment, AI factories. In software, there's a brand new segment, copilots. These are brand new things that the world's never had the opportunity to enjoy.
Big, huge TAM expansion. The first wave is the one we've enjoyed: incredible startups, OpenAI and others, the generative AI startups and cloud internet services. That's the first wave. We're now beginning the second wave, really triggered and kicked off by Copilot, the Microsoft 365 Copilot: basically the enterprise generation.
The third wave is the one I think will be the largest wave of all, because the vast majority of the world's industries are heavy industries, and this is where NVIDIA's Omniverse and generative AI are going to come together to help heavy industries digitalize and benefit from generative AI. So we're really, quite frankly, barely in the middle of the first wave and starting the second wave.
I love that: three waves, all happening somewhat in parallel, but with a staging to it, and I think it all accrues and compounds across all three. Maybe we can close out, Jensen.
You and I have worked together for decades; Microsoft and NVIDIA have worked together for decades. You know, partnerships are these magical things where your innovation and our innovation come together, ultimately to enable the people in the audience. So when you think about the Microsoft partnership, what's your vision for it, what are your expectations of it, and any other thoughts? Well, we have a giant partnership, and many of you here are our partners with Microsoft too, and I think you'll all agree with me that there's just a profound transformation in the way Microsoft works with the ecosystem and the industry. We are suppliers to you, building the most advanced computers together, and you're suppliers to us. And so we're customers and partners with each other.
But one of the things I really love is that we partner on advancing fundamental computer science, like confidential computing and generative AI, and all the infrastructure we build together. I love that we're inventing new technologies together, but I really love that you're hosting our native stack right there in Azure. As a result, we're ecosystem partners. NVIDIA has a rich ecosystem of developers all over the world, several million CUDA developers.
Some 15,000 startups around the world work on NVIDIA's platform. The fact that they can now take their stack and, without modification, run it perfectly on Azure means my developers become your customers. My developers also get the benefit of integrating with all of the Azure APIs and services: the secure storage, the confidential computing. All of that richness amplifies NVIDIA's ecosystem.
And so I think this partnership is really quite unique; there's not one like it, and we don't have one like it. We're incredibly proud of the partnership and incredibly proud of the work we do together. Thank you so much, Jensen. I really, deeply appreciate everything that you and your team have been doing.
As you said, the last 12 months have been unlike anything I've seen in my professional career, and we are obviously setting the pace and plan to continue to do so. Thank you so much for your partnership. Thank you. Jensen Huang.
All right, so let's go one more layer up the stack, to data. You know, it's perhaps one of the most important considerations, because in some sense there is no AI without data. Microsoft Fabric brings all your data and all your analytics workloads into one unified experience. Fabric has been our biggest data launch perhaps since SQL Server, and the reception to the preview has been incredible.
25,000 customers are already using it, and today I am thrilled to announce the general availability of Microsoft Fabric. Let's roll the video. Microsoft Fabric is redesigning how we work with data by bringing all your data and analytics tools into a single experience. With Fabric's data lake, OneLake, your teams can connect to data from anywhere and all work from the same copy across engines.
Your data professionals have all the tools they need in one SaaS experience, reducing the cost and effort of integration. Features like Direct Lake mode in Power BI, which provides a blazing-fast, real-time connection to your data, save you time and cost while providing up-to-date insights. This intelligence can then securely flow to the Microsoft 365 applications people use every day to improve decision-making and drive impact, all backed by Fabric's tight integration with Microsoft Purview to govern and protect your data no matter where it's used. AI-powered features like Copilot help everyone be more productive, whether it's creating data flows and pipelines, writing SQL statements, or building reports.
And as we enter a future built on AI, you can unify, prepare, and model your data to support truly game-changing AI projects. All your data, all your teams, all in one place. This is Microsoft Fabric. Yeah, it's fantastic to see the Fabric vision come together.
In fact, today it's exciting to add a new capability we call mirroring. It's a frictionless way to add existing cloud data warehouses and databases to Fabric, from Cosmos DB or Azure SQL DB, as well as Mongo and Snowflake, not only on our cloud but on any cloud. And they're all in the open-source Apache Parquet format and the Delta Lake format that's native to Fabric. To bring this home, let me walk you through a simple example.
Take an electric-car charging company that wants to proactively alert its maintenance teams and crews about stations that need servicing. Real-time IoT data is streaming in from the charging stations into Cosmos DB. They can use mirroring to keep Cosmos DB and Fabric automatically in sync. Inside Fabric, they're already connecting all the other relevant data, whether it's maintenance schedules or weather, from Azure Databricks, AWS S3, or ADLS, into one single lakehouse.
With all this data unified, you can then model on top of it using just the data in Fabric, but you can also use this new integration between OneLake and Azure AI Studio to build a preventive maintenance model that alerts maintenance teams when an EV station is likely to need servicing. And of course, you can build a simple Power App that delivers these alerts to the maintenance crew. You can even embed a chat function into the Power App to gather more context about an alert. This example shows how all your data operations, storage, analytics, and AI can come together. In fact, we're integrating the power of AI across the entirety of the data stack.
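To make the preventive-maintenance idea concrete, here is a toy sketch in plain Python. The telemetry rows, field names, and thresholds are all made up for illustration; a real Fabric solution would query the mirrored lakehouse tables rather than in-memory lists.

```python
from datetime import datetime, timedelta

# Toy telemetry, standing in for data mirrored from Cosmos DB into the lakehouse.
telemetry = [
    {"station": "EV-001", "fault_count_7d": 9, "avg_session_minutes": 12.0},
    {"station": "EV-002", "fault_count_7d": 1, "avg_session_minutes": 41.5},
    {"station": "EV-003", "fault_count_7d": 6, "avg_session_minutes": 18.2},
]

# Last-service dates, standing in for the maintenance-schedule table joined in Fabric.
last_service = {
    "EV-001": datetime(2023, 5, 1),
    "EV-002": datetime(2023, 10, 1),
    "EV-003": datetime(2023, 4, 15),
}

def needs_service(row, as_of, max_faults=5, max_age_days=180):
    """Flag a station if recent faults are high or its last service is too old."""
    age = as_of - last_service[row["station"]]
    return row["fault_count_7d"] > max_faults or age > timedelta(days=max_age_days)

as_of = datetime(2023, 11, 15)
alerts = [r["station"] for r in telemetry if needs_service(r, as_of)]
print(alerts)  # EV-001 and EV-003 trip the rule; EV-002 does not
```

A production model would of course learn these thresholds from history instead of hard-coding them; the point is only that unified data makes the join-and-flag step trivial.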
This retrieval-augmented generation, or RAG, pattern is core to any AI-powered application. It's what allows you to bring together your data with these foundation models. The first thing we did is add vector indices to both Cosmos DB and PostgreSQL. And we're not stopping there. We've moved the management of AI-powered indices out of the app domain and into the database itself with Azure AI extensions for PostgreSQL.
This makes it easy and efficient for developers to use AI to unlock the full potential of all the relational data in their databases. And with Azure AI Search, we built first-class vector search plus state-of-the-art reranking technology, delivering much higher-quality responses than you can get from a vanilla vector search. In fact, just last week, when OpenAI moved some of their APIs, like their agent API for ChatGPT, from a standalone vector database to Azure AI Search, they saw unbelievable scale benefits, and it's fantastic to see this now powering ChatGPT.
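As a rough illustration of the two-stage retrieval pattern described above, here is a minimal Python sketch: a vector search ranked by cosine similarity, followed by a reranking pass. The documents, embedding vectors, and keyword-overlap reranker are toy stand-ins, not the Azure AI Search APIs; real systems would use an embedding model and a learned reranker.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy documents with hand-made embedding vectors.
docs = [
    ("Resetting your Entra ID password", [0.9, 0.1, 0.0]),
    ("Expense report submission policy", [0.1, 0.9, 0.1]),
    ("EV charging station maintenance", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    """Stage 1: vector search, ranked purely by cosine similarity."""
    return sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)[:k]

def rerank(query_terms, candidates):
    """Stage 2: re-order candidates with a finer-grained scorer
    (here a toy keyword-overlap score standing in for a reranking model)."""
    def overlap(doc):
        return len(set(doc[0].lower().split()) & set(query_terms))
    return sorted(candidates, key=overlap, reverse=True)

query_vec = [0.1, 0.3, 0.9]                  # pretend embedding of the question
query_terms = ["charging", "maintenance"]    # pretend lexical form of the question
top = rerank(query_terms, retrieve(query_vec))
print(top[0][0])  # the maintenance document wins both stages
```

The design point is that the cheap vector pass narrows millions of candidates to a handful, and the expensive reranking pass only ever sees that handful.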
Now let's move up the stack and talk about how we’re reimagining all of the core applications in this era of AI. Let's start with Teams. Our vision for Teams has always been to bring together everything you need in one place across collaboration: chat, meetings, and calling. More than 320 million people rely on Teams to stay productive and connected. It's a great milestone.
Just last month we shipped new Teams, which we reimagined for this new era of AI. New Teams is up to two times faster, uses 50% fewer resources, and can save you time and help you collaborate a lot more efficiently. We've also streamlined the user experience, making it easier to get more done with fewer clicks. It's also the foundation for the next generation of AI-powered experiences, transforming how we work.
And new Teams is also available in many more places now: on both Windows and Mac and, of course, on all the phone endpoints. But Teams is more than a communication and collaboration tool. It's also a multiplayer canvas that brings business processes directly into the flow of your work. Today, more than 2,000 apps are part of the Teams store. Apps from Adobe, Atlassian, ServiceNow, and Workday have more than 1 million monthly active users, and companies in every industry have built 145,000 custom line-of-business applications in Teams.
And when we think about Teams, it's important to ground ourselves in the fact that presence is the ultimate killer application, and that's what motivates us to bring the power of Mesh to Teams, reimagining the way employees come together and connect using any device, whether it's a PC, HoloLens, or Meta Quest. I'm excited to share that Mesh will be generally available in January. It's something we have been working on diligently behind the scenes, and it's great to bring it to you. Using avatars, you can express yourself with confidence, whether you're joining a 2D Teams meeting or a 3D immersive space.
With immersive spaces, you can connect in new ways and bring discussions all into one place. With spatial audio, for example, you can experience directionality and proximity just like in the physical world. And with your own custom spaces, built using our no-code editor or the Mesh toolkit, you can create a place tailored to your specific needs, like an employee event, training, guided tours, or even internal or external product showcases. We are looking forward to seeing how Mesh in Teams helps your employees connect in new and very meaningful ways. Now, let's move up to the very top of the stack, which is the Microsoft Copilot.
Our vision is pretty straightforward. We are the copilot company. We believe in a future where there will be a copilot for everyone and everything you do. Microsoft Copilot is that one experience that runs across all our services, understanding your context on the web, on your device, and when you are at work, bringing the right skills to you when you need them. Just like today you boot up an operating system to access applications, or a browser to navigate to a website, you can invoke a copilot to do all these activities and more: to shop, to code, to analyze, to learn, to create.
We want Copilot to be everywhere you are. It starts with search, which is built into Copilot and brings the context of the web to you. Search as we know it is changing, and we are all in. Bing Chat is now Copilot. It's a standalone destination and it works wherever you are: on Microsoft Edge, on Google Chrome, on Safari, as well as in mobile apps coming soon.
Our enterprise version, which adds commercial data protection, is also now Copilot. You simply log in with your Microsoft Entra ID to access it. It'll be available at no additional cost to all eligible Entra ID users.
And just two weeks ago we announced the general availability of Copilot for Microsoft 365. It can reason across the entirety of the Microsoft Graph, which means it can use all the information in your emails, calendar, meetings, chats, and documents to answer questions and complete tasks. It integrates Copilot into your favorite applications, whether it's Teams, Outlook, Excel, and more, and it comes with plug-ins for all the enterprise knowledge and actions available in the Graph.
When it comes to extending Copilot, we support plug-ins today and we are also very excited about what OpenAI announced last week, with GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT that's more helpful for very specific tasks at work or at home. And going forward, you will be able to use both plug-ins and GPTs in Copilot to tailor your experience. And it goes beyond that.
You will, of course, need to tailor your copilot for your very specific needs: your data, your workflows, as well as your security requirements. No two business processes, no two companies are going to be the same. And that's why today we're announcing Copilot Studio. With Copilot Studio, you can build custom GPTs, create new plug-ins, orchestrate workflows, monitor your Copilot's performance, manage your customizations, and much, much more. It comes with a set of prebuilt plug-ins to incorporate your own business data, as well as data from applications such as SAP, Workday, and ServiceNow. It can connect to databases, custom backends, and legacy systems that may even be on premises.
All of this allows you to extend Copilot with capabilities unique to your organization and the systems you use every day. For example, you can have Copilot help with expense management, HR onboarding, IT services. Just take a look at Copilot Studio.
It's super exciting to see Copilot Studio come together. What Power Platform was for the previous generation of applications and the app platform, I think Copilot Studio will be the equivalent of for the copilot era, and it's exciting to see it all come together. In fact, we're already using this pattern to extend Copilot across every role and function.
For developers, GitHub Copilot is turning natural language into programming language, helping them code 55% faster. For SecOps teams, Copilot is helping them respond to threats at machine speed. In fact, this week we are adding plug-ins for identity management, for endpoint security, and for risk and compliance managers as well. For sellers, Copilot is right there helping you close more deals.
Whether you're responding in email or in a Teams meeting, you can enrich that customer interaction by grounding the copilot in your CRM data, whether it's in Salesforce or Dynamics 365. And for customer service teams, today we are very excited to announce Copilot for Service, to help agents resolve cases faster. It provides agents with access to the right knowledge across all the data within the tools they use every day, whether it's Teams or Outlook, and it can be embedded directly inside agent desktop applications. Copilot for Service includes out-of-the-box integrations with Salesforce, ServiceNow, Zendesk, as well as Dynamics 365. It's the one Microsoft Copilot with all the data, plug-ins, and skills you need. We are already seeing a new Copilot ecosystem emerge as you all extend Copilot.
Dozens of ISVs, including Confluence, Jira, Mural, Ramp, and Trello, have built Copilot plug-ins for their applications. And customers are building their own line-of-business plug-ins too, to increase productivity and create deeper insights. Not only can you access these in Copilot, but you can surface them across our applications. For example, Bayer has built a plug-in so that their researchers can use natural language to ask Copilot about crop science models and their suitability for new projects, right within Teams, as they accelerate the development and delivery of their products to farmers. This idea that you build copilots and use them as plug-ins inside of Microsoft Copilot and Teams is going to be one of the powerful patterns that plays out in the years to come.
These are just a few of the 100-plus announcements we'll make during the conference. But I want to close out by talking about the arc of innovation going forward in two critical areas: AI and mixed reality, and AI and quantum. AI is not just about natural language as an input. Of course it starts with language, but it goes beyond that, to see, to hear, to interpret, and to make sense of our intent and the world around us. I want to show you a glimpse of what's possible when the real world becomes your prompt and interface. That's what happens when you bring mixed reality and AI together. Pay attention to how not just your voice, but your gestures, even where you look, becomes a new input, and how transformative it can be for someone like a frontline worker using Dynamics 365. Let's roll the video.
This is work. - What’s up Mike? - We're giving eyes and ears to AI, so the world becomes your prompt and your interface. This is work working. These are thoughts happening. - Hey, Copilot. - Plans populating.
- Copilot when was this last replaced? - Information gathering, instructions guiding, schematics aligning, people connecting, problems solving. - Uhh, Copilot? - Hey, Copilot. - Copilot. - This is confidence growing, eyes widening. These are questions.
- I just finished, Copilot what's next? - Rotate the engine block 90 degrees. - Questions the way you ask them. - Please get me the detailed view of the optical cable layout. - Never too many questions.
- All right, let's highlight the relay switches. - Go ahead, keep asking. - It's in position. What's next? - Keep learning.
- The hydraulic filter was changed six months ago. - Okay, can you pull up a service record? - Copilot, what's the tolerance of this locking ring? - Here is a copy of the last service record. The tolerance is nine thousandths of an inch. Remind me where this fits into the assembly. - Step four, align the spindle.
The locking ring is highlighted. - This is work, getting smarter. This is work, working better. This is work, making sense of the world around you. - Copilot, close out the work order. - This is AI for the front line.
It's pretty amazing when you bring these two powerful technologies together. And this stuff is real today. In fact, it's being deployed in preview with Siemens Energy, Chevron, and Novo Nordisk.
So it's great to see the power, and I think this is going to be even more powerful in the years to come. The other area I want to talk about is the convergence of quantum computing and AI. Key to scientific discovery today is complex simulation of natural phenomena, whether it's chemistry, biology, or physics, on high-performance computers. You can think of AI as an emulation of those simulations, essentially reducing the search space. And that's what we're doing with Azure Quantum Elements.
In fact, we built a new model architecture called Graphormer for this very purpose. Just like large models can generate text, you will be able to generate entirely new chemical compounds. Just imagine if you could compress 250 years of progress in chemistry and materials science into the next 25 years.
That's truly using the power of AI to change the pace of science. In this example, I'm just using a Python notebook. Think about it: just a Python notebook with Quantum Elements, to discover a new coolant. A process that would have taken three years with traditional computational techniques now takes about nine hours. I can reason over these results with a Copilot, narrow them down, and find the most promising candidates.
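As a loose analogy for that narrowing-down step, here is a toy Python screening loop. The candidate names, properties, and thresholds are entirely invented for illustration; this is not real chemistry and not the Azure Quantum Elements API.

```python
# Made-up candidate coolants with made-up predicted properties.
candidates = [
    {"name": "candidate-A", "toxicity": 0.20, "boiling_point_c": 95, "stability": 0.90},
    {"name": "candidate-B", "toxicity": 0.70, "boiling_point_c": 110, "stability": 0.80},
    {"name": "candidate-C", "toxicity": 0.10, "boiling_point_c": 60, "stability": 0.60},
    {"name": "candidate-D", "toxicity": 0.15, "boiling_point_c": 102, "stability": 0.95},
]

def passes_screen(c):
    """Keep only low-toxicity, thermally suitable, stable candidates.
    The cutoffs here are arbitrary placeholders."""
    return (c["toxicity"] < 0.3
            and 90 <= c["boiling_point_c"] <= 120
            and c["stability"] >= 0.85)

# Filter the pool, then rank the survivors by predicted stability.
shortlist = sorted(
    (c for c in candidates if passes_screen(c)),
    key=lambda c: c["stability"],
    reverse=True,
)
print([c["name"] for c in shortlist])  # two of the four survive the screen
```

The real workflow screens vastly larger candidate pools with learned property predictors, but the shape is the same: generate broadly, filter cheaply, then spend expensive simulation only on the shortlist.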
Using Quantum Elements, any scientist can design novel molecules with unique properties for developing more sustainable chemicals, drugs, advanced materials, or batteries. And this is just the very beginning. In parallel, we're also making progress on quantum computing, because quantum will ultimately be the real breakthrough for speeding up all these simulations.
In fact, just last week we announced a strategic collaboration with Photonic to expand the full-stack approach that we have taken to quantum into quantum networking. Photonic’s novel spin-photon architecture natively supports quantum communication over standard telecommunication wavelengths. Combining that infrastructure and bringing it right into Azure takes us one step closer to the promise of quantum networking and computing inside of Azure.
At the end of the day, though, all of this innovation will only be useful if it's empowering all of us in our careers, in our communities, in our countries. That's our mission. We want to empower every person and every organization, across every role and business function, with a copilot. Just imagine if 8 billion people always had access to a personalized tutor, a doctor that provides them medical guidance, a mentor that gives advice for anything they need. I believe all of that is within reach. It's about making the impossible possible.
I want to leave you with a video of Anton, a developer from Ukraine, who shares his story of how Copilot has empowered him. Thank you all so very much. Enjoy the rest of Ignite. Let's roll the video. Thank you.
My name is Anton and I’m from Ukraine. As a freelance developer living with cerebral palsy, tasks like typing and speaking can be difficult, which limits my ability to communicate effectively. I realized early on that I was different from others and encountered discrimination and inequality at a young age.
I always inform my clients about my disability. Moreover, I present it as an advantage because it provides me with extensive experience in solving complex and unconventional problems. When I first heard about GitHub Copilot, I was doubtful that it could handle such a complex task as coding, but I was surprised at how helpful and relevant the suggestions it gave me were. Because of my disability it’s easier to type fewer characters. With AI, with Copilot I can code my intention more precisely. I can now not only write the code itself, but also detailed comments, documentation and project descriptions.
Previously, I physically couldn't afford to do this. I firmly believe that AI has enormous potential to make a positive impact on the lives of people with disabilities. It has truly assisted me to optimize my workflow. I aim to harness the power of AI for the betterment of our lives.