[MUSIC] >> Innovation hinges on our ability to see things differently. Breaking boundaries and looking between the lines in an effort to solve some of the world's toughest challenges. [MUSIC] >> Working together across disciplines and pushing ourselves to see the future from an alternative perspective. [MUSIC] >> Hello, everyone. I'm thrilled to be here. Today, our entire goal is to spark your imagination.
I'll be sharing some fascinating projects from our research labs and the impact this work is already having in the world. At Microsoft, we innovate to enable your innovation by providing technology that ignites new ideas for you to drive meaningful impact in your business, your communities, and even the world. All we need to spark imagination is a new perspective and a willingness to break boundaries and look between the lines, which is exactly why Pablo Picasso is one of my favorite artists. He broke every conventional rule of art by deconstructing regular objects and rebuilding them in abstract form.
He invited us to literally look between the lines and experience his paintings from many different perspectives. For me, that's the true heart of innovation: sharing alternative perspectives and inspiring others to do the same. The researchers and projects you'll see today live at the intersection of art and science, biology and computation, and data and ethics, representing their unique perspectives and inspiring us to look at the world differently.
Let's start with AI and how we can approach human machine collaboration in a fundamentally different way. Many know the story of Garry Kasparov, a chess grandmaster who was definitely not happy when he lost to a machine in 1997. But visionary ideas often come out of these kinds of experiences.
After time spent analyzing how to ultimately beat the machine, Garry came up with the idea to bring humans and machines together, creating a new kind of chess player called the Centaur. Not the half human, half horse kind; this Centaur combines the best of human creativity and strategic thinking with the analytical capacity of the machine to crunch and calculate astronomical numbers of chess moves. It turned out the best way to beat the machine was to create a partnership between human and machine, resulting in a player that could perform better than either a human or a machine could on their own. The significance of this underpins the most powerful vision of all, which is how we can use machines to amplify our own human ingenuity.
Bringing this vision to life requires a fundamentally different approach to AI. We need AI that can co-reason in partnership with us, expanding beyond performing narrow and repetitive tasks. AI has already mastered well-defined narrow tasks like object recognition, for example, where it's achieved human parity. It will take more general aspects of AI to achieve this vision in areas like common sense, causality, logic, or even knowledge. It turns out there's a benchmark for that. It's called SuperGLUE.
It tests the ability of an algorithm to follow advanced reasoning. Two months ago, a team in Microsoft Research was the first to achieve human parity on this benchmark. They used a massive AI model, which will be part of the Microsoft Turing family of models with billions of parameters, which, along with increasing power in the Cloud, is helping to fundamentally change how we develop AI.
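For readers curious what the SuperGLUE tasks actually look like, here is a minimal sketch that loads one of its sub-tasks (BoolQ, a yes/no question-answering set) with the openly available Hugging Face datasets library. It only inspects the benchmark; the Turing models themselves are not involved.

```python
# Minimal sketch: inspect one SuperGLUE sub-task (BoolQ) with the Hugging Face
# `datasets` library. This only shows what the benchmark's examples look like;
# it does not involve the Microsoft Turing models discussed above.
from datasets import load_dataset

# BoolQ: given a passage and a yes/no question, predict the boolean answer.
boolq = load_dataset("super_glue", "boolq", split="validation")

example = boolq[0]
print(example["passage"][:200])   # supporting passage (truncated for display)
print(example["question"])        # yes/no question about the passage
print(example["label"])           # 1 = yes, 0 = no
```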
Today we're envisioning AI that not only frees us up from repetitive tasks, but also augments our own capabilities. Imagine the possibilities of this. Any field where human reasoning is applied can be taken to a whole new level. Now to have some fun with this and show you the concept in action, let me invite David Carmona, General Manager of AI and Innovation. >> Thank you, Mitra.
One of the things that we're learning about these massive models like Turing, is that they can be used across different domains or even across different modalities like audio and video. Let me show you that with a cool example. Let's see if AI can understand a movie. The movie that I'm going to use is Avengers: Endgame.
AI and Avengers, it can't get much cooler than that. Okay, let's go for it. Let's search for something. For example, Tony Stark in a serious conversation. There you go. It was able to identify that Tony Stark is in this scene and the expressions around the table were serious.
Behind the scenes, we're using models like VinVL. VinVL is a massive visual language model that we just announced. It will be available in Azure Cognitive Services and also in open source.
We are using Microsoft Turing as well, in this case to provide a summary of the scene, which you can see here: Tony Stark and the rest of the Avengers are still trying to figure out where Thanos is. So the model understood that these are Avengers, that Tony Stark is one of them, and then it captured the essence of this entire conversation.
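As a rough illustration of the underlying idea (not the VinVL or Turing models used in this demo), a single video frame can be captioned with the generally available Azure Computer Vision service; the endpoint, key, and frame URL below are placeholders.

```python
# Simplified stand-in: caption a single video frame with the generally
# available Azure Computer Vision service. The demo above uses the much
# larger VinVL and Turing models; those are not shown here.
import os
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint/key read from environment variables (assumed names).
endpoint = os.environ["AZURE_CV_ENDPOINT"]
key = os.environ["AZURE_CV_KEY"]
client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Any publicly reachable frame image would do here (hypothetical URL).
frame_url = "https://example.com/frames/endgame_scene_042.jpg"
result = client.describe_image(frame_url, max_candidates=3)

for caption in result.captions:
    print(f"{caption.text} (confidence {caption.confidence:.2f})")
```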
We can also ask questions. For example, let's go with: how many lives did the villain of the story take? The answer is, as you know, that Thanos took 50 percent of all living creatures. That one was tough, because the AI model had to understand who in the movie is the villain and then how many people he killed, which isn't super obvious. Let's do one more. What gesture did Thanos use to do that? Thanos snapped his fingers.
Super cool. It understood the movie perfectly. You're probably thinking that this is seriously cool, but it has nothing to do with your business. Well, actually it does, because this model was trained generically with huge amounts of data from the Internet, not specifically for this scenario. You could reuse these massive models and customize them for your own scenarios in retail, manufacturing, finance, or any other industry. We do that at Microsoft too. For example,
Outlook customized Turing for their auto-reply feature, which takes an e-mail as input and generates the most likely reply. We can use the same concept, but for our movie. Let's take a summary of the movie up to a particular point, then use Turing to predict the rest of the story.
It's like creating an alternative ending. That would be very cool, it's like the auto-reply for movies. Let's see how that goes. Are you ready? Write an alternative story.
Absolutely amazing: the fight of the Avengers versus Thanos is like a lightsaber fight. Lightsabers, I love where this is going. Thanos is getting beat down, but he's very intelligent. He builds another weapon with his daughter's help, which makes sense, and they call it the Thanos-inator.
Super cool. A new weapon, this looks amazing. This was all generated by the Turing Model. Every time that I hit "Enter", it will create a different story. Let me try a couple more.
Thanos managed to go to a planet and they are there in the middle of a fight between two giant creatures who look like dinosaurs. Dinosaurs, light sabers, this is the kind of story that I like. Let me try some more because I want you to see something. There you go. It is a spoiler, so I won't tell you the end. This was generated by the Turing Model. It learned from the Internet data that telling the end of a movie is a spoiler. That is mind-blowing. Back to you Mitra. Thank you.
>> Wow, David. I love those endings from an alternative perspective. That was really cool.
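A minimal sketch of the same "alternative ending" idea, using the openly available GPT-2 model from Hugging Face as a stand-in for the far larger Turing generation models shown in the demo; the prompt below is invented for illustration.

```python
# Minimal sketch of the "alternative ending" idea using the openly available
# GPT-2 model as a stand-in for the much larger Turing generation models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "The Avengers have gathered for one final battle against Thanos. "
    "Just as all hope seems lost,"
)

# Each call samples a different continuation, much like hitting "Enter"
# repeatedly in the demo produced a different story each time.
for ending in generator(prompt, max_length=80, num_return_sequences=3,
                        do_sample=True, temperature=0.9):
    print(ending["generated_text"])
    print("---")
```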
Our AI at Scale vision incorporates a new generation of AI that we believe will be a game changer in how AI is developed in the future. While that may sound a little far off, you can start putting AI at Scale into action today. Our goal is to help you leverage this new generation of AI by providing a full stack in Azure, opening the door to innovation in ways that simply were not possible before. The foundation of this stack is the AI Supercomputer, which provides the advanced supercomputing infrastructure that's needed to train massive models with billions and even trillions of parameters.
We're already seeing amazing innovations being built on top of it. OpenAI is using this infrastructure to train their state-of-the-art GPT-3 AI model, pushing the boundaries even further on the size of these models. Most companies probably won't need to create models of this magnitude for themselves.
Many are likely to use our packaged services powered by these models that enable new scenarios of augmented knowledge or co-reasoning. Microsoft Turing is already powering Azure Cognitive Services and Azure Search, which are being used by many companies today, like KPMG, for example, who built a fraud detection solution using Azure Cognitive Services that automatically flags potential breaches of confidentiality with speech services and text analytics. For those companies who do want to go beyond this, they can customize one of these massive models for their own scenarios, like AvePoint did. Let's hear from them. [MUSIC] >> At AvePoint, AI at Scale is unlocking new opportunities for innovation.
We can now examine any business challenge and not only solve pain points, but also re-imagine our entire approach. One example is employee onboarding. We offer 24/7 customer support and our business is growing fast. Keeping up with the speed of innovation and absorbing new knowledge every day is a challenge.
It's also tough to gauge if employees are retaining the most relevant information. But with AI at Scale, we've created a personalized learning experience for each team member. By customizing the Microsoft Turing model, we've been able to extract the critical knowledge across product guides, release notes, support case histories, and all kinds of unstructured data like text and videos. That massive amount of knowledge is what makes us unique.
It's our DNA. Now, we can make sure that our employees have that knowledge. We can automatically generate learning and testing materials personalized for every employee on-demand. And finally measure how well we're keeping our people up to date with the latest knowledge. That's huge. [MUSIC]. >> This is only the beginning.
The possibilities behind AI at Scale are sparking new ideas for innovation every day. Not only are we re-imagining our solutions, but we are uncovering new opportunities to serve our customers in a more powerful way. We can't wait to see where the future of AI at Scale will take us. >> AI systems like the one created by AvePoint can co-reason with us. But why stop there? The next natural step beyond co-reasoning with us is to have AI actually learn from us.
This is about expanding beyond training AI with data to actually have AI learn from our own knowledge. AI systems that are able to learn from our knowledge are even more vital as they're used in the physical world to help improve safety and reduce cost and time. Let me show you what I mean. Say hello to my virtual drone.
Now traditionally, I would train an AI model to fly this drone through thousands of iterative experiments, which takes a lot of time and can result in a fair number of mishaps as the model learns by trial and error. What's missing? The AI model doesn't have access to the knowledge we have as it relates to flying drones. So, what if we could give the model a jump start by teaching it what we know? Now, I personally have limited experience flying drones, and by limited I actually mean zero.
So, I'm going to need your help to teach it. For those in the interactive version, you can fly this drone yourself and get on the scoreboard. The better you fly through the rings, the more points you get. I played it right before I came on stage. So, see if you can beat my score and I guarantee you, it'll be the easiest thing you've done all day.
Now, as you're flying this drone, you're using your knowledge, moving the drone left, right, up, and down to give it the best chance of successfully flying through the approaching ring. This concept of using your skills and knowledge to teach a machine is aptly called machine teaching. Think of it as a simplified low-code approach to creating AI and machine learning models that enables you, the subject matter expert, to specify goals, learning lessons, and safety criteria without needing to learn data science. Now, what if I want to take this drone out of the game and teach it to fly in the real world? Well, for that, we need to create a hyper-realistic virtual world where the drone can learn to fly safely. Simulation technologies like AirSim, developed in Microsoft Research, are great enablers of AI, making it easier to create and test intelligent autonomous agents by bringing together precise real-world mapping data, weather data, physics engines, and sensor models to create a high-fidelity virtual world to teach your AI agents.
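Machine teaching toolchains have their own ways of expressing goals, lessons, and safety criteria; purely as a hypothetical illustration in plain Python, the ring-flying task might be distilled into a reward signal like the one below. All names and thresholds here are invented for this sketch and do not reflect any actual Microsoft toolchain.

```python
# Hypothetical illustration only: how the ring-flying goal and safety criteria
# might be distilled into a reward signal for a learning agent. The names and
# thresholds below are invented for this sketch; they do not reflect the
# actual machine teaching toolchain.
from dataclasses import dataclass

@dataclass
class DroneState:
    distance_to_ring_center: float  # metres from the centre of the next ring
    altitude: float                 # metres above ground
    passed_ring: bool               # did the drone clear the ring this step?
    collided: bool                  # did the drone hit an obstacle?

def reward(state: DroneState) -> float:
    # Safety criterion: collisions and flying too low end the episode badly.
    if state.collided or state.altitude < 0.5:
        return -100.0
    # Goal: a large bonus for passing cleanly through the ring.
    if state.passed_ring:
        return 10.0
    # Lesson: otherwise, gently encourage closing in on the ring centre.
    return -0.1 * state.distance_to_ring_center
```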
To teach the drone to operate in the real world, you'll need to introduce things like dynamic obstacles, different light conditions, and yes, even weather. I brought my Mary Poppins umbrella, so I am prepared. If you're using our interactive experience, feel free to change the weather in the studio. Today, building these systems is concentrated in the hands of a few companies, since it requires highly complex techniques that rely on specialized skills. With breakthroughs like machine teaching and simulators like AirSim, combined with Azure AI and edge services, our goal is to democratize and bring these innovations to everyone so that any company can develop their own autonomous systems, from driving vehicles to operating machinery or controlling manufacturing processes.
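For those who want to experiment, the open-source AirSim Python API exposes calls along these lines; here is a minimal sketch, assuming an AirSim simulation with a multirotor vehicle is already running locally.

```python
# Minimal sketch against the open-source AirSim Python API, assuming an AirSim
# simulation with a multirotor vehicle is already running locally.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

# Take off, then fly 20 m forward at 3 m/s (AirSim uses NED coordinates,
# so negative z means "up").
client.takeoffAsync().join()
client.moveToPositionAsync(20, 0, -5, 3).join()

# Introduce weather so the agent also trains in rain, as described above.
client.simEnableWeather(True)
client.simSetWeatherParameter(airsim.WeatherParameter.Rain, 0.75)

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```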
One of the companies already using this platform today is Bell. Let's hear how they're re-imagining unmanned flight to solve some of the world's biggest challenges. [MUSIC] >> Our vision at Bell is to solve some of the world's biggest challenges using unmanned aircraft. Imagine being able to monitor entire forests for fires around the clock. Search every nook and cranny of a mountain for a lost hiker, or deliver lifesaving medicine anywhere. Autonomous drones will allow us to operate at a scale that we can't operate at today.
We'll be able to reach more customers. We'll be able to make a more global impact with the limited resources we have. We start by using AI to teach drones everything we know about flight. How to take off, land, navigate, adjust to the weather.
Because they need to do it safely, without causing any damage or injury in the real world, we're using Microsoft autonomous systems to practice in a simulated environment. Simulation is the great enabler of AI because it gives us a hyper-realistic environment that allows you to train AI as if you were doing the operations for real, in the real world. It allows us to compile hundreds of thousands of hours of AI training rapidly and to deploy AI at a scale that wouldn't be possible without it. When they fail, they learn, but without causing disruption or downtime.
Once deployed, autonomous drones can operate continuously, safely, and reliably while keeping humans in control of every flight. Success for Bell is accomplishing missions that are going to change the world, and that's why we're building with Microsoft. >> This new generation of AI creates amazing opportunities to augment our capabilities, but it must be used responsibly.
Digital responsibility is itself an important area of innovation. We're pioneering tools and technologies in support of the six principles of AI that we established back in 2016, like fairness, privacy, transparency, and others. In the area of privacy, Microsoft Research is leading the way in advanced techniques like homomorphic encryption and differential privacy, where we can realize the benefits of AI trained on shared data while keeping your personal data private. In the space of transparency, our work on explainable AI helps you unlock and understand the behaviors of AI models with InterpretML, or identify and mitigate bias with technologies like Fairlearn, as sketched below. As AI evolves, our approach as an industry to its responsible use must also evolve. Advancements in AI can enable new challenges that must be addressed, like the generation and distribution of disinformation at scale.
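To make the fairness tooling mentioned above concrete, here is a minimal sketch using the open-source Fairlearn library to compare a model's accuracy across groups defined by a sensitive feature; the toy labels and groups are invented purely for illustration.

```python
# Minimal sketch of a fairness check with the open-source Fairlearn library:
# compare a model's accuracy across groups defined by a sensitive feature.
# The toy data below is invented purely for illustration.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

frame = MetricFrame(metrics=accuracy_score,
                    y_true=y_true,
                    y_pred=y_pred,
                    sensitive_features=group)

print(frame.overall)       # accuracy over everyone
print(frame.by_group)      # accuracy per group
print(frame.difference())  # largest gap between groups
```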
As long as media has existed, so has the manipulation of it. With AI, methods of manipulation have become much more sophisticated, like deepfake videos that mimic the likeness of a person, often with harmful consequences. A team at MIT analyzed 126,000 stories tweeted by three million people and found that false news stories were 70 percent more likely to be retweeted than true stories. Some of this can be attributed to how quickly social media propagates information, but it also has a lot to do with just how good the fakes are, making it hard to distinguish fact from fiction.
If we can't actually detect the fakes, then what do we do? No single organization can solve this on its own. It requires a coalition of researchers, institutions, and companies to collaborate with a joint commitment to an open-system approach. A coalition called Project Origin has been established to do this. Microsoft Research has been working on a technical approach to help develop a chain of trust from publisher to consumer for end-to-end authenticated news and information. Let's take a look. [MUSIC] >> Since its inception, media manipulation has come in many forms. The most dangerous seamlessly tiptoes the line between fact and fiction.
We call these modified videos deepfakes. With the evolution of AI, we're going beyond just simply cutting and pasting. With deepfakes, we can create videos and images that really mimic the likeness of a person, and that can have very harmful effects.
What can we do if we can't detect the deepfakes? We believe the long-term answer relies on authenticating the source of media. We need to establish a chain of trust from the publisher to the consumer. To address that, we're forming a coalition of many institutions. We call this coalition Project Origin.
The technical layer of Project Origin is a process of authentication that links the media to its publisher. Then, if there's any modification, any change in that media, in the distribution process, we alert the user that that media has been tampered with. In short, we're building a chain of provenance.
In doing so, we hope to rekindle the users' trust that the content they are consuming is directly linked to a known source. Let's imagine I'm a news producer and I'm ready to push the "Publish" button. As part of the publication process, the new tools will create a digital fingerprint that gets stored in a Cloud service. In return, I receive a certificate of authenticity, which then gets stored in a distributed ledger.
The advantage of a distributed ledger is that you have multiple servers spread across companies. That means there is no single controlling entity. Instead, you have a federation of entities, much like a blockchain, where the databases are cryptographically secure. Another important aspect is the user experience. We need to develop good user interfaces so that users feel comfortable in knowing the source of the media they are seeing. What we hope to see in the future is that all publishers will use a verification system like the one we're designing, which will allow people to trust their chosen media sources.
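The general shape of such a chain of provenance can be sketched with ordinary hashing and digital signatures. The simplified illustration below uses the Python cryptography package, with in-memory bytes standing in for a real media file; it leaves out the distributed ledger and the actual Project Origin formats entirely.

```python
# Simplified illustration of a provenance check: hash the media, have the
# publisher sign the hash, and let the consumer verify it. This leaves out the
# distributed ledger and the actual Project Origin formats entirely.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Publisher side: fingerprint the media and sign the fingerprint. ---
media = b"...raw bytes of the published video..."   # stand-in for a real file
fingerprint = hashlib.sha256(media).digest()
publisher_key = Ed25519PrivateKey.generate()
certificate = publisher_key.sign(fingerprint)       # "certificate of authenticity"
publisher_public_key = publisher_key.public_key()

# --- Consumer side: re-hash what was received and verify the signature. ---
received = media                                     # or tampered bytes
try:
    publisher_public_key.verify(certificate, hashlib.sha256(received).digest())
    print("Content matches what the publisher signed.")
except InvalidSignature:
    print("Content was modified somewhere in the distribution chain.")
```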
A challenge like this requires collaboration and commitment to an open standard approach. [MUSIC] >> By building on a strong foundation of responsible AI, our focus is to empower people around the world to positively transform society through AI. The last technological revolution, the software revolution, was defined by our ability to encode ones and zeros on silicon. The next technological revolution will be defined by our ability to encode As, Gs, Cs, and Ts, the building blocks of DNA.
What happens when computer science meets biology? When we bring the tools of computer science and massive amounts of data to biology, the unimaginable becomes possible, like programming our bodies to fight diseases that currently evade our defenses. What if we thought of the immune cells in our bodies as if they were little computers? Somehow there has to be a program running inside these cells, working in a distributed manner, communicating and coordinating across your entire immune system. Sounds like living software.
What if we could figure out these biological programs? It could transform our ability to understand how and why cells do what they do. And in a sense, be able to debug them when things go wrong. This might be possible a lot sooner than you think. To share more, I'd like to introduce Dr. Chris Bishop,
Lab Director at Microsoft Research, Cambridge. >> Imagine a completely personalized patient experience customized for each person's unique characteristics. Imagine being able to accurately predict the effectiveness of a treatment pathway before trying it. Imagine the impact on the health of our societies if medicine was more precise and affordable.
These are changes that can be created with collective health data. At scale, this would transform the healthcare industry as we know it. I'd like to tell you a story.
When she was just five years old, Emily Whitehead was diagnosed with acute leukemia. Now, children diagnosed with this type of leukemia generally have a good chance of being cured. But Emily's situation was different. Her cancer developed in a way that bypassed her own immune system. Her T cells, the specialized white blood cells that typically fight invaders, could not recognize her cancer.
Emily's doctors tried everything, including chemotherapy, but nothing worked. Her cancer was simply too aggressive, and eventually hospice care was recommended. Emily's parents heard about an innovative new treatment called CAR T cell therapy that had just become available in a phase 1 clinical trial. The goal: to reprogram T cells by modifying their DNA. But these are living cells. How on earth do you change the DNA inside a living cell? It turns out there's a piece of biological machinery that does just that.
It's called a virus. Emily's team took a modified virus and reprogrammed it to install new DNA into the nucleus of Emily's T cells to generate receptors matched to Emily's cancer. Then they expanded the number of T cells and infused these modified fighters back into Emily's blood. This first-in-the-world experimental treatment was Emily's only chance for survival. And it worked.
She has been in remission ever since. Emily's treatment sounds like the stuff of science fiction, but it's real. It's not an exaggeration to say that we're on the cusp of something phenomenal here.
The same therapy was made available to other cancer patients whose bodies had also rejected chemotherapy with a resulting 80 percent recovery rate. Now the task is to make such treatment available at scale so that we have more stories like Emily's. That's where computer science can help.
One of the key breakthroughs we're looking at right now in healthcare, is the ability to simulate the interactions between proteins at the molecular scale. These simulations are powered by machine learning, together with massive amounts of compute. In fact, datacenter-scale computing. With these tools, we can look at how two proteins come together, how they move, and how they interact with each other like pieces of machinery all at the nanosecond time scale.
Once you have the resources to power this, you can imagine zooming out to simulate interactions at every level, from molecular to cellular, all the way out to the human level. When compute meets biology, it allows incredible things that would have been inconceivable before. >> Thanks, Chris. We are truly at the cusp of a new era for human health.
At Chris' lab in Cambridge, they've developed a programming language and a whole tool chain to design, simulate, and predict the behavior of a biological program. There is immense power in bringing massive computation to biotechnology. Imagine having the ability to accelerate the entire cycle of drug development from design, to test, and manufacturing by applying AI to every stage of this cycle.
This new biotechnology lifecycle, augmented by data and AI, could dramatically increase the speed with which we create new treatments and scale them to more and more diseases. This is our vision. While the platform itself won't cure disease, putting this innovation in the hands of medical professionals will empower them in their quest for discovery. It's this magical combination of human ingenuity with technological innovation that can change the world. I'm deeply inspired by the personal stories and motivations of people who are changing the world for the better every day. I'd like to share the story of Dr.
Ernest Darkoh and Dr. John Sargent, Co-founders of BroadReach Healthcare in Botswana and South Africa, who are re-imagining healthcare through AI to fulfill their shared vision of improving access to healthcare for underserved populations. Let's hear their story. [MUSIC] >> I want to make a positive difference in the lives of others. I spent my formative years in Tanzania and Kenya. You see a lot of suffering, a lot of poverty.
As a 12-year-old, I encountered this gentleman. His leg was swollen to the size of a tree trunk. I remember feeling, I want to help. That has stuck with me until now. I met John at medical school, and we instantly clicked. >> We're cosmic twins.
We share the same mission in life, which is to improve access to healthcare for underserved populations. I worked in Sierra Leone in a refugee camp. It was literally a war zone.
The camp had no running water, no electricity, not enough doctors and medicine. But yet, I could walk outside and I could buy soda, I could buy soap. And I'd wonder, what do these companies know about delivering products that we don't know in healthcare? >> We realized that to change the current healthcare system, you have to become a doctor of systems. >> We imagined healthcare differently. >> We solved the big problems so that people don't need intensive medical care in the first place.
>> AI is the game changer. We can identify clinics that are understaffed so we can appropriately staff them. We can equip a community healthcare worker with information on what she needs to do that day, which patients she needs to see. That's real change. >> When you improve the effectiveness of healthcare delivery, you are rebuilding trust.
>> For HIV, we're using predictive AI to identify which patients are likely to stop taking their medicines. We then reach out to them directly via SMS to re-engage them. We're working on diseases like diabetes, COVID, and cancer. >> We've improved tens of millions of lives.
>> The richest part of this journey has actually been doing it with John. >> My calling in life is to help people and I have been blessed to do that with Ernest. [MUSIC] >> Behind every innovation, is a person with an idea, a dream, or someone who has simply asked, "What if?" It's an honor to share a little slice of our innovation journey with you today.
But truly at its core, innovation isn't about the technology, it's about the people behind it. It's about all of us. It's our imagination, our unique perspectives that give it power. I hope what you heard today encourages you to look at the world a little differently, and sparks ideas that inspire meaningful innovation. We can't wait to see the future you build. Thank you.