Azure Machine Learning and open source: designed for each other (BRK2016)


Hey, hi everyone. Thank you so much for braving it out; I know it was a very tough, long night at Universal, and I appreciate it. I'm David Aronchick, I lead open source machine learning strategy at Azure, and with me is Dan Jeavons from Shell. We're going to be talking about how Azure Machine Learning and open source are two great tastes that taste great together.

If you've looked at ML at all, you've probably seen a slide like this. Everything is going insanely up and to the right. Machine learning and artificial intelligence touch basically every application you have today in some form, whether it's recommenders, filtering, looking for security issues, or identifying things in images, and you're just going to see that grow over time. And Azure is very serious about ML. Oh, sorry, I should say: I'm looking for feedback, so please do rate this talk in your app; I'm supposed to say that at the start and I just realized.

Azure is very serious about ML. Microsoft Research is one of the largest research arms in the world, and Azure is obviously a top cloud. We are making very deep investments in machine learning, whether it's hardware, with our FPGAs deployed to data centers all around the world; specific tasks at better-than-human parity coming out of research; or collective applications like bots that have also met and exceeded human parity.

And the reason we do it the way we do comes down to our tenets, which you see here. We aim to be productive: we want to offer tooling and a platform that is available for everyone at every skill level. But we want to do that and bridge to where you are; enterprises have very deep requirements, and we want to make sure we're able to meet them with this brand-new technology. And we want to make sure it's trusted: this is your data, you should hold on to it, and you should be able to do training and keep it confidential.

Okay, so that's all about Azure. But as you might have seen, machine learning is really about open source software. What you see, generally, is that the biggest and most innovation is happening in open source, and I'm going to detail a few examples right now. It really starts here: every breakthrough, every release. Before there were Rubik's Cube-handling robots and bots that could be confused for human beings, it started here. What you see is just a few of the thousands and thousands of research papers that have come out in the past five, ten, twenty years that have moved the ball forward: deep neural networks, capsule networks, neural machine translation, transfer learning. They all started in research, and once they were published, those papers were released to the world so that other people could build on top of them.

And the way they were built on top of was frequently through commercial organizations like the ones you see here. Folks like Google, Microsoft, and others built technologies for their internal tooling and then took that same thing and released it to the world. Google released TensorFlow, TFX, and BERT; those were big ones. Microsoft released ONNX, gradient boosted trees, and natural language processing work.
I'll talk to you about some of those in a minute, and there's lots, lots more. You have tons of open research that was done first for an individual company or, excuse me, an organization

and then was released to the world. And when it got released, frequently it was released as part of a consortium. You can see it here; I've categorized it into large areas of specific work. Around data science, the Jupyter project, scikit-learn, and pandas are all consortium-driven. For ML engineers looking to bring things to production, you see workflows and platforms like Kubeflow coming along; again, ONNX, which has now been donated to a foundation; and Airflow from the Apache organization as well. Or it might be standards: places where the community gets together and agrees that these are the ways we want to do core functions of machine learning, making it easier for everyone to share and collaborate. And ideally it really is all of the above; when open source works great, it works with all of these components.

So by the end of this I plan to make you all experts in data science, so you can go out and get great new jobs. Quickly, I'm going to walk you through some of the biggest breakthroughs happening right now in natural language processing; the past couple of years have been absolutely transformational. What you might see in open source natural language processing right now is a very complicated sentence like the one you see here. You're thinking: four words, how complicated could it be? "The rabbit quickly hopped." We, as English speakers, understand that "quickly" is modifying "hopped." But a computer doesn't understand that. A computer might start by reading left to right, so it sees "the rabbit quickly": is "quickly" modifying "rabbit"? Or "the turtle slowly crawled": is "slowly" modifying "turtle"? We don't know, and a computer doesn't know that there's anything else coming next; it's just getting a stream. Maybe there's no period; maybe it was poorly punctuated. What do you do next?

So at the beginning of this year, in February, Google released BERT, Bidirectional Encoder Representations from Transformers. What it did was change the way a computer analyzes a sentence. Instead of the computer simply looking word by word, it now looks bidirectionally, and it encodes entire sentences. You see that separator there: as it walks through and gets to "quickly," it sees the entire sentence and says, oh, there's actually more here; I can now encode what makes sense. They use a bunch of other very, very complicated techniques; it's a wonderful paper and I highly encourage you to go look at it. That came from Google. Okay, so that's great: we now have BERT, this is wonderful, off to the races.

Once they published that paper, they came out with a brand-new benchmark called GLUE, a benchmark for natural language understanding, to try to figure out what happened. When BERT was released and when GLUE was released, humans were number one. Now, eight-ish months later, we have seven frameworks, all of which have been open sourced and released by organizations across the spectrum: Google,

Facebook, Alibaba; Microsoft has two up there; all of which are better than human parity today. That's the power of open source: take what you learn and release it to the world. And there are actually another 50 on the leaderboard, and if you're thinking, well, those don't match human performance, that helps too: it brings the research in new directions and encourages people with new thinking. That's the power of open source, and that's how seriously we at Azure believe in open source.

Now, I forgot to say: you should participate too. Oh, and I should also mention that at the end there will be a single slide with all of the links you see in here. If you want to capture them, you can take pictures now, I'm not saying don't, but you don't have to capture every one; if you miss something, don't worry, I'll leave it on the slide at the end. We released a single command to go and train BERT on Azure. It configures everything, and I'll walk through what that looks like; it's in an open source repository right now, so you could take it and run it on your own hardware or any other cloud, or with one command you can spin it up on Azure. Okay, so open source isn't just recommended, it's the law.

So let's do some math: if ML loves open source, and Azure loves ML, then by the transitive property of equality, Azure loves open source. And we do; we really do. Here's how. The first is native support for the state of the art. It is impossible to guess how quickly the industry is moving forward, and we deeply believe that the best and newest technology will not be invented solely by a single company. It is up to us to support all the libraries and frameworks that are out there, and we do. We support all the latest frameworks: PyTorch, including PyTorch 1.3, which just released; TensorFlow 2.0; and you can see brand-new libraries and SDKs as they come out. We also give you a variety of ways of running them. It could be something completely hosted, like Azure ML; it could be our Data Science VM, which spins up an entire notebook ready to go for you; or it could be managing your own large pod of machines. You could spin up a very, very large cluster, a thousand or twenty-four hundred GPUs across twelve thousand cores in a 100-gigabit-connected cluster, and that would be something you manage yourself, though we support that behind the scenes too. And finally, there are Microsoft first-party offerings: as you saw, making it very easy to run BERT on our platforms, but also taking the technology we have and releasing it to the world, and I'll show you some of that too.

So you ask: that sounds very good, but is it hard? No, it is not. I'm going to show you literally the code necessary to do some of these things. You want to train using TensorFlow 2.0 and Azure ML? There you go; that's it, I promise. There is a little bit of setup, I don't want to overstate it; you have to have your workspace configured. But other than that, there you go: you select your framework version, you select whatever packages you want, and presto, you're off to the races. Or maybe you're a data scientist who likes R; we support R as well. There you go, that's how you train with R.
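The slide code itself isn't in the transcript, but a minimal sketch of that TensorFlow 2.0 training submission with the Azure ML Python SDK of the time looks roughly like this; the workspace setup, cluster name, experiment name, and script path are illustrative assumptions:

```python
# A minimal sketch, assuming an already-configured Azure ML workspace
# (the "setup" the talk mentions) and a training script train.py in ./src.
from azureml.core import Workspace, Experiment
from azureml.train.dnn import TensorFlow

ws = Workspace.from_config()              # reads config.json for your workspace

estimator = TensorFlow(
    source_directory="./src",
    entry_script="train.py",
    compute_target="gpu-cluster",         # an existing AmlCompute cluster (illustrative name)
    framework_version="2.0",              # select your framework version
    pip_packages=["pandas"],              # whatever extra packages you want
)

run = Experiment(ws, "my-experiment").submit(estimator)
run.wait_for_completion(show_output=True)
```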
Or maybe you're using ONNX, which is brand new, and the ONNX Runtime: you register your model, you say what your inference configuration is, and you're hosting. That's it. That's how seriously we take this; this is the code necessary to spin it up and run it.
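Again as a hedged sketch rather than the exact slide code, registering an ONNX model and hosting it with the Azure ML SDK looked roughly like this; the model file, service name, scoring script, and environment file are illustrative:

```python
# A rough sketch of register + deploy for an ONNX model on Azure ML.
from azureml.core import Workspace
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Register the ONNX model file with the workspace.
model = Model.register(ws, model_path="model.onnx", model_name="my-onnx-model")

# Say what your inference looks like: a scoring script plus an environment.
inference_config = InferenceConfig(entry_script="score.py", runtime="python",
                                   conda_file="env.yml")

# Host it; Azure Container Instances is the simplest target for a demo.
service = Model.deploy(ws, "my-onnx-service", [model], inference_config,
                       AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1))
service.wait_for_deployment(show_output=True)
```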

And nothing you see here is proprietary. This is using the latest libraries that are out there in the world. We obviously do all the testing and make sure it works, but other than that, it's ready to go.

Okay, so those are the high-level frameworks; let's talk about Horovod. One of the hardest parts about machine learning is that, for very large-scale workloads, you're really going to need distributed training. A single machine is very easy to spin up: you just load your drivers and get going. But once it comes to large clusters, that's where you can take very serious performance hits, because you now have to distribute the learning on every cycle in order to do backpropagation and get all those weights shifted across nodes. What the folks from Uber came out with was Horovod, a brand-new way to distribute training across those clusters. You can see on this slide that the ideal case, the equivalent of training on a single node, is represented by those black outlines, and up to 256 nodes we're virtually equivalent. That's a very, very large cluster, and we still do very well even up to 512 nodes. So what you're looking at here is the latest generation of libraries for these very large-scale workloads, from the folks at Uber.

Well, that sounds awesome; why would you need it? Here you go: we're going to talk about BERT again, and the reason is that it's brand new and it has enormous compute requirements. This is NVIDIA, who just released one of the largest-scale benchmarks against BERT: it requires 8.3 billion parameters to do this training. Not a trivial amount of compute, to say the least. But if you do it in the configuration NVIDIA did, you can train in 47 minutes. That's not too bad; BERT often would take weeks or months. All right, that sounds very complicated; what does it take to spin it up on Azure? Here you go. First, I spin up a dataset: I take that BERT data and put it up there. Second, I create my compute; in this case I'm just going to do 16 nodes. You can see it here, very straightforward: I say this is the machine type, this is how many nodes I want, go create it.
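Pieced together, those steps look roughly like the following with the Azure ML SDK of the era; this is a hedged sketch, not the slide code, and the VM size, cluster name, paths, and process counts are illustrative assumptions:

```python
# A rough sketch of dataset + compute + distributed training on Azure ML.
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.train.dnn import TensorFlow, Mpi

ws = Workspace.from_config()

# First: the dataset. Upload the BERT training data to the workspace datastore.
datastore = ws.get_default_datastore()
datastore.upload(src_dir="./bert_data", target_path="bert_data")
dataset = Dataset.File.from_files((datastore, "bert_data"))

# Second: the compute. Machine type plus how many nodes, then go create it.
config = AmlCompute.provisioning_configuration(vm_size="Standard_NC24rs_v3",
                                               max_nodes=16)
cluster = ComputeTarget.create(ws, "bert-cluster", config)
cluster.wait_for_completion(show_output=True)

# Third: run it. The one line that matters here is distributed_training = MPI.
estimator = TensorFlow(
    source_directory="./src",
    entry_script="train_bert.py",
    compute_target=cluster,
    node_count=16,
    distributed_training=Mpi(process_count_per_node=4),  # "distributed training equals MPI"
    use_gpu=True,
    inputs=[dataset.as_named_input("bert").as_mount()],
)
run = Experiment(ws, "bert-horovod").submit(estimator)
```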

And then I run it. That's it. That's the code. That's how serious we are about open source. Now you're saying, well, where is the Horovod? You see that line there, right in the middle, which says distributed training equals MPI. When we see that, we know you want Horovod, and we're done. No installation, no worrying about drivers, no worrying about whether it's compatible with the underlying OS; behind the scenes, we take care of everything for you, and now you're able to run that 8.3-billion-parameter model on Azure ML. So the idea is this: we understand there's open source out there, and we're not going to go reinvent it. We're not going to create a brand-new distribution framework or a distribution strategy for your weights and parameters. What we are going to do is take the best of open source, use it, support it, and contribute any changes back upstream. And again, I'm going to reiterate: you can take it, you can look at it yourself, nothing here is secret, it's all open source. I get the easy job.

All right, so that's native support for the state of the art; let's talk about something else. If you're using these things on Azure, what you want to do is avoid being locked in. Oftentimes people will have solutions on-prem; they may be doing machine learning in a variety of locations, other clouds, your laptop, whatever it might be. And so you might engage with Azure Machine Learning and ask: am I giving you everything, so that now I can't run it anywhere else? Nope, that is not the case. People look at Azure Machine Learning as this monolithic thing, but that's not it at all. It really is a whole set of layers. You start at that top layer: for every command, every feature of Azure Machine Learning, we offer one of these three things, actually all three of these things. We offer a command-line interface; we offer an SDK, which is now both Python and R; and we offer the Azure ML UI. Behind that you have a common API layer, so if you don't like any of those three, you can write your own client and engage against it. Behind that is every one of the services we offer for Azure Machine Learning, and these aren't even all of them, just an example. And behind that is audit trail and interpretability: if you use one of these services, we give you an audit trail and the ability to do interpretation for free. You don't have to do anything else, it's built into the box, and you can take that audit information with you no matter where you are.

Okay, so that sounds great; what does it look like? Going back to a theme you're going to see a lot: one line. This is running scikit-learn on Azure Machine Learning from the SDK. You can drop this into any Jupyter notebook in the world, running anywhere, as long as it's able to connect back. You include the Azure ML SDK in the notebook and off it goes; it's a freely distributable SDK. Run your estimator. Second, profiling a model. As an example, you can see it here: you do have to upload your model to us, and don't worry, I'll get to that in a second. But once the model is uploaded, you can now profile it, meaning we will run an entire search across a whole configuration of different machine types to tell you what this particular model will perform best on: does it need a certain amount of GPU, a certain amount of CPU, a certain amount of RAM, and so on.
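A hedged sketch of that profiling call, assuming a registered model and a sample-request dataset (later SDK versions take the samples as a dataset, so names and the exact parameter may differ from the slide):

```python
# A rough sketch of Model.profile: Azure ML tries configurations and
# recommends CPU/memory for serving. Names here are illustrative.
from azureml.core import Workspace, Dataset
from azureml.core.model import Model, InferenceConfig

ws = Workspace.from_config()

model = Model(ws, name="my-onnx-model")          # the model you uploaded earlier
inference_config = InferenceConfig(entry_script="score.py", runtime="python",
                                   conda_file="env.yml")
input_data = Dataset.get_by_name(ws, "sample-requests")

profile = Model.profile(ws, "my-model-profile", [model], inference_config,
                        input_dataset=input_data)
profile.wait_for_completion(show_output=True)
print(profile.get_results())                     # recommended CPU and memory
```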
Or take our brand-new data drift capability. Data drift is the idea that you want to watch your data over time, to make sure that the input data you're getting now is no different from the data you trained on. And this is the code to do it, right here. First you get a dataset; again, we need some information in order to generate this data drift analysis, so you upload your data to us. Then we give you a way to describe a baseline; there you can see I'm describing everything before January 1st, 2019 as the baseline. I say which features I want you to watch for drift. And then I say: okay, now that you've done that, I want you to create a data drift detector, and I want that detector to look at the next six months. So we're doing a baseline comparison, not a live comparison: everything I trained on before January 1st versus everything after; was there a significant difference there? That's it, and you get an object back; that object is a rich Python object, and you can use it however you'd like. So again, the idea is that we give you the tools, and we'll give you some services to run those tools, but you can take that information and do whatever you'd like with it. That's what it means to support freedom from lock-in.
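A rough sketch of that flow with the azureml-datadrift package; the dataset name and feature list are illustrative, and it assumes a tabular dataset with a timestamp column so the before/after split works:

```python
# A hedged sketch of baseline data drift monitoring on Azure ML.
from datetime import datetime
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, "sensor-data")   # uploaded earlier

# Describe a baseline: everything before January 1st, 2019.
baseline = dataset.time_before(datetime(2019, 1, 1))
target = dataset.time_after(datetime(2019, 1, 1))

# Create the detector, watching drift on the features we care about.
monitor = DataDriftDetector.create_from_datasets(
    ws, "my-drift-monitor", baseline, target,
    compute_target="cpu-cluster",
    frequency="Week",
    feature_list=["temperature", "pressure"],
)

# Compare the next six months against the baseline; you get a rich object back.
run = monitor.backfill(datetime(2019, 1, 1), datetime(2019, 7, 1))
```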

But let me go even further. Who has heard of probabilistic matrix factorization? Come on, people, everyone knows this. This is the technology behind automated machine learning. What it is, basically: you give us a model type, you give us your dataset, and you give us some features, and then we run a search across an entire array of permutations to determine which model and hyperparameters are best. This is really, really powerful, because, honest to God, this is not something you're going to differentiate on; your ability to figure out that, oh, my box needed to be this big, doesn't benefit anyone. The problem is that with many automated machine learning solutions, you do it there, and now your model is locked in for good. What are you going to do? The cloud, or whatever solution it is, can't run it anywhere else; you're screwed. On Azure, that is not the case. It's the exact same thing you saw there, except with two lines of code you go grab that model and you download it. It's yours; take it wherever you like. Use our services, be free from lock-in; we don't want to stand in the way of you doing great things.

And then finally, it's about bridging enterprise requirements. I'm going to take you through a little story. On one side you have a data scientist; on the other side you have an SRE, an engineer. The data scientist wants to move very quickly. She has figured out the model she wants to test; the model looks great in TensorBoard, no problem. She says, looks great, let's go to production. She checks it into the repo, kicks off the CI/CD pipeline, it rolls out: bad times. And the reason is that, more often than not, there are lots of enterprise requirements that are necessary. You need to make sure you're doing security right; you need to make sure everything is encrypted; you need a trace of what's going on; you need cost controls in place; you need to be able to roll forward and roll back; you need to inform everyone of what's going on; and, most importantly, you need to be able to monitor and alert once that thing is out there. And if you're rolling out arbitrary code, more often than not, that is not the case.

But with Azure and Azure Machine Learning, we offer you a very clean way to get those things for free. Again, the client on the left-hand side engages using the CLI, UI, or SDK we had before, submits to the service, and you're done; you get the rest for free. It appears in a workspace where you set the criteria; it uses Cosmos DB to give you reliability; and within your subscription you're able to add things like Key Vault, storage, your own container registry, whatever it might be. So the idea is that even though it's a brand-new, next-gen library, it runs within an environment that makes sense for you and your enterprise. So now the data scientist does the same thing, drives through our MLOps pipeline, which is automated and very easy to use, rolls out to the cloud: good times.

Again, on a theme: try it yourself. Here we have an example, TensorFlow plus Kubeflow, latest-generation stuff, running with Azure Machine Learning and Azure MLOps, and you can see the link there, by one of our absolute best program managers, Sasha Rosenbaum.
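Backing up for a second to the automated ML escape hatch: the "two lines" for taking that model with you look roughly like this. A hedged sketch; the experiment name and run ID placeholder are illustrative:

```python
# A rough sketch of retrieving an automated ML model so it's yours to keep.
import joblib
from azureml.core import Workspace, Experiment
from azureml.train.automl.run import AutoMLRun

ws = Workspace.from_config()
run = AutoMLRun(Experiment(ws, "automl-experiment"), run_id="AutoML_<run-id>")

best_run, fitted_model = run.get_output()   # line one: grab the best model
joblib.dump(fitted_model, "model.pkl")      # line two: download it, take it anywhere
```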
Now, that's Microsoft saying words; you don't believe me, I'm here to get paid, right? So let's hand it off to someone who's actually gone out and deployed something like this. Let me hand it off to Dan.

Thanks, David, great to be here; thanks for having me. My name is Dan Jeavons. I'm the general manager for data science at Shell, and I wanted to talk to you a little bit about what we call our Shell.ai program, which is the program that we're using to

drive AI technology across our enterprise. But I thought I would start with a little bit of the why: why is Shell interested in AI? Why are we trying to drive this across our enterprise? If you look at Shell's overall purpose, what we're trying to do at the moment is provide more, and cleaner, energy solutions. Both of those are important. If we look at the way our world is constructed, we're heavily dependent on energy, and much of the wealth and prosperity of our global economy is driven by the access we've had to energy, predominantly through a hydrocarbon-based energy supply chain. But of course we recognize that this is a challenge going forward, because we have a growing population, and at the same time we need to move to a situation where we're also trying to clean up the energy we provide: cleaner energy solutions both within the hydrocarbon supply chain and also in things like electrification. It's an exciting purpose we're trying to fulfill as an organization, and one of the levers we have to pull is AI, because AI has the ability to make our existing energy provision much more effective and efficient, but it also has the opportunity to play a really key role in driving electrification, as we look to smarter grids, smarter electric vehicles, and the like.

But within Shell we also have a bit of a problem, which is that we have vast data volumes. Just to expand on that a little and give you a sense of the scale: one of our seismic surveys is between 10 and 30 petabytes; one of our inspections of subsea pipeline is about 7 terabytes; one of our refineries generates about a hundred thousand measurements per minute. Now, this is cool, because it gives us a great opportunity to leverage machine learning on those datasets to make the whole way we do this much, much smarter. And of course, with the cloud, for the first time we're able to democratize access to compute at scale and enable all of our data scientists to develop solutions right across the end-to-end, from exploration through to retail and into the emerging power and new energies businesses.

So what we've tried to do is make this an enterprise-wide program. We're trying to develop a platform, and we're also developing an external narrative, and the reason for that is that we want to partner. We want to partner with large companies like Microsoft

as well as small emerging startups, some of whom I see in the audience, like Caspari. Right across this spectrum we're trying to develop these partnerships, to allow us to scale quickly and achieve our overall purpose. We've also built a community within Shell of about 2,000 people, and we're developing standardized ways of working to allow us to do this consistently and enable data scientists to share code.

But here comes the problem. One of the issues we have is that we're not monolithic. We're not in a world where everything we do is on Azure: we have on-prem, we have cloud, we have things going on in AWS. So the challenge for us is how we leverage all of that; we don't want to be locked in, and we want to stay very close to the open source community. There are two reasons for that. One is the innovation, which David spoke very well about. But it's also, for us, about enabling the researchers and data scientists working in this domain to use the toolchain they're most comfortable with, and also to move things to production quickly.

So we developed an integrated strategy which takes advantage of things like Azure ML but also takes advantage of other frameworks, and I'll talk briefly about this. We developed what we call the Shell.ai self-service platform. This is all about enabling citizen data scientists to rapidly deploy models into production. We want them to be able to take a piece of code written in R and deploy it as an app tomorrow, making it available to their end users, because actually a lot of innovation happens closest to where the problem happens, which means on the sites, in the facilities, in the grids, and so on. We want to do that, but we also want to be able to train models at scale, and to do that we want to use some of the frameworks David was talking about: we want to use TensorFlow and Kubeflow, and we want to run them on standard cloud platforms like Azure. So we've developed what we call the Shell.ai professional framework, which allows our developers to take advantage of that. And then, finally, we also need to develop end-to-end software applications that we can globally deploy based on machine learning, in particular in our asset domain, with solutions like predictive maintenance. We've got to be able to manage thousands of machine learning models in production, so we've been working with C3 to help us do that, rolling out solutions like predicting failures on valves and compressors right across our organization; we're currently live in around 23 different assets. And obviously, underpinning all of that are the data sources we want to bring together.

But let me drill down on this a little bit and talk about the Shell.ai professional framework, which is really where I wanted to go with this. What we've tried to do is take advantage of the base functionality that Azure provides, but we've also tried to build on top of that, and what's been very exciting is that we've been working with Microsoft to co-develop these things: to make sure that Azure avoids the lock-in, stays tight to these open source frameworks, and enables us to move quickly, while also supporting portability between our on-prem environment and the Azure environment using the same frameworks, based on Kubernetes and open source. And what we're doing as well is trying to help Microsoft, as they develop these frameworks, to bake them into the Azure product, which means we don't have to maintain these things long-term. So the approach is to leverage the Azure ML frameworks where it makes sense, where we can avoid lock-in, and where we can have that portability.

I thought I'd give you a little example of what this looks like in practice. This is a piece of work we did together with Microsoft looking at Kubeflow, and in particular a piece of functionality called Kubeflow Fairing. What I want to show you is a little demo that hopefully is going to come up on the screen here. This is an XGBoost model, and we're running it in a Jupyter notebook. What you can see is some pretty standard data science code that a typical data scientist would develop in a Jupyter notebook to build out an XGBoost model. What's cool, though, is at the end of this code: with just a couple of lines, we're now using Kubeflow Fairing to containerize and publish that model into a Kubernetes-based environment and then to train it, very easily, from within the Jupyter notebook, right there for the data scientist; a rough sketch of those lines follows below. So the idea, just going back to David's slide, is that we're trying to shorten the distance between the data scientist, who's rapidly prototyping new ideas in Jupyter, and the ability to test that and then productionize it very quickly through containerization with Fairing.

So, in summary: we're on this enterprise-scale transformation around AI. We're serious about it, we're trying to use it to drive our purpose, and we're very excited about it. But we're also trying to work with Microsoft to ensure that we stay very close to this open source community, which is where the innovation is coming from; to make sure that we avoid those lock-ins and can move between environments; but also to take advantage of Microsoft doing the work for us as they start to help us productionize. So, David, back to you.
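The "couple of lines" from that Fairing demo look roughly like this; a hedged sketch assuming the kubeflow-fairing package, with an illustrative registry, base image, and training function rather than the actual demo code:

```python
# A rough sketch of the Fairing pattern: wrap a normal training function,
# containerize it, and run it on a Kubernetes cluster from the notebook.
from kubeflow import fairing

DOCKER_REGISTRY = "myregistry.azurecr.io/fairing"   # e.g. an ACR registry (illustrative)

def train():
    # ...the standard XGBoost data science code from the notebook:
    # load data, fit the model, print metrics, etc.
    import xgboost as xgb  # noqa: F401

# Build a container image from the notebook environment and run it as a job.
fairing.config.set_builder("append",
                           base_image="python:3.7",
                           registry=DOCKER_REGISTRY)
fairing.config.set_deployer("job")

remote_train = fairing.config.fn(train)  # containerize + publish + train remotely
remote_train()
```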
That we just invested, in and announced was around as your art and that was about how do we make it easy for you to take your, workloads, and run them anywhere and, the. Azure arc idea is we. Understand. Things, are complicated, you can't move everything to the cloud tomorrow and there are very serious reasons, why maybe. It's around governance or data issues, maybe. You already have an existing infrastructure, in place that you want to reuse, maybe. Your data is in some place that it doesn't make sense to move around over. Slow bandwidth or maybe. It's just extremely costly, if, you can do your training, right next to that but, still get the scalability. And performance and, API is that azure provides that, would be ideal and, that's exactly what we offer so, here is an architecture, in just a description, of, arc. You, can see there on the left hand side you, have an API that you engage with maybe, it's the UI portal.

the CLI, whatever it might be. It communicates with Azure and our control plane and gets you all the necessary enterprise requirements I talked about earlier: RBAC, policy, and so on. But then it communicates with that on-prem cluster through an agent that you run on-premises on that Kubernetes cluster, and just that very low-bandwidth, low-latency connection back and forth, keeping everything in sync, makes this incredibly easy to do. So here's what this will look like. You take your Azure Machine Learning experiment and hand it to that CLI, API, or SDK, just the way you would as though you were running it on our cluster, and that goes and begins the work; but as you do that, you identify the compute cluster where you want to run it as your on-premises cluster. That's it. For your data scientists and your ML engineers, no workflow changes are required; it works just like this. And if you're interested in this, we're in very early days right now; please come talk to me either now or afterward, and we can talk about getting you ramped up on it.

Okay. So, as we said, Azure loves open source. We offer you native support for the state of the art, we help you with freedom from lock-in, and we help you by bridging the enterprise requirements necessary to meet your customers' needs. But that's only part of open source; open source is also about participation, and Microsoft participates in three big ways. The first is first-party releases of our own work, and I'll get to that in a second. The second is upstream contribution: where there are already platforms, tools, and technologies, we don't want to reinvent them; we want to go work on them and give that work back to everyone. And third, when there isn't an existing project but we see a need, we'll go out and start a project, but do it in the open source way so that everyone can participate as well.

So, Microsoft first-party open sourcing: this is something we're really, really proud of. Microsoft has one of the largest internal needs for machine learning of anyone in the world, but we don't want to just sit on it; we don't consider that to be our competitive differentiation. We want to give it away and make sure other people can leverage it. For example, Bing uses ranking algorithms and its own NLP; we've open sourced those and released them to the world, and you can use them. Microsoft Translator just released twelve new languages, particularly with a focus on South and Southeast Asian languages that have been underserved. When it comes to BERT and other NLP frameworks, we've released an entire set of recipes. Windows and Office have some of the largest inference requirements in the world: if you have a Surface and you open it and it recognizes your face, that's actually running ONNX and the ONNX Runtime behind the scenes, with extremely low latency and extremely high performance requirements; we've released those as well, the exact same version we use internally. And this is a great one that I don't think a lot of people know about, but it's really powerful: Azure Open Datasets. Frequently you'll have a model and you'll say, hey, I'd like to train my model on my data, but I'd like to layer on something like socioeconomic data, or maybe weather data, or satellite
imagery, stuff that's much more generic. With Azure Open Datasets you're able to add those features in, using our hosted datasets, when you're running on Azure, and that is all open source as well. So now, when you train that model, you can add in those features very trivially and make your model smarter without having to go out and buy some very expensive dataset.
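As a small sketch of what layering in a hosted dataset looks like, here is the azureml-opendatasets package pulling NOAA weather data; the date range is illustrative, and how you join it onto your own training data is up to you:

```python
# A minimal sketch of enriching training data with an Azure Open Dataset.
from datetime import datetime
from azureml.opendatasets import NoaaIsdWeather

weather = NoaaIsdWeather(start_date=datetime(2019, 1, 1),
                         end_date=datetime(2019, 1, 31))
weather_df = weather.to_pandas_dataframe()

# Join these weather features onto your own data by time/location,
# then train as usual; no expensive third-party data purchase needed.
print(weather_df.head())
```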

We also do upstream contributions. As I said, we're very serious about contributing to other frameworks: TensorFlow, PyTorch, Jupyter, you name it. We have two times as many upstream pull requests as we had just two years ago, and you can see them here; there are obviously far, far too many for me to list, but we are very committed to working in other communities and helping those communities work better, whether or not it's on Azure.

But finally, as I said, I want to touch on a couple of new projects we introduced, and I'll give you a little bit of the reason why. The first is ONNX. At Microsoft we ran into a big problem, because so many data scientists come in with knowledge of their own tools, and we really didn't want to retrain them and force them to work in something they weren't familiar with. They might come in with TensorFlow knowledge, or R, or PyTorch, or Chainer, you name it. But that was putting a big weight on our SREs and our ML engineers, because it meant there was this enormous test matrix: you might have this Chainer model, and it doesn't run on the latest version of CUDA, or on this particular GPU, or on this particular iOS device. It was incredibly painful for us. So what we did was partner with Facebook, Amazon, and others to develop a brand-new intermediate representation. Instead of our SREs and ML engineers having to support all these different frameworks in production, we're able to condense those frameworks down, using converters, into a single intermediate representation, and then build out a runtime that we know how to support in production and make sure it runs phenomenally well. What you see here is that we have 30 different converters for moving between ML frameworks and ONNX. We see up to a 14x gain in some workloads, with an average of a 5x gain; that's not 5 percent, that's a 500 percent gain, when converting from the underlying framework into ONNX and the ONNX Runtime. And we have more than a billion devices today that already run ONNX Runtime. So that's ONNX. We built it because we knew we needed it, but we also gave it away because we knew the industry needed it. There are so many IoT devices out there, and I'm much less concerned with the existing billion and much more concerned with the next ten billion or hundred billion devices, and making sure they're able to run ML models in a performant way.
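To make the converter-plus-runtime flow concrete, here is a minimal sketch using one of those converters (scikit-learn via skl2onnx) and scoring with ONNX Runtime; the model choice and input shape are illustrative:

```python
# A minimal sketch: convert a framework-specific model to the ONNX
# intermediate representation, then serve it with one common runtime.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier().fit(X, y)

# One of the ~30 converters: scikit-learn -> ONNX.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))])

# One runtime to support in production, regardless of the source framework.
session = ort.InferenceSession(onnx_model.SerializeToString())
preds = session.run(None, {"input": X[:3].astype(np.float32)})[0]
print(preds)
```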
The second thing I want to talk about, which we just released, is the Interpretability SDK. Who's heard about machine learning interpretability and explainability issues? The basic idea is that models are great, but they condense down into a black box, and you really don't know what's going on behind the scenes. That can cause very serious problems: bias issues, pernicious bugs, things like that, which make it very challenging to be sure you're doing your best on behalf of your customers. Now, there are a lot of things out there to help address this, but very frequently they have different contracts and SDKs, and you can see just a few of them here; there are great open source technologies around this. What we wanted to do was bring that to developers and engineers and make it better. So we built the Interpretability SDK, open sourced it, and released it to the world. What it does is provide a single layer on top of a variety of different explainers and other tools you can use to understand exactly what's going on with your model. What that means is that, using one SDK, with a very standard contract and a very standard signature, I can run a tabular explainer, just as you see here; or I can run a mimic explainer, just as you see here, where I have my dataset and I describe what I want to use; or I could say a PFI explainer. You don't need to know what those are; just know that we're using open source technology under the hood that has been vetted and developed by the best universities, researchers, and commercial entities in the world, and all you're doing is using a very simple SDK over the top to generate these explanations. And from those explanations you can get visualizations, just like you see here, where you're able to see exactly what's going on, slice your data, look at particular dimensions and domains, and make sure you're not building a biased model. So that's the Interpretability SDK, and, sorry to come back to a theme, we've given that away as well.
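A hedged sketch of that single-contract pattern using the interpret-community package that backs the SDK; the model and dataset are illustrative:

```python
# A rough sketch: one standard signature across explainers. Swapping in
# MimicExplainer or PFIExplainer changes the class (and a few arguments),
# not the overall contract.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.ext.blackbox import TabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target)
model = RandomForestClassifier().fit(X_train, y_train)

explainer = TabularExplainer(model, X_train, features=data.feature_names)

global_explanation = explainer.explain_global(X_test)
print(global_explanation.get_feature_importance_dict())
```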

So, in summary, we support open source in three ways. First, we support it natively, through Azure ML and our SDKs. Second, we release Microsoft tooling: techniques, models, everything we possibly can. And third, we contribute upstream, whether that's a new project or an existing one; we want to make open source better, and we want the tide to raise all boats. That's what we mean when we say Microsoft loves open source.

You can try all of this for free. Everything you saw here is available now, today, except for the Arc stuff, and for that, just come talk to me. We have a free tier where you can go use this: you get free credit, or there's a permanently free tier, and of course we have all our docs. And, as promised, here is every link you saw in the deck. Go use them. Tell me what's broken, tell me how to make it better, tell me how to take Azure and bring it to you. And with that, I'm done. Thank you. Any questions? Come on. Okay, come up after if you're afraid. Thank y'all.

Hello, hello. Yeah, maybe a question that could interest other people: we're interested in deploying this kind of model on the edge, in flight, to control things, let's say, with cameras or sensors, whatever, and open source, obviously. Have you got some use cases like this, and the material needed, or partnerships? For sure; even on Tuesday we're going to talk about that with the flight, airplane folks. The aim is to deploy a very efficient ML model and redeploy it; obviously we have our DevOps. We're already good in AI; we want to know how we could use that.

Of course. So there's a lot there. The question, let me see if I got it right: you want to deploy models to flight scenarios where you theoretically have low latency. Is it just for training, or inference, or both? Sorry, are you trying to get a score from the model, or are you looking to do localized training, meaning, in the plane itself, do you want to train new models? Okay. So we work with a number of IoT deployment mechanisms and things like that; my recommendation is to come talk to me afterward, because there's a lot of stuff there. I would say that distributing

your model is a very hard thing. Even if you use ONNX and the ONNX Runtime, reliably distributing that model, maintaining versioning, understanding exactly what version is out there, doing clean upgrades, and handling what happens if an upgrade fails is very, very complicated. I highly recommend using a partner, or not necessarily a partner, but some solution that understands the difficulty in that: what it means to have an agent that sits there, and those kinds of things, when it comes to rolling out. But the scenario you've described is precisely what ONNX and the ONNX Runtime were designed for: to make it easy for you not to care about the specific hardware profile; let us and ONNX take care of that. Getting that model out there is the very subtle part, and I would just encourage you to think it through very thoroughly and, ideally, use a software solution to accomplish it rather than writing your own. Did that help?

Okay, what else? Any other questions? Thirty seconds. How could we engage you on this topic, for example? Just mail me; sorry, my email is not up there. It's just my first name, dot, my last name, at Microsoft; you'll see it on the first slide. Okay.

