Lessons learned from implementing real-world machine learning solutions: Common | BRK1007



All right. I'm going to spend a few minutes giving an overview of Azure ML, just for context. How many of you have gone to sessions this week on Azure ML? Many. Most. Good, so I'll just zip through it to get everyone on the same page. Then we'll talk about common patterns that we see customers use for solving business problems with machine learning. I have a few examples of customers using ML that I want to share with you, and how they've broken the problems down. And then we'll talk about the people, process, and platform, and lessons in all of those that help machine learning projects become successful. So, Azure Machine Learning is part of the Azure AI portfolio of solutions, and machine learning is clearly one of the solution areas. We have AI apps and agents, which are pre-built models and the bot framework that help you build AI-enabled applications. And then there's knowledge mining, which is the ability to use AI to understand the semantic meaning inside documents and get value out of them. There are a lot of sessions on each one of those, so I'm going to talk about machine learning today. In machine learning we have four major investment principles, or main areas. The first one is being productive for all skill levels. We think it's super important, as enterprises go on their ML journey, that they have the right tools and the right infrastructure to build ML and become successful. Not every company has a plethora of PhDs who have done R or Python and are able to build models using code, so we have a code-first experience in Azure Machine Learning, and we also have a no-code experience using a visual drag-and-drop experience.
And then we have a wizard experience, based on automated machine learning, that allows someone who deeply understands the business and the data to build models without deeply understanding machine learning. So we have tools for all three skill levels and preferences, and all of that is built on a platform that lets you manage all of your machine learning assets so that you can put those models into production. The next principle is delivering industry-leading ML operations. This is our idea of making sure that the models built by your data scientists can be operationalized and run in production with the rest of your business processes and applications. The third, super important principle that we work on is being open and interoperable. This matters because so much of AI innovation is happening in the open, and so as a design principle we make sure that anything available in the open, say a paper that a data scientist wants to use, can be easily brought, along with its code, into Azure Machine Learning for experimentation. We also support the full R and Python ecosystem, which basically covers 95% of all machine learning at this point. And when we, as Microsoft, have innovations to share, we share them: all the algorithmic innovation in the open, as well as things like our contributions to PyTorch, Horovod, Kubeflow, and Kubernetes. We make sure that all of the work we do in AI shows up in the open and is then usable inside Azure Machine Learning. And the last one is trusted. We think of trust at three layers. The first is the trusted platform, and that's Azure, so you get all of the trust that comes from security, privacy, regional availability, and compliance.
We also think about trust of the process, meaning the process that produced your machine learning model. How do you make sure we capture all of that? How do we keep track of the data, the lineage, the versions, who gets to approve? All of that is tracked as well. And then finally, a new area for us is trusted models: the models you are building, are they any good, are they fair, do they have bias, can you explain what the model is doing? This is a new area for a lot of companies, and they're already interested in it, so we have a bunch of tools that let you start working through and understanding your models.

So, in a nutshell, Azure Machine Learning has three pieces of news from this Ignite week. We're introducing Enterprise Edition, which is an enterprise-ready set of capabilities that includes the visual designer, the automated machine learning UI, and a bunch of high-end enterprise readiness features. We have a single data science experience, the new studio, that brings together all of the tools and assets a data scientist needs to succeed. And we're also announcing R support, so that you can capture the rest of the data science ecosystem that we did not cover with Python. There's a blog post with all the gory details of what we have released; we have literally hundreds of features that we've announced, so take a look at that to understand the full breadth of capabilities.

Just to finish up, at a high level Azure Machine Learning is in four or five layers. We have the experience layer, which is what we just talked about: there's an SDK, notebooks, drag-and-drop, and the wizard. We have an MLOps governance layer, which is about being reproducible and automatable, with integration with GitHub and the CLI; the REST layer allows everything to be automated. Then we have four major ML services underneath. There are datasets: how do you project data into a machine learning workspace so that a data scientist has the flexibility to do data science while staying compliant with all the governance policies of the company. Training, which is the inner loop and the outer loop for training models. The model registry, our single source of truth for holding and versioning your models. And then inferencing, which is putting these models into production, both online in the cloud and on IoT devices, as well as in batch. All of this works on an abstraction over cloud compute as well as IoT Edge, so we've integrated both of those systems, and we take advantage of the best AI hardware available in the cloud and in any IoT device. This is not an architecture diagram, I'm always told, but it gives you a good idea of the layers and the core capabilities of the product.

All right, with that I'm going to shift from the overview to patterns. I read this article by Cognilytica, and they've done a really good job analyzing just hundreds and hundreds of customers and the use cases for ML.
It turns out almost ninety-five percent of the time you can put them into one of these seven patterns, and that's really helpful: when you start looking at your business problems through this lens, you can start seeing how to put machine learning into practice for your business. Let me take a couple of examples. Hyper-personalization is the ability to use machine learning to really deeply understand your customer. If you're doing ads, like we do, you can do ad targeting: you really understand the customer's behavior and you understand what ads to show them. It could be a retailer recommending the next piece of clothing you'd want to have, or a personalizer that tailors the experience of your dashboard. In each of these cases, what you're trying to do is use the data and the user behavior to better match the customer where they are, so you start giving a very high-quality experience to the customer and therefore increasing the lift in your business.

Another common example that we often talk about is predictive analytics. This is when you're using data from the past to predict the future. That could be forecasting sales or predictive maintenance; there's a ton of problems where you can just look at what you're already analyzing and ask: how can I play it forward, how can I look at it in the future? And then another example is patterns and anomalies. Here you're saying: I have some knowledge of how the past behaves, and if I look at any data point in the present I can say whether it's anomalous. So if you're looking at your sales and you find something suddenly drops off, you want to get alerted. It could be a signal from an IoT device: if something goes wrong, you're able to detect it very quickly. So these are common patterns.

I've talked about them in the abstract; now I'm going to give a few examples from customers who have looked at their business problems through this lens, and deep dive into a few of them.

All right. Schneider Electric is a company that builds pumps for oil rigs. Their problem was: if a pump breaks, these rigs are in all kinds of places, it's really hard to fix, and it means very high downtime for their customers. So they wanted to do predictive maintenance, and they've broken the problem down into four major parts. First, they figured out what data they need: they're ingesting the data from the pump itself, displacement and load measurements.
Their sensors sit on the pump and on the rod that's going up and down, they stream all this data to Azure, and then they train their model with that time-series data. What they did in this particular case is use automated machine learning, and what automated machine learning found was an ensemble model of a support vector machine and a logistic regression. Sorry, a random forest, sorry. So they pull that together, and that's their core training model. Then they deploy it, through our IoT Edge integration, to an IoT gateway. What they have now is a custom application, their Schneider application that runs on site, which calls this edge device with a web service call, and for basically every rotation of the motor a prediction is created that says whether it is likely to break or not. This gives the operator really great insight into where all of their devices stand.
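As an aside, the "ensemble" idea that automated ML landed on here can be sketched in a few lines. The following is a toy, pure-Python illustration of majority voting over simple threshold rules, not Schneider's actual model or anything the Azure AutoML service produces; the feature names and thresholds are entirely made up.

```python
# Toy sketch of an ensemble "will this pump fail?" classifier.
# Each base model is a simple threshold rule on one sensor reading;
# the ensemble takes a majority vote, the way a voting ensemble
# (or, loosely, a forest of decision stumps) combines weak learners.

def high_load(reading):          # hypothetical rule 1
    return reading["load"] > 0.8

def wide_displacement(reading):  # hypothetical rule 2
    return reading["displacement"] > 0.7

def erratic_cycle(reading):      # hypothetical rule 3
    return reading["cycle_time_var"] > 0.5

BASE_MODELS = [high_load, wide_displacement, erratic_cycle]

def predict_failure(reading):
    """Majority vote of the base rules: True means 'likely to fail'."""
    votes = sum(model(reading) for model in BASE_MODELS)
    return votes >= 2  # at least 2 of the 3 rules must fire

# One prediction per pump rotation, as in the scenario above:
healthy = {"load": 0.3, "displacement": 0.4, "cycle_time_var": 0.1}
failing = {"load": 0.9, "displacement": 0.85, "cycle_time_var": 0.2}
print(predict_failure(healthy))  # False
print(predict_failure(failing))  # True
```

A real ensemble learns its rules and weights from the training data, of course; the point here is only the voting structure that combines several weak signals into one more robust prediction.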

One of the things you'll see in this pattern is that it's a complicated problem, but once you start breaking it down into individual pieces, it starts becoming a little bit clearer. The other thing we've noticed is that we often talk about ML as training and inferencing, but there's a lot to be said about consumption: you want to make sure that the ML prediction is consumed in the most natural way in your existing business process or business applications. So as you think about your projects for putting machine learning in, we've found this to be a very good way to break the problem down.

I'll take another example from a totally different industry, but again you'll see what's common about them. This is Newcrest Mining, a mining company in Australia with giant open-pit mines. The problem they had: as the trucks come up from the bottom of the mine, and these are huge trucks with tires several feet high, they feed three crushers at the top level, and sometimes these crushers jam up. There might be metal in them, there might be a stone or material that is too big, or the crusher itself is jamming, something going wrong inside. So what they wanted is to predict when a crusher is going to jam, which would slow the processing of their material, and then reroute the trucks that are already coming up the mine to one of the other crushers. That way they can keep their material moving within an SLA.

If you look at that problem, it's kind of cool, and they have something very simple: a camera placed in every crusher. It just looks at images to say how much volume is going through the system; it can detect metal and it detects the size of objects, as it does object detection, and then it starts triggering an alert. They use DNN models for this training. And as the mine changes, for instance if they go deeper and the color of the material changes, they do a retraining, because they find the images have drifted from the training data. Then they do inferencing, and here the inferencing is actually in the cloud, so there's an endpoint in the cloud, with consumption again from the application and process that they have. This has been very successful for them in improving the flow of their material, and it's a good example of machine learning working.

Next I want to give one more example, and to do that I want to invite Leica Varanasi from KPMG. She's a senior manager there, and we're going to talk about their use case for recommendations. Welcome. Thank you. Can you describe what you do? I'm a solution architect at KPMG, and I currently drive the cloud strategy in KPMG's tax practice for data science and data engineering excellence. And tell us how you've been using ML in the tax area. So, KPMG deals with a lot of corporate clients, and tax touches every aspect of their business transactions, so there's a lot of data. We have these highly trained tax professionals who understand tax law and have deep expertise in going through the tax regulations and applying them to all these transactions, but most of the time they find themselves focusing on identifying the hidden data in those transactions, and it leaves them little time to actually deliver value-added tax services to our clients. So our group comes in and expedites this by adding technology and helping move things along faster.

Awesome. So I know that you're a very demanding customer of Azure ML; tell us how you use Azure ML and why. One of my favorite features is Azure ML compute. It allows our data scientists to have these pre-configured ML compute targets deployed in our private network. As the data scientists work on these workloads, a lot of times they look at the data and say, okay, I can probably use this particular algorithm and a CPU cluster is good enough. But as new data comes in, they see a need to change to a different type of algorithm, say they want to do deep learning, and they suddenly realize they need a GPU. If it were on-prem, we would typically go through a change management process, create a GPU cluster, and then work through that. But here, because we already have it pre-configured and it doesn't cost us anything when idle, we're just switching, without really changing the configuration or going through a change management process. That is a huge advancement for us. The ability to run and submit experiments without having to be explicitly logged into the compute is great. Cost management, because nothing costs anything if we just have those pre-configured targets out there. And also the flexibility to use any algorithm, Python algorithms, scikit-learn, or any other deep learning algorithm, without somebody dictating what to use, is great. Wow, so you're using all parts of the Azure Machine Learning platform. But I've heard from a lot of customers that the platform is only one part of enabling ML; there are a lot of other problems and issues to really think through. Can you share your journey on how that happened? When we embarked on our data engineering and data science journey, it was definitely not the norm that IT implementers would actually work with the production data.
It was definitely two different silos. But as the technology kept advancing, we started working with our information security group and our tax risk management group to build that awareness, and the lines started diminishing. We're now treated as a single group with a varied skill set, both business and IT working together toward a single goal, to solve the problem and move forward. So yes, that was definitely a journey, and we also had huge support from our leadership in moving forward. That's great to hear. Well, thank you so much for being a great customer of ours and for pushing us to build a better platform. Thank you for having me here. Thank you.

All right. As she mentioned, there are a lot of issues that are not in the platform, and that's what we've seen time and time again in our deployments: there are other concerns you have to think through. We talk about people and processes as the two big items, so here are some thoughts we've had on how to recognize

the issues, and some ways you might want to think about them going forward.

First, there are multiple personas involved. One of the analyst firms talks about this as "data science is a team sport", and you saw Sri talk about working as a team as well. We talk about data engineering as a practice that has developed over the last five years to build your data lake and feed your BI and analytics engines. You've introduced data scientists, and you've always had software engineers, but now there's also this idea of an ML engineer, someone aware of the concerns of taking ML to production, for instance taking Python code to production or converting a deep learning model and optimizing it for your hardware. So we have these three roles, and it is IT in the center that's really orchestrating and enabling all of this work. And you notice some interesting things about these personas: while they're all well intentioned, sometimes they can be in conflict.

First, the data engineers. Their focus is on developing the single source of truth; that's what they do. They curate the data sets, they add business logic, they work closely with the business to clean data, to remove dirty data, and so on. They're also concerned all the time about reliability: they're running 24/7 pipelines that need to be always correct so the BI systems downstream can consume them. This is great; they have a set of practices.

Now, the data scientists are looking at the data from a very different perspective. They want all the data; they want raw data, they want dirty data, because they want to look at everything; they're always looking for some signal, anything that will move the model and move the metric. They have less concern about 100% accuracy; in fact, in machine learning, one hundred percent accuracy is not that interesting, and some noise actually helps make a better model. What they are most concerned about is agility. You heard how the data scientists are really happy that they can just keep iterating, and when they needed a GPU-enabled algorithm they could just have it, because otherwise their speed goes down. Their core work is experimentation; they keep trying things.

I marked these things in green because they all seem like really good things, but they can sometimes be in conflict: you can't be both reliable and agile without some thought process around it. Then you have the ML engineer, who has a totally different set of concerns. They're interested in putting the model into production in a live system; as we saw with Newcrest, the engineer is putting a model onto an IoT device that's running on site. They are really careful about what they ship, so they actually take the model and modify it, they optimize it, they modify the application, and they're super concerned about performance and operational metrics. So now you have another persona looking at a different set of problems. And then finally you have IT, who is supposed to enable all of this work, to trade off between speed and safety and meet all these requirements. The lesson we've learned is that you can't really be successful unless you realize that all these folks have good reasons to want what they want, because that's how they succeed in their profession, but those wants are sometimes in conflict, so you have to start thinking through how you're going to negotiate and how you're going to engage these teams. In the last six months I've probably met a hundred customers one-on-one, and they often say things like: the data engineer says the data scientists are crazy, they want all the raw data or all the production data. How many of you have heard that? Some. The reverse is the data scientists saying the data engineers are really slow, they're not giving me everything I need, because I want to get back to the raw data. The thing is, they're all trying to do their jobs. They're not crazy, and they're not slow; they just have different constraints that they're working toward. One of the things we always talk about, and a place we've seen success, is IT really enabling that conversation and creating a set of rules and governance around how to make this journey happen. It's a critically important activity you have to do to move forward. Many, many times we've heard that the data scientist has built their model and shown uplift, and then 60 or 70 percent of models never go into production, because they run into trouble with the next two or three parts of the system. So it's really important to understand the people, and often these personas are also encapsulated as organizations.

The organizational element is also an issue. So what we always say is: be aware of this, and start tackling it head-on, otherwise you're not going to make this successful. Let me get a sip of water here.

All right, the next part is the process. You have these three groups with different constraints, different aspirations, and different goals they're trying to optimize, and you recognize that they have different needs. So let's go through what they're doing. First, the data engineers: they're looking at the data lake, they're getting the data in through ingestion, they're running pipelines to create curated, clean data sets, and they're putting them in a catalog. Awesome. Then you probably have engineers, app devs, already working on building great applications. They're adopting DevOps, they're creating their release pipelines, they're building their applications, validating them, and doing continuous integration and continuous delivery. That's another part of your organization. Great. Then you have the data scientists, also doing their own thing. I often hear stories about the data scientists doing their experimentation, which is their inner loop: they get some data, or some small data set, onto their laptop, they iterate, they show their manager how great the solution is, and then the next step is "I want production data, I want all the data, because otherwise I can't build the model", and then they run into trouble with the rest of the system.

So now what do you do? What we say is you have to think through each one of the lines that interconnects these groups; you have to think about them as organizational processes and policies. First you've got to say: Mr. Data Scientist, let's work with the data engineer to figure out what data you need, and get the data you need in the right way. The right way may be giving raw data, it may be giving production data, but the idea is that it is agreed upon between the teams: how are you going to get the data there, what are the policies and governance, the RBAC, who gets to see the data, who gets to use the data. That little line actually involves a lot of work, because you're going to run afoul of a number of things. The data sets you've been making for BI may be completely wrong for AI, and we've had that problem in spades at Microsoft: we built great data sets for Office for BI, and then when they wanted to do a bunch of AI work we just didn't have the signal, because we had cleaned all that stuff out. The BI user just didn't need it, we thought it was all noise and removed it, so we had to start having a new conversation about an ML data set that is different from the BI data set. So you've got to have that conversation and do some new work to get the data in. You've also got to help the data scientists start using GitHub or something like it: all the code you have can't live on your laptop, it has to be in source control. We use Source Depot, our internal system, but you've got to use something like GitHub, have versions, and make sure all the code is available and reproducible. Then the next step: once you build the model, you can't hand over a pickle file on a flash drive, which has happened before, or some equivalent of that. You've got to put all your models in a place that is a company asset, that is versioned, that has metadata. So you've got to get the models into the model registry.
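To make the model-registry idea concrete, here is a minimal, hypothetical in-memory sketch of what a registry gives you over a pickle file on a flash drive: automatic versioning plus queryable metadata. The class and method names here are invented for illustration; the real Azure ML model registry has its own API, and this only shows the concept.

```python
# Minimal sketch of a model registry: every registration gets an
# auto-incremented version and carries metadata, so models become
# versioned company assets instead of files on a flash drive.

class ModelRegistry:
    def __init__(self):
        self._models = {}  # name -> list of version records

    def register(self, name, artifact, **metadata):
        """Store a model artifact under a name; returns its new version."""
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version,
                         "artifact": artifact,
                         "metadata": metadata})
        return version

    def get(self, name, version=None):
        """Fetch a specific version, or the latest if none is given."""
        versions = self._models[name]
        if version is None:
            return versions[-1]
        return versions[version - 1]

registry = ModelRegistry()
registry.register("pump-failure", b"model-bytes-v1",
                  framework="sklearn", auc=0.91)
v2 = registry.register("pump-failure", b"model-bytes-v2",
                       framework="sklearn", auc=0.93)
print(v2)                                       # 2
print(registry.get("pump-failure")["version"])  # 2
```

The key property is that every model is addressable by name and version with its metadata attached, so release processes downstream can ask "which version is deployed, and how good was it?" instead of guessing.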
Again, some of these changes will be difficult for data scientists, because they're going to say: hey, it's really slow for me to do all this extra work. So you have to start the conversation that says: look, if you want to put models into production, there is a set of tools and practices you have to adopt to make sure your model gets there. Then you invoke the ML engineer persona, which may be a software engineer or your IT, to start thinking about what you're going to do with ML, and you define your release processes. When a model comes up and you get a new version of it, you need to be able to say: how am I going to package the model, how am I going to optimize the model, how am I going to validate the model, approve it, deploy it? All of these now have to be done for models, as opposed to just code, and there's a bunch of skill involved.

Often the data scientist will be very much a participant in this exercise, because they understand the intent of the model, while the ML engineer spends the time doing the operational and technical work to make sure it's optimized for the endpoint. Sometimes you'll deploy it as a real-time endpoint, sometimes as a batch job, sometimes you might take it as a container and put it inside your application, and sometimes you might just take the resulting code and compile it into your other application. That's the work the ML engineer will be doing. Then you have to make sure that the model, as it's running, is collecting good data, for both the operational part and the machine learning part. You have to collect metrics like performance, memory, and size for operations, but you also have to collect data for ML: what you predicted, what data you're getting as input, what your error is, and the feedback on each prediction, whether it was a good prediction or a bad one, which comes from the application. You collect all of that and put it back into the data lake, and since you've talked to your data engineer, you're building the feedback loop. So you not only have the data for training, you also have the feedback loop from the model in production, and now suddenly you have this golden process: I have the data going in, I have training, I have great production release management, I get the feedback, and I'm good to go. Establishing this takes a bunch of time, and each line is a conversation, each line is a set of decisions you're going to have to make about policy, about governance, about who owns what and who does what. So don't underestimate the work involved there.
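The feedback loop just described, logging each prediction with its inputs, attaching the outcome when the application reports it, and exporting the labeled records back to the data lake for retraining, can be sketched as follows. All the names here are hypothetical; this is the pattern, not a specific product API.

```python
# Sketch of the inference-side feedback loop: record every prediction
# with its input, attach ground-truth feedback when the application
# reports it, and export only the labeled records for retraining.

import time

class PredictionLog:
    def __init__(self):
        self.records = []

    def log_prediction(self, inputs, prediction):
        """Record a prediction at serving time; returns a record id."""
        record = {"id": len(self.records),
                  "ts": time.time(),
                  "inputs": inputs,
                  "prediction": prediction,
                  "feedback": None}
        self.records.append(record)
        return record["id"]

    def log_feedback(self, record_id, actual):
        """Attach the real-world outcome once the application knows it."""
        self.records[record_id]["feedback"] = actual

    def training_export(self):
        """Only records with known outcomes can become training data."""
        return [r for r in self.records if r["feedback"] is not None]

log = PredictionLog()
rid = log.log_prediction({"load": 0.9}, "likely-failure")
log.log_prediction({"load": 0.2}, "healthy")  # no feedback yet
log.log_feedback(rid, "failed")               # outcome reported later
print(len(log.training_export()))  # 1
```

In a real pipeline the export would land in the data lake rather than a Python list, but the shape is the same: predictions without feedback are only operational telemetry, while predictions with feedback close the loop and become new labeled training data.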
Will provide you a lot of the technology, platform, for, the next slide to enable each of these things but as a customer, you have to reason about a lot of these things with. Respect to your, company's, DNA your values, in your organization. And your priorities these, don't happen automatically, and so our big sort of Oscars think, through this as a as. A sort. Of as a human element as well. So. Just quick show of hands like how much of this rings, true in your ml, endeavor. Some. Okay, for. Others is the true I just haven't run into it yet. Okay. Ask the question Bradley how many how, many of us is you haven't run into it yet. Okay. Right we'll, see the rest of them okay so I think this is what will happen and so don't be surprised and we will try to make sure as Microsoft, will sort of take, this information and, sort of make, it available as. Guides to help you through your journey but, the fact is just having, the technology or, just having a data scientist not enough you have to sort of make sure the whole thing works. All. Right the. Last part is. Relatively. Simple so let's go through the same. Same. Walkthrough. So. There's there have been lots and lots of sessions so far on, all. The technical capabilities, I'm not gonna go to each of the capabilities, but, I what, I want to show is as you think about machine, learning you. Want to think about the all of as your as your platform I often. Have fine customers saying like hey look you're, not end-to-end the, platform, is an agile machine not as machine learning itself, but for your entire machine learning journey it is all of a sure and so as I walk through these four dots and the three personalities let's. Let's look at the list of products that are involved, so. You're doing data engineering you're, looking at data as your data factory you're doing ingestion you're. Looking at as your data Lake to store your data you're, looking at synapse. The new, announcement. 
as well as Databricks, for doing your data engineering and your analytics. You're doing your data preparation there, you're building these data sets. I should also include Azure Data Catalog, but I think that's going to be part of the Synapse naming. So you have a whole bunch of assets to build your data platform, to build the data sets, and to feed data into machine learning. Then you have data scientists, who are going to use things like Azure Machine Learning

to get a secure environment in which they can do their experimentation, work with GitHub to store your code and models, and use all the open-source technology that's around for doing AI, like TensorFlow and PyTorch, so you basically get the entire Python and R ecosystem. And then, as I mentioned, with Azure compute you will use the hardware that's in Azure to make sure you have the best capability at the lowest cost.

Then you work with your whole app and infrastructure teams to deliver these applications. I didn't include VS Code or Visual Studio, but basically you're going all the way from app development: Azure DevOps to orchestrate that whole lifecycle, again GitHub as a key point for storing all these assets, Azure Kubernetes Service to run your online cloud models, and IoT Edge to deploy to the edge. So you have this entire ecosystem, this set of products and capabilities, that you're going to have to look at as you look at your reference architecture and your application.

Azure Machine Learning as a whole has been built with this exact set of roles and products in mind. We have deep integration with Data Factory, we can use the information in Data Lake directly with our data sets, we are working closely with Synapse as they go from preview into GA, and we have great integration with Databricks. It's a two-way integration: when you're in Databricks, you can use our Azure Machine Learning SDK, and you can connect the MLflow API tracking server to Azure Machine Learning, which allows you to keep track of all of your experiments and your entire lifecycle. You can also go the other way: from Azure Machine Learning, if you want to run a job in Azure Databricks, you can do that too. We talked about machine learning and GitHub, and
then we are also now deeply integrated with Azure DevOps. You can take an Azure DevOps project, point it at an Azure Machine Learning workspace, and all of the assets in there become first-class citizens: when your code changes, when your data changes, and when your models are versioned, each one of those generates triggers that can then be used as drivers for the lifecycle. We also provide direct Azure Kubernetes Service clusters, so we actually provision the cluster for you and run your model there, and then we can take our models and put them onto IoT Edge. So this is really, so to speak, the full circle for the ML lifecycle.

All right, I'm going to come to the summary. In a nutshell: we have patterns. Almost every business problem can be enhanced with machine learning, and I have found these seven patterns to be super helpful. They help me very quickly say: oh, I have a forecasting problem, or I have an anomaly detection problem. So as you have your conversations with your business owners, use them. Use these seven or use something else, but use some way to think about how machine learning can help. I've had a lot of customers say, "hey, look, Microsoft, I have this big data lake, I have all this data, can you help me do some AI?" And the fact is we can't; we don't know your business, we have no idea. So I always defer and say, I can't

help you. What you should do is talk to your business owners, right, and guide them through this process of saying: look, machine learning can do all of these things; where do you have problems that could benefit from it? Use examples like the ones we did.

Then the second part is that once you have this business problem, you have to break it down. You really want to break the problem down into smaller chunks, otherwise it just becomes an immense problem of all new things. The first of these pieces is ingestion, and you have probably already solved that. But the question is how you connect it to the next step, which is your data science part: what tools do you use for data science? Then into production, where you run your model. Do you run it as a notebook, because that's how you probably showed business value in the first place? For production you have to decide: is it a batch inferencing system, or is it an online system, an online system in the cloud? Those are decisions you have to make, and then you have to think through how to get the feedback loop.

The next one is people. Successful machine learning will require culture change. I think everyone has heard this; what we're trying to do is show you what that culture change looks like. It means that data scientists will have to learn, over time, to be a little bit more formal about their work. They can't just make a notebook, call it good, and show a cell that says, "look at this uplift on this metric, great, my job is done." Their job is not done: they have to make it reproducible, they have to put it in a pipeline, and they have to talk to data engineering and to ML engineering. Data engineers, on the other hand, have to change their mindset too. They can't say, "I can't
give you all of that; you only get to use curated data." It's a different problem, and of course different data, so you need to understand how to maybe build a second pipeline, or a derivative pipeline, for helping ML.

So this culture change has to be driven through business and through IT. IT is well placed to do that, but you have to treat this as a team sport. And then, as I mentioned, different personas have different needs, and they're all good needs, all positive words. I've made them all green because no one can argue against agility, and no one can argue against reliability or performance or speed or safety; they're all good. But when you put them all in the same place, you get tension, so you have to figure out how your culture is going to handle that.

Then we go into the process part: develop policies and governance for data and deployments. How do you get the data? Who gets to see the data? What level of data is it? There are all kinds of tiering schemes: tier 1, tier 2, tier 3; bronze, silver, gold. I've heard all kinds of versions of tiering for data sets, and they all have different SLAs. So you need to have this discussion with data science: what kind of SLA do you need for your data, and what guarantees can you provide? Now, you'll be surprised how little SLA data scientists ask for; all they want is all the data. That's a different problem, but you have to have this discussion in your company.

Then attach data science to your existing app and data lifecycle; that's the super important part we talked about. And finally, pick a platform that supports you on your automation and reproducibility journey; Azure Machine Learning and Azure provide a lot of those tools. And here's a forward-looking thought: I think it's really interesting to think about IT providing data science in a self-service manner. If you think about those four boxes (what data do you need for training, what tools and what compute do you need, what do you need for inferencing, and where do you want to deploy), then in terms of the application you have a bounding box of the things you want to provide to a data scientist. So, looking forward, think through how to operationalize this: take all the questions you would ask about the data sets, the compute, and everything else, make them policies, and then create a workspace with all of those things together. That allows you to get out of the way and give the data scientists what they need, a free environment to work in, while keeping it compliant with all your governance and policies. We're working on this problem a lot as part of our enterprise readiness effort, really thinking through how we build those kinds of capabilities in. But I'd also encourage you to think about how you offer this as a self-service offering.
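To make that "bounding box" idea concrete, here is a minimal sketch in plain Python. Everything in it is hypothetical (the names `POLICY` and `validate_workspace_request` are illustrative, not part of any Azure ML API): IT encodes once which data tiers, compute sizes, and deployment targets a workspace may use, and each self-service workspace request is validated against that policy before anything is provisioned.

```python
# Hypothetical policy the IT/governance team defines once: the "bounding box".
POLICY = {
    "data_tiers": {"bronze", "silver"},            # gold-tier data needs extra approval
    "compute_sizes": {"cpu-small", "gpu-medium"},  # sizes IT agrees to pay for
    "deploy_targets": {"batch", "online-cloud"},   # no edge deployment by default
}

def validate_workspace_request(request, policy=POLICY):
    """Return a list of policy violations; an empty list means the
    self-service workspace can be provisioned as requested."""
    violations = []
    for tier in request.get("data_tiers", []):
        if tier not in policy["data_tiers"]:
            violations.append(f"data tier '{tier}' not allowed")
    if request.get("compute") not in policy["compute_sizes"]:
        violations.append(f"compute '{request.get('compute')}' not allowed")
    if request.get("deploy_target") not in policy["deploy_targets"]:
        violations.append(f"deploy target '{request.get('deploy_target')}' not allowed")
    return violations
```

A request for bronze data, small CPU compute, and batch deployment passes cleanly, while a request for gold-tier data or an unapproved deployment target comes back with a list of violations for IT to review, which is exactly the "free environment inside a governed box" trade-off described above.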

All right, so with that I'm done. I just wanted to point out that, even though it's late in the week, you can see me again if you want to ask me anything. We have a great session on labeling, and then tomorrow we have David Aronchick, who's talking about how we do open source. It's a huge, deep subject for us, and we make sure we're open source at multiple levels, so we have those sessions as well. With that, I'm happy to take any questions. Thank you.

2020-01-19 22:33
