AI Hub: The One Place for Everything AI (Cloud Next '19)
Ladies and gentlemen, welcome to the inaugural talk about the AI Hub. We're very excited, on behalf of the AI Platform team, to share what we've been working on over the last year. Really quickly, some housekeeping items: for those of you who have not used the Dory application for questions, please feel free to enter your questions there and vote on questions. We have a lot to demo today, so I don't know that we're going to have time at the end of the presentation to answer questions face to face, but the team here will be answering questions in real time during the show, and afterwards we'll continue to add answers in the Dory app.

Really quickly: the AI Platform team has been very focused on trying to solve real-world machine learning problems for our customers, and as we've talked to customers over and over again, we hear three main concerns every single time. The first is that people have trouble managing infrastructure. Whenever you're talking about machine learning you've got GPUs, TPUs, CPUs; you have to do distributed training; it's kind of a nightmare sometimes, and some people would rather not do it, while some people just can't. In addition, the real world is hybrid: there's stuff on cloud and stuff on-prem, and any time you cross one of those boundaries it gets to be a nightmare trying to move workloads from one to the other. That's a problem we want to solve. The second problem we hear about is talent: there's just not enough to go around. You've got data scientists, and you've got production engineers who are used to putting machine learning jobs into production, and there just aren't enough of them. So we need to figure out how to create ten-times force multipliers for those teams, so they can do ten times more projects and really enable and empower your organizations.

To do that, we need collaboration tools that actually focus on reuse and collaboration across teams, and essentially create a center of excellence within your organization; that's the theme you're going to hear over and over again in this talk. The third problem we keep hearing is that people are having a tough time getting started, and the reason is that machine learning is a very large area of learning. Some people are new to cloud, some people are new to machine learning, and some people are new to specific techniques within machine learning or to putting those jobs into production. Each of those has a different learning curve, so the question is: how does each of us get started more quickly?

So how does Cloud AI help? We have a number of new products we're launching today, including the AI Hub, along with our new managed notebooks, and we're going to demo a number of these, as well as our Kubeflow Fairing system, which we'll also demo. Really quickly, one thing to remember is that AI is really a team sport. If you've ever seen an organization put a machine learning model into production, it usually takes a database expert, the actual subject matter expert, the data scientist, and somebody from the production engineering team to get it all to work, so you really need to make sure they're all working together.
So if we think about this, each person has a role in a company, each role or person might have a different skill set, and they need different tools to do their job. The question is: how do we make each of these roles, each of these people, successful using machine learning in the way they need to do their job?

Really quickly, to make sure we're all on the same page, let's talk about the cloud machine learning stack. At the base, at the foundation level, we're all building on infrastructure, whether it's compute, storage, or processing elements; it could be on-prem or on GCP, it doesn't really matter. That's the basic building block you're working with. Above that, GCP has a number of managed services to make your lives easier, whether it's spinning up a virtual machine to get your Jupyter notebooks going, or doing training and/or serving; those are services that make it easier for you to manage your infrastructure. Or, if you choose to manage your own infrastructure, we can make creating Kubernetes clusters easier with GKE. Most data scientists actually spend their lives above that part of the stack, at the tooling and SDK level. There we're talking about the notebooks; we're talking about Kubeflow Pipelines for orchestrating workloads; and Kubeflow Fairing is a new service, which we'll demonstrate a little later today, that enables you to take the same training code and run it locally on your VM or, with a small amount of configuration change, run it in a distributed fashion on machine learning infrastructure on Google, where we spin up the machines, do the job, and then tear everything down for you automatically. Or you can have it run on any Kubernetes cluster you choose, whether it's on-prem or on GCP, in a true hybrid fashion. And at the top of the stack is the AI Hub.
What the AI Hub really is, is a collaboration tool where you can find, then use, and then share any AI asset you need to do your job. Obviously Google is providing some of those assets to share, and some of our partners will be sharing as well, but it's also a place for you to put your own assets, which don't get shared with anybody outside your organization. That way you can actually start to share and collaborate across teams, and have those teams go much, much faster. So instead of having twelve different teams doing OCR very differently, you get one team doing OCR very, very well, and each of the other eleven configuring it for their clients.

So, really briefly, one of the patterns you're going to see is that we want the AI Hub to be a center of excellence, so that for every single one of your people there's a familiar pattern: discover what you want to do, find the asset that actually helps you do it, then go use it. When you actually get something trained or working, you can share it back with your team; that could be a model, a notebook, or a pipeline, and we'll see this later in the demonstration. So today the demonstrations we'll be giving are focused on three personas, or users. First, the app developer who's trying to put some intelligence into their app: how do they do that easily? Then the data scientist who's trying to create a new training algorithm using a notebook and different assets from the hub, and then, once they've got it trained, sharing it back with their team. And then the ML engineer: now that they've got a pipeline, how do they configure it, how do they deploy it, how do they monitor it?

So first of all, let's talk about building smart apps. I'm going to be jumping in and out of the presentation, just to warn you. The first thing we've done is put all of the Cloud AI APIs into the hub so that they're really easy to find.
So you can see right here that they're highlighted as services, and you can see all of the different APIs here. Let's say, for example, I'm building a smart app and users submit photos to me; I might need to understand what that photo is, whether it's a cat or a car, or whether there's something in it that's maybe not family safe. So I can click on the Vision API here, quickly visit the product page for that product (I might need to change the resolution here), and there's a demo right there that you can try really quickly. So we can click on that, and I've got a photo right here that a friend gave me, and I have to say that I'm not a robot.

And off it goes. It has actually drawn bounding boxes around a couple of people it has identified in the photo. If you look at the labels, it has correctly identified this as a baseball park. If you look at the web entities, it has actually identified this as Oracle Park, just down the street, so it has correctly identified the location of the photo. And if you go to safe search, you can tell that it's fine to share publicly. So that's really useful if you're an app developer and you just need to consume some of these services.

In addition, there are times when you as a developer might have data that's labeled, but you don't necessarily know a whole lot about machine learning, and you might need to customize the model for your case. Let's say you're a manufacturer: the Vision API is going to be able to tell the difference between tables and benches, and screws and bolts, but it might not be able to identify all 10,000 parts on your warehouse lines. To handle that, you would supply the images from your repositories and go to AutoML; here's AutoML Vision right here, a beta service. From there it walks you through the process of providing the data and identifying the problem and the labels you're trying to predict, and it puts everything in your own project. It then finds the right machine learning model for your problem, optimizes that model, and gives it back to you. As a developer you don't have to know what's going on inside it; you've just given it the parameters to do your job, and it's actually quite simple to use.

Okay, so the next journey we're going to talk about is the data scientist. As a data scientist, like we mentioned before, one of the big problems we find is that it's often hard to get started on a new problem. You might be an expert in one thing, and then your boss gives you another problem to work on, and you have to go figure out how to do that.
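As a footnote to the app-developer journey above: outside the browser demo, the Vision API is driven by a plain JSON request to its `images:annotate` endpoint. Below is a hedged sketch of what that request body looks like, built with the standard library only. Actually sending it requires an API key or OAuth credentials, and the image bytes here are just a stand-in; the feature type names follow the public REST documentation.

```python
import base64
import json

# Stand-in for real image data read from disk or an upload.
image_bytes = b"\x89PNG..."

# Build the images:annotate request body: one image, three annotation
# features, mirroring the labels / web entities / safe search tabs in
# the demo above.
request = {
    "requests": [
        {
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},
                {"type": "WEB_DETECTION"},
                {"type": "SAFE_SEARCH_DETECTION"},
            ],
        }
    ]
}
body = json.dumps(request)
```

With credentials in hand, `body` would be POSTed to the Vision endpoint; here we only build and inspect the payload.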
So we have a number of great notebook samples in the hub, and right here I've filtered to show all the notebooks. Maybe I want to do something fun; let's say I want to do style transfer. I can search for style transfer, and here's an audio style transfer, and here's a 3D style transfer; this one's kind of fun. Basically you take a 3D asset, you give it a texture, and it applies that texture to the 3D asset, so you can imagine that in gaming or entertainment this might be interesting; you see this quite a bit. And you can go kick that off in a notebook, mess with the code, change it, and share it back with your team.

Or let's say instead you've got structured data, maybe in BigQuery or something like that, and you want to do gradient-boosted decision trees, so you'll be using XGBoost. Well, right here you have a pipeline, or, if we want to restrict the results to notebooks, we can apply the notebook filter, and here's an example of how to use XGBoost with Kubeflow Fairing, which is that service we're going to demonstrate a little later. It shows you how to train the model and then deploy it in different locations.

Okay, so just so you can see how it works: here is the notebook right here, and I can click Open in GCP. We now have a new managed service that automatically spins up a Jupyter notebook for you on a VM on GCP, so there's no more DevOps needed to manage that. You can either create a new VM or use an existing one; for speed I'm just going to use an existing one. I click Continue, it spins up (I've been using this demo VM), and here it is; you can go ahead and run it if you'd like. What you can see here is all the standard code to read your inputs and so on, and we'll talk about that in a second.

Another thing the hub has added is a bunch of great support for transfer learning. One of the critical problems we hear about over and over again is a lack of data. If you're trying to build an image classification model, for instance, you may not have millions of images; you might have a hundred, or two hundred, or a thousand for your particular problem. So how do you solve this? The technique of transfer learning lets you use an open-source, pre-trained model that has been trained on a general data set to learn general features; you can then switch over to your data set and fine-tune it, to train on your problem really quickly.
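The transfer-learning setup just described (a frozen pre-trained backbone producing embeddings, with only a small new head trained on your labelled examples) can be sketched without any cloud pieces at all. Everything below is a hypothetical toy: the "backbone" is faked with synthetic two-dimensional embeddings standing in for real image features, and a one-output logistic head is trained on them by plain gradient descent.

```python
import math
import random

random.seed(0)

def fake_embedding(label):
    # Stand-in for a frozen backbone: pretend it already separates the two
    # classes in embedding space, with some noise.
    centre = (1.0, 1.0) if label == 1 else (-1.0, -1.0)
    return tuple(c + random.gauss(0, 0.3) for c in centre)

# A small labelled data set: 100 (embedding, label) pairs.
data = [(fake_embedding(y), y) for y in [0, 1] * 50]

# One-output logistic "head" on top of the frozen embeddings.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    gw, gb = [0.0, 0.0], 0.0
    for (x0, x1), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
        gw[0] += (p - y) * x0
        gw[1] += (p - y) * x1
        gb += p - y
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

# Training only this tiny head is cheap, which is why transfer learning
# works with hundreds rather than millions of examples.
acc = sum(
    ((1 / (1 + math.exp(-(w[0] * x0 + w[1] * x1 + b))) > 0.5) == (y == 1))
    for (x0, x1), y in data
) / len(data)
```

In the real workflow the embeddings would come from a pre-trained module and the head would be a one-layer network in your framework of choice, but the division of labour is exactly this.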
And so we have a number of these here, and you can see how in the hub. Let's go to the TensorFlow modules. Here's a bunch of different ones; let's say I want image classification. Here's the Inception v3 embedding, here's MobileNet, and a little bit further down there's ResNet.

Okay, so let's say we want ResNet. Here's one of the classic embeddings right here. You can see the original paper, published in 2015, so you can go learn about it, and down here is the code you can copy and paste into your notebook, to make that really easy from a boilerplate perspective. The TensorFlow team has also made it really easy to use via a URL: basically, you just copy that URL. Now, to keep things simple, I'm going to jump to a notebook that already has the boilerplate in it, so you can see how it's used.

So right here, let's assume that I'm a cancer researcher, and I have some images of cancer cells and non-cancer cells, and I want to be able to tell the difference between them using that ResNet 50 embedding I just saw. I can look at this notebook here: I do the usual updates of my libraries, set up my environment, and convert the images to TFRecord so they work better with TensorFlow; all standard stuff. I can visualize some of these images really quickly just to make sure I've done everything properly. I then set up my model, and you can see right here the URL of the embedding that you just copied and pasted; the rest of the code around it is the boilerplate the TensorFlow team has provided for you. Then what you do is wrap that in a one-layer Keras network, so it's got one output for binary classification, cancer or not cancer. You'll basically be ingesting that embedding from the TensorFlow module and refining the model using the data you supplied; it's really, really simple to do. And then, wrapping it in a Python class, you go ahead and train on it in that VM. Or now, if you're using Kubeflow Fairing, all
you have to do is provide a few parameters: where your Docker registry is, what your GCP project is, and the base Docker image. Then, with Fairing, you specify the GKE back end on GCP where you want it to run, submit the job, and it goes and runs it on your GKE cluster as it's set up. Or, instead of running it on GKE, you may choose to use the AI Platform Training service, which was called ML Engine up until this morning. With that service we set up the cluster for you, we distribute the work, and we take care of all the infrastructure; it trains, and when it's done we tear everything down, and you only pay for the resources you use. So that's a great way to do it, and we also have demonstrations of how to do prediction.

So picture that you're a data scientist and you're actually training in a hybrid environment: you're training on your VM, wherever you are; you're training on cloud, in either a managed or an unmanaged situation; or, if you want, you can create configurations for running those jobs on any cluster you want, on-prem or in any other hybrid cloud you might have.

Okay, so now that I've got my model trained and my notebook working, I want to go share it with my team.
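An aside on the Fairing pattern just described: the point is that the same training function runs locally or on a remote back end, chosen by a small configuration change. The sketch below illustrates only that pattern; the class and parameter names are hypothetical stand-ins, not the actual Fairing SDK, which really packages your code into a container and submits it for you.

```python
def train():
    # Stand-in for a real training job; returns its final metrics.
    return {"accuracy": 0.9}

class LocalBackend:
    """Run the training function in-process, as you would on your own VM."""
    def submit(self, fn):
        return fn()

class GKEBackend:
    """Simulated remote back end; the real one builds and pushes an image."""
    def __init__(self, project, registry, base_image):
        self.project, self.registry, self.base_image = project, registry, base_image

    def submit(self, fn):
        # A real submission would containerize fn on top of base_image,
        # push to the registry, and launch it on the cluster; here we
        # just note the configuration and run it locally.
        print(f"would build {self.base_image} and push to {self.registry}")
        return fn()

# Swapping back ends is the one-line configuration change:
backend = LocalBackend()  # or GKEBackend("my-project", "gcr.io/my-project", "tf:latest")
result = backend.submit(train)
```

The hybrid story in the talk is exactly this shape: one `train`, several interchangeable back ends.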
We've provided publication capabilities directly in the service so you can share with your team; this Publish button is not for publishing to the world, it's for publishing within your work group. You have quick ways to see all the assets you've created, and these filters down here (if I can get rid of that... oops, that doesn't do it... okay, there it goes). I'm going to jump straight to the app so I can show you how to do this live. If I click Publish, you can choose to share a pipeline (which we'll talk about later; Sasha will show us that), a trained model (just the weight files), or the notebook itself; I'm going to demonstrate the notebook. Let's say this is the cancer demo notebook. For the data type, this is image, so I'm setting up the metadata to make it easier for members of your team to find. This is a training workflow, maybe transfer learning from a labeling perspective, and for the summary: "Uses ResNet 50 to train an image classification model for cancer detection," if I can type. I click Next, and I can now upload a file; I've got the file right here on my desktop. I open it, click Next, preview it, and then I click Publish. Oops. And if I don't get that error, it shares to the rest of your work group and they can start using it.

Okay, so with that I'd like to turn the time over to Venky Rao, who is one of the AI principals at Accenture, and he's going to talk about this experience within the Accenture workflow.

Thanks, Justin. Venky Rao from Accenture; I work in the Applied Intelligence organization at Accenture. At Applied Intelligence we help our clients drive new business outcomes using analytics, automation, and AI. We have a large team of data scientists, over 3,000 data scientists across the globe, and we also bring in a foundation of data services
that are pretty unique to the industry. We have relationships with over 200 data providers, and we also have curated data sets in various industries; that's pretty unique to us. Then we have strategy consultants who are deep experts in various industries, and together with all these capabilities, that's how we drive new business outcomes for our clients. Now, when it comes to AI Hub, we are super excited, because with the size and complexity of the Applied Intelligence team at Accenture, we are constantly looking for new tools that help our data scientists collaborate, speed up our projects, improve efficiency, and all of that. So when we saw AI Hub, it really fit the needs of our organization, and I'm going to show you how we are using it.

One of the things I forgot to mention is that in data science there are a lot of repeatable processes, and there are a lot of steps you could share from one project to the next. A lot of times you come in with your data set and a hypothesis and you want to experiment faster, and a lot of times we find teams doing the same thing over again. With AI Hub there's a nice catalog they can look up, include those assets in their projects, and build pipelines quickly; that's a big win for us, and that's what we see with AI Hub.

So Justin showed you how AI Hub looks in its public view; as a customer, how does it look for you? This is the public view that Justin showed, and this is the restricted view, which is Accenture's own view; this is where we have published a few of our assets. Within Applied Intelligence we literally have hundreds of notebooks that can be useful for our project teams, plus pre-trained models and various other assets, and that's how we surface them here within AI Hub. You can see the icon there; it shows that this is restricted and only available to our organization, because you have to be careful (and you can't actually publish this to the public view by accident anyway).

Let's say I'm interested in this particular notebook. This is a great example of a shareable asset. One thing we encourage our data scientists to do, when they get a new data set, is to make sure they apply some of the responsible AI principles we encourage. In this particular case, let's say I'm interested in understanding my data distribution; instead of writing all of that tooling once again, I can easily look at this, and, as Justin showed, I can bring it up within a VM instance. So I can select it here; I've already opened one, and you can see this is a nice notebook for data scientists to quickly take a look at. They can run it, start changing it, and make it useful for their own use case. We find this extremely useful; it's all one-click deployment of these notebooks, and it really speeds up
the whole process, so we are super excited about this.

So I hope you can start to see the power of being able to put best practices in your own private section of the hub, to share within your organization and get everybody aligned, whether it's on fairness, or recommendations, or fraud detection, or whatever it might be that your organization cares about. Within Google we do something similar as well. For instance, those TF modules we looked at: we use them millions of times a day in all sorts of products throughout Google. For notebooks, we share best practices the same way. And with pipelines, we have introduced a way to build industrial, Google-grade pipelines both inside and outside of Google, and Sasha will come up and talk about this.

While he's coming up, really quickly: one of the complaints customers often bring us is, why does it take so much work to get a machine learning model into production? "I created this model in three weeks; ten months later I'm still fighting with my DevOps team to actually get it going." The reason is that there are so many pieces of the process around the model weights that are really critical to making it work. You need to make sure that the data streams coming in in production actually have the same distributions as, and aren't distorted from, what you trained on. You need to get the infrastructure right; you need enough RAM so things don't die; you need to be able to scale up to handle arbitrary volume; and you need A/B testing, so that any time you make a change you can canary it and make sure nothing breaks. There's a whole lot that goes into making sure this actually works so you don't end up with a disaster on your hands. Well, the TFX team created ways to understand
each step of these pipelines, and the Kubeflow Pipelines team created a way to orchestrate these workflows, so Sasha will now come up and talk about that.

Thanks, Justin. I'm Sasha, a machine learning engineer on the AI Hub team, and I'm going to talk about how AI Hub can jump-start your Kubeflow Pipelines. So what is Kubeflow Pipelines? Kubeflow Pipelines is a container orchestration framework for production, end-to-end machine learning. It comes with a rich UI that you can use to interact with Kubeflow Pipelines, as well as a Python SDK for more programmatic execution. It covers the full machine learning lifecycle, from data all the way to serving your trained model.

So let's say I'm a machine learning engineer at my company, and I have all our data in BigQuery, and I want to take that data and use it for machine learning. One
of the best models to use for tabular data is XGBoost, and it's usually a good starting point for a lot of engineers. The next step I want to take, after training that model, is to deploy it to AI Platform Prediction, so I can make an API available to the rest of my team.

So let's talk about how the content we've created for AI Hub fits into that use case. First, we have ready-to-go pipeline templates. These are full end-to-end pipelines where you just bring your data, fill in the parameters, and deploy. Next, we have example pipelines: reference pipelines that give you a good idea of the best practices for machine learning. Then we have broken-down components. Each pipeline is composed of multiple components, and we've created a suite of components covering multiple steps of the machine learning lifecycle: preprocessing for your ETL jobs; training, whether with an out-of-the-box model you don't feel like coding up or, if you have your own in-house, proprietary models in your favorite machine learning library, through our distributed frameworks that work out of the box, where you just provide your model; and the push to production, because once we actually create that machine learning artifact we want to put it somewhere it can be used.

So, going back to the use case: I'm starting from BigQuery, I want to use XGBoost, and then I want to deploy to production. Let's see how AI Hub can help me do that. I'll click over to AI Hub. I've already set the filters here for public assets, so we're only looking at what's publicly available, and I've also set the filter for Kubeflow Pipelines, so this will show me Kubeflow pipelines and components. I said I wanted to run an XGBoost trainer, so let's search.

Right away, we can see that we already have a good amount of content related to XGBoost: we have an XGBoost trainer in Spark, and we have an XGBoost trainer that takes advantage of GPUs, and
it's also distributed, with all of that handled under the covers.

The first asset that came up is "Training an XGBoost model using data from BigQuery," so let me go check that out. In our documentation we give you a quick overview of what a component does, so you can quickly figure out whether you actually need it. Here we can see it selects training and validation data sets from BigQuery (awesome), it trains an XGBoost model (just what I wanted), and then it deploys that model to AI Platform Prediction. So let's go ahead and use it: we simply go up here and download the pipeline, and then let's jump to Kubeflow.

So this is Kubeflow. It runs on Kubernetes, and it can run on any Kubernetes engine you like, including GKE. This one was deployed using the one-click deploy feature of Kubeflow: you just go to our UI, you click deploy, and you have Kubeflow up and running. Pipelines is part of Kubeflow; I've just selected it on the left. This is the initial dashboard for Pipelines, showing all your pipelines; we've already seeded a few in here, as you can see. Let's go ahead and add the pipeline I just downloaded: I pick a pipeline, rename it to "BQ 2 XGBoost," go down, and upload. That adds the pipeline to my deployment.

Now I can click on it, and I'll hide this here and zoom out so you can get a view of the full pipeline. This is a four-step pipeline: it runs your BigQuery queries for both your training and your validation data, here and here, then pipes that data into our distributed XGBoost trainer, and then deploys the model. One thing to note is that this pipeline was completely composed of our individual components, which are also available on AI Hub, so if you like any of these components you can take them out and use them in your own pipeline.

To run this pipeline, I just hit Create Run, give it a name, go down, and fill in the parameters. This is very straightforward and will get you running very fast. Now, in the interest of time, I went ahead and ran this pipeline earlier, so let's go take a look at the output.

Experiments is the place you go to see where all your runs have occurred. An experiment is a logical grouping of runs: let's say you have a production pipeline that you run on some schedule; this is a place you can group those runs together to see how your production system is changing over time. For this particular experiment I've only run this one pipeline, as a demonstration. I can click on it and see that my pipeline executed successfully; each of these check marks comes in, and I can go look at the logs for each of my steps, looking at my evaluation errors as they're logged out. And finally, I can actually get my model endpoint in AI Platform Prediction.

So let's take a step back and go back to AI Hub. Let's say I don't want to use XGBoost, and I actually do a lot of my work in TensorFlow. One of the big frameworks that Google recently fully open-sourced is TFX, which is TensorFlow Extended. It's basically a production machine learning pipeline tailor-made for TensorFlow, and it's Google grade; it runs the majority of the machine learning pipelines inside of Google as well, and now you can take advantage of it. The value here is that on AI Hub we host a reference pipeline for you to look at, and you can also use that reference pipeline to see how your own workflows should change. So let's check that out.
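Before the TFX demo, a quick aside: the component-composition idea behind that four-step XGBoost pipeline can be sketched in plain Python. The functions below are hypothetical stand-ins for the real components (which run as containers, wired together with the Kubeflow Pipelines SDK); only the shape of the graph is the point here.

```python
def bigquery_extract(query):
    # Stand-in for the BigQuery component: returns a "dataset" handle.
    return {"rows": [[1.0, 0], [2.0, 1]], "query": query}

def train_xgboost(train_set, eval_set):
    # Stand-in for the distributed XGBoost trainer component.
    return {"model": "xgboost-model", "eval_rows": len(eval_set["rows"])}

def deploy_model(model):
    # Stand-in for the AI Platform Prediction deployment component;
    # the project path is illustrative.
    return f"projects/demo/models/{model['model']}"

# Wire the four-step DAG: two extracts feed the trainer, which feeds deploy.
train_data = bigquery_extract("SELECT * FROM ds.train")
eval_data = bigquery_extract("SELECT * FROM ds.eval")
model = train_xgboost(train_data, eval_data)
endpoint = deploy_model(model)
```

Because each step only consumes the outputs of earlier steps, swapping one component for another (say, a different trainer from the hub) leaves the rest of the graph untouched, which is exactly what makes the hub's component catalog useful.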
So here's TFX, and the first thing that comes up is this TFMA taxi cab classification example, so let's click on that and take a look. This reference pipeline uses TFX for the use case of trying to predict whether or not someone will tip a taxi cab. Let's say I'm interested in using this as a reference pipeline: I can download it, go back to the Kubeflow UI, go to Pipelines, and simply upload the pipeline again. One thing to note here is that I don't have to use the UI; I can use the Python SDK if I want a more programmatic entry point into Kubeflow Pipelines.

Let's take a look at this pipeline. This one's a little more complex than the last one we saw, and it has many steps, including a data validation step, preprocessing, training, then a model analysis step, a batch prediction step for a final evaluation, and then a deployment step out to AI Platform Prediction.

Again, to respect your time, we've run multiple runs of this pipeline, so let's see how we can use Kubeflow Pipelines to analyze those runs. Let's go back to the Experiments page; one second. All right, so we have this taxi experiment here, and as I said earlier, an experiment is a logical grouping of pipeline runs. Right away you can see there's a lot of information here you can use to compare these runs. The first thing to note: let's say I'm in production and all my runs are succeeding, but my fourth run did not succeed. I see that right away as a failed status, and I can just click on it and see that my data validation failed; I can click into this, look at the logs, troubleshoot, fix, and then run the pipeline again.

Now, one of my favorite features of Kubeflow Pipelines is the ability to aggregate these runs and view them all in one dashboard, so let's take a look at that. First, I'm going to select my runs that have all succeeded, and then I'm going to go up here and select Compare Runs. So this is the run comparison dashboard; let's
see what it tells us. First, it gives us things like the duration of each pipeline run. It also gives us a quick overview of the high-level machine learning metrics we may be interested in, including accuracy and ROC, and we can see that run three marginally outperformed the other runs based on accuracy and ROC.

So let's go down and see why that is. This view also captures the parameters you entered into your pipeline, so it's easy to see the differences. The only variation between these three runs was the hidden layer size, and as we said, run three was the best; we can see that the smallest hidden layer size actually performed marginally better. These are important insights you can pull out just from this quick view. Let's go down a little bit.

Next, and I also love this, is TensorBoard. For those of you who are not familiar, TensorBoard is a powerful tool for TensorFlow: it lets you deep-dive into your model so you can watch its performance as it trains, as well as any other metrics you're trying to track. Now, generally, to use TensorBoard you have to write all your models to the same directory and manage that yourself if you want to see everything together. What Kubeflow Pipelines does is aggregate all of these runs into the same TensorBoard, so you can easily make comparisons at once, and that's what we're doing here under this aggregated view of TensorBoard.

So let's open that up. Here is TensorBoard, already running; this works out of the box when you deploy Kubeflow and Kubeflow Pipelines, you don't need to set it up. Let's go down here and say I want to look at my loss curves for all my runs. That is super simple: it's right here, and I can go ahead and look,
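The comparison the dashboard performs, lining up each run's parameters against its metrics and spotting the winner, amounts to something like this plain-Python sketch. The run names, hidden layer sizes, and metric values here are invented for illustration, not the numbers from the demo:

```python
# Illustrative stand-ins for three succeeded pipeline runs; the metric
# values and layer sizes are made up, not taken from the demo.
runs = [
    {"name": "run-1", "hidden_layer_size": 1024, "accuracy": 0.760, "auc": 0.910},
    {"name": "run-2", "hidden_layer_size": 512,  "accuracy": 0.762, "auc": 0.912},
    {"name": "run-3", "hidden_layer_size": 256,  "accuracy": 0.768, "auc": 0.915},
]

# Pick the run with the best accuracy, just as you would eyeball it
# in the run-comparison view.
best = max(runs, key=lambda r: r["accuracy"])
print(best["name"], best["hidden_layer_size"])  # run-3 256
```

In this toy data, as in the demo, the smallest hidden layer size happens to win, which is exactly the kind of insight the comparison view surfaces at a glance.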
run by run, at what the loss curves were. It also has everything else you'd expect from TensorBoard, including your graphs, distributions, and so forth. Now let's go back to Kubeflow.

Going down, we can see there are some other visualizations that are also very beneficial: a confusion matrix for this problem, and then the last thing I want to highlight is that TFMA is also available through Kubeflow Pipelines. So what is TFMA? TFMA is part of TFX; it stands for TensorFlow Model Analysis. Let's break that open. Again, Kubeflow Pipelines is a rich UI, and it lets you pop out to the TFMA visualization.

TFMA lets you view your model's performance at different cross-sections of your data, so you can pinpoint where your model is not performing well, and then iterate and improve that model. This is extremely powerful as you're trying to find the best model for your use case. Here we can see everything that comes back to us: the trip start hours were split up by time, and we can see how well the model performed at each hour of the day.

So let's close that. There are a few more features down at the bottom that I'm just going to gloss over: it shows you the ROC curve right away, and then these tables actually give you a view
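To make the idea of slicing concrete, TFMA computes metrics over each cross-section of the data. In spirit that is like the following stdlib-only sketch, which groups toy predictions by a `trip_start_hour` key and reports accuracy per slice. TFMA's real API is far richer; this is only an illustration with invented data:

```python
from collections import defaultdict

# Toy (invented) examples: (trip_start_hour, predicted_tip, actual_tip)
examples = [
    (8, 1, 1), (8, 0, 1), (8, 1, 1),
    (17, 0, 0), (17, 1, 0),
    (23, 1, 1), (23, 0, 0),
]

# Accumulate correct/total counts per slice key (hour of day).
counts = defaultdict(lambda: [0, 0])  # hour -> [correct, total]
for hour, predicted, actual in examples:
    counts[hour][0] += int(predicted == actual)
    counts[hour][1] += 1

# Per-slice accuracy pinpoints where the model underperforms,
# e.g. a weak slice here at hour 17.
accuracy_by_hour = {h: c / t for h, (c, t) in sorted(counts.items())}
print(accuracy_by_hour)
```

Seeing a slice with markedly lower accuracy, like hour 17 in this toy data, is the cue to go back, investigate that segment, and iterate on the model.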
into the data that was used to train your model, for each of your runs.

Okay, now let's go back to AI Hub. In summary, I hope you saw how powerful AI Hub and Kubeflow Pipelines can be when they work together. We have a lot of components and pipelines already up and available for you on AI Hub that you can use today, so please take a chance to check that out. Now I'll hand it back to Justin.

Thank you very much, Sasha. Okay, so hopefully by now you've been able to see how impactful sharing and finding assets within the hub can be, whether that's APIs you want to use within a smart app, or sample notebooks to learn new techniques from a notebook setting as a data scientist, or a production experimentation framework for building end-to-end machine learning pipelines in a scalable, industrial fashion, and really understanding how to iterate on those pipelines and debug them. It's a really fantastic tool, all starting with the hub.

Hopefully you can see that, but we don't want you to just take our word for it: we've actually been testing this with a number of early-access partners, and we really want to thank them for all of their hard work in helping us iterate on this. We also touched on a number of topics here, and we know we were going lightning fast, so we encourage you to go see any of our sister presentations here at Next; if you missed some of them, they will all be on YouTube later, so you can watch each of these presentations there as well.

And in closing, the AI Hub is now open as a beta product for you to use. Anybody with a Google account can use any of the assets there, and if you've got a G Suite account you can easily
share them within your organization, and we'll be rolling out access controls to make that much more widely available as you see fit. We want to partner with you and we want your feedback: this is the beginning of a joint journey that we want to take with you, so we really value your input. There's a feedback button on the top right-hand side of the hub, and we would love for you to tell us what you like, what you wish we would add, whatever it is; that's very valuable feedback for us, and we hope to go on this journey with you. If you have content that you would like to share with the world and publish as part of the public hub, we are completely open to that, so please contact us: there's a form at the bottom of the hub where you can apply, and then we can start that conversation with you. And in closing, we just want to thank you very much for the opportunity to share this with you, and we hope to talk to you more.