TensorFlow Extended (TFX) & Hub (TensorFlow @ O’Reilly AI Conference, San Francisco '18)
Hi everyone, my name is Clemens. I'm a product manager in Google Research, and today I'm going to talk about TensorFlow Extended (TFX), which is an end-to-end machine learning platform that we built around TensorFlow at Google.

I'd like to start this talk with a block diagram, and with the small yellow, or orange, box in it. That box represents what most people care about and talk about when they talk about machine learning: the machine learning algorithm, the structure of the network you're training, how you choose what type of machine learning problem you're solving. That's what you talk about when you talk about TensorFlow and using TensorFlow. However, in addition to the actual machine learning and to TensorFlow itself, you have to care about so much more. All of these other things around the actual machine learning algorithm have to be in place, and you have to get them right, in order to actually do machine learning in a production setting. You have to care about where you get your data from, that your data are clean, how you transform them, how you train your model, how you validate your model, and how you push it out into a production setting and deploy it at scale.

Now, some of you may be thinking: "I don't really need all of this. I only have my small machine learning problem, I can live within that small orange box, and I don't really have these production worries as of today." But I'm going to propose that all of you will have that problem at some point in time, because what I've seen time and time again is that research and experimentation today is production tomorrow. Research and experimentation never just ends there; eventually it becomes a production model, and at that point you actually have to care about all of these things.

The other side of this coin is scale. Some of you may say: "Well, I do
All of my machine learning on a local machine in a notebook everything, fits into. Memory I don't, need all of these heavy tools. To. Get started but. Similarly. Small. Scale today is large, scale tomorrow at. Google we have this problem all the time that's, why we always design for scale from day one because. We always have product team missed and say well we have only small amount of data it's fine but, in a week later the product picks up. Suddenly they need to distribute the workload to hundreds of machines and, then. They. Have all of these concerns. Now. The good, news is that we. Built something. For this and tfx is the solution to this problem so. This is a block. Diagram that we published in one of our papers that, is a very simplistic view of the platform. But, it gives you a broad sense of what. The different components, are, now. FX is a very large platform, and it contains. Up a lot of components and a lot of services so. The. Paper that we published and also what I'm going to discuss today is, only a small subset of this. But. Building. Tf-x and deploying, it at Google has has had a profound impact of, how, fast product, teams at Google can train. Machine learning models and deploy them in production and. How. You picture this machine learning has become at, Google you'll. See later have a slide to show to give you some sense of how widely tf-x is being used and. It. Really. Has. Accelerated all of our efforts to being an AI first company and using, machine learning and all of our products. Now. We. Use defects broadly, at Google and we. Are very committed to make all of this available to you through, open sourcing it so, the boxes that I just highlighted, in blue are the components. That we've already open, sourced. Now. I want, to highlight an important thing tf-x. Is a real solution for, real problems, sometimes. People, ask me well. 
"Is this the same code that you use at Google for production, or did you just build something on the side and open source it?"
All of these components share the same code base that we use internally for our production pipelines. Of course, there are some things that are Google-specific to our deployments, but all of the code that we open source is the same code that we actually run in our production systems. So it's really code that solves real problems for Google.

The second thing to highlight is that, so far, we've only open sourced libraries. You can use each one of these libraries, but you still have to glue them together; you still have to write some code to make them work in a joint manner. That's just because we haven't open sourced the full platform yet. We're actively working on this, but I would say so far we're about 50% there. These blue components are the ones that I'm going to talk about today.

But first, let me talk about some of the principles that we followed when we developed TFX, because I think it's very informative to see how we think about these platforms and how we think about having impact at Google.

The first principle is flexibility, and there's some history behind this. The short version of that history, and I'm sure it played out at other companies as well, is that there used to be problem-specific machine learning platforms. Just to be concrete: we had a platform that was specifically built for large-scale linear models, so if you had a linear model that you wanted to train at large scale, you used that piece of infrastructure. We had a different piece of infrastructure for large-scale neural networks. But product teams usually don't have just one kind of problem, and they usually want to train multiple types of models; so if they wanted to train linear and deep models, they had to use two entirely different technology stacks. Now, in TensorFlow, as I'm sure you know, we can express any kind of machine learning algorithm. We can train TensorFlow models that are linear, that are deep, unsupervised
and supervised; we can train tree models. Any single algorithm that you can think of either has already been implemented in TensorFlow or can be implemented in TensorFlow. So, building on top of that flexibility, we have one platform that supports all of these different use cases from all of our users, and they don't have to switch between platforms just because they want to implement different types of algorithms.

Another aspect of this is the input data. Of course, product teams don't always have only image data or only text data; in some cases they may even have both, so they have models that take in both images and text and make a prediction. So we needed to make sure that the platform we build supports all of these input modalities and can deal with images, text, the sparse data that you will find in logs, even videos. With a platform as flexible as this, you can ensure that all of the users can represent all of their use cases on the same platform and don't have to adopt different technologies.

The next aspect of flexibility is how you actually run these pipelines and how you train models. One very basic use case is: you have all of your data available, you train a model once, and you're done. This works really well for stationary problems. A good example is training a model that classifies whether there's a cat or a dog in an image. Cats and dogs have looked the same for quite a while, and in 10 years they will look very much the same as they do today, so that same model will probably still work well in a couple of years; you don't need to keep it fresh. However, if you have a non-stationary problem, where data changes over time (recommendation systems have new types of products to recommend, new types of videos get uploaded all the time), you actually have to retrain these models to keep them fresh.
One way of doing this is to train a model on a subset of your data; once you get new data, you throw that model away and train a new one, either on the superset (the old and the new data) or only on the fresh data, and so on. Now, that has a couple of disadvantages, one of them being that you throw away the learning from previous models. In some cases you're wasting resources, because you have to retrain over the same data over and over again. And because a lot of these models are not deterministic, you may end up with vastly different models every time: because of the way they're being initialized, you may end up in different optima every time you retrain. So a more advanced way of doing this is to start training with your data, and then, when new data arrives:
initialize your model from the weights of the previous model and continue training. We call that warm starting of models. That may seem trivial; you might say this is just a continuation of your training run, you just added more data and continued. But depending on your model architecture, it's actually non-trivial. In some cases you may only want to warm-start embeddings: you may only want to transfer the weights of the embeddings to a new model and initialize the rest of your network randomly. So there are a lot of different setups that you can achieve with this. But with warm starting you can continuously update your models: you retain the learning from previous models; you can even, depending on how you set it up, base the model more on recent data while still not throwing away the old data; and you always have a fresh model that's updated for production.

The second principle is portability, and there are a few aspects to this. The first one is obvious: because we rely on TensorFlow, we inherit the properties of TensorFlow, which means you can already train your TensorFlow models in different environments and on different machines. You can train a TensorFlow model locally, or you can distribute it in a cloud environment; and by cloud I mean any setup of multiple clusters, it doesn't have to be a managed cloud. You can train or perform inference with your TensorFlow models on the devices that you care about today, and you can also train and deploy them on devices that you may care about in the future.

Next is Apache Beam. When we open sourced a lot of our components, we faced the challenge that internally we use a data processing engine that allows us to run these large-scale data processing pipelines, but in the open source world, and in all of your companies, you may use different data processing systems. So we were looking for a portability layer.
Apache Beam provides us with that portability layer. It allows us to express a data processing graph once, with the Python SDK, and then use different runners to run those same data graphs in different environments. The first one is the direct runner, which allows you to run these data graphs on a single machine.
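To make the "define the graph once, run it on interchangeable runners" idea concrete, here is a toy sketch in plain Python. This is not the real Apache Beam API; the `DirectRunner` class and `count_valid` pipeline below are made up for illustration. With actual Beam you would build the graph with the Python SDK and select a runner (direct, Dataflow, Flink) via pipeline options.

```python
# Toy illustration of Beam-style runner portability (NOT the real Beam API).
# The "data graph" is defined once; the runner decides where it executes.

def count_valid(records):
    """The pipeline definition: parse -> filter -> count."""
    parsed = (int(r) for r in records)            # parse step
    non_negative = (x for x in parsed if x >= 0)  # filter step
    return sum(1 for _ in non_negative)           # aggregate step

class DirectRunner:
    """Runs the graph in-process on one machine, like Beam's direct runner."""
    def run(self, pipeline, data):
        return pipeline(data)

# A Dataflow/Flink/Spark runner would ship the same `count_valid`
# definition to a distributed cluster instead of running it locally.
result = DirectRunner().run(count_valid, ["3", "-1", "7"])  # -> 2
```

The point of the abstraction is that the pipeline definition never changes; only the runner does.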
It's also the runner that's used in notebooks; I'll come back to that later, but we want to make sure that all of our tools work in notebook environments, because we know that that's where data scientists start. Then there's the Dataflow runner, with which you can run these same pipelines at scale on Cloud Dataflow. There's a Flink runner that's being developed right now by the community; there's a ticket that you can follow for status updates on this, and I'm told it's going to be ready at some point later this year. The community is also working on more runners, so these pipelines are becoming more portable and can be run in more environments.

In terms of cluster management and managing your resources, we work very well together with Kubernetes and the Kubeflow project, which actually has the next talk, right after mine. If you're familiar with Kubernetes, there's something called Minikube, with which you can deploy your Kubernetes setup on a single machine. Of course, there are managed Kubernetes solutions such as GKE, and you can run your own Kubernetes cluster on-prem if you want to. Again, we inherit the portability aspects of Kubernetes.

Another extremely important aspect is scalability, and I've alluded to it before. I'm sure many of you know the problem: there are different roles in companies. Very commonly, data scientists work, sometimes on a downsampled set of data, on their local machines, maybe on a laptop in a notebook environment. And then there are data engineers, or product software engineers, who either take the models that were developed by data scientists and deploy them in production, or try to replicate what the data scientists did with different frameworks, because they work with a different toolkit. And there's this almost impenetrable wall between those two roles, because they use different tool sets.
There's a lot of friction in translating from one toolset to the other, or in actually moving things from the data science process to the production process. If you've heard the term "throw it over the wall": that usually does not have good connotations, but that's exactly what's happening. So when we built TFX, we paid particular attention to making sure that all of the tools we build are usable at a small scale. As you will see from my demos, all of our tools work in a notebook environment and on a single machine with small data sets, and in many cases, actually in all cases, the same code that you run on a single machine scales up to large workloads on a distributed cluster. The reason why this is extremely important is that there's no friction in going from experimentation on a small machine to a large cluster, and you can actually bring those different functions together and have data scientists and data engineers work together, with the same tools, on the same problems, and not have a wall between them.

The next principle is interactivity. The machine learning process is not a straight line; at many points in this process you actually have to interact with your data, understand your data, and make changes. The visualization shown here is called Facets, and it allows you to investigate your data and understand it. And again, this works at scale. Sometimes when I show these screenshots, they may seem trivial when you think about small amounts of data that fit on a single machine; but if you have terabytes of data and you want to understand them, it's less trivial. On the other side, and I'm going to talk about this in more detail later, there is a visualization we have for understanding how your models perform, at scale; this is a screen capture from TensorFlow Model Analysis.

And by following these principles, we've built a platform
that has had a profound impact on Google and the products that we build, and it's really being used across many of our Alphabet companies; Google, of course, is only one company under the Alphabet umbrella. Within Google, all of our major products are using TensorFlow Extended to deploy machine learning in their products.

So with this, let's look at a quick overview (I'm going to take questions later) of the things that we've open sourced so far. This is the familiar diagram that you've seen before; I'm just going to turn all of these boxes blue and talk about each one of them. For data transformation, we have open sourced TensorFlow Transform.
TensorFlow Transform allows you to express your data transformations as a TensorFlow graph, and to apply these transformations at training and at serving time. Now, again, this may sound trivial, because you can already express your transformations with a TensorFlow graph. However, if your transformations require an analyze phase over your data, it's less trivial. The easiest example of this is mean normalization: if you want to mean-normalize a feature, you have to compute the mean and the standard deviation over your data, and then subtract the mean and divide by the standard deviation. If you work on a laptop with a dataset that's a few gigabytes, you can do that with NumPy and everything is great. However, if you have terabytes of data, and you actually want to replicate these transformations at serving time, it's less trivial. So Transform provides you with utility functions; for mean normalization there's one called scale_to_z_score. It's a one-liner: you say, I want to scale this feature such that it has a mean of 0 and a standard deviation of 1, and Transform creates a Beam graph for you that computes those statistics over your data. Beam handles computing them over your entire data set; then Transform injects the results of this analysis phase as constants into your TensorFlow graph, and creates a TensorFlow graph that does the computation needed. The benefit of this is that the TensorFlow graph that expresses this transformation can now be carried forward to training. At training time, you apply those transformations to your training data, and the exact same graph is also attached to the inference graph, such that at inference time the exact same transformations are done. That basically eliminates training/serving skew, because
now you can be entirely sure that the exact same transformation is being applied. It also eliminates the need for you to have code in your serving system that tries to replicate the transformation, because usually the code paths that you use in your training pipelines are different from the ones that you use in your serving system, since the serving system is optimized for very low latency.
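To make the analyze-then-transform idea concrete, here is a minimal pure-Python sketch. This is not the tf.Transform API (which expresses the same thing as a TensorFlow graph plus a Beam job); it only illustrates the shape of the computation: a full-pass analyze phase produces constants, and the identical transform is then applied at training and at serving time.

```python
import math

def analyze(values):
    """Full-pass 'analyze' phase: compute mean and standard deviation once."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, math.sqrt(var)

def scale_to_z_score(value, mean, std):
    """'Transform' phase: the same constants are applied at training and
    at serving time, which is what avoids training/serving skew."""
    return (value - mean) / std

training_data = [2.0, 4.0, 6.0, 8.0]
mean, std = analyze(training_data)        # analyze once, over all the data

transformed = [scale_to_z_score(v, mean, std) for v in training_data]
serving_value = scale_to_z_score(5.0, mean, std)  # identical transform at serving
```

In tf.Transform, the analyze step runs as a Beam pipeline over the full data set, and the resulting constants are baked into the exported graph instead of living in application code.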
This is just a code snippet of how such a preprocessing function can look. I just spoke about scaling to the z-score; that's mean normalization. string_to_int is another very common transformation: it creates a string-to-integer mapping by computing a vocabulary. And bucketizing a feature, again, is a very common transformation that requires an analyze phase over your data. All of these examples are relatively simple, but think about one of the more advanced use cases, where you can actually chain transforms: you can do a transform of an already transformed feature, and Transform handles all of this for you. So there are a few common use cases: I've talked about scaling and bucketization; text transformations are very common, so if you want to compute n-grams, you can do that as well. A particularly interesting one is applying a saved model as a transform: it takes an already trained TensorFlow model and applies it as a transformation. You can imagine, if one of your inputs is an image and you want to apply an Inception model to that image to create an input for your model, you can do that with that function. So you can actually embed other TensorFlow models as transformations in your own TensorFlow model. All of this is available at tensorflow/transform on GitHub.

Next, let's talk about the trainer, and the trainer is really just TensorFlow; we're going to talk about the Estimator API and the Keras API. This is a code snippet that shows you how to train a wide-and-deep model. A wide-and-deep model combines a deep part, which is just a feed-forward neural network, with a linear part. In the case of this estimator, it's a matter of instantiating the estimator, and then the Estimator API is relatively straightforward: there's a train method that you can call to train the model.
The estimators shown here are the ones that are in core TensorFlow, so if you just install TensorFlow you get DNN, Linear, DNNLinearCombined, and boosted trees, which is a gradient-boosted-tree implementation. But if you go searching in TensorFlow contrib, or in other repositories under the tensorflow organization on GitHub, you will find many, many more implementations of very common architectures within the estimator framework.
Now, the estimator has a method, currently in contrib but moving to the core Estimator API with 2.0, that exports a TensorFlow graph as a SavedModel, such that it can be used by TensorFlow Model Analysis and TensorFlow Serving. This is a code snippet from one of our examples of how this looks for an actual example; in this case it's the Chicago taxi data set. We just instantiate the DNNLinearCombined estimator, call train, and export a SavedModel to be consumed by downstream components.

Using tf.keras it looks very similar. In this case we use the Keras sequential API, where you can configure the layers of your network, and the Keras API is also getting a method called save_keras_model that exports the same format, the SavedModel, so that it can again be used by downstream components.

Model evaluation and validation is open sourced as TensorFlow Model Analysis (TFMA), and it takes that graph as an input: the graph exported from our estimator or Keras model flows as an input into TFMA, and TFMA computes evaluation statistics at scale, in a sliced manner. Now, this is another one of those examples where you may say, "well, I already get my metrics from TensorBoard." TensorBoard metrics are computed in a streaming manner during training, on mini-batches. TFMA uses Beam pipelines to compute metrics in an exact manner, with exactly one pass over all of your data. So if you want to compute your metrics over a terabyte of data with exactly one pass, you can use TFMA. In this case, you run TFMA for that model and some data set, and if you just call the method render_slicing_metrics with the result by itself, the visualization looks like this. I pulled this up for one reason, and that reason is to highlight what we mean by sliced metrics. This is the metric that you may be used to, when someone trains a model and tells you: my model has a 0.94 accuracy, or a 0.92 AUC.
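As a quick aside on the earlier point about exact versus streaming metrics: averaging per-batch metrics weights each mini-batch equally, so with unequal batch sizes the result can differ from the metric computed exactly over all examples, which is what a one-pass computation gives you. The toy numbers below are made up for illustration.

```python
# Toy example: mean of per-batch accuracies vs. exact accuracy over all data.
batches = [
    ([1, 1, 1, 1], [1, 1, 1, 1]),  # batch of 4: all correct -> accuracy 1.0
    ([1, 0],       [0, 0]),        # batch of 2: 1 of 2 correct -> accuracy 0.5
]

def batch_accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Streaming-style estimate: unweighted mean of per-batch accuracies.
streaming = sum(batch_accuracy(p, l) for p, l in batches) / len(batches)

# Exact one-pass accuracy over every example (the kind of exact,
# full-dataset computation TFMA does at scale with Beam).
correct = sum(p == l for preds, labels in batches
              for p, l in zip(preds, labels))
total = sum(len(labels) for _, labels in batches)
exact = correct / total

print(streaming, exact)  # 0.75 vs 0.8333...
```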
That's an overall metric, over all of your data: the aggregate of those metrics for your entire model. It may tell you that the model is doing well on average, but it will not tell you how the model is doing on specific slices of your data. If you instead render those slices for a specific feature, in this case slicing these metrics by trip_start_hour (again, this is the Chicago taxi data set), you get a visualization in which, in this case, we look at a histogram. We also filter for buckets that have at least 100 examples, so that we don't get noisy low-count buckets, and then you can actually see how the model performs on different slices of feature values, for each specific trip start hour. This particular model is trained to predict whether a tip is more or less than 20%, and as you've seen, overall it has a very high accuracy and a very high AUC. But it turns out that on some of these slices it actually performs poorly: if the trip start hour is 7, for some reason the model doesn't really have a lot of predictive power for whether the tip is going to be good or bad. Now, that's informative to know.
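Computing sliced metrics instead of one aggregate can be sketched like this; the data below is a hypothetical toy set, grouping accuracy by a trip_start_hour feature as in the Chicago taxi example, not the talk's actual numbers.

```python
from collections import defaultdict

# Each example: (trip_start_hour, predicted_label, true_label). Toy data.
examples = [
    (7, 1, 0), (7, 0, 1), (7, 1, 1),   # hour 7: model is mostly wrong
    (9, 1, 1), (9, 0, 0), (9, 1, 1),   # hour 9: model is right
]

def sliced_accuracy(examples):
    """Accuracy per slice of the chosen feature, instead of one aggregate."""
    hits, counts = defaultdict(int), defaultdict(int)
    for hour, pred, label in examples:
        hits[hour] += (pred == label)
        counts[hour] += 1
    return {hour: hits[hour] / counts[hour] for hour in counts}

overall = sum(p == l for _, p, l in examples) / len(examples)
per_slice = sliced_accuracy(examples)
# `overall` looks decent (~0.67), but the hour-7 slice exposes
# a weak spot (~0.33) that the aggregate number hides.
```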
Maybe that's just because there's more variability at that time; maybe we don't have enough data during that time. So this is really a very powerful tool to help you understand how a model performs.

Some other visualizations that are available in TFMA are shown here; we haven't shown these in the past. The calibration plot, which is the first one, shows you how your model's predictions behave against the label; you would want your model to be well calibrated and not to be over- or under-predicting in a specific area. The prediction distribution just shows that distribution, and precision/recall and ROC curves are commonly known. Again, this is the plot for the overall model on the entire eval data set, and again, if you specify a slice here, you can get the same visualization for only a specific slice of your features. Another really nice feature is that if you have multiple models, or multiple eval sets over time, you can visualize them as a time series. In this case we have three models, and for all three models we show accuracy and AUC. You can imagine, if you have long-running training jobs, and, as I mentioned earlier, in some cases you want to refresh a model regularly and you train a new model every day, then after a year you end up with 365 models, and you can see how they perform over time.

So this product is called TensorFlow Model Analysis, and it's also available on GitHub; everything that I've just shown you is already open sourced.

Next: serving, which is called TensorFlow Serving. Serving is one of those areas where it's relatively easy to set something up that performs inference with machine learning models, but it's harder to do this at scale. One of the most important features of TensorFlow Serving is that it is able to deal with multiple models.
This is mostly used for upgrading a model version: if you are serving a model and you want to update it to a new version, the server needs to load the new version and then switch the requests over to it. That's also where isolation comes in: you don't want the process of loading a new model to impact the model that's currently serving requests, because that would hurt performance. There are batching implementations in TensorFlow Serving that make sure throughput is optimized: in most cases, when you have a service with high requests per second, you don't want to perform inference on a batch of size one; you can instead do dynamic batching. TensorFlow Serving is, of course, adopted widely within Google, and outside of Google there are also a lot of companies that have started using it.

What does this look like? Again, the same graph that we exported from either our estimator or a Keras model goes into the TensorFlow model server. TensorFlow Serving comes as a library, so you can build your own server if you want, or use the libraries to perform inference; we also ship a binary.
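As an aside, the dynamic batching idea mentioned above can be sketched as follows: instead of running inference per request, the server queues requests and flushes them as one batched inference call when the batch is full. This is a simplified, single-threaded sketch with made-up names, not TensorFlow Serving's actual scheduler (which also flushes on a timeout to bound latency, and runs concurrently).

```python
# Simplified sketch of dynamic batching (NOT TF Serving's real scheduler).
MAX_BATCH_SIZE = 4

def model_infer(batch):
    """Stand-in for one batched inference call; here it just doubles inputs."""
    return [2 * x for x in batch]

class Batcher:
    def __init__(self):
        self.pending = []   # queued (request_id, input) pairs
        self.results = {}   # request_id -> output

    def submit(self, request_id, value):
        self.pending.append((request_id, value))
        if len(self.pending) >= MAX_BATCH_SIZE:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        ids, values = zip(*self.pending)
        for rid, out in zip(ids, model_infer(list(values))):
            self.results[rid] = out   # route each output back to its request
        self.pending = []

batcher = Batcher()
for i, v in enumerate([1, 2, 3, 4, 5]):
    batcher.submit(i, v)
batcher.flush()  # flush the leftover partial batch (a timeout in a real server)
```

Batching like this trades a little per-request latency for much higher accelerator utilization and throughput.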
Is The command of how you just run that binary tell. It what port to listen to and what, model to load and. In. This case it, will load that model and bring up that server and, this. Is a code snippet again from our Chicago Tex example of how you put together a request and. Make, in this case HT RPC call to. That server. Now. Not, everyone is using chair PC, for. Whatever reason. So. We built a REST API there. Was the top request on github for awhile, and. We. Built. It such that the tenth of a model server binary. Ships. With both the TR PC and the rest api and it, supposed to same api's. As. The. Chair PC. One. So, this is what the API looks like so you specify the model name and, as. I just mentioned it also supports classify regression predict and, here's. Just two examples of, like, an iris model with the classifier, API or an MS model with the particular API. Now. One, of the things that this enables is that instead, of protocol three JSON, which. Is a little more verbose than most people would like you. Can actually now use idiomatic, Jason. That. Seems. More intuitive to a lot of developers to them were more used to this and. As. I just mentioned the model server ships with this by, default so, when you bring up the tencel for model server you just specify the rest api port and. Then. In this case this. Is just an example of how you can make a request to, this model from the command line. Last. Time I spoke about this was, earlier this year and I, had to make an announcement that it will be available but now we've made. That available earlier, this year so. All of this is now in, our github repository. For. You to use. Now. What does that look like if we put all of this together. It's. Relatively straightforward. So. In this case you, start with the training data you use tensorflow transform, to express your, transform. Graph that, will actually deal with the, analyze phase to. Compute the metrics it, will output the transformed graph itself, and in. 
In some cases, you can also materialize the transformed data. Now, why would you want to do that? You pay the cost of materializing your data again, but in some cases, where throughput for the model at training time is extremely important, namely when you use hardware accelerators, you may want to materialize expensive transformations.
So if you use GPUs or TPUs, you may want to materialize all of your transforms, such that at training time you can feed the model as fast as you can. From there, you can use an estimator or a Keras model, as I just showed you, to export your eval graph and your inference graph; that's the API that connects the trainer with TensorFlow Model Analysis and TensorFlow Serving. All of this works today; I'll have a link for you in a minute that has an end-to-end example of how you use all of these products together.

As I mentioned earlier, for us it's extremely important that these products work in a notebook environment, because we really think that the divide between data scientists and product engineers or data engineers should not be there. So you can use all of this in a notebook, and then use the same code to apply it in a distributed manner on a cluster. For the Beam runner, as I mentioned, you can run it on a local machine, in a notebook, and on Cloud Dataflow; the Flink runner is in progress, and there are also plans to develop a Spark runner, so that you can deploy these pipelines on Spark as well.

This is the link to the end-to-end example. It currently lives in the TensorFlow Model Analysis repo, so you will find it on GitHub there, or you can use the short link that takes you directly to it.

But then I hear some people saying: wait, we actually want more. And I totally understand why you would want more, because maybe you've read the paper, and you've certainly seen that diagram, because it was on a lot of the slides that I just showed you, and we just talked about four of these things. What about the rest? As I mentioned earlier, it's extremely important to highlight that these are just some of the libraries that we use; this is far from being an integrated platform.
A, result if you actually use this together you will see in the, end example, it works really well but. It. Can be much much easier once they're integrated, and actually there is a layer that that, pulls, all of these components together and makes it a good intent experience. So. If announced before that we will release next, the, components, for data analysis, and validation. There's. Not much more I can say about this today other than this. Will be available really really soon and I'll. Leave it at that and then. After that the next phase is actually the, framework that pulls all of these components together that. Actually will make it much much easier to configure these pipelines because, then it's going to be shared configuration, layer to, configure all of these components, and, actually pull all of them together such. That the work as a pipeline and not as individual, components. And. I. Think you get the idea so we are really committed to making all of this available to the community because. We've seen the profound impact that it has had at Google, and for, our products and we're, really excited to see what you can do with them in your.
So these are the GitHub links for the products that I just discussed, and again, all of the things that I showed you are already available. Now, because we have some time, I can also talk about TensorFlow Hub. TensorFlow Hub is a library that enables you to publish, consume, and discover what we call modules. I'm going to come to what we mean by modules, but they're really reusable parts of machine learning models. I'm going to start with some history, and I think a lot of you can relate to this; I actually heard a talk today that mentioned some of these aspects. In some ways, machine learning and machine learning tools are ten to fifteen years behind the tools that we use for software engineering. Software engineering has seen rapid growth in the last decade, and as more and more developers started working together, we built tools and systems that made collaboration much more efficient: we built version control, we built continuous integration, we built code repositories. Machine learning is now going through the same growth; more and more people want to deploy machine learning, but we are now rediscovering some of the challenges that we've seen in software engineering. What is the version control equivalent for these machine learning pipelines? And what is the code repository equivalent? Well, the code repository equivalent is the one that I'm going to talk to you about right now: TensorFlow Hub. Code repositories are an amazing thing because they enable a few really good practices. The first one is that if, as an engineer, I want to write code and I know that there's a shared repository, usually I would look first to see if it has already been implemented. So I'd search on GitHub or somewhere else to see if someone has already built the thing that I'm about to build.
Secondly, if I know that I'm going to publish my code on a code repository, I may make different design decisions; I may build it in such a way that it's more reusable and more modular. That usually leads to better software in general. And it also increases the velocity of the entire community, whether it's a private repository within a company or a public, open-source repository such as GitHub; code sharing is usually a good thing. Now, TensorFlow Hub is the equivalent for machine learning. In machine learning you also have code, you have data, and you have models, and you would want a central repository that allows you to share these reusable parts of machine learning between developers and between teams. And if you think about it, in machine learning this is even more important than in software engineering, because machine learning models are much, much more than just code. There's the algorithm that goes into these models, there's the data, there's the compute power that was used to train these models, and then there's the expertise of the people who build these models, which is scarce today.
And I just want to reiterate this point: if you share a machine learning model, what you're really sharing is a combination of all of these. If I spend fifty thousand GPU hours to train an embedding and share it on TensorFlow Hub, everyone who uses that embedding can benefit from that compute power. They don't have to go recompute that same model on those same data. So all four of these ingredients come together in what we call a module, and that's the unit that we care about, that can be published on TensorFlow Hub and can then be reused by different people in different models. Those modules are TensorFlow graphs, and they can also contain weights. What that means is that they give you a reusable piece of TensorFlow graph that has the trained knowledge of the data and the algorithm embedded in it. Those modules are designed to be composable: they have common signatures so that they can be attached to different models. They are reusable: they come with the graph and the weights. And importantly, they're also retrainable: you can actually backpropagate through these modules, and once you attach them to your model, you can customize them to your own data and your own use case. So let's go through a quick example for text classification. Let's say I'm a startup and I want to build a new model that takes restaurant reviews and tries to predict whether they are positive or negative. In this case we have a sentence, and if you've ever tried to train one of these text models, you know that you need a lot of data to actually learn a good representation of text. So in this case, we just want to put in a sentence, and we want to see whether it's positive or negative. We want to reuse the code and the graph, we want to reuse trained weights from someone else who has done the work before us, and we also want to do this with far less data than is usually needed.
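The composability point can be illustrated with a toy sketch in plain Python. This is not the TF Hub API: `tiny_embedding_module` and `classify` are invented names, and the two-dimensional "embedding" is deliberately crude. The idea it shows is that any module mapping a batch of strings to fixed-size vectors can be plugged into the same downstream model, because they share a common signature.

```python
# Toy illustration of composable modules via a common signature.
# A "module" here is any function: list of strings -> list of vectors.

def tiny_embedding_module(sentences):
    # Stand-in for a learned text embedding: each sentence becomes a
    # 2-dimensional vector of simple surface statistics.
    return [[len(s), s.count(" ") + 1] for s in sentences]

def classify(embed_fn, sentences):
    # The downstream classifier depends only on the module's signature,
    # not on which module it is, so modules are interchangeable.
    vectors = embed_fn(sentences)
    return [1 if length > 5 else 0 for length, _words in vectors]

preds = classify(tiny_embedding_module, ["great food", "bad"])
print(preds)  # [1, 0]
```

In the real library, that common signature is what lets you swap, say, one sentence encoder for another without touching the classifier built on top of it.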
Examples of these text modules that are already published on TensorFlow Hub are the Universal Sentence Encoder and language models, and we've actually added more languages to these; word2vec is a very popular type of model as well. The key idea behind TensorFlow Hub, similarly to code repositories, is that the latest research can be shared with you as fast and as easily as possible. The Universal Sentence Encoder paper was published by some researchers at Google, and in that paper the authors actually included a link to TensorFlow Hub with the embedding for that Universal Sentence Encoder. That link is like a handle that you can use. So in your code, you now want to train a model that uses this embedding, in this case with a DNN classifier. It's one line to say: I want to pull from TensorFlow Hub a text embedding column with this module. Let's take a quick look at what that handle looks like. The first part is just the TF Hub domain; all of the modules that we and some of our partners publish will show up on tfhub.dev. The second part is the author, so in this case Google published this embedding. universal-sentence-encoder is the name of this embedding, and the last piece is the version, because TensorFlow Hub modules are immutable. Once they're uploaded, they can't change, because you wouldn't want a module to change underneath you when you retrain a model; that's not good for reproducibility. So if and when we upload a new version of the Universal Sentence Encoder, this version number will increment, and then you can change it in your code as well. But just to reiterate this point: it's one line to pull this embedding column from TensorFlow Hub and use it as an input to your DNN classifier.
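Since the handle structure is spelled out here, a tiny helper makes it explicit. This is plain Python, and `parse_handle` is a made-up name for illustration, but the handle itself is the real one for the Universal Sentence Encoder discussed in the talk.

```python
# A TF Hub handle is a URL with a fixed structure:
#   https://tfhub.dev/<publisher>/<module-name>/<version>

def parse_handle(handle):
    parts = handle.rstrip("/").split("/")
    return {
        "domain": parts[2],        # tfhub.dev
        "publisher": parts[3],     # e.g. google
        "name": parts[4],          # e.g. universal-sentence-encoder
        "version": int(parts[5]),  # modules are immutable; versions increment
    }

info = parse_handle("https://tfhub.dev/google/universal-sentence-encoder/1")
print(info["publisher"], info["name"], info["version"])
# google universal-sentence-encoder 1
```

Because modules are immutable, pinning the version in the handle is what gives you reproducibility: the same handle always resolves to the same graph and weights.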
And now you've basically benefited from the expertise and the research that was published by the Google research team for text embeddings. I mentioned earlier that these modules are retrainable. If you set trainable to true, the model will actually backpropagate through this embedding and update it as you train with your own data, because in many cases you still have some small amount of data that you want to train on, so that the model adapts to your specific use case. And if you take the same URL, the same handle, and type it into your browser, you end up on the TensorFlow Hub website and see the documentation for this module. So the same handle that you saw in the paper, you can use in your code as a one-liner to use this embedding, and you can put it in your browser to see the documentation for this embedding. So the short version of the story is that TensorFlow Hub really is the repository for reusable machine learning models and modules. We have already published a large number of these modules, and the text modules are just one example. We have a large number of image embeddings that are cutting-edge, such as a neural architecture search module that's available, and there are also modules for image classification that are optimized for devices, so that you can use them on a small device. And we are working hard to keep publishing more and more of these modules: in addition to Google, we now have some modules that have been published by DeepMind, and we're also working with the community to get more and more modules up there. Again, this is available on GitHub; you can use this today. A particularly interesting aspect that we haven't highlighted so far, but that is extremely important, is that you can use the TensorFlow Hub library also to store and consume your own modules.
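As an aside, the trainable flag just described can be sketched conceptually in plain Python. This is not the real TF Hub mechanics: the `Module` class and `apply_gradient` are invented for illustration. The idea is simply that a module ships with pre-trained weights, and the flag decides whether your training loop is allowed to update them.

```python
# Conceptual sketch of a retrainable module (not the TF Hub API).

class Module:
    def __init__(self, weight, trainable=False):
        self.weight = weight        # stands in for pre-trained knowledge
        self.trainable = trainable

    def __call__(self, x):
        return self.weight * x

    def apply_gradient(self, grad, lr=0.1):
        # Only backpropagate into the module if it was marked trainable.
        if self.trainable:
            self.weight -= lr * grad

frozen = Module(weight=2.0, trainable=False)
tuned = Module(weight=2.0, trainable=True)
for m in (frozen, tuned):
    m.apply_gradient(grad=1.0)

print(frozen.weight, tuned.weight)  # 2.0 1.9
```

Freezing is the safe default when your own dataset is tiny; fine-tuning with the flag on is what lets the shared embedding adapt to your domain.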
You don't have to rely on the TensorFlow Hub platform and use only the modules that we've published. You can internally enable your developers to write out modules to disk or to some shared storage, and other developers can consume those modules. In that case, instead of the handle that I just showed you, you would simply use the path to those modules. And that concludes my talk. I will be up at the TensorFlow booth to answer any of your questions. Thanks.