AWS Summit DC 2021: Solve common business problems with AI and ML

- Good morning. - [Group] Good morning. - How many of you are out here at an AWS Summit for the first time? Raise your hand please.

Awesome. Welcome. Must be an exciting day.

Lots of sessions. Lots of fun. First of all, thank you so much for joining us here today. My name is Raju Penmatcha, I'm a Senior Solutions Architect for AI/ML at AWS. Today I'll be talking about how to solve common business problems using AI and machine learning.

The real star of today's session is Jason Davis, from the University of Arkansas. He'll be presenting how he solved his business problem using these technologies. So our agenda for today: I'll start with an overview of machine learning, followed by a list of key ingredients that make a good machine learning use case.

We'll then discuss some common use cases in machine learning. And after that, Jason will present how he was able to detect weeds in soybean crops using Amazon Rekognition Custom Labels. Now, you may have heard these technologies described in a number of ways.

So let's take a step back and level-set on what they are. AI is a way to describe a system that replicates tasks which would normally require human intelligence. Many times this involves some kind of complex business decision, or complex logic. And all AI applications produce a probabilistic outcome, like a prediction, or a decision that requires a high degree of certainty.

Almost all AI systems are built using machine learning, which uses lots of data in order to create and validate decision logic, known as a model. An AI application then feeds input into this model, and the model's output is a human-like decision. So machine learning is the underlying technology that is powering these intelligent systems.

Deep learning is a type of machine learning that uses a technique known as deep neural networks. These systems try to replicate how the human brain functions, and because of that, they're able to solve much more complex problems than was previously possible. AI moved from being an aspirational technology into the mainstream extremely fast.

For a long time this technology was available only to a few tech companies and hardcore academic researchers. But all of this changed as cloud technology entered the mainstream, and compute power and data became more available. Quite literally, machine learning started making an impact across almost all verticals.

Including finance, retail, real estate, fashion, healthcare, agriculture, and so on. So it moved from being on the periphery to a core part of almost every vertical. According to IDC, the AI and cognitive technologies spend for this year will exceed 50 billion US dollars. Gartner predicts that by 2024, 75% of companies will move from piloting to operationalizing AI.

And Deloitte says that more than half of companies think that AI will transform their business in the next three years. Now, for you to be successful with machine learning, you need to pick the right use case, and today I'll present a simple framework so you can do just that. There are a few hallmarks of a good machine learning use case.

So to get to this, you need to ask your team a few questions. The key is to balance speed with business value. You'll want to find a use case that can be completed in six to 10 months. You also want to make sure your use case is solving a real business problem, and is important enough for the business that it gets attention and adoption; if not, it's going to sit in the (indistinct) experiment category.

You'll also want to look for places where there's a lot of untapped data. And you need to make sure this problem can actually be solved with machine learning, and that you're not trying to fix something that isn't actually broken. So to help you find a good machine learning use case, I'll walk you through a simple technique with three dimensions against which you'll assess your use case.

Data readiness, business impact, and ML applicability. A use case that is high in business impact but low in data availability and ML applicability will leave you with an unhappy data scientist, as you can imagine. High availability of data and ML applicability, but little-to-no business impact, makes a good prototype experiment but will not bring meaningful value to your organization. So you'll want to find a use case that is high across these three categories; however, two out of three is also a great place to start. You can list your use cases in a table as shown.
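As a rough illustration of that table, here is a minimal Python sketch; the candidate use cases, scores, and ranking rule are hypothetical and only meant to show the idea of scoring candidates across the three dimensions and prioritizing the ones that are strong on all of them.

    # Hypothetical candidates scored 1-5 on the three dimensions from the talk.
    candidates = {
        "enterprise document search":      {"data_readiness": 4, "business_impact": 5, "ml_applicability": 4},
        "personalized product emails":     {"data_readiness": 5, "business_impact": 3, "ml_applicability": 5},
        "weed detection in drone imagery": {"data_readiness": 3, "business_impact": 5, "ml_applicability": 4},
    }

    # Rank by the weakest dimension first (a good use case should be strong on all three),
    # then by the total score as a tie-breaker.
    ranked = sorted(candidates.items(),
                    key=lambda item: (min(item[1].values()), sum(item[1].values())),
                    reverse=True)

    for name, scores in ranked:
        print(name, scores)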

And the use case that is high across these three dimensions can get a higher priority. Now, use cases vary by business needs, but today I'll walk you through three use cases that are common to our customers across multiple verticals.

Intelligent search, personalization, and augmenting repetitive human visual tasks. So let's talk about our first common use case, which is finding information quickly and accurately. In today's workplace, a lot of unstructured data is generated from text documents, web pages, images, audio, and video on almost a daily basis. Because a lot of enterprise data is unstructured, it's hard to search and discover, and hard to extract the insights you need to make predictions or business decisions.

Employees across departments in the organization, including finance, engineering, research, sales and marketing, customer service, and HR, all need very specific information to perform in their roles. Keyword-based search engines lack context, because of which they're often inaccurate, and according to IDC, 44% of the time employees do not find the information they need in order to do their job. So Amazon solves this enterprise search problem with Amazon Kendra.

Kendra is a highly accurate, simple-to-use intelligent search service that is powered by machine learning. It uses natural language understanding and reading comprehension to deeply understand content and questions. That way Kendra re-imagines enterprise search for your applications and webpages, so that your customers can find the information they're looking for, even when it is scattered across the organization in multiple locations and content repositories.
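For context, a minimal sketch of what asking Kendra a natural-language question looks like with boto3; the index ID and the question here are hypothetical placeholders.

    import boto3

    kendra = boto3.client("kendra")

    # Ask a natural-language question against an existing Kendra index (hypothetical ID).
    response = kendra.query(
        IndexId="12345678-1234-1234-1234-123456789012",
        QueryText="How do I submit a travel expense report?",
    )

    # Result items include suggested answers, matching documents, and FAQ hits.
    for item in response["ResultItems"]:
        title = item.get("DocumentTitle", {}).get("Text", "")
        print(item["Type"], title)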

Also, it is a fully-managed service, which means you don't need to provision any servers, and there are no ML models to build, train, or deploy. Leading industry names like TC Energy, 3M, Woodside Energy, and others have implemented Kendra in their production systems today to power their enterprise search. 3M uses Kendra to connect its data scientists across years of research, so they can innovate faster. Our second common use case is personalization. In a time when organizations are depending on digital channels more than ever, creating personalized user experiences and triggered marketing communications is essential to cut through the noise a customer faces on a daily basis.

According to a 2019 (indistinct) study, organizations that have implemented personalized recommendations and targeted communications have increased their revenue by five to 10%, and the efficiency of their marketing spend by 10 to 30%. However, many organizations are struggling to implement personalization for their users. Irrelevant communications and recommendations frustrate customers, leaving them dissatisfied and ultimately resulting in lost sales.

Now, Amazon.com pioneered personalization in 1998, when it offered an array of services for its book buyers through instant recommendations from its large catalog. Since then, Amazon has leveraged decades of research in personalization to improve customer experience across Amazon.com, Prime Video, Amazon Music, Kindle, and Alexa through recommended items and content widgets. AWS offers two approaches for organizations to implement personalized experiences for their customers. Customers that have data science teams and want deep customization

can use Amazon SageMaker, our ML service for building, training, and deploying any ML model quickly and efficiently. Customers that have smaller data science teams, or none at all, can quickly get started with Amazon Personalize. Amazon Personalize is the fastest and easiest way for you to implement personalization for your users. It enables you to quickly build applications using the same personalization technology used by Amazon.com, and with it you can deliver personalized user experiences in real time, at scale.

So you can engage your customers, convert them, and improve your revenue. Customers across verticals, like Subway, Lotte Mart, Pulselive, and Zalando, are using machine learning to implement personalization for their users. The Subway restaurant chain offers guests in over 100 countries quality ingredients and flavor combinations in the nearly seven million made-to-order sandwiches created daily; they use it to quickly and easily implement personalized recommendations for their customers. Zalando is using machine learning to steer their campaigns better, to generate personalized outfits, and to improve their customer experience through personalized recommendations.
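For context, a minimal sketch of what retrieving recommendations from a deployed Personalize campaign looks like with boto3; the campaign ARN and user ID are hypothetical placeholders.

    import boto3

    personalize_runtime = boto3.client("personalize-runtime")

    # Get the top 10 recommended items for a user from a deployed campaign (hypothetical ARN).
    response = personalize_runtime.get_recommendations(
        campaignArn="arn:aws:personalize:us-east-1:111122223333:campaign/my-campaign",
        userId="user-123",
        numResults=10,
    )

    for item in response["itemList"]:
        print(item["itemId"], item.get("score"))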

Our third use case today is augmenting repetitive human visual tasks. Extracting contextual information and metadata from your images and videos can not only help you with search and discovery, but also help you automate repetitive visual tasks, such as finding objects and scenes, locating individuals and their activities, identifying faces, and so on. Doing this manually is not only time-consuming, but also error-prone, complex, and expensive, especially when you have to analyze millions of images and videos. Amazon Rekognition makes it easy for you to add image and video analysis to your applications.

Amazon Rekognition is based on the same proven deep-learning technology built by Amazon's computer vision scientists to analyze billions of images and videos daily. This is a service that doesn't require you to have any machine learning expertise, and using it you can quickly analyze any image at will. Amazon Rekognition Custom Labels is an extension to Amazon Rekognition for when you'd like to analyze images or identify objects that are very specific to your business.

For example, using Custom Labels, you can classify machine parts on an assembly line, find your logos in social media posts, find animated characters in videos, distinguish healthy from infected plants, even locate your products on store shelves, and also classify documents such as passports or driver's licenses.

As well as, perhaps, detect damage in social posts based on visual characteristics. Now, Amazon Rekognition comes to you through an API that you can call directly, and the extended capabilities of Rekognition Custom Labels come to you through three simple steps. In step one, if you have labeled images, you can load them using the console or just by calling the API. But if you still need to label your images, the console gives you a guided experience for doing that as well. Once the images are loaded, in step two you can train a model.

It requires no coding, and no ML expertise is needed to do so. Once the model is trained and deployed, you can make inferences against it.
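For context, a minimal sketch of those last steps with boto3 once a Custom Labels model has been trained; the project version ARN and S3 location are hypothetical placeholders.

    import boto3

    rekognition = boto3.client("rekognition")
    MODEL_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/weed-detector/version/v1/1234567890"

    # Deploy the trained model so it can serve inference requests
    # (wait for the model status to become RUNNING before calling detection).
    rekognition.start_project_version(ProjectVersionArn=MODEL_ARN, MinInferenceUnits=1)

    # Analyze a single image stored in S3 with the custom model.
    result = rekognition.detect_custom_labels(
        ProjectVersionArn=MODEL_ARN,
        Image={"S3Object": {"Bucket": "my-imagery-bucket", "Name": "frames/frame_0001.jpg"}},
        MinConfidence=50,
    )

    for label in result["CustomLabels"]:
        print(label["Name"], label["Confidence"], label.get("Geometry", {}).get("BoundingBox"))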

It's fully managed and offers a scalable API, and you can get good results using just tens to hundreds of images, instead of tens of thousands of images. Autonet's mission is to automate the car insurance claims process. They use Rekognition Custom Labels today to detect the extent of damage on cars.

They have made this technology available to insurance companies, so that they can in turn improve their customer experience. (indistinct) is a young Nigerian company that is innovating in the finance and AI/ML spaces. They use Rekognition and its face comparison features to verify the identity of their applicants. Last but not least, the University of Arkansas will be using Rekognition Custom Labels to detect weeds in soybean crops in an automated manner, applying it on (indistinct). And Jason, in the next few minutes, will give you a deep dive on that use case.

Now, if you'd like to implement machine learning on your use cases, AWS experts will be happy to help you and your team derive insight and create value out of ML. All of you have come to the DC Summit today to learn, and you can continue this learning even after the Summit through AWS Training and Certification. We offer more than 65 courses at no cost, on demand.

As well as virtual instructor-led training classes. When you are ready, you can also sign up for the AWS machine learning certification, which will validate your skills with an industry-recognized credential. For more details, please visit AWS.training/machinelearning, and please look up the "AWS Ramp-Up Guide" when you're there. Thank you once again.

Now, I'll hand this over to Jason. - Thank you. Thank you, Raju.

I appreciate the introduction, and also your assistance throughout this project. I'm very excited to be here, sharing a little bit about our use case at the University of Arkansas,

where we used Rekognition Custom Labels to detect weeds in soybean crops. This is an interesting intersection between a continuously emerging technology, machine learning, that's spreading across industries, and a very old problem: weeds competing with the crops that we value and produce for consumers. Again, my name is Jason Davis, and I'm really wearing two hats today. I'm representing the University of Arkansas System Division of Agriculture as an employee,

where I'm an application technologist. What this means is I actually work in the fields with producers on their application equipment, specifically their pesticide sprayers applying pesticides, and their nutrient, or fertilizer, equipment applying fertilizers and soil nutrients. I try to help these producers maximize the efficiency of their systems to minimize input costs, and make sure that these nutrients and pesticides are applied at legally labeled rates, so that the systems operate most efficiently, maximizing profits and minimizing inputs. In that context, this project also had several contributors

whom I want to acknowledge: Karen Watts DiCicco, Dr. Tommy Butts, Ashley McCormick, and Noah Reed, who assisted either in merging my program's data into AWS, or in the fieldwork. AWS (indistinct) you probably often don't hear about.

Fieldwork, literally working with the dirt, but that's the case here. In addition to that, the second hat I'm wearing is the University of Arkansas Department of Geosciences and Center for Advanced Spatial Technologies, where I'm a Ph.D. student working on

merging geospatial data, satellite and drone imagery, and machine learning and AI workflows into the current workflows we run with our producers. The idea is: how can we continue to become more efficient with the swelling data available through these remote sensing artifacts? Within that context, I would like to recognize my committee members and my advisor, Dr. Jason Tullis, all of whom contributed to these workflows and assisted in this project.

And finally, I would like to introduce to you a couple of expert machine learning users: my kids, actually, who will be demonstrating the use of an object detection algorithm and extracting a use, or a purpose, from it.

(kids squealing "Oink, Oink, Oink") So these are aspiring Arkansas Razorbacks, right? Doing their modified hog call. What's always intrigued me about watching my kids work with common apps that you may have on your phone is how simple the interface is, and how they're extracting their entertainment and their value, in this particular case, out of that app. But it's doing some very technical things behind the scenes, right? This is amazing technology that's becoming more and more accessible across more industries.

And so this acts as kind of a loose inspiration for the project that we'll talk about today. You may have heard in the news in the last several years about super weeds in the agricultural industry. The one we're gonna talk about today is one prevalent in the mid-South.

In the mid-South United States, that's Palmer amaranth, commonly known as pigweed. This is a fierce competitor with our crops in agricultural production, competing for soil nutrients and water.

Outcompeting our crops will reduce crop yields significantly, and this weed can take over fields very quickly. To put this into context, this weed in competition with our soybean crops, meaning it's not even getting all the nutrients it needs, can produce 150 to 200 thousand seeds per plant for the next generation. So it's multiplying 150-to-200-thousand-fold with each generation, if you leave just one plant to come to maturity. As an example, if 10% of those seeds were to emerge, and the pesticide systems we currently have in place let just one plant escape,

you're still, the next year, dealing with a 150-to-200-thousand-fold addition to the problem, and you see how these super weeds can overtake fields very, very quickly. So timely observation of these emerging weeds is critical. Here's an example of a soybean field that doesn't even really look like a soybean field.

It's been completely taken over by this super weed. In addition to that, many of the tools that we've traditionally used, pesticides, herbicides specifically, have become less useful. The weeds themselves have learned to metabolize some of these chemicals, so we're losing some of the chemistry that we have used in the past.

So how do we currently assess these fields for the timely observations needed to trigger these pesticide applications? Currently we walk fields, either a producer or a scout hired by the producer. These are exclusively ground-based observations, and they generally look something like this: they'll walk a transect of that field, scanning left and right, and you can see that we only assess very small parts of that field.

Now, it's very difficult to cover tens of thousands of acres on a weekly basis doing it any other way. And this system works: we can generally catch those weeds early on and find a representative enough population to trigger that application. However, with current technology, with the use of UAS and drone imagery, we can actually capture most or all of the field. These drones and sensors are being used in agricultural fields today, oftentimes for plant health assessment or water management. But can they instead also be used to do weed assessments on the entire field? If you were to ask that question, you would have to manually inspect each image, or manually inspect the imagery as a whole, and that obviously becomes very time-consuming and cumbersome.

When I fly, say, an 80-acre field, I come back with 10 to 20 gigabytes of imagery. To do that over multiple fields, over multiple days, it becomes very cumbersome to do this analysis manually. So if there were a way to automate this system to either bring up reports, or to highlight the weeds so that you can scan for location, that would be very helpful.

And that's exactly what this project attempted to do: scan these fields using our raw drone imagery, processed through an object-detection algorithm, in this case Amazon Rekognition Custom Labels, and produce field imagery with labels on it indicating where the weeds are. So this is the context of our project.

With this, and with your business use case, it's important to understand what accuracy means in machine learning. Oftentimes, as you read through machine learning papers or reports, you'll see accuracies reported as precision and recall. What does that mean, and which end of the spectrum should you be on? You really need to understand what questions these particular accuracy metrics are asking. Precision, on one hand, is asking: when a prediction is made, how true is that prediction? Recall, on the other hand, asks a slightly different question: of the true objects in that frame, how many are captured by the algorithm? Essentially, it's measuring how many are missed.

A common example for framing this appropriately is your email inbox. Generally you have an algorithm that's filtering out your spam mail. If that algorithm were to lean very heavily on recall, you would have a report from that algorithm that says, I've captured 100% of your spam email, but in the process it might accidentally throw away that email from your boss about the emergency meeting this afternoon, right? That would be a problem.

So generally these algorithms will lean on the side of precision and say, 100% of what I captured was spam, erring on the side of letting a few spam messages slip into your inbox. You can see the need to filter by what type of accuracy you're looking for. On the other hand, a scenario where you might lean heavily on recall would be medical testing for diseases. Say five patients come to a doctor's office sick, three of which have a particular infection. A test leaning heavily on precision would say, everything I flagged really was this particular disease, but I missed one of the infected patients. By leaning heavier on recall instead, you end up capturing 100% of the patients actually infected, although your precision may fall short and you have some false positives. So understanding your question, your data, and what type of metric you need to base your results on is important.
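To make those two metrics concrete, here is a small worked example based on the clinic scenario; the counts are illustrative, matching a recall-leaning test that flags all three infected patients plus one healthy one.

    # Three of five patients are truly infected; a recall-leaning test flags four people:
    # all three true cases plus one false positive.
    true_positives = 3
    false_positives = 1
    false_negatives = 0

    precision = true_positives / (true_positives + false_positives)  # 0.75: a healthy patient got flagged
    recall = true_positives / (true_positives + false_negatives)     # 1.00: no infected patient was missed

    print(f"precision = {precision:.2f}, recall = {recall:.2f}")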

In this use case, because of the timely observations necessary in our fields and the need to not miss weeds, we wanted to lean very heavily on recall. Mistaking a few soybean plants for a weed in the process is acceptable in this case. We did that using three different tests, which I'll go a little bit deeper on in a second, and you may find some parallels with some of your data. Just to provide some exposure to what you might see in the user interface: the imagery you're gonna use is broken into training and testing data.

In this case, we used high-resolution RGB imagery acquired by low-altitude unmanned aircraft system, UAS, flights at 60 and 90 feet. We broke the data into roughly 80% training data and 10 to 20% testing data, and the model was never able to see the testing data until it was testing the parameters it produced. Here are some example outputs in our use case from the field imagery. Again, this is an overhead perspective from the drone.

You see three images here that have been labeled by Rekognition Custom Labels. The green bounding boxes are actual weeds that have been identified. The red box on the far-right image is a weed that was missed, a false negative in this case. You can see that it did a pretty good job. This was an initial test we did, just kind of a proof of concept, and we only used 500 images in it.

This is really amazing, in my opinion. The technology of five or 10 years ago would have required hundreds of thousands, if not millions, of images to produce these types of results. But through transfer learning, much of that image structure and the understanding of what to look for is already built into the system, and minor tweaks to the tail-end parameters are all that's being adjusted with these very small image sets. In this case, 500 images produced a precision of 0.97, or 97%, essentially.

And it obtained nearly 90% recall, so we captured about 90% of the weeds in this particular field. This was, in my opinion, very impressive. From that we asked, how can we increase recall? And we did three quick tests. For the first, we theorized that model recall would increase with an increase in training samples. This makes sense: if you're gonna feed the model more and more images,

then we expect the recall to increase. The imagery again was taken at 60 feet, and we used 250, 500, a thousand, and 1500 images. If you're manually labeling these images, which I did,

you're looking for that top threshold, right? You're ready for this process to stop, because it can be painful if you're not careful, especially if you're looking at hundreds and thousands of images. So it was important to find that top threshold.

These are the results from that particular data set. You can see the blue bar, 250 images, up to a maximum of 1500 images, and you see our precision held and even increased, and our recall increased as well.

But we seem to have a plateau between a thousand and 1500 images, meaning that in our particular use case, with this specific example, that's where I could stop labeling, essentially. A thousand images seemed to work, and some of the results we built on after this were based on thousand-image sets.

The second thing we tested to try to increase our recall: we theorized that as we increased the altitude of the drone, that would coarsen the resolution of the imagery, right? If you have a camera framing an image on the ground here, the further you move away from it, the coarser the resolution of your image will be. Let's say you've got an 80-acre field that takes you 20 minutes to fly; if you could fly at a higher altitude, you could capture the same 80 acres in a much shorter time, with a much smaller amount of data. We expected that to decrease our recall because the model was being fed coarser-resolution imagery. We used 60, 90, 120, and 180 feet. In this case, the last two data sets, 120 and 180 feet, were generated using a GIS to coarsen the pixels and represent what those flights would look like, because we actually only had the 60 and 90 foot data sets.
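The resolution penalty scales roughly linearly with altitude for a fixed camera, since ground sample distance is proportional to flight height; the quick calculation below is illustrative and not tied to the study's actual camera specifications.

    # Ground sample distance (GSD) is proportional to altitude for a fixed sensor and lens,
    # so only the ratio of altitudes matters here.
    base_altitude_ft = 60
    for altitude_ft in (60, 90, 120, 180):
        factor = altitude_ft / base_altitude_ft
        print(f"{altitude_ft} ft -> pixels about {factor:.1f}x coarser than at 60 ft")

    # At 180 ft the pixels are about 3x coarser than at 60 ft, while each image footprint
    # covers roughly 9x the ground area, so far fewer images and flight lines are needed.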

With that, to my surprise, model recall and precision did drop slightly. But we were able to expedite the data collection and the processing fourfold by flying these higher-altitude flights, with only a 2% reduction in model recall, which in my opinion is a large step forward in system efficiency with minimal losses in accuracy. This was an important threshold for us to recognize. And finally, we suspected that if we were to break the weeds into different size categories, we might increase model recall. This makes some sense from your perspective here: you can see the two weeds that are labeled, and from a computer's standpoint those could be labeled as different objects.

In theory, lumping them into the same category could confuse the system. So we separated them out and tested to see if we could increase our recall. We trained and tested a model with a single weed class, and then trained and tested a model with different weed classes based on their sizes.

Here are the results of that. The blue bar again is the single class. The second set of bars is a single model, with the accuracy of each class represented. In this case, surprisingly enough, the model actually did as well, or better, looking for just one class of weeds. That was nice, because in our case we're only interested in knowing whether the weeds are there or not, and, in the next use case I'll share in a second, how to attack or spray those weeds.

We don't really care if they are in separate classes, and it didn't help the model any, so that was good news. In summary, Amazon Rekognition Custom Labels worked well as a scouting aid in this case. A thousand images seemed to show the model enough information to produce accurate and somewhat reliable results.

Imagery collected from higher altitudes produced up to three times coarser resolution and increased system efficiency, but only minimally reduced our recall, our overall accuracy. And separating the weeds into separate classes really had no impact, or provided no advantage at all. This workflow shows promise as a scouting aid for detecting these weeds in our systems.

There is more work to be done. Agricultural systems are incredibly variable between crops, weeds, and soil types, and there's more work to be done incorporating more and more training samples to make these systems more robust. But the initial tests, again, appear very promising. What if we could take this one step further, beyond just a scouting aid to trigger a pesticide application? Precision agriculture is a term that's used for trying to better place inputs where they're needed most.

It has been limited in the past by both the acquisition of whole-field imagery, or whole-field data rather, and the ability to analyze it in an efficient way. These workflows have been cited as requiring three different components, and they make a lot of sense. You've got to be able to collect whole-field variability with a usable sensor system. You've got to have an information system that can take that data and extract the value you're looking for out of it.

And then finally, if you're gonna actually place those inputs, whatever they are, and in this case we're gonna talk pesticides, if you're gonna place those pesticides only where they need to be, you've gotta have equipment that can leverage that data in the field.

And I'm here to suggest that we can do that. First off, I've already mentioned the UAS system, the drone imagery; machine learning through Rekognition Custom Labels; and then we'll leverage a few other software packages in the category of geographic information systems. And then, from my background in the field with technology on pesticide application equipment, I'm here to suggest that modern sprayers can also be an integral part of this workflow. It's important to note that currently all pesticides, or the vast majority of pesticides, are what's known as 'broadcast', or applied over the entirety of the field. This generally makes a lot of sense.

What you're seeing here are four different plots that have been treated with, or not treated with, depending on the plot you're looking at, what's known as a 'pre-emergent'. Pre-emergent pesticides are those that are preventative, so we wanna treat the entire field to prevent the emergence of weeds, right? That makes sense, and you can see the contrast between the first and third plots versus the second and fourth.

That's where a pre-emergent was applied. This acts essentially as a chemical net; it's like a vaccine for the soil, essentially, to stop the growth of these weeds. Broadcasting this makes complete sense, and there are other pesticides where that's the same case.

However, there's a second group of pesticides known as post-emergent, which only target the weeds in the system. We still broadcast these pesticides because, at this point, there are really very few other ways of doing it. Even in post-emergence, we will apply the same amount of pesticide to the first, second, third, and fourth plots, even though the only target organism is the weed. You can see the excess pesticide, which again is legally labeled, has been thoroughly tested, and is appropriate.

But you can see the potential for savings and for reduction in environmental impacts here. In our production systems in Arkansas, particularly in soybeans, herbicides account for nearly 20% of operating costs. So there's a lot of money going in to protect and preserve the crops we're trying to grow, and there's opportunity here.

Opportunity for both reduced cost for the producer and reduced environmental impact. The challenge, though, is to be able to apply site-specifically, meaning apply only where needed. You've gotta be able to locate those weeds. This is a manually labeled image set, in this case.

But you can see the contrast between those fields, represented by plots. For the information system, we've already mentioned the UAS part, but in addition to a machine learning algorithm, in this case we're also gonna need a GIS.

A geographic information system combines data with location information. At some point in the last couple of days, getting here, you probably used your cell phone with a mapping app while walking down the street or driving a vehicle. We generally refer to those as GPS apps: let me pull up my GPS. It's more accurately described as a GIS, which combines the data in the background, where the restaurant is, what services are around you, with GPS information, your location. In agriculture, these types of GIS have allowed us to organize the swells of data from the field, whether it's the soil tests we pull, the weed maps we're fixing to look at, or aerial imagery of plant health. We can organize and visualize these to help make better decisions.

And in this particular workflow, we're gonna look at how you can not only identify the weeds, but find their location in the field and act upon that, using three different software packages, or software options. The first one we're gonna look at is Pix4Dmapper. In this case, we take raw drone imagery, and this particular software pieces together those raw drone images like a puzzle. Each of these green dots represents a single image from the drone. This particular package will do a key point analysis

and look at where those images overlap, producing a 3D structure of the crop canopy using those key points. So now we've taken what was a group of single images, and we have a single layer, a 3D representation, across the whole field. From there, this produces what's known as an orthomosaic: an image that is free of perspective distortion, meaning that no matter where you zoom to in this field, it looks like you're immediately overhead.

That way you don't have crops occluding the soil because of a viewing perspective away from the center of the image. In this case, you see the soybean crop with a single pigweed in the field. From here, we can pull this orthomosaic into ArcGIS Pro, a geographic information system that helps us analyze it. We can remove background effects, extracting just the field, and we can re-orient the images.

In this case, we're gonna overlay what's known as a fishnet, a grid that correlates to the size of the frame of the original drone image, so that Rekognition Custom Labels, the algorithm we're gonna use, can look only within that frame and analyze it for weed presence.

And that's what we do. We ship those individual frames to Rekognition Custom Labels, where the weeds can be labeled, and then we pull each puzzle piece, since we know where it's at and where the weed is contained, back into our GIS workflow and reassemble the puzzle. If you look carefully here, you can see all of the green bounding boxes across that field.
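A minimal sketch of that fishnet-and-detect step, assuming hypothetical bucket names and tile keys: each exported frame is sent to the Custom Labels model, and the detections are kept alongside the tile name so the GIS can place the bounding boxes back onto the field map.

    import boto3

    rekognition = boto3.client("rekognition")
    MODEL_ARN = "arn:aws:rekognition:us-east-1:111122223333:project/weed-detector/version/v1/1234567890"

    def detect_weeds(bucket, key, min_confidence=50):
        """Run one exported fishnet tile through the Custom Labels model."""
        response = rekognition.detect_custom_labels(
            ProjectVersionArn=MODEL_ARN,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=min_confidence,
        )
        return response["CustomLabels"]

    # Tile keys as the GIS fishnet export might name them (illustrative).
    tiles = [f"field-01/tile_{i:04d}.jpg" for i in range(500)]
    detections = {key: detect_weeds("my-imagery-bucket", key) for key in tiles}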

From there we can remove the fishnet that correlates to the drone imagery, and overlay a new fishnet, a new grid. This represents the resolution that we can treat with our machine, our sprayer, at the end of the workflow. They're operating at different resolutions, so this would be variable from one mission to the other.

In this case, it's a one-meter-square grid, so this is saying that each square meter, instead of the entire field, will be analyzed for the presence of a weed. Once we overlay this, we can highlight, or select, those square meters that contain a weed identified by the algorithm, and extract that as a separate layer.

So there's our separate layer, and we can actually remove the background imagery. What we have now is geo-referenced, or localized, polygons that are areas to be acted upon. In this case, it's basically a layer of zeros and ones:

the equipment should be off, or the equipment should be on, as it traverses that particular area. How about that equipment I mentioned earlier? Today's sprayers actually operate using highly responsive solenoids on each nozzle, and these nozzles are generally spaced between 10 and 20 inches. In this case, we're gonna be operating on a one-meter grid,

so that would translate to roughly 50-centimeter nozzle spacing, and each grid cell would be occupied by two nozzles, essentially. Each of these nozzles has a solenoid that kicks the nozzle on and off. We also have field computers that can read maps such as the one we produced here, but generally this is only used for turning off the sprayer as it exits the field. As you approach a field boundary, those nozzles will kick off, but again, the rest of the field is broadcast. These on/off solenoids also allow us to adjust rates.

As you increase the speed you're traveling through the field, the flow rate through those nozzles needs to increase to hold the rate steady throughout the field. These systems are very finely tuned to hold very specific rates, again legally labeled, but they're only used for blanket or broadcast applications and to turn nozzles off in the field. This technology, with high-precision GPS, can instead be used to selectively spot-treat on that one-meter grid I mentioned earlier.
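A minimal sketch, with made-up coordinates, of turning the detected weed locations into that layer of zeros and ones: any one-meter cell containing at least one detection is marked for spraying, and everything else is left off.

    import numpy as np

    FIELD_WIDTH_M, FIELD_LENGTH_M = 100, 200            # illustrative field dimensions
    spray_grid = np.zeros((FIELD_LENGTH_M, FIELD_WIDTH_M), dtype=np.uint8)

    # Weed centroids in field coordinates (meters), as they would come out of the GIS step.
    weed_locations_m = [(12.4, 37.8), (12.9, 38.1), (55.0, 140.6)]

    for x_m, y_m in weed_locations_m:
        spray_grid[int(y_m), int(x_m)] = 1               # flag the one-meter cell containing the weed

    print(f"{spray_grid.sum()} of {spray_grid.size} square meters flagged for spot treatment")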

A high-level look at our workflow: collect raw drone imagery, process it through this machine learning and GIS workflow, and then produce site-specific maps that can be treated grid by grid, one square meter at a time, across the field. Here's the research we've got ongoing. This is a research sprayer that I modified with a boom that is very similar to what we see in production systems. This particular system leverages Capstan Ag Systems' PinPoint II. Thank you to Capstan Ag for their assistance in tweaking these workflows to fit their structure, and for assistance in the field.

This system has a field computer that can import the outputs of the machine learning and GIS workflow, the field map created from drone imagery, so it can read it in the field and turn nozzles on and off. Here's drone video of the sprayer as it traverses the field. What I want you to watch carefully is the front of that system, where there are about seven spray nozzles.

And if you look carefully, they will come on and off. Now, you're not gonna be able to see the weeds here, because they're, again, embedded between the rows. They're small, we're trying to catch them very early and very young. But what you're seeing is completely automated. Again, from drone imagery, through the algorithm, and the output would be the weed map, and this machine can read it on the go. There we go.

So the system is kicking on and off, based on each square meter grid, as it traverses the field. And again, it's fully automated. How about results? So this is from a different set of plots. The weeds are a little bit larger here.

In our current field tests, you'll see some red lines in the imagery here that represent different plots. These are the results from the algorithm output, and you can see the purple bounding boxes around each weed. In this case, we've broken the plots into separate treatments. On the far left you see a broadcast treatment, which is what current fieldwork would generally represent. In the third plot, there's no spray at all.

That's what's known as an untreated check, an area that's left untreated for comparison's sake. Then you can see two strips, strips number two and four, that will be spot-treated only. These are the results one week after treatment, and you can see the reduction of weeds in each of the three treated plots

by comparison with the untreated check. Early data suggests... I apologize, my clicker doesn't seem to be liking this very much.

As you can see, with the current system, the way we currently do things, 100% of the field is sprayed, and we had 94% control through visual observations. By comparison, you see zero sprayed, zero control. And then our machine learning workflow here sprayed 35% and 15% in different plots, achieving nearly as good control as the broadcast treatment.

So you can see the potential reduction of pesticides introduced into these systems, and the potential for both cost savings for our producers and reduced environmental impact. Now, I do want to note that these ratings were taken visually at seven days, essentially asking the question: were the weeds hit by the sprayer, was the pesticide deposited? And you saw a second ago that the weeds were dying. As you look further and further out, the question becomes, did the weeds stay dead? I know that sounds like a silly question, but some of our resistant weeds will actually die, and then they'll come back.

They appear to be dead, but they're not fully dead. There's still more work to be done, and these were actually relatively large weeds, so some of them did come back.

But early indications are that the sprayer was able to read the map and spray. Again, the video you saw earlier was of smaller weeds; those died.

(chuckling) Here's a scaled-up version of our plots. You can see the spot treatments and the untreated checks scattered throughout. The idea is to move from small plots to strip trials, to hopefully full-field trials next season. Conclusions: the intersection of agriculture, remote sensing capabilities like drone and satellite imagery, and machine learning technologies has the potential to revolutionize our current agricultural production systems.

Automated and semi-automated workflows have great potential to reduce the input burdens, for cost savings for our producers, while also reducing environmental impacts. In this case, that's a win-win: by reducing the inputs we get cost savings, and we also get a reduction in environmental impacts. It's a very exciting time to be working in the machine learning and AI space, and it's also a very exciting time, in my opinion, to be working in agriculture. With all that said, there's a lot of excitement, and I get very excited about this kind of work.

I think this is very promising and exciting moving forward, but there's still a lot of work to be done. There's still the question of how well this generalizes across other production systems, or even within fields that are similar.

While I get excited, I oftentimes have to step back and say there's a lot more work to be done. Which is good, that's job security for me, right? Okay. With that, I appreciate your time, and thank you. I think Raju and I will open it up for questions, if there are any.

(group applauding) - If you have any questions, we would be happy to take those. - [Man] Yes sir, thank you. Mr. Davis, that was a very interesting presentation. I wish you good success in that program. - Thank you. - [Man] I have to say that I'm not from Arkansas, but if I were a pigweed, I'd be a little disconcerted.

I think you've got the laser about to take me out. Nonetheless, my business is not pigweed, unfortunately. I'm in the contracting data, legal data, type of business.

So to you, Mr. Penmatcha, I ask: are there efforts being made using AI or ML for interpretive types of things? Trying to use machine learning for intent, trying to read data and interpret what is meant by the data that you're reading. Not merely using algorithms to identify tags, but also understanding what is the purpose, or the intent, of the data that's being translated? Thank you. - I'll take that question.

The question was, in the legal field for example, how to understand the intent, or the purpose, of a question, and how to use machine learning for that. The thing that comes to my mind is that natural language processing and text analytics have advanced quite a bit in the last few years, and AWS has implemented services to take that advancement, or even push it further. Kendra is what comes to my mind.

Kendra uses very advanced analytics to understand the intent, and by understanding the intent, it can answer questions. Let's say you have captured the data in text format; you can ask a very specific question, and unlike keyword-based search engines, which many times miss the intent, Kendra is built for that purpose.

It will try to find the intent and show you answers based on your question, by understanding the purpose behind it. So maybe we can talk further in the corridor, but that's what comes to my mind. - [Tim] My name is Tim Tang, with Hughes. That's a brilliant case study; I have a question for each of you. Jason, it seems like when it comes to AI models, the weakest link is the training of the model.

Looking through 1500 images, and the like. I was wondering, as you did your case study, did you consider any other algorithms or approaches, or are you aware of any other research, that would require less training of the models, or less onerous activity? You're dealing with things that are growing. Was there anything like looking at the imagery over time, for example? - We do look at the imagery over time, and automating that step is a challenge, because we actually flew a lot of our fields on a weekly basis to capture that temporal effect. You're right, it's an evolving system. You're looking at one point in time, and you may be making that decision at that point this year.

But next year it may be a week earlier, or a week later. So we capture that data and try to incorporate it into the training process. As far as your question, are there systems to automate that? - [Tim] Yeah, other ways of reducing the onerous process of training the models.

- Oftentimes we'll leverage, in some of the GIS packages that I mentioned, embedded algorithms that assist with training data development. I'll label a subset of images, allow the process to essentially guess, with certain confidence levels, at finding other artifacts in that imagery, and then I will ground-truth it, essentially verifying it before it goes into any kind of next process. So there are some that I have leveraged within GIS systems. I don't know if that- - [Tim] Yes it does.

- Answered your question. - Can I also add one quick thing. What you will notice in one of the experiments that Jason presented was that if you divide the weeds into two classes,

one with small weeds, one with large weeds, it didn't make much of a difference. The reason is this: (indistinct) algorithms look for various things, like the structure of the leaves compared to the surroundings, in this case the structure of the weed leaf compared to the soybean.

That's why having a large weed versus a smaller one doesn't make much of a difference. So I think, at least in my opinion, the idea here is to train the model only once, in an ideal case, and hopefully apply it across various growth stages of the weed, and also across various geographical locations. Jason, you are experimenting with various light conditions and various environments.

- We're looking at the time of day when the drone imagery was taken. If you think about where the sun is in the sky, it will light the plants differently. And actually, some of the imagery that we used here was plot work for training data, and the plots were staged by growth stage of the weeds.

So what I captured on one day was three weeks of different growth stages, and the next week was similar: three examples of three different weeks of growth stages. We tried to simplify some of that in the data collection.

- But that's the ideal thing, that's the ideal scenario. However, it's a little bit more complex than that, and that's where we need to iterate a little. - [Tim] Excellent, excellent, well thank you.

Raju, my question for you was, there are a lot of retailers who would wanna use and take advantage of these capabilities, but they compete with Amazon. How can they use and manage an analytics service, and then be assured of protection of their data? - Oh, great question. The question was how the data of the user, or the user organization in this case, is protected. All the data that's on AWS belongs to you. And for our AI/ML services that may use your data to improve the training of these algorithms, there is a provision for you to opt out and say that you don't want that.

But at the same time, your data sits securely in your account, and nobody else can access it. And the models that you build, any custom models like the Custom Labels model in this case, Jason's model, are purely your models; no one else has access to them. - I can vouch for some of that, because when we were working across my data sets, he didn't have access to a lot of that.

So if I had a question, we had to find ways around that, which I approved. But obviously. - [Participant] I actually have two questions. First question: why not just put the camera on the sprayer? And the second question: if you're using these machine learning algorithms to look at the leaf structure, wouldn't the weeds just evolve different leaf structures over time? And therefore, wouldn't you constantly have to retrain these programs to identify the weeds? - Two excellent points. One, there are other very, very smart, very talented companies and people that are working on this problem, and some of the workflows they use actually do mount the cameras on the booms.

The trouble they run into, from my understanding, is that first off, they end up with some lighting issues related to capturing that imagery. They run into different problems than I run into, because of the way I'm collecting the data through remote sensing. They also run into the processing loop between detection, the decision being made, and then actually taking action on that weed in the moment. Much of our equipment currently operates at about 16 to 20 miles per hour.

So to traverse that field, and detect and spray in that instant, becomes a challenge in processing speed. Obviously that technology will evolve, and that may not be a bottleneck in the future, but right now it's been a challenge, and they had to limit their speed to somewhere between two and five miles an hour. Advances have been made; I'm not really sure where that particular company is right now on how fast they can operate. The advantage of this particular workflow is that in the scouting step, which we're already doing, we can collect imagery at an ideal time of day, and then that imagery can be used in the workflow.

Then, as a producer, you can look at the map and almost ground-truth it, provide some reference that yes, this looks accurate, and have a way to adjust some of the parameters. Whereas if you limit that exclusively to automation in the field, there's potential for more mistakes to be made that you can't foresee. And then there's the second part of that. - [Participant] Yeah, my second question was about the evolving of the weeds.

Wouldn't they just evolve different leaf structures, and you would constantly have to be retraining? - Mother Nature is incredible at this. Essentially you're suggesting, and you're right, that if we're selecting on what we see as a weed, then we're gonna filter for weeds that look more like our crops. - [Participant] And so eventually the weeds that are left are gonna look exactly like your crop? - They're gonna look like our crops. And that's basically what's happened with the chemicals currently.

They develop resistance. We have filtered for weeds that have a better metabolism for these chemicals, and what's left in the field, generation after generation, are weeds that can metabolize our chemicals. You're right, you would essentially have to update your training data set for each. And I'm a proponent of the view that these models are not, in general, going to be one-size-fits-all for all production systems. I think we're gonna have to train and retrain over time, and update these to provide libraries of comparable training data for our producers.

Because there's such variability in our production systems. - [Participant] Can I ask a quick follow-up question? How much of a cost is there to train this in perpetuity? How much time, energy, and cost would it take to keep training and growing these libraries to make them effective? - I would say training is relatively inexpensive, because you're doing it one time, and then you stop the instance, the Rekognition model instance, once you're done.

And just to add one more thing: these deep learning models are very good, but also complex. In the sense that, beyond the structure of the leaf, there could be other things the model is looking at, like, for example, the soil around it, or the color of whatever is immediately next to it.

So I feel, at least, that the weeds in between the lanes are probably more easily distinguishable than those in the crop, just because of the brown mud and sand that's around them. And these deep learning models do take the leaf structure and all those things into account. - [Participant] Couldn't you use the weeds in between the lanes to help train for the weeds in the lanes? - Sorry? - [Participant] Couldn't you use the weeds in between the lanes, which you know are probably not part of the crop, to help train the model to look for the weeds that are in the lanes? - Yeah, we would still have to experiment with how that would work when they are in the lane itself, because again, the surroundings are different. So we'd have to see how the model would work in such- - [Participant] Right, so you see a leaf that's outside of the lane, and then say, "Okay, that must be a weed."

So you look for one that's in the lane. See what I'm saying? - You're assuming that the genetics would be the same between the two, so by isolating the one that's outside the lane, you can therefore find the one that's in the lane. - [Participant] Correct. - That's a fair assumption. - Yeah, we'd have to run experiments again to see if that makes any difference.

But at the same time, the point that I'm trying to make is that it does take other things into account, in addition to the structure of the leaf. - [Participant] Okay, thank you very much. - Thank you. By the way, we try to improve our sessions through your feedback.

Give us feedback by scanning the QR code; it will also be in your app. - [Host] I'm loving this discussion, but because of the keynote that is starting in Hall B very soon, we do need to wrap up. If you have additional questions, please connect with Raju and Jason afterward, and please complete the session survey. And if we can all make it to Hall B for the keynote, that'll be wonderful.

But please give them another warm round of applause. (group applauding) - Thank you, Stewart. Thank you. (group clapping)
