inVisionDays - Euresys Modular Deep Learning libraries enabling implementation in embedded solutions



So first, thank you Peter for this nice introduction, good afternoon everybody, and welcome to this Euresys presentation. Today, we talk about deep learning based libraries running on embedded systems. This presentation starts with a very short introduction about the Euresys company, explaining who we are and what we are doing. I will then talk about the advantages of using deep learning tools for vision applications. I will also explain why it makes sense to use embedded systems for vision applications, and finally, as a conclusion, we will see some examples of deep learning based applications running on embedded systems. So first, let's talk about Euresys. We are a manufacturer of machine vision components. Euresys is part of the TKH Group. Our headquarters is in Seraing, which is located in the eastern part of Belgium. We have R&D teams in Belgium and in Germany, and Euresys also has sales and support offices in Europe, the USA, Singapore, China, Korea and Japan.

Currently our staff counts 92 employees, and half of us are working in the R&D department. Euresys benefits from an extensive network of distributors and is renowned for providing high quality products and premium support to OEMs and system integrators. At Euresys we offer three types of products. We are very well known for our frame grabbers, which have been used in the vision industry for more than 35 years now. In addition to frame grabbers, we also develop and produce IP cores for various interface standards and imaging sensors.

The third type of product is our set of image processing libraries, known as Open eVision, and those libraries are dedicated to the development of 2D, 3D and deep learning based applications. Deep learning is a trendy subject in computer vision, so all major actors in this field are proposing deep learning solutions or components. One of the reasons for that is that deep learning can solve problems where conventional algorithms simply fail to provide stable and reliable solutions. The fact that deep learning tools can now also run on embedded platforms makes them even more appealing. So let's dive deeper into this subject.

Deep learning tools can be used for various kinds of vision applications, and I would like to start with a review of some applications that would be very difficult, or even impossible, to solve using conventional algorithms. Sorting fruits, cereals or vegetables is a good example where the deep learning approach should be preferred, because Mother Nature can be very inventive when it comes to the color, the size or the shape of fruits or vegetables. Conventional algorithms typically have a hard time dealing with such variation, which is not the case for deep learning.

In the textile industry, an interesting application for deep learning is the accurate segmentation of defects on fabrics. But this one is my favorite: the detection and identification of foreign material in coffee beans. Frankly speaking, how would you solve this problem with a conventional algorithm? Deep learning can also be used to count objects: in this example, we see electronic components that can overlap, placed in plastic bags which are highly reflective. In another example here, the deep learning approach is used to localize objects, even if they overlap and have different color shades.

When processed by a conventional algorithm, the number of objects in this image has a significant impact on the processing time, which is not the case when using a deep learning tool. The deep learning approach offers several advantages. First, it is data driven, which means that, unlike rule based algorithms, where we must provide a detailed description of the object or the defect we are searching for, with deep learning we just need to train the neural network with a data set of images featuring these objects or defects. So, suppose we must develop an application to classify small stones as good or defective.

If we want this application to identify spots of glue on stones or broken stones, with the deep learning approach we just need to train a neural network with a data set featuring images of good stones, stones with glue and broken stones. It's really straightforward: there is no need to write complex algorithms to describe how good stones, broken stones or stones with glue would look. The deep learning approach also reduces integration cost by allowing easier and faster application development. In terms of programming, since we don't have to write code to describe in detail what is considered a defect, the development of a deep learning based application is easier and much faster.

Another great advantage of the deep learning approach concerns the maintenance and the evolution of deep learning based applications. Going back to our first example, where we have to classify small stones: if we want this application to detect a new type of defect, this evolution just requires adding images of this new defect to the data set and retraining the neural network with the updated data set, and there we go. Our application is now able to take the new defect into account without the modification of a single line of code. And this is maybe the most important point here: without the modification of a single line of code.

If you need to apply the same evolution to an application based on a conventional algorithm, well, you have to be prepared to rewrite significant parts of your code. There are basically three families of deep learning tools: classifiers, segmenters and object detectors. At Euresys, all those tools are gathered in the Deep Learning Bundle, which is part of the Open eVision software package. The Euresys software library for deep learning classification is called EasyClassify; it is used to detect defective products or to sort products into various classes.

EasyClassify supports data augmentation, and for the training of its neural network as few as 100 images per class are sufficient. Of course, EasyClassify supports CPU and GPU processing. For deep learning segmentation, we provide a library called EasySegment. Two operating modes are available. The first one is unsupervised segmentation, for which the neural network has to be trained with images of good samples only; in this case, EasySegment Unsupervised can detect anomalies or differences compared to the model it has learned.
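To give an idea of what the data augmentation mentioned above does, here is a minimal, generic sketch (this is not Euresys code and not the Open eVision API, just an illustration using NumPy): each training image is turned into several geometric variants, so the network sees orientations it would not otherwise encounter.

```python
import numpy as np

def augment(image):
    """Yield simple geometric variants of `image` (an H x W array).

    A minimal stand-in for a data augmentation step: one annotated
    image becomes six training samples, making the trained network
    more robust to mirrored or rotated versions of the same object.
    """
    yield image
    yield np.fliplr(image)    # horizontal mirror
    yield np.flipud(image)    # vertical mirror
    for k in (1, 2, 3):       # 90, 180 and 270 degree rotations
        yield np.rot90(image, k)
```

Real augmentation pipelines typically add photometric changes (brightness, noise, blur) as well, but the principle is the same: more variation in the data set without more annotation work.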

Unsupervised segmentation is very useful when defects are not predictable, for instance defects that might occur due to the aging of a machine. The other operating mode is supervised segmentation, which is also called semantic segmentation; in this case, the neural network is trained with annotated images. EasySegment Supervised makes it possible to achieve a very accurate, pixel level segmentation. Our last tool is EasyLocate, which is used for deep learning localization and classification.

EasyLocate is used to localize and identify products, objects or defects, even if they overlap, and therefore it is also able to count them. Just like EasySegment Supervised, EasyLocate must be trained with annotated images. EasyLocate provides two modes for the annotations: the first one just requires placing a bounding box around each object to learn and assigning a label to this box.

The second mode is called EasyLocate Interest Point. It should be used when all objects feature approximately the same size; in this case, just clicking the center of each object is sufficient to perform the annotation, so EasyLocate Interest Point really simplifies the annotation. In addition to our deep learning libraries, we also provide an evaluation and prototyping tool called Deep Learning Studio. Deep Learning Studio is absolutely free; you can download it from the Euresys website.

It allows you to create data sets, annotate images, manage the data augmentation and the data split, train neural networks and analyze the resulting network, and finally, of course, it allows you to test the robustness of our EasyClassify, EasySegment and EasyLocate libraries. Once the neural network has been validated, it can be exported to a file, and this file is cross-platform, which means that it can be loaded in an application running on a Windows PC, a Linux PC or on an embedded platform. Talking about embedded platforms, that's also a very trendy subject in computer vision. In this section, we will review some advantages of embedded systems that explain why they are more and more popular in vision applications. Maybe the most obvious advantage of embedded systems is the fact that they are compact, so they are easily installed on production lines.

The fact that they are compact also makes them suitable for mobile setups. Using embedded systems is usually a cost-effective solution. Here, we see a picture of a Raspberry Pi 4; I think the latest version is the Raspberry Pi 5.

But anyway, you can find those devices for less than € 150, which is significantly cheaper than a regular PC. Embedded systems can combine image acquisition and image processing in a single compact housing. Putting the processing close to the sensor has many advantages: it avoids image transfer and therefore reduces the latency, and it also simplifies the setup, because there is no need for a separate PC to do the image processing; in terms of cabling, it is also much easier. Power efficiency is another concern that has become very important these days.

In this regard, embedded systems bring a solution, because they consume less power than regular PCs. Of course, I'm not saying that embedded systems can replace all PCs. For high-end applications that require a lot of processing power, industrial PCs are still the preferred solution, but there are a lot of applications for which the processing power of embedded systems is more than sufficient. All Open eVision software libraries support embedded platforms that are equipped with processors from the Arm v8-A series, an architecture which is used in many embedded systems today. In terms of memory, to be supported by Open eVision, these devices should feature at least 512 Megabytes of RAM and 512 Megabytes of storage, and they should run a 64-bit Linux operating system.

Here we see a non-exhaustive list of devices that comply with these requirements and are therefore supported by Open eVision. To conclude this presentation, I would like to show you some examples of vision applications running on embedded systems that use deep learning tools. Thanks to their low power consumption, embedded systems are suitable for handheld devices. In this example, we use a Raspberry Pi Zero to read digits on a water meter. The Raspberry Pi Zero has only 512 Megabytes of RAM, but still, it allows us to use our EasyOCR library to read characters, and for this library the character recognition is based on a deep learning classifier.

Here we see the user interface of this vision application: characters are detected, and each character is surrounded by a bounding box. The characters are recognized by a deep learning classifier, which also means that, for this vision application, there is no need to learn characters beforehand. Embedded systems can also be used in the pharmaceutical industry. This second application is based on the Raspberry Pi 4 device, which features 4 Gigabytes of memory, and for the image acquisition we use a 12 Megapixel board camera connected to the CSI port of the Raspberry Pi board. In this application, we have to read a serial number, an expiry date and a lot number.

To do so, again, we are using our EasyOCR2 library, which is based on deep learning for the character recognition. For this application, we also use our EasyMatrixCode library for the detection and decoding of Data Matrix codes, and the whole process, so the detection and decoding of the Data Matrix code plus the reading, takes approximately 300 milliseconds. And finally, the last example: here we have a Baumer VAX-50C camera, which is used to run our EasySegment library, our deep learning based segmentation, in supervised mode actually. This Baumer camera is a 5 Megapixel camera, it is equipped with an NVIDIA Jetson Xavier NX GPU, and it features 8 Gigabytes of memory.

The goal of this vision application is to detect foreign material in coffee beans; to do so, we use our EasySegment library in supervised mode, and the processing time is around 200 milliseconds. To achieve this performance, the EasySegment neural network has been trained with approximately 150 annotated images. And this concludes my presentation. Thanks a lot.

(That's a problem, we have no camera.) OK, are there any sample data sets available to experience your deep learning tools? Yes. For each of our deep learning libraries, we provide data sets. They are available from the Euresys website: if you go to the download area for Open eVision, there is a section called Additional Resources, and this is where you can download all the data sets for our deep learning tools.

Thanks, and next question: do you provide tools to simplify the image annotation process? Yes. The Deep Learning Studio allows you to annotate images; of course, we have the standard shape-based annotation, but we also provide a dedicated tool for automatic segmentation, which is based on the GrabCut segmentation algorithm. It really simplifies the annotation, because you just have to place a bounding box around the object you want to annotate, and the system will automatically identify it; you also have the possibility to refine the location of the annotation. Just for your information, it is also possible to import data sets that are already annotated; we support different kinds of formats for that. Another possibility to simplify the annotation is to use the neural network itself to do it.

For instance, if you have 100 images to annotate, you can train the neural network with only half of those images annotated, and then you have the possibility to use the predicted segmentation as ground truth to train another neural network. So, yes, it's possible. Is the EasySegment tool also able to detect fine grained defects at high resolution just by using only good samples? That would be the unsupervised segmentation,
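The annotation-bootstrapping idea just described is commonly called pseudo-labeling. Here is a generic sketch of it (not the Open eVision API; the `model.predict` interface is a hypothetical stand-in for any trained classifier or segmenter):

```python
def pseudo_label(model, unlabeled_images, threshold=0.9):
    """Split unlabeled images into auto-annotated and still-manual sets.

    `model` is any trained predictor exposing
    predict(image) -> (label, confidence); confident predictions are
    accepted as ground truth for the next training round, while
    low-confidence images are left for a human annotator.
    """
    accepted, rejected = [], []
    for image in unlabeled_images:
        label, confidence = model.predict(image)
        if confidence >= threshold:
            accepted.append((image, label))  # prediction reused as annotation
        else:
            rejected.append(image)           # still needs manual annotation
    return accepted, rejected
```

In practice a confidence threshold (or a manual review pass) matters here, because wrong pseudo-labels would otherwise be learned as if they were ground truth.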

where you use only good samples. In that case, finding small defects is not really appropriate, so I would recommend using the supervised mode of EasySegment to detect faint or small defects. In this case, let's say 100 images should be sufficient. There's a question for Euresys: Jean-Marie, can you hear us? Yes, I can hear you.

Wonderful, there's a question, I'm not quite sure if it was already asked: how are the embedded libraries licensed? It is just like on a normal PC: we have software-based licenses that can be activated directly on the PC or the embedded platform, as well as dongle-based licenses. Actually, for deep learning we have a single license, which is called the Deep Learning Bundle, and it grants the usage of our EasyClassify, EasySegment and EasyLocate libraries. How many images do we need to train the models, and what does it depend on? On the Euresys side, it depends on the tool: if you want to use our tool for classification, starting from as few as 100 images per class is sufficient.

For the segmentation and the object localization, it's again 100 objects per type. Of course, we do have data augmentation, which means that you can also benefit from that to decrease the number of images; the goal of the data augmentation is to make the neural network more robust to variations that are not present, or not sufficiently present, in the dataset. OK, so I think we're coming to the end of this session, session 5, AI and deep learning. Thanks a lot to all the speakers for the wonderful presentations.

2024-12-27 13:18

