Smartphone-based Ergonomic Risk Assessment

Good morning, everybody. My name is Jack Lu, the manager for the NIOSH Musculoskeletal Health Cross-Sector Program. It's my pleasure to introduce today's speaker, Dr. SangHyun Lee, who will give a presentation on a computer vision-based automatic ergonomic risk assessment system. Dr. Lee is an associate professor in the Department of Civil and Environmental Engineering at the University of Michigan. Dr. Lee has over 170 publications and has written two books. He has received numerous awards for his contributions to science, including the recent Thomas Waters Award for the best manual materials handling presentation at the Applied Ergonomics Conference in 2018. Dr. Lee will be repeating this award-winning AEC presentation and adding more information on how to use a smartphone to conduct ergonomic assessments in the workplace. Welcome, Dr. Lee.

Thank you, Jack. Hi. We had a little bit of a rough start, but I'm glad it is working. It is quite an honor to be here and present, particularly given that I received the Thomas Waters Award several months ago, so it is a really great honor, and I hope we can discuss the emerging new technologies coming into the market and the industry. Basically, I would like to talk about this new emerging technology and what it can do for the ergonomics field. My presentation will be on smartphone-based, essentially computer vision-based, ergonomic risk assessment.

I don't think I need to repeat the importance and seriousness of musculoskeletal disorders in the United States: many workers suffer from them, and we spend a lot of money to remedy the situation. One of the things we do to address musculoskeletal disorders is ergonomic risk assessment. I have had the chance to talk with more than 100 ergonomic practitioners to learn what they do for ergonomic risk assessment, and this is my very simple view of their process. When they have jobs in the plant, for example, they go to the site and observe how and what the worker is doing, or sometimes they take a video, come back to the office, and figure it out. First they do a basic-level assessment. This basic-level assessment is to identify ergonomic risks, so we can say it is more about risk and job screening, meaning we would like to identify which jobs are posing a problem. Then we narrow down the scope and do an in-depth analysis, and if we want to fix something, we go to a very sophisticated analysis tool to identify how we can address the issue.

When we look at this process, one of the major limitations is manual observation. You have to observe manually: you have to go to the site and watch what the worker is doing, or you have to take a video and play it back more than ten times to figure out the inputs you need for the basic-level risk assessment. So it is time consuming: say we have a one- to five-minute job cycle; people usually spend half an hour, or even an hour, to get all the inputs for a basic-level risk assessment. It also means limited coverage. Yes, I would like to study and screen all the jobs in my plant, but I simply don't have the manpower. When I talk with manufacturing safety managers, they say they cover just 10 to 20 percent and have to rely on injury reports or spot observation. Ideally, they would like 100 percent coverage of the basic risk assessment,
but it is simply not possible with the resources they have. There is also the matter of training. Yes, manual observation sounds like a simple task; you could hire a high school student or an undergraduate as an intern, but it actually requires a lot of training. And when I worked with professional ergonomists, we found that they had different ways of doing the basic-level risk assessment, for example in how they measure back bending; the result depends on who is observing the worker, so it is not consistent at all.

So what we are hoping is: if there is a motion capture system, maybe we can automate this process, right? There are a lot of commercial motion capture systems that have been widely used in the movie and gaming industries, and many people in ergonomics and biomechanics are trying to use motion capture to automate this process. Broadly, we have optical systems and inertial measurement unit (IMU) systems for motion capture. They are very accurate (I also saw what you are using in your lab), and that is the level of performance we can get. They are also less affected by line of sight: an optical system uses at least eight cameras, sometimes 16 or 30, and with an IMU system you put the sensors on the person's body parts, so line of sight is less of an issue. Those are all really great systems.

However, there are some caveats when these great tools are applied to ergonomic risk assessment, particularly field risk screening, where you need to get the data from the field. First of all, although the price has dropped recently, it is still expensive. A bigger problem is that it is cumbersome to apply in the field: you have to put markers or sensors on each body part and body joint, or a suit has to be worn, so it is not convenient for field use. Certain systems also need calibration time. I don't know if you have experience with this, but some systems require people at the beginning to walk around or hold a certain pose to get the system going; if you apply that in the field, you have to stop the work and ask workers to do it, which is not a desirable situation. These systems also often require a controlled setting: with an optical system, for example, the cameras have to be fixed, so it is very difficult to apply in the field. And it is usually only one subject at a time; you would like to capture multiple workers, but that is not easy because the system is hard to configure.

So, going back to our original slide: this is ergonomic risk assessment, and we talked about the need for motion capture. Then how about motion capture with a smartphone? Everybody nowadays has a smartphone, and a smartphone has a video camera, so why don't we just use the smartphone for motion capture? That was my major motivation. Why? First, it does not interfere with the ongoing work. Say one day you walk around your job site in the plant and you see something awkward; you can just pull out your smartphone and record a video. No markers, sensors, or suit are needed; you just use your
smartphone, so you don't have to interfere with what the workers are doing. It is also very affordable: you don't need any sensors, so no extra hardware is required. It is very easy to use, since the software basically figures things out, and there is no setup time: it doesn't require any key poses for setup, so you can just record the video and get the data. It also allows multiple users: many people have smartphones, so many people can walk around the site simultaneously and collect data. And there is no need to be stationary. Some systems require a fixed camera, but this one doesn't: say you are recording with your smartphone; you don't have to fix it in place, so you can actually walk with it. If the line of sight is blocked, you can slowly move around. Of course, if you swing the phone and move very fast there is no way to keep up, but at least you can move around slowly, and the natural vibration from your hand doesn't really matter. So it allows you to be very flexible, to walk around the site and get the data we need.

How can this be applied? What we are talking about right now is ergonomic posture analysis, with tools like OWAS, RULA, and REBA. There are many different ergonomic posture analysis tools, and certain companies have their own tools developed for their own needs, but when we look at most of the tools used for ergonomic posture analysis, they require four pieces of basic information: posture classification (is it back bending or arm reaching?), frequency (how often do they bend their back?), duration (once they bend their back, how long does the back bending last?), and severity (such as the back-bending angle). Those are the four basic pieces of information needed for most ergonomic posture analysis tools, and that is what we provide.

Can you play the first video? (The audience here was not able to see the video, so we are playing it again; sorry about that.) Here you can see that a typical skeleton has been generated. The way it works is that we have thousands and thousands of images, and we annotate which point is a shoulder, which is a wrist, which is the back; the computer identifies the image patterns, and then, when it sees a real-world video, it recognizes each body part. We connect the parts and apply some inverse kinematics to make sure it looks like a skeleton. Then we apply one of the tools, such as REBA. You can see, for example, that we use the REBA criteria to identify which posture is red, which is yellow, and which is green; for REBA, back bending of more than 60 degrees is coded red. What I'm trying to show is that this can be customized to whatever tool you use.

Can you play the second video? Once we have a skeleton, we can extract a lot of detailed information. For example, you can now see, frame by frame (a typical smartphone records 29 frames per second), all the different angles for the different body parts: arm raising, back bending, and so on. We can identify the angle in each frame, and from that we can easily calculate the maximum angle, the frequency, and the duration.
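To make the frame-by-frame idea concrete, here is a minimal sketch, not the speaker's actual implementation, of how per-frame 2D keypoints could be turned into a back-bending angle, a REBA-style color band, and frequency/duration counts against a chosen threshold. The 60-degree red threshold is from the talk; the 20-degree yellow cut-off, the keypoint layout, and all names are illustrative assumptions.

```python
import numpy as np

FPS = 29  # typical smartphone frame rate quoted in the talk

def back_bend_angle(hip_xy, shoulder_xy):
    """Trunk angle in degrees from vertical, using 2D image keypoints
    (image y grows downward, so vertical 'up' is (0, -1))."""
    trunk = np.asarray(shoulder_xy, float) - np.asarray(hip_xy, float)
    up = np.array([0.0, -1.0])
    cos = trunk @ up / (np.linalg.norm(trunk) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def color_band(angle_deg):
    """REBA-style coding: more than 60 degrees is red, per the talk;
    the 20-degree yellow cut-off is an assumption for illustration."""
    return "red" if angle_deg > 60 else "yellow" if angle_deg > 20 else "green"

def frequency_and_duration(angles, threshold_deg, fps=FPS):
    """Count bends above a threshold (frequency) and total time spent
    above it in seconds (duration) from a frame-by-frame angle series."""
    above = np.asarray(angles) > threshold_deg
    rising_edges = np.flatnonzero(above & ~np.r_[False, above[:-1]])
    return len(rising_edges), float(above.sum()) / fps

# Example: per-frame (hip, shoulder) keypoints from a pose estimator.
hips = [(100, 300)] * 4
shoulders = [(100, 200), (150, 230), (195, 280), (100, 200)]
angles = [back_bend_angle(h, s) for h, s in zip(hips, shoulders)]
print([color_band(a) for a in angles])     # ['green', 'yellow', 'red', 'green']
print(frequency_and_duration(angles, 60))  # one bend, lasting one frame
```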
You can also set whatever threshold you are interested in, and because we provide the raw data frame by frame, we will identify the events against that threshold.

Now can you please play the third video? We can use this in many different ways, and one of them is risk comparison across different jobs. You can collect different videos, meaning different jobs, and then identify, for example, which job has more hazardous back bending, which job is harder, or what the proportions of safe, cautious, and hazardous postures are. So you can compare risk across different job cycles.

How does it work? One thing I would like to mention is that the processing power of a smartphone is still quite limited, so the smartphone captures the video and sends it to our cloud system; the cloud does all the analysis and sends back the video with the skeleton and joint positions overlaid, as well as the raw data, which you can get as a text file, an Excel file, or whatever format you want.

Now let's look at the accuracy compared with ergonomic practitioners. Again, this application is meant for basic risk assessment, so our concern at this time is how the performance of this technology compares with ergonomic practitioners. We used two videos of 72 seconds each, one a side view and one a diagonal view, because the measured angle can differ depending on the view angle: we wanted to establish the accuracy of human ergonomic practitioners and of our technology for the side view as well as for a diagonal view of about 30 degrees. Then, based on REBA, we asked the ergonomic practitioners to measure frequency and duration for each body-angle category. For example, for back bending: if the back-bending angle is greater than 60 degrees, measure the frequency, and at the same time measure the duration. The baseline we used was an IMU-based motion capture system. Twenty-seven ergonomic practitioners participated in this study; seven had less than two years of experience and 20 had more than two years. Of course, the practitioners could replay the video as many times as they wanted.

Can you play the first video on slide 14? This is the side view, part of the 72-second video; when they lift something, it is seen mostly from the side. You can also see the skeleton being generated. We used REBA for this purpose, so the color coding is based on the REBA classification. Sometimes there is occlusion and a joint is misidentified, but our algorithm makes a best guess for the hidden body parts until they become visible again.

Comparing frequency and duration: in terms of frequency, among all the subjects, our technology ranked number one. The metric used is the average number of miscounts per body joint: on average, our technology had 0.25 miscounts per body joint, while the practitioners averaged about 1.5 miscounts per body joint.
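As a sketch of that comparison metric (the per-joint counts below are made-up placeholders, not the study data):

```python
import numpy as np

def mean_abs_error_per_joint(pred, baseline):
    """Average absolute error across body joints against the IMU baseline;
    the talk uses this for frequency (miscounts/joint) and duration (s/joint)."""
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(baseline, float))))

# Hypothetical per-joint bend counts: system output vs. IMU baseline.
print(mean_abs_error_per_joint([4, 2, 7, 5], [4, 3, 7, 5]))  # -> 0.25
```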

For duration, our technology again ranked number one: on average our error was 0.33 seconds, while practitioners typically had about 5 seconds of error for each body joint.

Can you play the first video on slide 15? This is the diagonal view; when you actually watch people lift something, the view is usually more diagonal, a little oblique. (Can you stop and play it again? I don't think it's showing on the screen.) It is very similar to what you have seen, just from a diagonal view. In terms of frequency, our technology ranked number three and was on average slightly better than the practitioners: 1.33 miscounts per body joint versus 1.89 for the practitioners. For duration, our technology ranked seventh, again on average slightly better than the practitioners. So when we compare the accuracy of our technology with ergonomic practitioners, we can at least safely say that we have human-level accuracy; it is actually better, but to be conservative, at least human-level. And it is much faster: the practitioners took almost 60 minutes on average for the 72-second video, while we, including the Wi-Fi connection and everything, took less than 1.5 minutes, at least 10 times faster than what people usually do.

This is still an ongoing study with more subjects, but it gives you a sense of the accuracy and of how fast it is compared with ergonomic practitioners.

Now let me go back to the ergonomic assessment motivation slide. We have been talking about motion capture with the smartphone for basic-level assessment, and you will have noticed it is all 2D; that is why, at this time, it is limited to basic-level risk assessment. In the end, though, what we want is in-depth analysis. So why don't we get 3D from a smartphone? That would open the door to more in-depth, sophisticated analysis to identify how we can fix a problem.

Can you play the first video on slide 19? This is video we collected in our lab: a very typical lifting task. Can you play the second video on slide 19? This is a 3D skeleton generated only from the smartphone.

Now, how can we use this 3D skeleton for further analysis? One way I'd like to show how this can be applied to an ergonomics tool is, of course, the NIOSH lifting equation. We tried to semi-automate the NIOSH lifting equation with the 3D skeleton we generate. We no longer need to measure the horizontal and vertical locations, the vertical travel distance, the asymmetric angle, or the frequency. What we still need is the weight of the object. By the way, the weight also has the potential to be automated down the road: if the object handled in the plant is known, we can identify what the object is and easily associate it with its weight. Something else we still need as an input is coupling. Coupling is very subtle, so it requires further measurement technology; at this stage we simply take coupling as an input, but as you may know better than me, coupling has the least impact on the lifting index. We also need height information, because the NIOSH lifting equation is a little different from the postural ergonomic analysis you saw before: you need to measure distances, so we use the worker's height to infer all the distances needed for the NIOSH equation.

There are two ways to use this NIOSH lifting equation implementation. The first is what you normally do: you pick the origin and the destination, and we calculate all the necessary inputs. That is straightforward: with the two images at the origin and destination, we can identify the angles and at the same time measure the distances. Or you can let the system pick the origin and destination. This is slightly different: you don't specify them; you just provide the video, and the system works frame by frame and calculates everything. The first step is to identify, frame by frame, the product of the multipliers except the one for vertical travel distance; then, because we have the vertical location, we can also identify the vertical travel distance at each frame. For a given frame, we know the minimum and maximum vertical locations, take the difference, and obtain the distance multiplier. Multiplying these together gives a recommended weight limit, and a lifting index, on a frame-by-frame basis.
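For reference, here is a minimal frame-by-frame sketch of that computation using the standard revised NIOSH lifting equation multipliers in metric units. The frequency and coupling multipliers are taken as given inputs (coupling is still a manual input in the system described above), the per-frame geometry would come from the 3D skeleton, and the default values, example numbers, and function names are illustrative assumptions.

```python
import numpy as np

LC = 23.0  # load constant, kg (revised NIOSH lifting equation, metric units)

def rwl_one_frame(H, V, D, A, FM, CM):
    """Recommended Weight Limit for one frame. H, V, D are in cm, A in degrees.
    FM (frequency) and CM (coupling) come from the standard NIOSH tables."""
    HM = 25.0 / max(H, 25.0)                    # horizontal multiplier
    VM = max(1.0 - 0.003 * abs(V - 75.0), 0.0)  # vertical multiplier
    DM = min(0.82 + 4.5 / max(D, 25.0), 1.0)    # distance (vertical travel) multiplier
    AM = max(1.0 - 0.0032 * A, 0.0)             # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

def worst_frame_li(frames, load_kg, FM=0.94, CM=1.0):
    """frames holds per-frame (H, V, D, A) inferred from the 3D skeleton, with D
    the min-to-max vertical excursion seen so far. Returns the frame with the
    highest lifting index, mirroring the automatic origin/destination mode."""
    li = [load_kg / rwl_one_frame(H, V, D, A, FM, CM) for H, V, D, A in frames]
    k = int(np.argmax(li))
    return k, li[k]

# Hypothetical example: two frames of a 10 kg lift from knee to shoulder height.
frames = [(40.0, 40.0, 25.0, 0.0), (50.0, 140.0, 100.0, 30.0)]
print(worst_frame_li(frames, load_kg=10.0))  # the high-reach frame dominates
```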
If you manually pick the origin and destination, we compute the lifting index there; if you let the system pick the frame with the highest lifting index, it may identify something slightly different from what you expect at the origin and destination. This is because, for example, when you lift something and place it in a higher position, what people actually do is reach higher than the destination and then lower the object onto it; we capture how much higher people reach to place something on the destination. Which mode to use is a matter of choice, but it certainly invites a discussion about how we can make the best use of the technology for the NIOSH lifting equation.

This is an ongoing study, but we have compared against actual measurements. We used three subjects of different statures (short, medium, and tall); the ground truth is the actual measurement at the origin and destination, and the task is material lifting. When we manually pick the origin and destination, the mean difference in lifting index is about 0.1. If you want to use this for very sophisticated analysis, that difference may be too high, but for risk screening, the practitioners I talked to found it acceptable; it is just a choice of criteria. In fact, when practitioners apply the NIOSH lifting equation in the field, in many cases they don't actually measure the parameters at the origin and destination; they estimate, sometimes inferring from the object height and so on. When we automatically pick the origin and destination, the mean difference in lifting index is 0.02, but that number doesn't really show accuracy; it just shows the difference between the lifting index at the automatically picked frames and the actual lifting index.

We talked about 3D, and this is where we are heading right now: in the end, I would like to do on-site biomechanical analysis. You may have noticed that what I'm interested in are tools that can be applied in the field without any big hurdles, so I'm moving in that direction for on-site biomechanical analysis. Can you play the video on slide 21? I hope you can see it. We developed a prototype system which extracts the 3D skeleton and connects it with a biomechanical analysis tool; here we used 3DSSPP, developed by the University of Michigan Center for Ergonomics, to identify the biomechanical load on each body part. This is where I am heading right now. At the same time, I am trying to approximately measure, or approximately guess, the force patterns from the video images; that's what we are working on, so that we can fully automate on-site biomechanical analysis using only video cameras or smartphones.

Let me draw conclusions. Smartphone-based, vision-based ergonomic risk assessment enables quick and easy assessment without any special equipment, and it is comparable with existing risk assessment tools. I would also like to stress that at this time we are really targeting risk-screening purposes. A 3D skeleton can also be generated for more in-depth analysis, and I think it can open more opportunities for
sophisticated ergonomic analysis. The last point is that we are moving toward smartphone-based on-site biomechanical analysis, automating force prediction together with the 3D skeleton. I would like to acknowledge my sponsors, the National Science Foundation, the College of Engineering MTRAC program, as well as Toyota. If you have any further questions, feel free to talk with me, and I'll be happy to answer anything you have.

Thank you, Dr. Lee, for your wonderful presentation. I apologize, especially to the audience on the web: due to Adobe Connect limitations, the streaming video on the web is extremely slow, so a lot of the video we showed was choppy. We now have some 20 minutes left for questions, and I would like to open with questions from the web audience.

Okay, here's the first one: what are the error differences of the smartphone approach relative to conventional motion capture, particularly during axial trunk rotation?

Yes, we also compared what the smartphone can produce with what a traditional, sophisticated, high-end motion capture system provides. (Sorry for my posture toward the audience here; I'm trying to stay close to the mic.) With the 2D technology, of course, we have a lot of view-angle distortion. Compared with the commercial sophisticated system, we see very high correlation, but if you pull me into a corner and ask what the angle difference is, I would say around 10 degrees in general; that is the kind of angle difference we are getting with the 2D technology. For the 3D technology, our preliminary analysis shows around 5 degrees. One thing I would also like to mention, if you have experience using commercial motion capture systems, is that one of the problems I have experienced is that they actually lose a lot of data, so they need a lot of post-processing. Given all that post-processing effort, yes, once we get the data it is very accurate, but getting the data is a little challenging. That is something to consider as well.

Okay, the next question: has any of your work been published in journal articles, and if so, can we get the citations?

My earlier work, meaning six or seven years ago, on the computer vision side has been published. Since then I actually haven't published journal articles, because this technology was picked up by a certain agency for its great commercialization potential, and then there were patent issues and that kind of thing, so I didn't publish journal articles about the technology; I apologize for that. We are trying to commercialize it and deploy it widely to help people with risk screening, so the goals are a little different, which I hope you can understand. The recent 2D classification and 3D work hasn't been published, but we are producing a white paper with the company that is commercializing the 2D technology, documenting the validation and the correlations we are getting.

Okay, and the next question: do you plan to extend this technology to real-time
risk assessment?

Yes, that's what I'm interested in. Currently, if you recall my slide on the 2D technology, a 72-second video takes less than 1.5 minutes, typically about one minute. In the end, what I want is that when you walk around your site with your smartphone and see something awkward, you just record it, see the result right away, and then provide feedback to the workers or use it for whatever you are interested in. I don't think it is going to be truly real time, but near real time, with results within a minute or so, is what we are aiming for, and we are heading in this direction.

Okay, thank you. There's one last question, and then we'll jump to the live audience: is there anything in the future that will be developed for capturing hand, wrist, and finger posture?

This is a great question. Let me put it this way: in general, with the computer vision technology we are talking about, if we can see it, we can identify it; if we cannot see it, we cannot identify it. At this time we are focusing on, so to speak, gross posture, because there are not many situations where you can point a camera at the hand and wrist. But once we have a line of sight on detailed hand movement, the same principle applies. However, I would be very cautious about finger movement, because hand and finger models are very sophisticated and require further research. In general, though, wrist movement, wrist flexion, and those kinds of things can work, as long as you have a line of sight and a place to put the camera.

Okay, do we have any questions from the live audience in Cincinnati?

Dr. Lee, when you're using the 3DSSPP analysis, in one of the later slides you showed the ladder movement; is that video also captured from a smartphone for the analysis?

Yes, and we also used a very cheap 3D depth sensor to scan the site; it is not accurate. Let me show a snapshot: here, the skeleton is from the video, and for the ladder you can see a very rough scan of the job site. We used a very cheap sensor to get an idea of what the job site looks like, and then we registered the skeleton to the coarsely scanned job site.

Anybody else?

Dr. Lee, it was a great presentation, thank you. I have a quick question about the angles. When you're detecting the angle with the smartphone, as an output you're indicating whether they're in the high-risk, medium-risk, or low-risk band, right? For REBA and the NIOSH lifting equation, are you giving the scores at the end as an output too, or only high, medium, or low risk?

That was just one simple example: we say high, medium, or low risk based on the REBA classification. In the end, what we are trying to do is automate all the different body parts and then provide a score. For example, we are automating the Liberty Mutual Snook tables right now; that is possible because once we have the raw data, with a little conversion, the scores, or the percentage of the population that can perform the task, can be provided from the Liberty Mutual tables. So this is just one example: because we have the raw data, we can customize it to feed whatever scoring system you are interested in. That's the point.
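As a sketch of that raw-angle-to-score conversion, here is the standard REBA trunk scoring step applied to an angle series; the function name and the 5-degree "upright" tolerance are illustrative assumptions (REBA itself defines the bands):

```python
def reba_trunk_score(angle_deg, twisted_or_side_flexed=False):
    """REBA trunk band from a trunk angle in degrees
    (positive = flexion, negative = extension)."""
    if angle_deg > 60:                       # more than 60 deg flexion
        score = 4
    elif angle_deg > 20 or angle_deg < -20:  # 20-60 deg flexion, or >20 deg extension
        score = 3
    elif abs(angle_deg) > 5:                 # slight flexion or extension
        score = 2                            # (5-deg upright tolerance is assumed)
    else:
        score = 1                            # upright
    return score + (1 if twisted_or_side_flexed else 0)

# Per-frame trunk angles from the raw data -> per-frame REBA trunk scores.
print([reba_trunk_score(a) for a in (3, 15, 35, 70, -25)])  # [1, 2, 3, 4, 3]
```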
So, to summarize, going back to your question: yes, we can do that. (Thank you. Yeah, thanks.)

Okay, we have one more online question: have you used different smartphones, and are there differences in the captured data?

No. What we are talking about is not really specific to the smartphone as a device; the more accurate term would be digital video camera. The reason I say smartphone is that everybody has one, it has a working video camera, and at the same time it has a wireless connection for sending and communicating data. But what we actually need is an ordinary digital video camera, and most smartphone cameras nowadays, anything purchased within the last five or six years, are high-definition quality, meaning typically at least 29 frames per second. So there is no difference across different types of smartphones, as long as they were purchased fairly recently, say within five years.

I have a question, Professor Lee, for you and maybe other NIOSH colleagues as well. There's so much promise in this technology, and it seems like a lot of the methods practitioners in this field use were developed before these types of technologies were thought of. Is there some hindrance to advancing this by referencing things like REBA, RULA, even the NIOSH lifting equation to some extent? If you relinquish those types of references, are there more opportunities with the technology?

This is a great question, and actually what I have been thinking about and learning right now. By the way, I am not formally trained in ergonomics from my undergraduate days, so please pardon my ignorance if I say something naive. The way I see it, most of these tools take into account the difficulty of data collection: their designers knew it was very hard to collect the data. For example, for the NIOSH lifting equation, there is good justification for picking the origin and destination; nobody imagined we would be able to get data frame by frame at 29 or 30 frames per second. We assumed we would have a snapshot, or just observation, and asked how to approximate an analysis from that. That is what I see when I look at the existing tools, and it is no criticism at all; that is what had to be overcome at the time. But what I'm saying is: now imagine we can simply capture everything people are doing. You have surveillance cameras, we have cameras everywhere, so data is not limited at all. Maybe this is a good time to think about how we can revamp what we used to do; maybe something easier and more deployable can be devised, and that is something I'm very interested in. That is also what I would like to learn from your experience: what new opportunities this can open, because there is no longer a limitation on getting the data. This is very granular data, 29 frames per second; we don't even have to use all of it, but now we have that kind of data for all the jobs, all the people, all the work. What can you do with it? I think this is a really great time to
think about this, and not just for computer vision; the same goes for the wearable sensors we are using right now. We are getting the data; then what is the most convenient way for ergonomic practitioners to use it? Ideally, they shouldn't even need training: they just use the technology and get credible results. I think this opens great opportunities for us to work together on how to do things better, or make them easier, for field practitioners.

Thank you for the presentation. I'm wondering, since it looks like you might be getting really close to being able to do this analysis in real time with real-time video streaming, whether you're pursuing the many opportunities that open up with that capability.

Yes. I have only shown the ergonomics perspective, but my earlier work actually started from identifying awkward postures and unsafe behaviors, and I am still working on that for construction workers: unsafe behavior, or workers getting too close to a crane, we are identifying those. So yes, we are getting to almost near real time. The ergonomics side is a little different, because these are cumulative injuries, so posture fixing can happen later; but when we identify unsafe behavior, we have to fix it right away, because even if it is just a near miss, there is a high chance it can lead to an injury. With vision, what I am working on beyond near real time is actually prediction. We are aiming for near-real-time understanding, so to speak, at this time, but with the amount of data coming in we can actually predict; we don't have to predict 10 or 20 minutes ahead, we just need to predict two to five seconds ahead to give an immediate warning to the workers. That is one of the projects in my lab: once we have this pose data, can we predict the pose, and how accurate will that predicted pose be? And can we then predict unsafe events, such as falls and slips, so that we can provide a real-time warning? That is something I am working on in my lab right now, and I think it is the future direction you see from many groups; even though they use different technologies, that is the trend I see down the road.

Okay, we have another question online: some companies do not allow data collected on employees to be uploaded to a cloud system, for security purposes. Do you have the capability to upload to a specific computer or company server instead of a cloud system?

Yes, I am very aware of this problem. Because of all the security issues, not many people are interested in a third-party cloud service. Here, the cloud service is just one example of a server: it can be your own server, your company server, or, if you use a certain vendor's cloud service for your purposes, that kind of server. The hardware hosting the system doesn't have to be third party; it can be your choice, just as you can choose whatever smartphone you are interested in.

Okay, that's the last question we had online, except for one technical question asking whether this presentation is being recorded and available to view later, which we have done
through Adobe Connect. We just wanted to let everyone know that it was recorded, and we'll find a place to post it online so people can view it. Any other questions from the live audience?

I have a question. I really like Brian's question about improving existing ergonomic risk assessment, because as a lifting researcher I realize we really do presume that the lift-off point is the most stressful point. But now, with computer vision technology, we're opening up the opportunity to examine the entire course of the lift. Perhaps it is not really the lift-off point at the origin or the destination that is hazardous; maybe somewhere during the lift could be very dangerous as well, so I think research is needed to validate this hypothesis. My question for you: I noticed that you mentioned lifting frequency in your computer vision work. Is your computer vision technology able to automatically determine lifting frequency, and if so, how?

Yes. If the task is repetitive, we can identify a certain posture at the start of the job and a certain posture at the end, and if it is repeated multiple times, we can capture the frequency. Lifting means repeating the same thing over and over, so we can learn the repeating pattern, where it starts and where it ends, and automate the count.

Going back to your first comment: something I am also very interested in collaborating and discussing more about is the composite lifting index. I think it is really great, but with the composite lifting index you may have many different subtasks, perhaps 10 or 12, and sometimes the subtasks are mixed, because some are in the cycle and some are not. Maybe this kind of video technology can be used to capture those varying cycles and situations. Those are things I am really interested in exploring together; I hope that answers your question as well as your first comment.

It looks like we don't have additional questions at this moment. I really appreciate everybody's attention and participation.
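(Postscript to the lifting-frequency question above: one simple way to operationalize "learn the repeating pattern", as an illustration rather than the speaker's algorithm, is peak detection on the frame-by-frame vertical hand position from the skeleton. The minimum-rise threshold and the synthetic trajectory are assumed for the example.)

```python
import numpy as np
from scipy.signal import find_peaks  # pip install scipy

def lifts_per_minute(hand_height_m, fps=29, min_rise_m=0.3):
    """Estimate lifting frequency from the per-frame vertical hand position:
    each sufficiently prominent peak counts as one lift cycle. min_rise_m is
    an assumed minimum lift height used to ignore small hand motions."""
    y = np.asarray(hand_height_m, float)
    peaks, _ = find_peaks(y, prominence=min_rise_m)
    minutes = len(y) / fps / 60.0
    return len(peaks) / minutes if minutes > 0 else 0.0

# Hypothetical trajectory: three knee-to-shoulder lifts over 30 seconds.
t = np.linspace(0.0, 30.0, 30 * 29)
height = 0.6 + 0.5 * np.clip(np.sin(2 * np.pi * t / 10.0), 0.0, 1.0)
print(lifts_per_minute(height))  # -> 6.0 lifts per minute
```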
