Inventors Invent Three New Takes on Assistive Technology


Moderator: Thank you, Will, and thanks, everyone, for joining us today. Welcome to Inventors Invent: Three New Takes on Assistive Technology. Inventors have long been inspired to apply their genius to helping blind people. Think of innovators in recent years like Mike Shebanek, who created VoiceOver at Apple, or Jim Fruchterman, who worked so hard to deliver Bookshare from Benetech. And those are just two people; there are many of them. Today, innovators have a nearly miraculous array of affordable technologies to work with, including lidar, computer vision, high-speed data networks, and much more. In this session we're going to talk to three people turning those core technologies into remarkable new tools for people who are blind or visually impaired. So it's with a great deal of respect and admiration that I'm happy to introduce our panel of creators.

We have three gentlemen today. First is Cagri Hakan Zaman. He's the co-founder of Mediate, an innovation lab based in Boston, which is the creator of the Supersense app for people who are blind or visually impaired. Cagri is an expert in spatial computing and intelligence who aspires to create AI systems that enable and empower individuals in physical and digital spaces. Prior to establishing Mediate, Cagri completed his PhD at MIT; his dissertation, Spatial Experience in Humans and Machines, offered an investigation into the computational aspects of human spatial perception and reasoning, with the aim of developing spatially aware machines.

Louis-Philippe Massé is the director of product management at HumanWare. Dr. Massé has a doctorate in physics and worked for more than 20 years in fiber optics and 2D and 3D sensor design, as well as in product development in the field of metrology and product inspection. He started at HumanWare as the director of product management and is currently vice president of product innovation and technologies, where he is responsible for product management and development. He is overseeing the launch of StellarTrek, which is what we're going to talk about today.

Finally, Kürşat Ceylan. Kürşat was born in 1986, and he was born blind. He studied psychological counseling at university, and during his university years he was selected to attend the YGA leadership program as one of 50 people out of 50,000 applicants. He volunteered on projects for the socioeconomic development of visually impaired people, and after his graduation he started work at Roche in Istanbul. During this time he was producing and hosting an award-winning Turkish radio show called Exploration of Senses That Are Suppressed by Sight. Kürşat has won many awards for the WeWALK cane project, where he is a co-founder, and we're going to talk about WeWALK today. So welcome, gentlemen, and thank you for joining us today.

I'm going to begin with Kürşat. We're going to quickly step through some of the product features; it's a cane and it's an app. Perhaps you could give us the very quick rundown on what the WeWALK project is all about and how it helps people who need assistance.

Ceylan: WeWALK is a smart cane developed for visually impaired people. It alerts the visually impaired user through haptic feedback about upper-body obstacles such as trees and poles, it gives turn-by-turn navigation, and it informs me about restaurants, cafes, and stores while I'm passing by. Most importantly, it gains new features by integrating with cutting-edge smart solutions.
So this is my WeWALK cane. You can see the obstacle detection sensor here, it has a touchpad here, and it also has a microphone, a speaker, and a gyroscope, accelerometer, and compass built in. While visually impaired people are walking, they can navigate WeWALK's voice menu by swiping on the touchpad.

Moderator: I see. Does it work independently of an app, or does it require an app to work along with it?

Ceylan: Actually, both. You can keep using it as a standalone device and benefit from its obstacle detection feature, but when you pair it with your smartphone through its Bluetooth module, you can reach navigation and the other smart features of WeWALK.

Moderator: I see. And what are the key technologies involved in the cane? What technologies are you using to deliver these features?

Ceylan: Most importantly, we use IMU sensors, which help us provide a more accurate navigation experience to visually impaired people. When you are walking on the street, navigation technologies make our lives much easier, but we have to hold our smartphone the whole time. That means both hands are occupied, one holding the white cane and one holding the smartphone, and it's an unsafe situation. Thanks to WeWALK's built-in sensors, we can put our smartphones in our pockets and keep getting navigation through WeWALK's touchpad, with a more accurate navigation experience.

Moderator: Right, so you're detecting above-ground obstacles with sonar?

Ceylan: Yes, you're right.

Moderator: So that gives a little more safety to the person using the cane.

Ceylan: Exactly.

Moderator: And then there's one hand free, which is a big advantage, of course. You've got your phone in your pocket and you're using your cane.

Ceylan: This is so important. Around three years ago I was in New York to give a speech at the United Nations, and while I was going to my hotel I was using my traditional white cane, holding it with one hand; at the same time I was holding my smartphone with the other, and I was pulling my luggage with three fingers. I was trying to manage all these things, and not surprisingly, I bumped into a pole. That's why you can see some scars on my forehead. As you may know, visually impaired people are trained at schools and at blind organizations, during mobility training, in how to use their empty hand to protect themselves; however, because we are holding the smartphone, we cannot apply these methods.

Moderator: Right. And you've also built in custom navigation. How custom is it? Are you building on top of other navigation systems, and how have you put that together?

Ceylan: Yes, of course. We use Google's infrastructure, but we built our own technology on top of it to provide a more accurate navigation experience, because visually impaired people need more accurate navigation commands. For example, it tells me "turn right in 100 meters." Okay, I will turn right, but there could be two different streets on my right; which one should I take, the one in the one o'clock direction or the three o'clock direction? So the technology we developed provides clock-face accuracy in the navigation experience for visually impaired people.
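The clock-face directions Ceylan describes can be derived by comparing the user's compass heading (for example, from the cane's IMU and compass) with the bearing toward the target street. A minimal illustrative sketch in Python, not WeWALK's actual implementation:

```python
def clock_direction(heading_deg: float, bearing_deg: float) -> str:
    """Map the angle between the user's heading and the target bearing
    to the nearest clock position.

    heading_deg: which way the user is facing (0 = north, clockwise).
    bearing_deg: compass bearing from the user to the target.
    """
    relative = (bearing_deg - heading_deg) % 360   # 0..360, clockwise
    hour = round(relative / 30) % 12               # 30 degrees per "hour"
    return f"{12 if hour == 0 else hour} o'clock"

# Examples: facing north, a street due east is at 3 o'clock.
print(clock_direction(0, 90))    # 3 o'clock
print(clock_direction(0, 30))    # 1 o'clock
print(clock_direction(350, 20))  # relative 30 degrees -> 1 o'clock
```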
Moderator: I see. And you also have a voice-activated interface. Is that the predominant way the user interacts with the cane and the app?

Ceylan: Yes, you're right. As I showed at the beginning, the WeWALK smart cane has a voice menu, so visually impaired people navigate through it by swiping on WeWALK's touchpad; that's one way of interaction. At the same time, we have just released our voice assistant interaction as well. It is in beta right now, but we believe in the potential of voice assistant interaction through the WeWALK smart cane, and we will improve it day by day.

Moderator: Right. As there are more sensors, more data, and more complex algorithms providing information to the end user, the interface gets more complicated. How do you convey this information in a timely way to the user? What have you learned about that problem now that your device is in use with many people?

Ceylan: You're right, we need to provide a more simplified user experience, and that's why we started with a voice menu driven through WeWALK's touchpad. But we observed that there are many visually impaired people who interact with their smartphones through voice assistants; I have met many visually impaired people who don't even use VoiceOver, they are just using Siri. That's why I think the voice assistant approach can make our users' lives easier and can solve this problem of too much data to process.

Moderator: And what about haptics? Do you think haptics have a long-term role in interaction with the end user?

Ceylan: I think haptics will take a place in our lives, but we will have to learn how to use them in a more appropriate way. Currently we alert our users through haptic feedback about obstacles, but we have received some requests from our users: they want to hear a sound as well. That's why these days we are working on adding sound, too. It doesn't mean that haptics are not a suitable way to communicate with the user, but I think we should find a better way of using them, maybe applying different patterns, so that they are more understandable for the visually impaired user.
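One way to realize the "different patterns" idea is to map obstacle distance to pulse density, with an optional sound channel alongside the vibration. The sketch below is purely illustrative, not WeWALK's firmware; read_distance, vibrate, and beep are hypothetical callbacks standing in for the cane's ultrasonic sensor, vibration motor, and speaker:

```python
import time

def pulse_interval(distance_m: float, max_range_m: float = 1.5) -> float | None:
    """Seconds between haptic pulses, or None if nothing is in range."""
    if distance_m >= max_range_m:
        return None
    # Linear ramp: 0.1 s between pulses point-blank, 0.6 s near max range.
    return 0.1 + 0.5 * (distance_m / max_range_m)

def alert_loop(read_distance, vibrate, beep=None):
    """Poll the distance sensor and fire haptic (and optional audio) pulses."""
    while True:
        interval = pulse_interval(read_distance())
        if interval is None:
            time.sleep(0.1)      # nothing in range; idle poll
            continue
        vibrate(0.05)            # short buzz; closer obstacle = denser pulses
        if beep is not None:
            beep()               # optional sound channel users asked for
        time.sleep(interval)
```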
Moderator: Great. As you look to the future, on your product roadmap, what is the most important thing you're thinking about? What stands out as the really important technology or feature that your customers really want?

Ceylan: It's not difficult to guess: we keep developing our technology and we always push our boundaries. We want to make WeWALK a personal hub for visually impaired people, where they can reach all sorts of information and technology about their mobility experience. And we feel ourselves so lucky, because we are not alone in this journey. WeWALK's AI studies were funded by Microsoft, because WeWALK is one of the Microsoft AI for Accessibility startups, and we are also in a research and development partnership with Imperial College London to advance our technology. Currently we are working on a mobility training feature for WeWALK. As you know, visually impaired people are trained in how to use the white cane, but unfortunately there is no follow-up mechanism. With WeWALK's mobility training feature, teachers will have a chance to follow their students' mobility improvements. This is the project, the technology, that we are building together with Microsoft right now.

Moderator: You've mentioned that to me in the past. It's a very impressive system, because trainers can then see the data even when the trainer is not with the student.

Ceylan: Exactly, exactly.

Moderator: Remarkable. So how many people are on the team right now?

Ceylan: We are a team of 23, and we are headquartered in London, UK, with subsidiaries in Turkey and the USA as well.

Moderator: Great. And the price of the cane?

Ceylan: It costs $600 currently, and it's available now. So far we have reached thousands of users in 59 different countries.

Moderator: Great. Well, congratulations on your success so far. Let's turn to Louis-Philippe Massé and talk about the StellarTrek. StellarTrek is a very ambitious device, because it's coming in on the heels of another very successful device, the Trekker. Is that right, Louis-Philippe?

Massé: Yeah, exactly. We have been in the navigation device market in the assistive technology field since our first device, and it was pretty successful. So of course we have big shoes to fill, but we're pretty confident that this new model will be well received and very useful with the new features we will introduce.

Moderator: Right. The ambition level is really pretty stunning, because, in the words of your colleagues, it's going to provide tools at every step of our users' everyday lives. That's a huge step up from where Trekker is, right? Trekker had a more limited feature set.

Massé: Exactly. Before, when somebody wanted to go from point A to point B, point B had to be on the sidewalk. Now we want point B to be really at your friend's house, or the restaurant where you're meeting someone, or somewhere else. So we introduced not just navigation on a larger scale, like what Google Maps or other applications could give you. First, of course, our micro-level navigation is really optimized for blind users, but the new feature we'll introduce uses the cameras on the device to actually locate the door that you want to go to. So if you're going to 12 Saint John Avenue, when you are in front of that address you take the device, and I have a prototype here, and you just scan the scene broadly; the device will locate potential doors, eventually identify whether it's the correct door, and give you precise micro-navigation directions to go to that door.

Moderator: That's really pretty remarkable. Can we see the device again? You gave us a short look.

Massé: That's a prototype that is actually used for drop tests and durability tests, so there's no logo on it.

Moderator: Is that approximately the size of what the production model will look like?

Massé: It's exactly the same physical size; the only difference is that it's going to have markings and logos and things like that. You see, it's actually a little smaller than the usual smartphone. It's a little thicker, because we have much more powerful antennas than you would find on a smartphone; that's why it's a little thicker.

Moderator: Well, that leads me to the tough question that a lot of people are probably going to bring up, which is that it's a separate piece of hardware. In the case of WeWALK, we know the cane is separate and it's working with an app.
In this case, the device is replicating a lot of what is in a smartphone, but doing other things as well. Could you explain the advantages of having a separate device and not using one of the existing platforms?

Massé: There are many reasons for that. Of course, we recognize that there are pretty cool applications that you can even get for free on your phone, and they work pretty well. But it's really about the whole journey that you want to make. On the hardware side, we have really optimized the hardware. For example, the antennas are much more sensitive than the usual smartphone antennas, to make sure that GPS reception is always optimal. We will also introduce L5 GPS reception in this model, so the precision of the positioning is sub-meter; that's three feet or better in terms of resolution, which is still not common on most smartphones. And of course, for the AI, we have dedicated hardware to process the images and so on. Finally, one of the advantages is the physical buttons, so the interface is really made for people who are not necessarily comfortable tapping on a smartphone screen. It's all these things in combination; and also, built on our past experience, we wanted this product to evolve gradually. That's why we haven't embraced the all-smartphone bandwagon so far.

Moderator: So there are usability advantages to developing your own hardware, like the physical buttons, and then you can control the trickier electronic elements to amplify the technology where you need to: better antennas, greater sensitivity. Have you been advantaged by all the things that have happened in the technology marketplace in the past few years, like the rise of very sophisticated GPUs and sensors and the availability of cloud compute? Because sitting in Silicon Valley, as I do, people would say, "That's crazy. Why would you develop your own hardware when you've got an unbelievably sophisticated, low-cost platform in the smartphone?" What is your response to that?

Massé: Again, there are many reasons. When everything is optimized to work together, it has many advantages. Smartphones are wonderful machines, and we're not competing against them; actually, most of our users also have Android phones or iPhones, and they're quite happy with them. But a navigation device such as ours is really something our users depend on. For example, battery life is critical: if you're going to some area you don't know, and you're alone, you'd better be sure you can depend on the device. Our battery is made to last much longer than a smartphone battery when it's in use, because if you use a navigation application on most smartphones, the battery depletes quite fast compared to our device; it's more than a factor of five or six, because our device is optimized for this. But in a broader sense, we actually benefit a lot from all the new technologies that have appeared in the last few years, just in the processing power that permits all these calculations. That's why our first navigation device was actually a backpack; it was a big device, but now it fits in a small package, and it has enough power that it can localize and recognize the door you want to go to and guide you there.
Moderator: Right. And to what extent are you drawing on cloud services to provide features? For instance, is your computer vision your own proprietary system, or are you using another platform?

Massé: That's a good question, and actually we're still debating it. We want the device to be self-sufficient, so we want to make sure that even if you're in an area where, for example, you couldn't get any Wi-Fi or cellular signal, you're still able to use the satellites to guide you to your destination. So we will have offline capabilities, even for the AI part; the neural computation is based, in part, on the device. We're looking at how to extend that, of course, because we believe that as we move into the future, internet connectivity is going to be pretty well universal, especially in urban areas. So we do want to benefit from cloud computing and the capabilities it adds to the device. We don't see this as a competition: we have the baseline calculation completely offline, but we will add capabilities later on to use the cloud.
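The offline-first design Massé describes, a baseline that always runs on the device with cloud refinement layered on top when connectivity allows, can be sketched as a simple fallback pattern. This is an assumption-laden illustration, not HumanWare's code; on_device_model and cloud_client are hypothetical objects:

```python
def detect_doors(image, on_device_model, cloud_client=None, timeout_s=1.5):
    """Return door candidates; the device must still work with no network."""
    candidates = on_device_model.detect(image)      # offline baseline, always runs
    if cloud_client is None:                        # no connectivity: done
        return candidates
    try:
        # Optional refinement when a connection happens to be available.
        return cloud_client.refine(image, candidates, timeout=timeout_s)
    except (TimeoutError, ConnectionError):
        return candidates                           # degrade gracefully to baseline
```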
Moderator: That's probably going to be a pretty significant factor in the future, letting the user know that some features may behave differently depending on whether they run on the device or in the cloud. Does that come up as an issue?

Massé: You can make the same parallel with using a smartphone when you go to a remote area: you obviously will not have 5G reception in the deepest countryside, so some of the fancy applications that use a lot of data will not work very well there, or not at all. It's kind of the same. In urban areas, in the big cities, getting internet coverage is not even a question; it's there. So of course we'll benefit when we have it, but when we don't, we'll still have enough capability in the device and in the features to guide users safely to their destination.

Moderator: Will the device require a subscription to a cellular network?

Massé: Since we will not introduce these cloud computing services at the beginning, it's actually something we're still debating internally. Some people are kind of allergic to subscriptions, right? So we might have a subscription-based service, or a one-time fee that will cover the expected lifetime of the product. We're actually debating this, so we're not very clear yet on the exact model.

Moderator: Great. Well, I think it's terrific to hear you talk this way, because I think a lot of people aren't familiar with how difficult it is to figure out a product. You have to address trade-offs and all the rest.

Massé: And I'm speaking for the others as well: we have an additional layer of complexity, in that we need to design devices that are used by people who don't have sight. That's the challenge the three of us here share.

Moderator: HumanWare has a great track record of building great products, so I'm sure this will be interesting. I wanted to dive into one particular feature set, which sounds super attractive. My own wife is blind, and I know that the last few feet of her getting around, finding the right doorway, or indoor navigation, locating the stairwell, something like that, can be especially tricky and even a little dangerous. Your device is ambitious in that respect; it really wants to try to close that gap. How are you doing that?

Massé: As I was explaining earlier, when you're on the side of the street, you're not far away from that building, but you're still at risk of walking into a car that is crossing, or a bicycle path, or some rocks that could be in your path. I'm sure your wife knows about the FFF, the "final 40 feet" (there's also sometimes a fourth F that sneaks into that acronym). That's what we want to de-risk. There's always risk; we don't pretend that we will remove all the risk of blind pedestrian navigation, but we want to give additional help, such that it will be a little like having a friend helping you cover those final 40 feet. We will use the AI to locate potential threats, and sometimes, when you're on the sidewalk, the path to the door is not a straight line, so we will say, okay, you will have to go at ten o'clock for 40 feet and then turn to two o'clock, and so on.

Moderator: And the interface will be audio, voice?

Massé: All the feedback from the device is audio. We have basic voice commands, but since we will have a neural engine on the device, we will also introduce natural language understanding, such that you can use the device as a conversational agent, more like a personal assistant, to help you get to your destination.

Moderator: Right. So you will say to the device, and we haven't given the device a cute name yet, but suppose you say, "Hey, guide me to the door of the Starbucks," and then the assistant will say, "Go this way, but be careful, there's a telephone pole in the way." Things like that, as if you were with a friend.

Massé: Yes, exactly.

Moderator: Great. And what is the price going to be on the device?

Massé: The final price has not been announced yet; it's going to be above US$1,000. As I was saying earlier, we're looking at different models in terms of whether some of the services will be subscription-based or not, so this will be refined in the next few months. We're planning to launch the product around April or May, so before that, of course, we'll have the final price.

Moderator: Sorry, next year?

Massé: Yes, of course, I'm sorry: April or May 2022.
Moderator: So that's very soon.

Massé: Yes, we feel the pressure working on the development of this device. We're pretty lucky at HumanWare: I have a nicely sized team of about 48 people in my R&D group, so we have a lot of people working on different aspects of the product: the hardware, of course, but also the software and the AI, and also all the tests that need to be done. We're in that phase now, really in the testing phase, to make sure that we catch all the potential issues, because a navigation device is not like a recreational audio player or something like that. We know that people will depend on it, and their safety will depend on it, at least in part. So we're really in the beta testing phase, big time, until the launch of the product.

Moderator: Great. Well, thank you for that very deep and candid discussion about the development of StellarTrek. Let's turn to our final participant in this panel, Cagri Hakan Zaman, who straddles the worlds of academia and product development; that's a fun place to be, I've always thought. Could you tell us a little bit about Mediate, the company that you co-founded, what you're trying to accomplish with Mediate, and how it led you to create the app family: Super Lidar, and Lidar Sense is the second one, right?

Zaman: Supersense. Sure. Mediate stemmed from my PhD research at MIT, which is on spatial experience and perception in humans and machines. I was always interested in how we make sense of the environment around us, how we find where we are, how we navigate, how we understand what's going on, in order to develop AI systems that can replicate our skills. With that, I applied to an accelerator at MIT to build a company around these technologies. Supersense, our first product, was a natural fit for this type of task, because one of the communities that needs spatial awareness the most is the visually impaired and blind community. So we devised the Supersense app. Mediate's vision is to enable people in both physical and digital spaces and empower them with new technologies using AI and augmented reality.

Moderator: It's a remarkable app, but it's also a crowded field. On the one hand, I can see it as a really cool proof of concept in a lot of ways, but where do you think that app, or its successors down the road, can stand out? I always got the impression that this is a step down a road, not necessarily the destination.

Zaman: Yes, exactly. We follow a strict user-centered design process in whatever we do. The reason Supersense ended up having some of the features that competitors have is that our user base was demanding they be included, so we wanted to listen to our users and provide those features while actually developing our own technology and solutions. Supersense is our test bed, and, interestingly, it grew to be our major app; it was loved by people and we kept improving it. But what we think is most crucial in an app that is using a lot of the technologies in a smartphone is for it to be task-centered, not feature-centered, meaning it should try to provide a solution as fast as possible for the relevant problems that people have, instead of giving them a lot of toolkits: zero in on the relevant tasks and think about the most efficient and quickest solution to those tasks, so that people will not spend a lot of time trying to find the solution.
I think one of the distinctive features of Supersense is its UX: you can do anything with only two swipes or two taps. You will never spend time trying to find what you're looking for, as if rummaging in a toolbox. We spent a lot of time improving and designing our user experience so that it provides the most efficient way to get to a solution.

Moderator: Right, that all makes a lot of sense. As you look down the road: I was intrigued when you launched Super Lidar, and I know that you have a big interest in AR. Across many of the panels at the show this year, augmented audio and augmented reality are a big subject of conversation; we have a session on indoor navigation, and the Seeing AI team at Microsoft is very interested in audio-based AR. How are you seeing that shape up? The launch of lidar sensors on the iPhone created all kinds of new possibilities, but the possibilities are so broad that it's hard to know where to begin, and you're a pioneer, so we'd love to learn how you see the lidar possibilities and what building with them taught you.

Zaman: Of course. Our first feature in Supersense was what we then called the object finder. It was the first of its kind, if you can believe it: the technology had been there for a while, but apparently none of the other apps had picked up on the very key thing that people need, which is to find things. So the first thing we came up with was a contextually relevant task solver, an object finder, and we started with the intent to provide a general task solver for navigational and spatial-awareness-related tasks. When the lidar came out, it was a no-brainer for us, because all our R&D had been focused on using computer vision technologies to parse the environment and find information in it. So, with what we already knew, we quickly developed Super Lidar, a prototype that is being developed right now with actual users. It gives you a sense of what is around you by converting distances into sound, in rhythms that give you a sense of how big a room is and how you are oriented in it. I think there's a big potential there. Of course, these are early days for lidar; we are still using our computer vision, and down the road I think the technologies will blend, vision and lidar together, to create these kinds of solutions.
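A sonification along the lines Zaman describes might map each lidar distance reading to a tick rate and pitch, so that near surfaces sound fast and high while open space sounds slow and sparse. The mapping below is an invented example, not Super Lidar's actual scheme:

```python
def depth_to_tone(distance_m: float,
                  min_d: float = 0.3, max_d: float = 5.0) -> tuple[float, float]:
    """Map a lidar distance to (frequency_hz, seconds_between_ticks)."""
    d = min(max(distance_m, min_d), max_d)    # clamp to the usable range
    t = (d - min_d) / (max_d - min_d)         # 0 = very close, 1 = far
    frequency_hz = 880 - 660 * t              # 880 Hz close, 220 Hz far
    tick_period_s = 0.08 + 0.72 * t           # 80 ms close, 800 ms far
    return frequency_hz, tick_period_s

# Example: a wall 0.5 m away ticks rapidly at a high pitch.
print(depth_to_tone(0.5))   # roughly (852 Hz, 0.11 s)
```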
Moderator: As a researcher in this field, when you look at where we are today, and given your close connection to the next generation of products, what's most exciting? Or what do you wish you could get your hands on in order to build something that you really know people would love?

Zaman: I think one of the converging areas this year is digital events and what people fancifully call the metaverse: the ability to collect and construct 3D environments and access the information in them. It requires a lot of infrastructure, but what we are headed towards, and what I wish we already had, is a system where people can freely and independently navigate indoor and outdoor environments. Some of these technologies already exist, and people are developing great technologies, but what we are missing is an encompassing ecosystem that can collect and provide all this information for people. If a blind individual goes into any market or any shopping mall, they shouldn't need to ask around about where to go; they could just pull up their device and the information would already be there. I think that would be the ideal solution in terms of navigation.

Moderator: Is that about algorithms, or about data, or both?

Zaman: Both. Being able to efficiently encode and reconstruct environments, store maps and share them over networks, access them on smartphones or other devices, and localize a person indoors: these all depend on the development of new technologies and algorithms as well as infrastructure, and cloud computing especially will be one of the key elements.

Moderator: Right, because this can't be on-device, at least on no device that we can imagine today.

Zaman: Partially it can. We have R&D work, one of the upcoming features that is going to be in Super Lidar, that allows you to record any environment you are in; later on, someone else can localize themselves in the same environment and follow the path you created for them. This could be used, for example, by a hotel to let people find the reception desk or other places. You need to store it just once, locally, and then you can share that information with someone. But this is, again, a localized and isolated solution. For it to be a general task solver, we need more connectivity and cloud computing.

Moderator: Well, the very cheerful and helpful staff at Trader Joe's would really welcome this, at least when my wife walks in the door; it would be a great advantage. That's really fascinating. I promised at the start of this panel that I'd let each of you ask one question of the other panelists, and this has to be the lightning round, because we're out of time. Who has a question they'd love to ask?

Massé: I can break the ice. Thank you. Cagri, I'm very curious, because I know that indoor navigation is definitely something we're looking at on our side, and I can see a good discussion we could have together, for sure. You were talking about multiple devices, multiple sensors, and multiple environments: is it something where your solution could be used on different platforms?

Zaman: Exactly. One of our projects, which is supported by Veterans Affairs this year, is exactly that idea: we would like to create a standard for recording indoor environments and adding information to them, so that anything that can interact with that data can use it. It could be a device by HumanWare, a smart home device, a Google assistant, anything. We want to get started with this type of infrastructure, and we are definitely interested in getting many people onto the same ship so that we can actually grow an ecosystem.

Moderator: It's really like crowdsourcing, right?

Zaman: Yes, it's partly crowdsourcing. Once the standard and the data collection method are there, I think there will be different methodologies. For us, we will make it possible for an individual to use it for their own purposes, and also, as an experiment, to see what else you can add to that representation. If you are thinking about building infrastructure, maybe you want to keep track of changes in the physical infrastructure, so you can share this with contractors and with others.
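The "record once, then let others localize and follow" feature Zaman mentions implies some shareable route representation. The data-structure sketch below is a guess at the general shape, not the app's real schema; every name and field here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    x: float        # metres, in the recorded map's local frame
    y: float
    label: str      # e.g. "reception desk"

@dataclass
class RecordedRoute:
    environment_id: str                     # which stored scan this belongs to
    waypoints: list[Waypoint] = field(default_factory=list)

    def next_instruction(self, pose_xy: tuple[float, float]) -> str:
        """Naive guidance: point the user at the nearest waypoint."""
        wp = min(self.waypoints,
                 key=lambda w: (w.x - pose_xy[0]) ** 2 + (w.y - pose_xy[1]) ** 2)
        return f"Head toward the {wp.label}"

# A hotel records the route once; a visitor's device localizes and follows it.
route = RecordedRoute("hotel-lobby-v1",
                      [Waypoint(0.0, 4.5, "reception desk"),
                       Waypoint(6.0, 2.0, "elevators")])
print(route.next_instruction((1.0, 3.0)))   # Head toward the reception desk
```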
Moderator: Thank you. Cagri, any questions you'd like to ask?

Zaman: Can I take it away?

Moderator: Take it away.

Zaman: All right. I want to ask Kürşat, actually. Kürşat, this is exciting work, and I think this is the first time I've seen the device in your hand. You mentioned that people learn how to use the white cane: are there difficulties for people adopting WeWALK because of its differences, or are you planning to help people adapt to WeWALK through a different type of orientation training as well?

Ceylan: That's why we built our technology on top of a standard white cane: we want to provide visually impaired people with a technology that is similar to their standard tool, the white cane. That's why it doesn't need a new orientation period; but since it's a technology, like every technology it needs some adaptation period. Currently we provide special training sessions for our users, but we are also working on developing an automated training session, so the smart cane will introduce itself and orient the user, and make their adaptation period much easier.

Zaman: Great, that's what I thought. When you mentioned the orientation and mobility feature, I thought that was one of the good directions: instead of relying on what is there, you are innovating the way people interact with the cane.

Moderator: Kürşat, do you have any questions you'd like to ask?

Ceylan: Yes, I have one question for Louis. He showed the device, but since I couldn't see it, could you describe it? Ergonomics are so important for visually impaired people, because they will hold it in their hands while they are walking on the street, so I'm curious about its dimensions, weight, and so on.

Massé: Sure, no problem, I will describe it. It's about the size of a smartphone if you look at the footprint, but it's a little thicker; I would say about two centimeters thick, or a little less than an inch. It's kind of rounded, and on the front are the keys that are used to enter directions or control the device. It's a pretty intuitive interface, pretty similar to our older device; we didn't redefine everything, because people were attached to that physical interface. On the underside of the device there are two high-resolution cameras that are used for the door recognition, and eventually for object recognition as well. With your thumb controlling the keypad, you put the device in front of you and scan left and right, and that's how it will localize potential door candidates. Then, through audio cues, it will say, "We think we have located a door straight in front of you," for example. Then we take a higher-resolution image to do the actual AI door recognition and also recognize the civic number, the street number of the door, the address. I don't know if that's good enough as a description.

Ceylan: It helped me a lot, thank you.
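Massé's description suggests a two-stage pipeline: a coarse scan proposes door candidates while the user pans the device, then a single high-resolution capture confirms the door and reads the street number. A hedged sketch with entirely hypothetical camera, detector, and OCR objects, not StellarTrek's API:

```python
def find_door(camera, detector, ocr, target_number: str) -> str:
    # Stage 1: low-resolution sweep while the user pans the device.
    frame = camera.capture(resolution="low")
    candidates = detector.propose_doors(frame)          # coarse candidates
    if not candidates:
        return "No door candidates; keep scanning."

    # Stage 2: high-resolution confirmation of the best candidate.
    best = max(candidates, key=lambda c: c.score)
    closeup = camera.capture(resolution="high", region=best.box)
    if not detector.confirm_door(closeup):
        return "Candidate rejected; keep scanning."

    number = ocr.read_number(closeup)                   # civic/street number
    if number == target_number:
        return f"Door {number} confirmed straight ahead."
    return f"Found door {number}, not {target_number}; keep scanning."
```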
Ceylan: Can I ask one more question? Which operating system do you use? Did you build your own operating system, or are you using a standard one?

Massé: This one is Linux-based.

Ceylan: A Linux-based device. Okay, thank you.

Moderator: And how many physical buttons are on the front of the device?

Massé: Four, and in the middle there is a kind of joystick; it's not really a joystick, but up, down, left, and right arrows with a center push button, which is the enter command. So it's pretty simple; there aren't thousands of buttons. I'm not blind myself, but even without looking at the device I can easily recognize the buttons, because they all have different shapes and different locations, and they're all accessible with your thumb, so you don't need two hands to operate the device. Users enter their destination and put the device in their pocket, because they have the cane; when they arrive close to their destination, they pull it out to take the image of the door, localize, and do those final 40 feet.

Moderator: Right. Well, thank you for that, gentlemen. I wish we could talk for another hour, because I have a long list of questions I'd like to ask. Absolutely fascinating, and I hope our audience appreciates the incredibly interesting and difficult work you do to create the next generation of fabulous assistive tech. So thank you very much.

Panelists: Thanks. Thank you.

2021-12-05
