Accessible Multimodal Input in Augmented Reality Training Applications - Ashley Coffey + Tim Stutts
ASHLEY: We're excited to be here today. As Thomas mentioned, my name is Ashley Coffey.
I am an emerging technology and accessibility consultant at the Partnership on Employment and Accessible Technology, also known as PEAT. And I'm here with Tim Stutts, over there on the left hand side of the screen, who will also be presenting today. A little bit about myself. As I mentioned earlier, I'm a consultant at the Partnership on Employment and Accessible Technology.
And in this role, I really work towards advancing the accessibility of emerging workplace technologies to increase employment opportunities for people with disabilities. I believe accessibility should not be an afterthought, and as we approach the fourth industrial revolution of technology, it's important that accessibility is baked into XR tools being used in the workplace and beyond.
I'm also a co-leader of the business case workstream within XR Access, and we're currently pursuing a research project on how inclusive immersive workplace technologies can create more inclusive processes and environments for workplaces in the future. Previously, I was an Emerging Technologies Librarian at the University of Oklahoma.
There, I practiced implementing inclusive design for XR tools used in research, innovation, instruction, and collaboration across campus. But that's just a little bit about me... Let's dive into Tim Stutts here. I'm so excited for you all to hear from Tim today. We've got an action-packed slide deck. But a little bit about Tim.
Tim is a multifaceted designer drawn to challenges involving interaction, input, user experience, prototyping, sensory feedback, systems design, data visualization, and more. Tim is very talented in accessibility, and he has experience as an individual contributor, as well as directing the efforts of small design teams to solve complex challenges. Prior to Tim's current role as principal augmented reality product designer at PTC Vuforia, he worked at Magic Leap and IBM, as well as at Faceware Technologies. Tim will talk a little bit more about his background and experience throughout our slide deck.
This is just a brief sampling of some of our accomplishments that we wanted to share with you today. Next slide, please. To kind of start our conversation off here, I'm going to share a few themes from the recently published inclusive XR in the workplace white paper and leadership brief.
These are two resources that we developed in collaboration with the XRA -- the XR Association. They're meant to be resources on how accessible immersive technologies can help employers upskill an increasingly diverse workforce. You can visit PEATworks -- P-E-A-T-W-O-R-K-S -- to download this resource, and also see it on this handy Post-it note written by Thomas here. Thank you, Thomas.
So let's talk a little bit more about XR technologies and why they are beneficial to employers and to use in the workplace. There are a few example images here at the bottom of the screen. The image on the far right shows PTC Vuforia being used in a med-tech application; Tim will share more about that later in this presentation. We also have an example of Spatial in the middle -- which is where we are currently -- and another great use of augmented reality in a manufacturing setting.
But a few benefits of inclusive XR for employers: It can advance diversity, equity, and inclusion in the workplace by enabling collaboration in new ways. It's also a great opportunity to close the skills gap by upskilling and reskilling the workforce. Right now, as we are coming out of the Great Resignation, people are shifting careers and learning new skills, and XR is really poised to help bridge the gap -- to help teach people things quickly, help them learn in new ways, and collaborate in more innovative ways. It's also important to ensure that hybrid workplaces are resilient.
I think we can all agree that we have Zoom fatigue, especially if you work remotely 100% of the time, and XR presents an opportunity to collaborate in new ways. For example, Accenture is deploying 60,000 headsets to their employees for onboarding and training. They've created a virtual twin of their offices.
So that when new employees are onboarded, they can have an experience similar to in-person onboarding. And XR is innovative and competitive: you can really gain a competitive edge in the job market by incorporating XR technologies in the workplace. We talk a little bit more about this in our white paper, but these are just a few of the benefits of inclusive XR for employers. Next slide, please.
In addition to the benefits that are here for employers, there's also a huge business value for inclusive XR. And prioritizing accessibility in XR and immersive tech adoption can give organizations a competitive edge in this tight labor market that we're currently in. And as organizations accelerate digital transformations, you can use XR to engage employees in new ways.
Like I mentioned earlier, Accenture is using XR. Deloitte is using XR -- I saw this week that they're using Virbela to host virtual meetings in a different space. And XR technologies can enable businesses to attract and hire from more diverse talent pools. And there are proven benefits that include improved job training and enhanced collaboration.
So think of the time spent onboarding a new employee. Let's say it's 24 hours. What if that could be condensed into 10? You know, it's helping bridge that gap in knowledge retention. And to ensure that people with disabilities can access these benefits, XR tools have to have accessibility features by design, and not as an afterthought. Case in point... Spatial should have captions built in. Right? So we can all communicate effectively.
So it's important to raise awareness -- but not just raise awareness. Actually take action by designing XR tools with disability inclusion in mind. Next slide, please. Now, here are a few examples of designing for inclusion. Take into consideration how the application or experience you're using or developing can be used by different people with different sensory, physical, and cognitive needs, including those with activity and environment limitations. So I have a table here that takes into consideration people, activity, and environment. We have moving and touching in one column. Hearing and speaking in the second column. Seeing and observing in a third column.
And thinking and learning in the fourth column. So as you're looking through incorporating XR into the workplace, take into consideration these aspects, to make sure all people can use XR technologies. Next slide, please.
Now, when it comes to diversifying your team, make sure your team includes people with diversity of experience, perspective, and creative ideas shaped by race, ethnicity, gender, age, sexual identity, ability, disability, and location, among others. You know, we always talk about nothing about us without us. Hire.
Include. Bring people to the table with diverse perspectives for designing these tools. Because that is how we bake accessibility into XR in the first place. Leverage Employee Resource Groups in your company as well. And if your company does not have an employee resource group, consider developing one.
Next slide, please. And provide flexibility and options in your tools. Tim will cover this a little bit more in our talk today. But as you can see here, this is a variety of different multimodal inputs for accessing information. So test for flexibility and options in input modalities, interaction modes, and outputs.
For example, supporting different ways of communicating -- like voice, text, or sign language -- might help employees in loud or quiet environments. Here on the far right, we have a picture of a keyboard and a virtual phone screen, as well as a Magic Leap One controller. And when we dive into Tim's part, he'll talk about designing some of these multimodal inputs.
But avoid designing for the average user. Design flexibility can support all individuals. Next slide, please.
Now, without further ado -- that was just a brief, brief touch of what we have in our Inclusive XR in the Workplace white paper and brief. But I wanted to share a few points to set the stage for what Tim is gonna share today. He put a lot of work into these slides, and he's excited to share them with you. I'll kick it over to you, Tim. TIM: Thank you so much, Ashley, and the a11yvr Meetup for having me.
This is actually the second talk I've done with this Meetup. I was fortunate to speak as a part two to the haptics talk that Eric Vezzoli of Interhaptics gave last year. It was kind of a tack-on, where I just spent ten minutes on haptics.
And I'm excited to have a talk where we can talk more broadly about accessibility within multimodal inputs in AR. So just to kick things off, I'll talk a little bit more about my background. The Magic Leap... JOLY: Can I interrupt you and ask you to move to the left of the screen? I haven't got a good view of you. TIM: Sure, sure.
No problem. How is that? Is that okay? JOLY: Where Audrey is. THOMAS: Move where Ashley is. JOLY: Ashley. THOMAS: Do you mind moving to where her avatar is? TIM: The other left. I'm gonna go there and rotate.
How about now? THOMAS: Move forward a little bit. TIM: Sorry. I switched sides because it was very popular on that side before. THOMAS: If you move a little bit closer... Move forward just a little bit... TIM: Ashley is gesturing for me.
THOMAS: Are we good, Joly? JOLY: Yeah, that's good. TIM: Okay. Fantastic.
So... I'll just go ahead and start from the top. So yeah. Very excited to be here. Thank you, Ashley, for the intro, in covering some of the workplace stuff you're doing with PEAT.
Awesome work. I'm really excited to give this talk. I had the opportunity to speak after Eric Vezzoli at an earlier a11yvr talk focused entirely on haptics, and I'm excited in this talk to expand more into multimodal inputs. Going forward, as I talk about the work I'm doing now for Vuforia, the Magic Leap work I did in a previous role is really important, because it lays the foundation.
Because the Vuforia apps that I will discuss tonight were actually built on the Magic Leap platform. So at Magic Leap, I served as senior interaction designer from 2016 to 2018, and then lead interaction designer from 2018 through 2020. I worked specifically on the operating system for the platform, focusing on input, sensory feedback, and accessibility.
I want to talk a little bit about Magic Leap and what it is, for those who might not know, or might need a refresher. Magic Leap is an AR head mounted display -- I'll sometimes use the acronym HMD for head mounted display. Until somewhat recently, they were focused on both consumer and enterprise applications, but in the past couple of years, they've shifted to be more enterprise.
On the top left there's an application called Avatar Chat. Where you can chat with virtual human avatars. Kind of like Spatial.
And here we see three different avatars of mixed backgrounds. And a menu of emoji. So that's one application.
At the top right is the screens application. So someone is reclining on a sofa, watching a virtual screen in their living room. You can imagine the appeal of this.
You can have that 17-foot TV you always wanted, for just a fraction of the price. Bottom left is a keyboard -- specifically a numeric one. Keyboard and text entry are areas I worked on quite a bit in the platform. Here you see a beam pointing at the keys of the keyboard, which are floating in space, controlled via a Magic Leap controller.
(audio echoing) And the bottom right is... I'm hearing some feedback in the mics. I'll just keep going, though. So the bottom right is the Magic Leap launcher, where users go to launch applications.
So here you see a radial arrangement of app icons with the most recent app in the center -- in this case, Gallery, which is an app kind of like the Photos app on Apple's OS. So I'll talk a little bit about my XR accessibility advocacy and design work. Accessibility for mixed reality was a big part of my day-to-day at Magic Leap, where I served as vice chair of the Leapable group, chaired by Bill Curtis-Davidson. Bill couldn't be here tonight; he is currently on medical leave.
And we wish him well. In the bottom image, you can see him and me standing out in front of the Tata Innovation Center, with our Magic Leap devices on. We were both working at Magic Leap at the time, on accessibility for the platform, and we were attending the first XR Access Conference at Cornell Tech. So that was an exciting moment. And then...
I hear some claps! Some of the images at the top are from our group's volunteering in the community -- some things we did with the Dan Marino Foundation in Miami. After Magic Leap, I took a break from XR accessibility for a year; my last role, at Faceware, was more focused around facial motion capture. But in my new role at PTC, it's a renewed focus of mine going forward, because I'm working on head mounted displays again.
I'll talk a little bit about the Magic Leap Control. Pictured is a Magic Leap Control in hand -- the primary input mechanism shipped with each device. There is a dark-skinned male hand holding this wand-like device, pointing it at an invisible virtual object that we are unable to see, but can infer. And there's a glowing touch pad that he's using his thumb to activate -- presumably some virtual button. So the Control features six degrees of freedom, or 6DoF, pointing capability.
This provides position and rotation in 3D space for use in targeting UI. So it's pretty close to being like a real world pointer: you might imagine holding a laser pointer, pointing at a wall, with the flexibility to rotate it in hand and perhaps move your body forward and backward. This does the same in augmented reality. The touch pad has 4K resolution, with force sensitivity.
It's a circle. So I joked that it's actually πK. It's a bad math joke. Because it would be 4K if it were a square.
So you can shave some pixels off the corner there. But yeah. You get a pretty nice high level resolution. It also features a mechanical bumper and trigger button. And those are underneath the controller. And then finally this haptic motor and LED halo, which I mentioned, for additional sensory feedback.
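To make the 6DoF pointing concrete: targeting UI with the Control boils down to casting a ray from the controller's pose and intersecting it with a flat panel. Here's a minimal, illustrative sketch in plain Python -- not Magic Leap's actual SDK; the function and vectors are invented for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def raycast_to_panel(origin, direction, center, normal):
    """Intersect a 6DoF controller ray with a flat UI panel.

    origin/direction describe the controller pose; center/normal
    describe the panel's plane. Returns the world-space hit point,
    or None if the ray is parallel to the panel or the panel is
    behind the controller.
    """
    denom = dot(direction, normal)
    if abs(denom) < 1e-6:
        return None  # ray runs parallel to the panel
    offset = [c - o for c, o in zip(center, origin)]
    t = dot(offset, normal) / denom
    if t < 0:
        return None  # panel is behind the controller
    return tuple(o + t * d for o, d in zip(origin, direction))

# Controller at shoulder height pointing straight ahead (-Z),
# panel one meter in front of the user, facing back toward them.
hit = raycast_to_panel((0.0, 1.4, 0.0), (0.0, 0.0, -1.0),
                       (0.0, 1.4, -1.0), (0.0, 0.0, 1.0))
# hit is (0.0, 1.4, -1.0): the center of the panel
```

Rotating the Control in hand changes `direction`, and moving your body changes `origin` -- which is exactly the laser-pointer flexibility described above.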
So we'll get into some of the symbolic input methods on the operating system for Magic Leap. On the left, a virtual keyboard -- here, a numeric one -- and a user is moving the Control along a series of number keys floating in space, typing a password. The keyboard supports Control targeting and activation of keys via six degrees of freedom.
But you can also use the touch pad: you can just swipe your finger on it while assuming focus of the UI with your head pose. So you can look at a panel and swipe, and it behaves like a mouse. That's a really great accessibility feature for those who are unable to point with their hands using the 6DoF controller.
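The touchpad-as-mouse idea can be sketched as a relative cursor that accumulates swipe deltas on whatever panel has head-pose focus. This is a hypothetical sketch, not the shipped implementation; the class and constants are invented:

```python
class TouchpadCursor:
    """A relative, mouse-like cursor driven by touchpad swipes.

    Swipe deltas arrive in normalized touchpad units; the cursor is
    clamped to the panel that currently has head-pose focus, so a
    user who can't raise or point the 6DoF controller can still
    reach every control. Panel sizes are in meters; the sensitivity
    value is an arbitrary illustrative choice.
    """

    def __init__(self, panel_width, panel_height, sensitivity=0.5):
        self.w, self.h = panel_width, panel_height
        self.sensitivity = sensitivity
        self.x, self.y = panel_width / 2, panel_height / 2  # start centered

    def swipe(self, dx, dy):
        """Apply one swipe delta and return the clamped cursor position."""
        self.x = min(max(self.x + dx * self.sensitivity * self.w, 0.0), self.w)
        self.y = min(max(self.y + dy * self.sensitivity * self.h, 0.0), self.h)
        return self.x, self.y

cursor = TouchpadCursor(panel_width=0.4, panel_height=0.3)
cursor.swipe(0.5, 0.0)   # swipe right: cursor moves from center toward the edge
cursor.swipe(10.0, 0.0)  # a huge delta still clamps to the panel border
```

The clamping is what makes this safe for spatial UI: no swipe can fling the cursor off the focused panel.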
And there are also direct gestures you can use on the keyboard, too. So you can reach out and type on the keys that you see before you. There's a mobile app -- pretty straightforward. It runs on iOS or Android, and it offers a system level keyboard on either platform for input. So users can use this to type into a text field. And above that, in the same image, is a touch pad.
So it kind of mimics the control touch pad. And so you can use it to move a cursor around. At the top right, K600, from Logitech, is a wireless Bluetooth keyboard that's supported on the platform.
And one of the great things about this keyboard is that basically every key does something. Bill and I worked really diligently at that, to make it flexible. And you can use its touch pad as well, to navigate spatially.
And then the bottom right is voice dictation. You can see a rippling voice visualization that responds to your speech, and you access that via the virtual keyboard. Okay. Some other multimodal areas in the OS that I'm proud of, that I was able to touch while I was there... Audio is an area that's near and dear to my heart.
And there was a lot of multimodal input feedback for just setting audio level. So on the top left, you see a radial on-screen volume indicator for when you adjust the volume. It floats in front of your view. Not fully blocking it, but just enough so you can see what level the volume is.
At the top right, you have dials in Settings, which is an app that does what it sounds like. It also lets you do really fine adjustment -- if you wanted 7.3 as a volume level, you could dial it in there, if you needed to tune it like that. There are function keys on the Bluetooth keyboard for mute and volume. And then you can use voice commands -- say: Hey, Lumen, mute volume.
So you use an invocation phrase coupled with a voice command to execute things like that. Okay. Audio volume level buttons and LED patterns are also on the Lightpack. So there's a nice multimodal example. At Magic Leap, I also worked a lot on sensory feedback.
The primary is visual, and we often talk about it as if it's the only one. But there's a lot more going on.
In particular, there's sound. It can happen via built-in speakers, as well as headphones -- some Sennheiser headphones work with the device -- giving the user spatial audio cues. For haptics, we have a motor built into the control itself, to respond to input. And there's the tactile feedback of the controller itself.
And the trigger even has its own mechanical sound. So yeah. I had the opportunity to work on 40 different LED patterns across the hardware. And 20-plus haptic patterns. And 100 spatialized sounds.
So that was really fun. Oh, before I forget -- on the previous slide, there's a nice Venn diagram of the visual, the haptic, and the sound, showing how they can overlap, with an emphasis on that sweet spot in the middle where they all happen together. I also got to work on Magic Leap 2 before I left. That's really about all I can say. But you can imagine this product has been in the making for a long time.
And I was fortunate enough to touch on it, before I ended my time at Magic Leap. Now I'm gonna shift to the main part of the talk. Which is my work at PTC. But I'm glad you have a primer with Magic Leap, because we're gonna be referencing it a lot in these apps.
So at PTC, I work within a group called Vuforia. The Vuforia group is focused entirely on augmented reality applications, mainly in the manufacturing and pharmaceutical sectors -- but broader than that as well. I currently serve as a principal augmented reality product designer for Vuforia, where I've been since the middle of last year. I would like to go ahead and mention our team. I work specifically on the Vuforia HMD -- head mounted display -- work instructions design crew. This team is led by our senior director, James Lema, on the bottom right. He has previous experience working on HoloLens at Microsoft.
So he has... I'm kind of like the Magic Leap alum, and he's the HoloLens alum. We've certainly worked on both platforms and share our expertise. I'm next to James -- by the way, I'm a Caucasian male with brown hair and a beard. Middle aged, approximately.
To describe myself. We also have Brandi Kinard, a brand-new designer on our team, and Than Lane, another principal designer, who's focused a lot on 3D design work. On the top right, Luisa Vasquez, who recently left PTC for another role, but she did some great advisory work on the HMD apps that I'll talk about tonight. Steve Jackson has been a great resource to us.
Steve, top center, is a lead designer on the Vantage application. So Vantage, which I'll talk about for Magic Leap, runs on a bunch of other platforms. So Steve handles all of the tablet and mobile applications of Vantage.
And finally, Joel De Guzman is an excellent 2D graphic designer who has been really helpful for our design language and working on icons and illustrations. So that's our team. I'm gonna get into some use cases and solutions. Let's talk about the hardware first. And what we currently support.
So: head mounted display devices and applications for Vuforia work instructions. These are running on Magic Leap One -- the new Capture and Vantage apps, which I'll go into and share with you tonight. There's also Microsoft HoloLens, which supports the original Capture and View. Those apps are a couple years old at this point, but they still work and are still in use by some clients. And then there's the RealWear HMT, which also runs the original Capture and View application.
Pictured at the top left is a Caucasian woman wearing a HoloLens device. In the bottom left corner is a Latinx woman wearing a Magic Leap device, and then you have a Caucasian male contractor on a rainy construction site wearing the RealWear -- basically showing off the fact that RealWear is probably the only one you would be comfortable having in the rain.
The Magic Leap and HoloLens are best described as goggles, worn over the head. The RealWear is a tiny touch screen parked on a boom just below your eye, giving you a kind of tiny screen view there. A couple of things are great about it. Being that close to the eye, it actually provides a lot of resolution.
And also, it's amazing for voice commands -- which are the only input that RealWear supports. Okay. My hypothesis, and kind of the central claim of this talk, is that building usable, multimodal-input augmented reality applications with multisensory feedback to overcome barriers in an industrial work setting also benefits accessibility. So this is kind of a bold claim.
And by saying this, I'm not saying that we are off the hook to support accessibility needs outright. But what's interesting is that in the manufacturing setting, we have a number of obstacles to overcome that are similar to those in accessibility. So, things we can do with head-mounted display applications: We can optimize for content readability, regardless of light conditions. We can have more onscreen text and stronger haptic and visual feedback, which helps in loud settings.
We can use hand gestures, to avoid the need to pick up a hardware controller when that's not possible. This is huge for our current Capture and Vantage products for HMD, because in the factory, people are often holding other objects or wearing gloves that make picking up objects problematic. And we can use far field hand gestures to eliminate the need to reach for objects that are not within a user's reach.
Finally, the UI can have a calming and simplified design that helps prevent cognitive overload from overwhelming stimuli. On the left hand side, you'll see a simple UI -- our original Capture app on HoloLens -- with some actions on it, as well as a "take a photo" voice tip that lets a user know what to say if they want to take a photo with voice. In terms of our offerings, I'm gonna talk primarily about Capture and Vantage. But it's important to understand the full Expert Capture suite of work-instruction apps.
So Capture is what it sounds like: we're using it to capture instructional content. One of the real benefits of our applications is that we can capture things like area targets. An area target is a spatial marker that relies on Vuforia Engine technology -- think of it as a marker in a space that uniquely identifies the space.
So I can record images and video, but I can also put a marker on a particular table in a factory. Then, once a worker runs that step again in a procedure, that anchor can appear in the same place in the factory setting. So it's really powerful in its ability to capture these varied targets.
There's an app called Editor. Unlike Vantage and Capture, it does not run on the HMD -- it's a web application.
And here you can see a laptop with a motorcycle on the screen. In this app, users edit the procedure content captured in the Capture app, putting it together on a computer, because eventually they're gonna publish it to the Vantage app. That's the app where workers will go back through procedures -- often many workers, and a lot of repetition. So they'll want to be able to go -- in this case, you see a tablet, which is presumably our Vantage mobile app -- to a motorcycle and do a procedure on it.
And follow step by step with spatial markers. And finally, the app on the bottom is another web app called Insights, which is used to analyze the progress of the whole operation. So it's interesting: the two apps I'm gonna talk to you about tonight, in terms of workflow, are wedged between non-XR apps. One day, we would love for those to be XR too. But at the moment, HMD apps cover probably about half of our workflow.
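As an aside, the area-target anchoring idea described a moment ago -- save an anchor relative to a recognized spatial marker, then re-resolve it when the marker is detected again -- can be sketched like this. It's a deliberately simplified, translation-only illustration, not the Vuforia Engine API:

```python
def world_to_target(anchor_world, target_world):
    """Store an anchor as an offset from the area target's origin."""
    return tuple(a - t for a, t in zip(anchor_world, target_world))

def target_to_world(anchor_offset, target_world):
    """Re-resolve the anchor once the target is detected again,
    possibly at a different pose in a later session."""
    return tuple(o + t for o, t in zip(anchor_offset, target_world))

# Authoring session: a table is recognized at (2, 0, 5) in world
# space, and the author drops an anchor at (2.5, 1, 5) -- half a
# meter along the table, one meter up.
offset = world_to_target((2.5, 1.0, 5.0), (2.0, 0.0, 5.0))

# Playback session: the same table is recognized, but the device
# booted with a different world origin, so the table now sits at
# (-1, 0, 3). The anchor lands at the same physical spot.
resolved = target_to_world(offset, (-1.0, 0.0, 3.0))
# resolved is (-0.5, 1.0, 3.0)
```

The key point is that the anchor is stored relative to the recognized space, not to the device's arbitrary session origin -- that's what lets markers reappear in the same physical place run after run.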
I'm gonna talk about input opportunities on our platform. There's the Magic Leap controller on the left; we support it for the Vuforia Vantage and Capture applications. And we use things like the trigger to activate virtual buttons, as is happening in this illustration: the user is pointing the control at the virtual button and clicking the trigger.
We can also do near field gestures: the user can reach out with their finger and press a virtual button, though the buttons need to be close enough for a user to do that. And far field gestures can be done from far away. So here you see the palm of a hand casting to a button, and then other digits on that hand -- in this case the thumb and index finger -- are used to perform an air tap gesture that does the click confirmation. So you can basically point and click with one hand. Which is nice.
And then finally, there are voice commands with our "Hey, Lumen" invocation phrase. Those were illustrated examples. I like showing this too, because you can see what it looks like on our platform -- minus the kind of demo-mode skeleton effect happening with the hands. That's not part of our app; it's just something that shows up when we're testing. But you can see on the left a user reaching out and touching the menu directly. And if it's close by, that could be very convenient.
But if you are unable to move to the menu, for whatever reason, there's the far field hand gesture. So in the second image, you see the palm of the hand targeting a lock/unlock toggle on a menu, which allows the menu to be situated in space or in relation to the user's head pose. The top right is the control pointing at another button on the same menu, one that's used for putting down anchors -- the area target feature I mentioned. And then finally, voice commands.
And note the tool tips. In the two images on the left, when the user is hovering over the UI, they're reminded of the voice command. So a tip says: Hey, Lumen, lock menu. Or: Hey, Lumen, snap photo. So this tells the user what they can say, if they're not sure.
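The pattern behind those tips -- a wake phrase gating a small vocabulary of commands, each one advertised by its tooltip -- might look roughly like this. Purely illustrative: the command strings come from the tips just described, but the code is not PTC's implementation:

```python
WAKE = "hey lumen"  # normalized invocation phrase

# Each registered command maps the phrase its tooltip advertises to
# an action. Return values stand in for the real app behavior.
COMMANDS = {
    "snap photo": lambda: "photo captured",
    "lock menu": lambda: "menu locked to world",
    "mute volume": lambda: "volume muted",
}

def handle_utterance(utterance):
    """Run a command only if the wake phrase prefixes the utterance."""
    text = utterance.lower().strip().replace(",", "")
    if not text.startswith(WAKE):
        return None  # ambient speech: ignore
    command = text[len(WAKE):].strip()
    action = COMMANDS.get(command)
    return action() if action else None  # unknown commands are ignored too

handle_utterance("Hey, Lumen, snap photo")  # -> "photo captured"
handle_utterance("please snap photo")       # -> None (no wake phrase)
```

Gating on the wake phrase is what keeps ordinary factory-floor conversation from accidentally triggering actions.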
And it's a really nice feature too, since learning all the voice commands would otherwise be challenging. I'm gonna show a video of inputs in action. So here a user is placing area target markers.
In this case, it's me in the backyard. I'm putting some on some foliage. And I'm doing so with a control. And then I'm going back and targeting the menu. And getting another anchor. And putting that one down, with a hand gesture, by doing the air tap gesture in the air.
And I'm looking around, waiting for the red indicator to appear. And then I'm giving a voice command -- Hey, Lumen -- to set the anchor. And it'll find the tree and place it there. So that's a nice example of the app in the wild, literally.
And the videos for this are going up on the YouTube stream -- all the videos I'll show tonight are being shared there, if you would like to click the links and hear the sounds. The sounds are not coming across in Spatial, unfortunately. On the input side, we are doing things like providing our own sound effects for many different places in the app. The benefit is obvious: additional sensory feedback.
We also have sonified notifications. We're not doing text-to-speech yet -- though that was a feature of the earlier View app that Ashley had shown -- but it's on our radar, and we do have a notification system. And we have haptic feedback as well for hovering UI: if you hover over a button, you'll feel it in the controller. It's really quite helpful. I worked on the sound directly.
I have an audio background, so I enjoyed working with the team to carefully craft about 20 different sound effects used in the app. In the video on the left -- I'll go ahead and play it again, since it was so brief -- you can see the user scrolling with the controller and hitting the bumper, and they feel and hear a bump sound when they do that. Now I'll talk a little bit about placement of dialogues in relation to the user.
So we have a diagram here: the profile of a user's head, and a dialogue up near a wall, showing where that is spatially -- the orange block represents the dialogue. We're also dealing with field of view, which is something that comes into play with head mounted displays. Magic Leap has a 30 degree vertical field of view, so everything we do has to exist within it. This shows you the challenge of that. Now I'm gonna talk about the Vuforia Capture application for Magic Leap. And here is a shot of it in action.
And you can see someone in a lab operating... I'm not a chemist. But it looks like they're working with a kind of vial of a chemical. And there's a menu off to the side, with the different recording actions in the app. The ability to take photos.
By the way, in our current app, video is just rolling all the time -- you're recording everything, and later you can edit out what you don't want. You can also place spatial markers. And there's UI to advance to the next step on the bottom.
So a user can go through and imagine how they're gonna run this -- record this procedure. And be like... Okay. In step two, I want to tell the user to go to this table. So they might place a marker on the table. And record some photos.
Et cetera. And you can see a thumbnail here that the user has just taken a photo. Presumably with voice, since their hands are tied up, and the thumbnail preview shows up. Here's a video of Capture in action. And this will be on our YouTube stream, if you would like to take a look. So a user -- a gloved hand in a lab is reaching out and starting the Capture app.
They're gonna take a photo now, by tapping the photo button on the menu. And the photo is snapped, so they have a preview. They just locked the menu to the world. And we have a cool feature.
When you move back away from the menu, it actually grows -- scales -- so it's more targetable from a distance. Which is another great accessibility feature. If the menu ever goes out of view, we have persistent UI to bring it back.
And now a user is placing a spatial marker with that air tap gesture, as in the other video. So when the user is done recording in Capture, they can save their session and download it to the device, and then bring it into Editor for further editing. All right.
We just have several minutes 'til the top of the hour here. So I'm gonna try to wrap up the rest of the talk in that time. So I wanted to talk a little bit about some of the accessibility-related features in this app.
We're really just getting started. But one of the things we did is we created a Capture menu alignment setting. So that users can position the menu either left or right in relation to the user's field of view. So I've highlighted that toggle there in settings. Currently it's set to left. Here's what it looks like.
Not literally. But here's a representation of a field of view. So you can see the menu parked in the top left corner. And then a notification across the top that just says: recording started.
So this might be what a user would see. Just getting going. But then of course the menu can be brought to the right hand side. Now, what's interesting is...
We weren't sure whether left or right should be the default orientation for the menu. We weren't even sure if we should allow a left/right choice at first. I made a strong case for it, because different people have different dominant hands. Also, some people might be able to use only one hand.
So it's important to have this option. And we got 14 people internally to take a survey. What's interesting is 13 of the people were right-handed and one was left-handed. Which roughly tracks the general population, where around 10% of people are left-handed.
You know, not a great sample size. But what's interesting among those is: 7 preferred the menu on the left and 7 preferred it on the right. And there's very little correlation with hand dominance. What we learned from the survey is that even if you're, say, right-handed, you might prefer to have the menu on the left, so that you can pick up an object like a screwdriver with your dominant right hand.
And you might make cases for the opposite too. So this was really interesting. And now we know for the future -- we need to elevate the ability to switch the sides of the menu in-app.
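The left/right alignment setting could be modeled as something like the sketch below, which maps the toggle to a normalized anchor position in the field of view. All names here (`MenuAlignment`, `anchor_position`) are illustrative assumptions, not the actual Vuforia Capture API.

```python
# Hypothetical sketch: mapping a left/right menu-alignment setting to a
# normalized horizontal anchor in the headset's field of view.
# Names are invented for illustration, not the real Capture code.
from enum import Enum

class MenuAlignment(Enum):
    LEFT = "left"
    RIGHT = "right"

def anchor_position(alignment: MenuAlignment, fov_width: float = 1.0,
                    margin: float = 0.05) -> float:
    """Return a normalized horizontal position (0 = left edge, 1 = right
    edge) for the top corner where the menu should be parked."""
    if alignment is MenuAlignment.LEFT:
        return margin                  # park near the top-left corner
    return fov_width - margin          # park near the top-right corner

print(anchor_position(MenuAlignment.LEFT))    # near the left edge
print(anchor_position(MenuAlignment.RIGHT))   # near the right edge
```

Elevating the switch into the app UI, as discussed, would just mean re-invoking this mapping with the other value at runtime instead of only from settings.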
So I think that will come. I talked about the autoscaling feature. So I showed how the menu can expand and collapse, depending on how close the user is to it.
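That expand/collapse behavior amounts to scaling the world-locked menu with distance so its angular size stays roughly constant. A minimal sketch, assuming invented names and an assumed authoring distance (the talk only states it stays readable at around 10 feet):

```python
# A minimal sketch of distance-based menu autoscaling: scale the
# world-locked menu linearly with distance so its angular size (and so
# its readability and targetability) stays roughly constant.
# The constants and names are assumptions, not the shipping values.

COMFORT_DISTANCE_FT = 3.0   # distance at which the menu is authored at scale 1.0
MAX_DISTANCE_FT = 10.0      # stop growing beyond this

def menu_scale(distance_ft: float) -> float:
    """Scale factor that keeps the menu's apparent size constant."""
    clamped = max(COMFORT_DISTANCE_FT, min(distance_ft, MAX_DISTANCE_FT))
    return clamped / COMFORT_DISTANCE_FT

print(menu_scale(3.0))    # 1.0 at arm's-length authoring distance
print(menu_scale(10.0))   # larger when viewed from across the room
```

Clamping at both ends keeps the menu from shrinking uncomfortably up close or growing without bound across a large factory floor.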
So when it's expanded, and locked to the world space, it'll grow big enough to where you can target it and read it from around 10 feet away. Which is great for the factory setting. I talked a little bit about anchor placement before.
But here you can see a flow of that happening. You see the anchor button. I squared it off to show you where it is. And then a user -- a hand-drawn rendition of a user -- pointing with their head, because that's how they do it in this step. They point their head. They see an orange reticle and then they say: Hey, Lumin.
And then they can gesture to place it. And finally on the right, the anchor is saved. We'll talk about Vuforia Vantage for Magic Leap. So here we see an application where a male, dark-skinned pharmacist is in a lab executing a procedure. And we have to the right...
There is a panel or rather two panels of content. And so you'll recall the Capture app, where we had gone in, and taken photos and videos and placed spatial anchors. Well, here... This person is on step four of 25.
And that particular step four has text that says: verify calibration standards prior to beginning analysis in the mass spectrometer machine. So there's a machine, pipettes, a substance that looks like blood, and other things.
And there is also an area -- a target marker attached to the step cards. So they're going step by step and executing this procedure in the UI. From the Vantage launching screen, the primary action is getting that QR...
Scanning the QR code. This is another way we make it easier on the worker. Instead of having to navigate to something... Or go on the web, all they have to do is point a QR code scanner at a code.
And we bring these brackets -- you can see them on the left -- really, really close to the FOV. So close, because we want the user to feel like they can reach behind it with their phone, which will have the QR code on it. Or even a printed piece of paper.
And so here on the right you see someone with a phone app. Actually our phone Vantage app, with the QR code. And now the procedure is recognized and the procedure is loading. And then we have UI for when the procedure is loaded. And here we get the opportunity to preview -- so there's a photo that was provided in editor of that procedure.
You have the name. It's currently cut off, because it's too long. But that's something we've already fixed and are updating, I'm proud to mention. So you'll be able to see the full title. We also have a preview of the step list -- the steps you can do. And there's a preview mode as well.
So when a user enters a procedure, they can start working with these step cards. And this video... I'm gonna play it twice, because there's a lot going on. You can see the step card UI here.
You can see me navigating among different steps in a garage setting -- with my bike, actually. You can imagine step one might be: put on the tire. Step two might be: grab this tool... And so as I move to the next step...
The UI automatically moves too. To an optimal place relative to where the user is positioned. Here's a step card. THOMAS: Tim, I just wanted to interject here that we'll need to try to wrap in five minutes. So we can do ten minutes of questions on the stream, if that's all right.
And obviously we can have a conversation in the room after. But we're getting to the time here. TIM: Absolutely. I'm super close, and thank you for your patience. Yeah.
So here's a step card. This is just what it looks like. In case it was hard to see in the video. And then the other side of it... Those were thumbnails and text. And the other side is a media player that you can play the video media with.
And see images with. Here's a concept sketch. Showing some of the early thoughts around voice commands for different UI.
And so we really try to provide a voice command for... Basically anything in the UI that we can. We don't have them for thumbnail images specifically yet.
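That "voice command for anything in the UI" approach could be modeled as a simple registry mapping spoken phrases to the same callbacks the buttons invoke, so voice and gesture are parallel paths to every action. The phrases and handlers below are illustrative, not the shipping command set.

```python
# A minimal sketch of a voice-command registry: spoken phrases map to
# the same callbacks the UI buttons would invoke. Phrases and action
# results here are invented for illustration.

def make_voice_registry():
    registry = {}

    def register(phrase, action):
        registry[phrase.lower()] = action

    def dispatch(utterance):
        action = registry.get(utterance.strip().lower())
        if action is None:
            return "unrecognized"
        return action()

    return register, dispatch

register, dispatch = make_voice_registry()
register("take photo", lambda: "photo captured")    # same handler as the photo button
register("next step", lambda: "advanced to next step")

print(dispatch("Take Photo"))
print(dispatch("next step"))
```

Routing voice through the exact same handlers as the buttons is what keeps the two input paths from drifting apart as the UI grows.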
But we offer UI for all the other functions on this screen. Here's a step list UI that shows the full list. So if you wanted to see like a wider view of the list of steps, and then finally a finish session button on that list. A user submits the session. And then the analytics from that go off to our Insights app, which runs on a desktop. I'm excited to announce that our Vuforia Capture and Vantage apps were released on the Magic Leap World Store app towards the end of 2021.
So here on the left, you can see them both next to Pancake Pals, which is one of my personal favorites. And then on the right, you can see the Magic Leap Launcher with the Capture and Vantage app in it, and my backyard. And I just want to say... Special thanks to all of you. And also Bill Curtis-Davidson, who couldn't be here today, but helped a ton with deck preparations.
Bill is currently recovering on medical leave. And I also want to thank Ashley for stepping in. And doing an excellent job.
And representing PEAT. And Meryl and Thomas of a11yvr for hosting as well as providing presentation feedback. Meryl spent a lot of time helping me go through the deck and letting me know which text was difficult to read.
And I also at one point... I had subtitles on almost every element in the slides. And we were able to scale back. So thank you so much for that. And finally... Thanks to Jake Steinerman, head of community at Spatial, and a former Vuforia employee, for assistance facilitating the event.
That's all I got! (applause) THOMAS: Thank you so much. And for those of you that don't know, clapping in Spatial with hand controllers... You clap with two controllers. There may be a gesture at the keyboard. If anyone knows a keyboard shortcut to do that. But thank you so much, Tim and Ashley, for the presentation.
ASHLEY: Thank you. THOMAS: Today, I want to also really acknowledge... I'll move over here... I want to acknowledge Tim's work on the descriptions for his presentation.
We do a lot of these events at our meetup. And Tim really did a lot of work on making sure that he had alternative text specified for everything that was provided. And we're gonna plan to use his examples for other people that present here at the meetup. So I just want to also give a special shoutout.
Thank you so much, Tim, for putting that work into the presentation. TIM: Absolutely. THOMAS: That's very appreciated.
So now we're gonna open it up. It was a great presentation. Does anyone here in the room have questions? If you... Yeah? Kind of cool in Spatial, we can see hands raised inside of this UI. Some people may see that. Though I can't read your name.
I'm gonna move over. James? You can ask the first question, please. Go off of mute. And you can ask your question. JAMES: Hey, Tim.
Great presentation. Can you guys hear me? THOMAS: Yes, we can. JAMES: Thank you. TIM: Hi, James. JAMES: Hey, Tim.
Great presentation. I'm gonna keep it quick and try to... I'm gonna ask something a little bit cutting edge. But I'm curious... More about how people have learned...
I'm sure through testing you've seen how people's learning of any particular skill has increased in this medium, versus others. When performing these sequential steps. And having their hands free. And having...
You know, sort of using these tools. I'm curious if there's any benefit with sort of utilizing Magic Leap's image tracking and for like... Image tracking for placing... A simple thing is like QR codes or some sort of image in the environment.
Or if these tools utilize that? If that helps at all? For sort of the anchoring of the sort of sequence of steps in that? And also if... This is a... This is where the cutting edge question comes in... The HoloLens 2, I believe, is getting object recognition.
With object anchors. And if any exploration is being done there. If you can comment on that.
And... Of course... And if it helps, people... In these scenarios, where they're trying to not only create certain steps to teach somebody something else in a variable environment or... If it actually helps people learn... I'm curious if you can comment on anything like...
In that realm. It's basically... The question is... Image anchors and more cutting edge object anchors.
And if that helps anybody. Or if that's not really a consideration. TIM: No, it's fine. I can address that. Thank you, James.
And it's good to see you and talk to you again. So... Yeah. Basically right now... Vuforia is kind of known for using QR codes a lot. And we have plenty of pop-up experiences where you could point a device like an iPad at a QR code and have AR content spring up around it.
We haven't really done that yet in this app. In these apps on HMD. But I hope we can. And in the mean time, we're using QR codes mainly to just launch into procedures. And make that workflow easier. Especially eliminating text entry challenges, where possible.
Those are always difficult on HMD. And in terms of actual image recognition, we don't do that yet. In these apps. But we do a fair amount of spatial recognition. And related to your question about the markers, we use area targets.
Our earlier apps use -- like the original Capture app uses Azure Spatial Anchors. But area targets are kind of what we're moving towards. And are more broadly supported on our platform. And are actually supported for HoloLens as well.
Which has been great. And yeah. In the factory... I mean, I feel like images change. But 3D objects in the factory, like tables and workstations and engine blocks, are pretty persistent. So we feel pretty confident at recognizing those objects and affixing content to those objects.
But we'll totally explore more image stuff, moving along. I know we're talking about doing it using machine learning, computer vision, to see, for instance, whether a screw has been tightened adequately. In something that's being put together. Because currently we are just basically relying on -- our current apps rely on the trust of the user to say: Okay. You finished up one.
You finished up two. But we don't verify that yet in our HMD apps. They are just... We're basically providing a guide to completion. Not...
There's nothing coming back that would say success/failure kind of thing. At a particular step. Yeah. Not yet, at least.
THOMAS: All right. Everyone, we'll take one more question here for the stream. If anyone else has a question on the stream. And otherwise, it's also fine to hang out inside of the room afterwards.
But we're gonna... Does anyone have one last question to ask while we're still on stream? >> Yes. I would love to ask a question. It's Dylan.
THOMAS: Go ahead. >> Hey, Tim. Great presentation. One thing I've been really wondering about is: Switch controls. For people that use, for example, puff switches, or other kind of limited mobility switches.
Things like the Xbox universal controller. Do you have a sense of kind of... How long the pipeline is to get those types of things to work with XR, the way they do with desktop? ASHLEY: That's a great question, Dylan. Tim, do you want to take that one? TIM: Ashley, did you have something to say? ASHLEY: No, I was saying...
Dylan, that's a great question. I mean, I'm looking at, for example, Puffin Innovations has a great tool that could be integrated into XR. But the amount of integrations I don't think are there on the backend, in terms of developers.
So I'm very curious to hear from Tim: How far along do you think we are on integrating puff tools into XR for actions? TIM: So this is a great question. So at Magic Leap, we were really working to champion many different inputs. External keyboards.
Game controllers. A bunch of different things. And I think... Our customer base for work instructions -- Expert Capture -- is primarily focused on a manufacturing setting. So if we implement this, it seems like having the ability to perform a switch, with a mechanism like that, could be really effective. I don't have...
There's no... Nothing on the road map for that yet. But I think it could be really great. We need to make sure that we have Bluetooth support. I know for Magic Leap, they're kind of particular about the kind of Bluetooth support. I believe it's like a Unifying adapter.
Anyway, there's a limited number of Bluetooth keyboards that will work wirelessly with a device like that. And we need to make sure that a switch or a puffer could be supported with one of those protocols. Do you know if there are any that will work without the use of a Bluetooth dongle? Because I think the answer is: If they can work, without a dongle connected to a computer, they should be able to work with the Magic Leap, or a HoloLens. >> Yeah. I think...
I've heard that Bluetooth can be a little finicky. I know Thomas was talking about trying out a Bluetooth controller with... I think it was with Oculus. But having it be a little tentative about whether it would work for any given interaction. So I would have to imagine that for Bluetooth-powered switches, it's just a matter of kind of getting it to work with the software.
For other types of switches, I imagine you would need some kind of Bluetooth Hub. Because it would be very difficult to start plugging in USBs and what-not into the headset itself. But yeah.
Just something I would definitely love to see in headsets, as this software continues to evolve. TIM: I think something like... Also, we're really focused on manufacturing and coming up with out-of-the-box tools to just assemble things for our apps. But other parts of our company -- like Vuforia Studio, for instance -- are working on a product that's broader in application. And it's quite possible that something like that would be able to hook up to a Vuforia app via Unity, and Magic Leap is a supported device there.
So if not in work instructions, certainly something in a broader Vuforia... And we also support a number of hardware devices. I mean, we have LIDAR scanners that are supported. So we're definitely no stranger to supporting third party hardware.
So that's a great question. Awesome. THOMAS: And with that, we're gonna also say let's wrap and say thank you so much to Ashley and Tim.
And I'm gonna use my clap animation here. ASHLEY: Oh, thank you. THOMAS: Thank you both for your... Joining us here today.
And thank you, everyone, for joining us today and experimenting inside of Spatial.