All right, hi everybody. My name is Andrew Begel. I'm a senior researcher in the VIBE group, and I'd like to welcome you to our feature presentation this morning by Sarah D'Angelo, who comes to us from Northwestern University. She is a PhD candidate there and is applying for a postdoc position here at the MSR Redmond lab. Sarah does a lot of work in gaze visualization; she's probably the world's leading expert on multiple people using eye trackers at the same time to try to solve problems together. She has been awarded an NSF Graduate Research Fellowship, and she also has a Google PhD Fellowship, so I think she's going to impress us all with some really cool research from the last few years, and she'll also tell us what she wants to do next should she come to Microsoft. Thanks very much, and I'll hand it to Sarah. Thank you.

Thank you, and thank you all for coming today. I'm going to be talking about designing gaze visualizations for remote collaboration. To start at a high level: technology-mediated collaboration and experiences are becoming more common, and they're creating new ways for us to work and learn together when separated by a distance. These technologies can improve access to expertise when people aren't co-located, allow us to collaborate with colleagues in different countries, and support people who want to work from home. However, these remote environments lack a lot of the rich interpersonal cues that we often take for granted in our co-located interactions. For example, this woman is clearly feeling a little bit bored and frustrated, and that information isn't available to her remote collaborators in the same way that it is when they are co-located. These rich nonverbal cues can help people understand their collaborators, their current mood, and what they're attending to, which can facilitate richer communication.
We rely on these nonverbal cues to support coordination and communication, so we do things like gesture and look at what we're talking about to support the ways in which we communicate. In this example, maybe this tutor is trying to explain a complex diagram, and he's going to do a number of things: he might point at what he's talking about, and he'll probably check to see if the student is looking there. In those ways they can confirm that they're following along and talking about the same thing, which can enhance our ability to collaborate, because we know we have a shared understanding and we're working together on the same page.

So this brings me to the question that I've been focusing on for much of my dissertation, which is: can integrating gaze information, which is an important nonverbal cue like many of the others I mentioned, into remote environments support effective collaboration? In this example, if you were working with this woman remotely and you could see where she was looking in this shared problem, would that help you understand what perspective she's coming from, maybe what area she's focusing on, in a way that helps you communicate about the problem together and solve it? Would adding in this cue, which is currently not available, support our ability to collaborate and communicate effectively in these remote environments, which are becoming increasingly important?

To answer this question, I design, develop, and evaluate shared gaze visualizations. These are a few examples from my research, which illustrate where someone is looking in a shared visual environment.
On the bottom left you can see a teacher's gaze information projected over a lecture on cloud identification. Right here you can see someone looking at a map with a magnifying lens and a tail. These examples illustrate that there's a wide range of ways these visualizations can be designed, as well as a number of different tasks and collaborative exercises they can be used to support. I will talk about each of these in more detail, but I just want to give you a heads up of what's coming.

What I'm trying to achieve with these shared gaze visualizations is to support gaze awareness. Gaze awareness is a collaborator's ability to understand what their partner is attending to during a collaborative task with a shared visual space. This is important for a number of reasons, and in particular for supporting fundamental features of communication and coordination, like establishing joint visual attention. If I ask all of you to look at the outlet in the back, some of you will look back there. You'll probably check where I looked first and then look at that too, and I'll be looking there and you will too, and we'll have established joint visual attention on that object. While you look there, you'll have some understanding of where I looked, and then we can go forward talking about what I wanted to plug into the outlet, right? For some of our remote attendees, I don't have that kind of confirmation. I don't know if the people attending remotely looked at the outlet with me, and therefore I might have to put a little bit more effort into communicating with them to ensure that we're on the same page. Whereas with these cues in our co-located environment, I know that Gina looked back at the outlet, and I can continue talking to her about what's next without having to put more effort into that communicative experience. Another
thing that gaze awareness can help with is developing common ground, or our ability to create a shared understanding. In this example, say we were visiting an aquarium together, which is something I'd probably do since I love to scuba dive. I would look at this exhibit and say, "Wow, a clown triggerfish, isn't that cool?" For some of you, and I'm hoping there are a couple in the audience who don't know what a clown triggerfish is, you're probably very lost, so I'm going to have to work a little bit harder to develop that shared understanding. I might say it's the black and white fish, but there are a couple on the screen, so that's still a little bit difficult. I might then do a little bit more work: I could say it's the black and white polka-dot fish at the bottom, and I could point at it. For those of you in the room, hopefully you know we're talking about this fish right here, highlighted in red. But that was a lot of effort, right? For those of you who were in the room with me, I did make a lot of glances toward the fish, which you could use as cues, and I pointed at it. Those cues helped you understand which fish I was talking about in a very complex visual environment.
You could imagine that if we were remotely experiencing this exhibit together, it would be a lot harder. But if you had information about where I was looking, you probably would have figured out what I was talking about a lot faster, because I was looking at the fish that I was pointing out. And this problem gets exponentially more difficult when you start to think about what happens when these fish start moving. So here's one of the ways in which gaze awareness can help facilitate conversation in these rich environments, where it's important that we know we're talking about the same thing.

With that in mind, what I'd like to talk about next are the results of a number of published studies, which I will use to give examples of how gaze awareness can facilitate communication, why I think this line of gaze visualization research is important, how we can use gaze visualizations to bring out different aspects of collaboration that we want depending on the task, and where I see this going in the future, including some applications for Microsoft. To talk about each of these points, I picked out and selected a few results from a number of different studies. There's a lot more to say about each of them, so definitely ask questions at the end if you want more information on any particular study or other analyses that were conducted, but I'll give you some of the greatest hits, I think.

Just to start off: how do shared gaze visualizations impact remote collaborators' ability to communicate? To answer this question, and a number of other questions that I'll talk about in this talk, I developed a system to cross-project gaze data between collaborators in real time. It takes your gaze coordinates from an eye tracker and displays them on your partner's
screen in real time. This has been modified for a number of different studies, but the setup looks something like this: the collaborators are actually in the same room, but they're separated by a visual barrier to simulate a remote environment, so they can't see each other but they can communicate freely.

In this first study, I asked participants to collaboratively solve a puzzle together, with and without the ability to see where their partner is looking. What this puzzle looks like is shown in the example right here: you can see that they're putting together a puzzle of a puppy. This is sped up, because in the real experiment it takes a lot longer. You can see a participant's gaze information illustrated as that gaze cursor moving around on the screen, so they're working together to solve this puzzle, and they have to communicate about the pieces that they want to combine. For this study, it's important to know that we did both easy and hard puzzles. An easy puzzle is, for example, this dog here, which has very distinct features: dogs have ears and eyes and a nose that you can use to refer to those pieces, which makes it less complex. We also did a hard puzzle, shown here as a plaid puzzle, which, similar to the fish example, is very linguistically complex. There are a lot of features you could use to describe these pieces, and some of those features overlap with other elements. If I told you to move the blue piece, you'd be completely lost, because all of those pieces are blue. If I tried to make it a little bit easier and said the blue piece with the orange bar, it would still be hard, because there are a lot of other blue pieces with orange bars here. That's what I mean by linguistically complex.
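As an implementation aside, the core of the cross-projection idea, mapping a gaze sample from one collaborator's screen space onto the partner's display and damping jitter before drawing the cursor, can be sketched in a few lines. This is a hedged illustration; the function names, resolutions, and smoothing window are my assumptions, not the actual system's.

```python
def project_gaze(point, src_res, dst_res):
    """Map a raw gaze sample from the tracker's screen space onto a
    partner's display with a (possibly) different resolution."""
    x, y = point
    src_w, src_h = src_res
    dst_w, dst_h = dst_res
    return (x * dst_w / src_w, y * dst_h / src_h)


def smooth(samples):
    """Average the last few samples to damp rapid eye-movement jitter
    before drawing the partner's gaze cursor (a simple moving average)."""
    xs = [p[0] for p in samples]
    ys = [p[1] for p in samples]
    return (sum(xs) / len(xs), sum(ys) / len(ys))


# e.g. a sample at screen center stays at the partner's screen center:
# project_gaze((960, 540), (1920, 1080), (1280, 720)) -> (640.0, 360.0)
```

In a real system these two steps would run per frame on a stream of tracker samples sent over the network; the point here is just that the projection is a resolution rescale plus a small smoothing filter.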
Like the fish example I started with, those environments are where we would expect to see the benefit of these gaze visualizations: where it's difficult for me to communicate to you what I'm talking about. The first thing that gaze visualizations are useful for is as a referential pointer.
When I say, "I think it goes with this one," right now you probably have no idea which piece "this one" is. But if you could see where I'm looking, illustrated by this eye icon (which is enlarged so you can see it), you probably have a much better idea of what I'm talking about: it's now that piece in the upper left. This is an example of using the gaze cursor as a referential pointer. I'm talking about something and I'm looking at it, and I use the word "this." My use of the word "this" is a deictic reference; saying things like "this" and "here" are examples of deictic references. Those are a rather efficient way to communicate about an object. I could have said "the blue piece with the orange bar in the upper left," but that would have taken a lot longer and could have been confused with another piece. Saying "this" is much more efficient, because it's faster, and if you understand it, we can move forward more efficiently. But it does rely on your understanding of a more abstract reference.

When we actually look at the language used by participants, we see that when participants could see where their partner was looking, they made significantly more deictic references with the gaze visualization than without. So they're taking advantage of this efficient reference form and using the gaze visualization to communicate with their partner about what they're talking about.

The other thing that gaze visualizations are very useful for is confirmation. I used my gaze cursor to signal to you what piece I'm talking about, but now, if I can see this from my perspective, I know that you've looked at that piece, and that can confirm that we're on the same page: you're looking at what I'm talking about, and I can move forward understanding that you know what I referred to. If
you were looking at this piece instead, I'd know that I had some more work to do. I would understand that you're not on the same page as me, and I'd need to put a little bit more conversational effort into getting you to look at the piece that I'm referring to.
So these two features of communication are really great for facilitating effective collaboration: they allow us to use more efficient referential forms and to verify that we're on the same page. However, the impact of gaze visualizations was not always beneficial. Unfortunately, there were some negative effects of the gaze visualization, because it can be misleading. In this transcript example, participant A is saying, "I think it's this one, the one that I'm looking at." They're trying to make use of the gaze visualization to refer to an object. However, participant B is confused: "Your eyes are moving around a lot." They don't necessarily understand that reference, and that is due to a lot of things. Natural eye movements are very rapid, so directly representing those gaze coordinates can be noisy. Remote eye trackers are affordable now, which opens up some great avenues for testing this kind of work, but they do have some difficulties with noise and accuracy in the data. Those can combine to present you with a representation like this, where you can see that eye cursor bouncing around, and it can be a little bit difficult to attend to all of that information while you're trying to collaborate on a complex task. So when these visualizations fail, it is because they have misled the participant, maybe through misalignment ("I'm now confused about which reference you're making") or through noise in the data, which makes the signal very distracting. Those are features of the gaze visualization that we don't want for collaboration; this is introducing a headache into the remote collaborative experience that we'd rather avoid. But one thing that might be causing this is the
design, right? Can this be attributed to the way that we represented where someone was looking? I chose this eye cursor, which is a direct representation of the coordinate stream of where you're looking, so it absorbs all of this noise and displays it to you. What if we had designed it differently? Could we reduce the distracting characteristics of the gaze visualization and come away with only the benefits, making it a more beneficial experience for remote collaboration?

Which leads me to the next question: can the design of gaze visualizations improve their ability to support remote collaborators? If we change the design and make it more effective for the collaborators, can we come away with the beneficial aspects of gaze visualization without the harmful effects? To investigate this question, this is research that I did here as an intern with Andy Begel. What we wanted to do was look at gaze visualizations that are embedded in a specific context: designing a gaze visualization for remote pair programmers. Remote pair programmers are software engineers who are working on a code base together in real time but are separated by distance, so they use tools like Skype with a remote desktop to share their screens, talk about the code, and edit in real time together. To understand the needs of remote pair programmers, we administered a survey to a bunch of software engineers at Microsoft, which revealed that one of the biggest drawbacks to remote pair programming is the lack of tools available to facilitate that interaction and make it as rich as it is in co-located pair programming. Co-located is the predominant form of pair programming, as people like to be in the same room together, but people would like to do remote
pair programming, because people want to work from home or be able to collaborate with people who are in different physical locations.
So we thought maybe we could address this with a tool. We conducted observations with co-located pair programmers to understand what they're doing well. Some things you can do really well in co-located environments are referring to objects or locations in the code together and understanding that you're talking about the same thing, and we wanted to take that ability of co-located pair programmers and introduce it to the remote environment. So we iteratively designed a novel gaze visualization, which looks like this: it's that orange bar highlighted in the corner. This is a really interesting design for a lot of reasons. Going from the eye cursor, which was a direct representation of where someone is looking, we've now made it slightly more abstract. We've put it in the left margin, near the line numbers, which was informed by the users: pair programmers are already using line numbers to refer to locations in the code, so if we put this over here as a referential cue, it's already near where they're expected to look, and they can use that resource. It's unobtrusive: unlike the previous gaze visualization, it's not in the space they're trying to edit, it's over in the margin of unused space, so it's not going to disrupt them from actually engaging with the code and doing their task. And we made it five lines high to account for some of the error I was talking about in the noisy signal from remote eye trackers. If you're looking anywhere within this five-line-high region, the gaze visualization is not going to move, which naturally smooths it out so that it's not a disruptive piece of information in your editor. This is what it looks like in actual use: it changes to green when participants are looking at the same thing, and that was designed from
the interviews, where people said they want to establish this joint attention: they want to know that their partner is looking with them at the same information. By turning it green, we can confirm to the users that their partner is looking with them. Yes?

Audience: Five lines high. How did you choose five, and was it adaptive at all? If someone's eyes jumped around, did you make it bigger or smaller?

It wasn't adaptive. We did test a number of different heights, and based on the amount of code that we had here and the size of the different functions, five seemed to be small enough that you get an idea of what they're referring to, but large enough to account for some of that error. Given other tasks, we could have adapted that a little bit more, but that's how we came to that design.

Audience: [Partially inaudible] About the highlighting when you're looking at the same line or the same area, don't you…?

Well, you might be able to infer it. I might have told you, "Hey Gina, I think we need to look at this area over here at the tick-timer tick function," and you might say, "Yes, I'm there." But by turning it green, we've made that very explicit to you, so you just have that confirmation. You might have seen it in orange and assumed that the change of color was just to provide a subtle cue that you're working together; this makes it more explicit.

Audience: [Inaudible question about whether this shows where you're looking.]

So, to evaluate this design, one of the things that we did is break apart that referential scheme that I initially talked about, where I was looking at deictic references versus not. There are a lot of other ways that you can refer to locations in the code, so we made this referential
coding scheme that goes from implicit references, like deictic references ("this" or "here"), to more explicit references, like selecting text. This was modified for the specific task, so we include things like line numbers, which are present in IDEs but not in other cooperative environments. It was designed for the task, to understand in more depth how gaze visualizations enhance our ability to refer to locations in the code. In this example, I can make a deictic reference and say, "I think we need to change this," but you probably don't know what I'm talking about. I can do a number of things to make that more explicit. I could use an abstract concept, like "I think we need to change one of the shapes." I could use a specific word, like "rectangle," but again there are a number of instances of this, so it's getting a little bit harder if you don't know exactly what I'm talking about. I could work a little bit more and refer to the line number, saying that it's on line five; again, there are two instances, so it's still a little bit complex. And then I could go as far as selecting text. So this is an example of how I can communicate with you on a range from implicit to explicit, depending on how I want to get you to understand what I'm talking about.
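Returning briefly to the five-line smoothing region described a moment ago: the bar staying still while gaze wanders within a band of lines could be sketched as below. The line height, band size, and class name are illustrative assumptions, not values from the actual editor plugin.

```python
LINE_HEIGHT_PX = 18   # assumed editor line height in pixels
BAND_LINES = 5        # the five-line-high region from the study


class GazeBand:
    """Quantize noisy gaze y-coordinates into a five-line band that
    only moves when gaze leaves the current band."""

    def __init__(self):
        self.band_start = None  # first line of the current band

    def update(self, gaze_y):
        line = int(gaze_y // LINE_HEIGHT_PX)
        in_band = (self.band_start is not None
                   and self.band_start <= line < self.band_start + BAND_LINES)
        if not in_band:
            # Gaze left the band: re-center the band on the new line.
            self.band_start = max(0, line - BAND_LINES // 2)
        return self.band_start
```

The effect is a crude hysteresis filter: small saccades within five lines leave the indicator untouched, which is one way to get the "naturally smoothing" behavior described above.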
What we would expect is that with the gaze visualization you're able to rely more on these implicit reference forms, the more efficient ways of communicating, and you might have to rely more on explicit reference forms when we take away the gaze visualization, because you don't have that cue about what your partner is attending to. That's what we see when we look at the communication behavior: significantly more deictic references with the gaze visualization compared to without, which is consistent with the prior work. And as we expect, we do see this trend toward more explicit references being made without the gaze visualization when we look at specific words: pair programmers are using more specific-word references without the gaze visualization compared to with it. (In this condition you can see the mouse cursor of your partner as well.) We see this trend continue for the remaining reference forms. Those two are the significant ones, but you can see the trend toward more explicit references being made without the gaze visualization compared to with, which is what we were hoping for. And the other great part about this is that—

Audience: I'm wondering about how people's behavior changes over time once they have this available to them. I'd guess they'd be changing their behavior as they adapt to it.

Yeah. These were ten-minute refactoring tasks with pair programmers who had worked together, but you could imagine, if this were a more longitudinal study, how their familiarity with the gaze visualization would change how they used it. Unfortunately I haven't done a longitudinal study yet, but I do think that's an interesting avenue to go down, when
we could look at familiarity effects over time.

Audience: Is there any thought about how much more efficient that is compared to just sharing a cursor? With a mouse, it's pretty natural for us to control what we want to focus on and look at, whereas with gaze we'd have to learn a little bit, right? I was wondering if you had insights around that.

Yeah. When we talk about referential pointers, you can make that metaphor to a mouse, where you make an explicit gesture. What's interesting about the gaze cursor, or the gaze visualization, is that you also get these different process behaviors. You can start to think about what people are doing that isn't explicit: you're not always reading with your mouse cursor following you, but when I can see where you're looking, I know what you're attending to without having to ask you to point that out for me. So it's a little bit more natural and faster.
What I'll actually get to next is other ways in which gaze visualizations can surface these more process-level cues that you don't get from explicit gestures with a mouse: how I am interpreting the information and how I'm processing it, which is different from explicit reference forms. I'll get to that in a little bit; that's a good question. (And yes, you saw the other person's cursor in both conditions.)

Building off of this, one of the great things, in addition to this more efficient communication form, is that participants reported that it wasn't distracting. They felt it was a subtle cue that didn't disrupt them from their task, which is what we wanted to achieve. This has motivated a new line of work for me on more creative and iterative design of gaze visualizations. I am currently advising an undergraduate researcher, Jeff Brewer, who will be presenting this work as a demo at CHI; I'm very proud of him. What this is showing you is the same gaze coordinates visualized in four different ways, and what this is getting at is the differences in gaze patterns that you can visualize with different features. In the upper right, a heat map will show you that the places where I fixated the longest were the corners, while the bottom left is going to show you my trajectory through the space, so you can understand how each of those positions connects to the others. Something like these two are going to have less path information, so they're not going to show how I covered the visual space, but the one on top is going to show you direction, whereas this one is just going to show you a current position. So you can think about the things that you want to illustrate: is it important for your task to know how long I looked somewhere, or does it matter how I traversed the visual space?
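To make the heat-map variant concrete, here's a minimal sketch of how fixation durations might be accumulated into a coarse grid, so cells the viewer dwelt on longest end up darkest when rendered. The function name, grid size, and screen resolution are illustrative assumptions, not details of the actual tool.

```python
def fixation_heatmap(fixations, screen=(1920, 1080), grid=(10, 10)):
    """Accumulate fixation durations into a coarse grid.

    `fixations` is a list of (x, y, duration_ms) tuples in screen
    pixels; the returned grid holds total dwell time per cell, which a
    renderer could map to color intensity."""
    cols, rows = grid
    screen_w, screen_h = screen
    heat = [[0.0] * cols for _ in range(rows)]
    for x, y, dur in fixations:
        c = min(int(x * cols / screen_w), cols - 1)
        r = min(int(y * rows / screen_h), rows - 1)
        heat[r][c] += dur
    return heat
```

Note that this variant deliberately throws away ordering and direction; a path or trajectory visualization would instead keep the fixation sequence, which is exactly the design trade-off described above.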
Those are the things you can start to think about when you vary the design of these gaze visualizations and start to investigate them, or when you're thinking about which of the many features of eye movements you want to visualize.

Iris is the platform that we've developed. It is a direct-manipulation interface: at the top, you interact with your fixation point by dragging on it, and you can adjust the fill and test this out with your gaze coordinates in real time, so you can interact with it and try a number of different designs. You can drag out the tail to show the previous fixations you want, and that length can depend on how much previous fixation history you want to show, i.e., how you've traversed the visual space. You can change the representation to show dots connected by lines, and adjust the size and fill of each of those points if you want representations that change as you look through the visual space. With all these different elements, you can imagine a lot of different ways to create gaze visualizations. We've made this open source for researchers to investigate these different visualization forms and to support iterative testing: like you saw in the previous example, I can record my gaze information, play it back with a number of different techniques, and decide which one is going to support the collaborative task.

Understanding this design in more detail leads me to the next question: what design of gaze visualizations elicits different types of collaborative behavior? This is one of the things I'm getting at with this more in-depth approach to gaze visualizations that we can take, from
a more implicit level that is different from mouse cursors. What more can we say about the collaborative behavior between the pair, rather than just their communication forms? So we're going a level deeper into how pairs are actually collaborating
together. To investigate this question, I wanted to know how different gaze visualization techniques impact collaborative searching behavior. This looks at the same task but with different visualization techniques. What you can see here is an example of a hidden-image task: it's a search task, and there's a candy cane hidden here in the tree. For reasons I explained earlier, this is linguistically complex; it's not simple to point out the candy cane, since it's embedded in the tree, which makes it a harder search task. What I did here was design three different gaze visualizations. You can see a heat map right here, which darkens the color when you stare at some place for an extended period of time and also shows your past 14 seconds of eye movements, so you get coverage and duration. The next one over is a shared-area visualization. This is a new technique, which only displays this circle when we're looking at the same area together at the same time, so it captures when we're jointly attending to the same area. It isn't going to display the gaze visualization at all times; it only appears when we're looking together. That's an interesting element, because most gaze visualizations are available to you at all times, so to avoid some of the distraction, we can make this available only when you want to coordinate. What participants can do here is activate the visualization by directing their partner to where they're looking, and it will display. One of the interesting things here is that, rather than needing to use a lot of language, since natural eye movements happen so quickly, if one participant holds their gaze constant, then
the other participant can scan the space quickly, and that visualization will appear. So this makes use of more efficient reference forms but displays them in a partially available way. The last visualization is a path visualization: it shows you the current fixation point, illustrated in black, connected to a previous fixation point, illustrated in red. This shows you some connections in the visual space as well as where your partner is currently looking in real time, which is more comparable to the first visualization that I showed you.

To evaluate this, we used a within-subjects design, so every participant interacted with each of the different visualization techniques, and we counterbalanced the order. We measured a number of different things, but what I want to talk about right now is joint visual attention. The example that I brought up earlier, about us looking at the same thing together, is what I'm trying to illustrate here. You see two different gaze paths, one in orange and one in green, and what's happening at points 4, 5, and 6 is that we're looking together at the same place at the same time; we're attending to this information together. This is important for a number of different collaborative activities. You can imagine scenarios where you're teaching someone something and you want them to attend with you: I'm explaining something complex and I want you to be following along with what I'm talking about, so we want to be jointly attending. And then you can think about environments where this might not be as beneficial, like collaborative search, where a more effective searching strategy is a division-of-labor approach: if you search the top and I search the bottom, we're going to be able to scan the page in half the time by working together. What I designed here is meant to bring out both of those elements.
There are moments when we want to jointly attend and moments when we don't. For the beginning of the search, before we find the object, it's beneficial for us to separate and divide the space. But once I find the object, we want to jointly attend so I can help you locate it too, because the only way to move on is for both of us to find it. So the task captures both of these dynamics of collaboration.
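The joint-attention dynamics above can be sketched in code. This is a minimal illustration, assuming synchronized gaze samples and a pixel-radius criterion; the 150 px radius and the toy coordinates are my illustrative assumptions, not values from the study:

```python
import math

def jointly_attending(p1, p2, radius=150):
    """True when two synchronized gaze samples fall in the same screen
    region. The 150 px radius is an illustrative threshold, not the
    value used in the study."""
    return math.dist(p1, p2) <= radius

def overlap_proportion(gaze_a, gaze_b, radius=150):
    """Proportion of overlap in time: the fraction of synchronized
    samples where both partners look at the same place at the same
    time, i.e. joint visual attention."""
    pairs = list(zip(gaze_a, gaze_b))
    hits = sum(jointly_attending(a, b, radius) for a, b in pairs)
    return hits / len(pairs)

# The shared-area visualization applies the same predicate in real time:
# the circle is drawn only while jointly_attending(latest_a, latest_b)
# holds, so the cue appears exactly when the partners look together.

gaze_a = [(100, 100), (110, 105), (400, 300), (405, 310)]
gaze_b = [(500, 500), (120, 110), (410, 305), (800, 600)]
print(overlap_proportion(gaze_a, gaze_b))  # 0.5
```

The same predicate serves both purposes: evaluated sample by sample it drives the real-time shared-area cue, and averaged over a session it yields the overlap measure reported below.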
One where we want to look separately, and one where we want to look together because we're coordinating on an object. What we expected to see is that the different visualization techniques would elicit these different types of behavior, supporting joint attention or not. And our measure, the proportion of overlap in time, the amount of time we spend looking at the same information at the same time, shows this difference. With the shared-area visualization we see significantly more overlap compared to something like the heat-map visualization, and this can be attributed to the design: the shared-area design encourages us to look together at the same time, because the only way to make use of that visualization is to jointly attend to the same information. The heat map, by contrast, discourages that kind of behavior. By visualizing where I'm looking in the search area, it starts to occlude the space where my partner is looking; by looking there I won't actually see the objects underneath, I'll just see that my partner has already scanned that space. That encourages me to look elsewhere, because I know that space has already been traversed and I can go somewhere else. So we see significantly less overlap with the heat map, similar to no visualization at all. The path visualization lies somewhere in the middle, and from previous work we expect that the path, being an always-on visualization technique, attracts your attention a bit more than nothing. When we talked to participants, we saw this behavior validated in their responses. With the path visualization, participants said they couldn't help but follow it; that's a description of the distraction effect we saw earlier.
With the shared-area visualization, on the other hand, participants said they were able to do their own thing but still target the objects together. They could work separately, and when they wanted to come together and coordinate they could make use of the visualization; they could engage in tightly coupled action when they wanted to. This is an example of how we can tailor these visualizations to support a specific task: we want a visualization that allows for some separate activity but really shines when participants are trying to coordinate.

When we look at the survey results, we see that while participants were able to attend to where their partner was attending with the path visualization, they reported it being distracting, which is something we saw in the first study. We don't see that with the shared-area visualization: it didn't distract from the task, and participants were able to use it to facilitate coordination. Both were rated as equally useful, so people are willing to put up with the distracting element if they can use the path visualization as a referential pointer. But it's even better when, as with the shared-area visualization, we get the usefulness without the distracting element. So we can start to think about tailoring these designs to the context and to the kind of collaborative behavior we want to support.

Building off of that, another thing we can do with gaze visualizations is model appropriate eye-movement behavior. You can imagine using gaze visualizations in training exercises or learning. This is yet another element that differs from cursors and direct input: gaze visualizations provide some insight into how you're processing the information, which can be useful. For this study, we designed a lecture on cloud identification. Again, clouds are more linguistically complex than many other types of visual information.
There's no clear way to describe a lot of their individual features, but there are important differences between clouds. To evaluate how gaze visualizations can help people model expert-like gaze behavior, we implemented a visualization of where the teacher was looking, so you could see how the teacher looked at the lecture slides along with the gaze information. We compared this to a pen condition, which is just an overlay of a pointer, as is also used in online lectures, and to a baseline with no visualization. We did a between-subjects evaluation because we wanted to use the same stimulus for all conditions, and we evaluated a couple of dimensions of learning; we did this with potential learners and contextualized it as an online lecture.

Like I was saying, clouds have distinct features. We chose clouds because the different cloud types have different features: you have horizontally forming cloud types, like stratus clouds, or rain-bearing clouds, that layer across the sky, whereas clouds like cumulus clouds extend vertically into the sky. These different features are something we can easily pull out of the eye-movement data by looking at your horizontal saccades and your vertical saccades, and effective cloud identifiers should look at the different cloud types in a way that brings out the features that distinguish them.

When we look at student behavior, what we see is that when students can see the teacher's gaze visualization, which clearly illustrates a horizontal saccade pattern, they make a higher proportion of horizontal saccades on horizontally forming clouds, compared to vertical saccades. So they're making this distinction: when looking at a horizontal cloud, they make more horizontal eye movements. This difference is not present in the pen and no-visualization conditions. These results are from the post-test: students watched the video lecture, then took a post-test, and we looked at their eye-movement behavior. This demonstrates that these subtle differences in eye-movement behavior, which are potential signals of expertise, where people look at the right cloud type with the correct kind of formation, can be taught by showing students where the teacher is looking. You can start to model behavior that students then pick up on, by illustrating the important features. In this task, a design that connected the two different points was important, because then we're clearly illustrating horizontal lines versus vertical lines; those illustrations become clear to students and help them distinguish the different cloud types.

We also see this difference when we look at learning outcomes. This is an example of a scoring grid where students got a point for identifying the correct cloud. It's important to note that you got half credit for being in the correct family of clouds, so if you identified that it was cumulus, you got half credit. Clouds were also broken up by altitude, and we gave half credit for that as well. We see that participants score significantly higher on their post-test when they can see where the teacher is looking, compared to no visualization. The pen visualization lies in the middle and is not significantly different from either condition. But these results are encouraging in the sense that the visualization was able to help students identify characteristics of clouds that could then help them differentiate them, and this is something the gaze visualization did well that the pen visualization wasn't necessarily addressing as effectively. So this is an interesting avenue for gaze visualizations in a more applied setting, when we think about online learning.

To summarize the results so far: we see that gaze visualizations can support effective referential communication between remote collaborators. When you can see where your partner is looking, you can use more efficient referential forms, which let us collaborate and communicate a little faster.

Contextually relevant gaze visualizations can improve their effectiveness. When we take the context into consideration, as when we designed specifically for Visual Studio and pair programmers, we can improve the effectiveness of these gaze visualizations: we can reduce the distracting characteristics while capitalizing on the beneficial communication aspects.

And different gaze visualization techniques can be used to support or encourage establishing joint visual attention. This is one way to think about how the design of these gaze visualizations can support or encourage the type of collaborative behavior you want to elicit from your remote collaborators. If it's important that we attend together, I might design a gaze visualization that encourages us to look at the same information together. Whereas if we want to support people working separately but collaborating at different points, we could think about visualizations that discourage joint attention.
Such designs can still support participants when they're trying to divide the space and just want a cue of where their partner is, so they know what areas their partner is looking at and can work on other areas. This is an example of how the design of these visualizations can bring out these different kinds of collaborative behavior.

As for application spaces, gaze visualizations can help students follow along in MOOC-style video lectures, and they can be used as an intervention in online learning platforms to help students follow along with visually complex information. You can think of a number of different types of training exercises that might benefit from visualizing where an expert is looking, to give you information about how they're processing that visual information and, potentially, ways you could start to replicate that kind of modeling.

What I'm working on right now, in my most immediate future work, is how gaze information can support collaboration between groups. This is an interesting problem because, for the majority of this talk, I've focused on dyads, relationships between two collaborators, where I'm just giving you information about where your one collaborator is looking and you can use that to work together. The problem becomes more complex when we introduce more people into the collaboration. Then signals about where the group is looking can be used to make other kinds of determinations about group behavior: where is most of the group looking, where are the outliers looking, and what does this say about the entire cooperative experience within the group?

To understand this a little more, we've developed a system for visualizing multiple gaze coordinates in real time.
This example shows three computer science students trying to identify bugs in code, and what you can see here is that two of the students have moved on to the next problem while one is still up at the top. When you visualize the group's information, you can target these outliers and differences. When we showed this to the teachers leading the session, they were able to provide more targeted feedback to the students who were falling behind. The student who needed a little more explanation of what they should have seen in that output was able to get that feedback, because the teacher was aware that they were still looking at it. This is information the student wasn't necessarily communicating to the teacher; it's one of the things you get from gaze visualizations that might not be explicitly mentioned, namely, "I'm still struggling; I'm still up here looking at the same problem while everyone else has moved on." And then you can address that student more directly and provide them with the information they need to move forward.

These are some of the interesting things you can do now that it's a group: you can make higher-level distinctions between differences in group behavior, rather than just tracking the one person you're collaborating with in real time. You can make comparisons as to who is behaving similarly or differently, how you need to address those individuals, and what the class is doing at a higher level, so you can use this to monitor more students. There are also interesting questions about how this scales: this is three students, but what happens when we get to 20, or 200? You can start to think about visualizing the average student and displaying outliers as deviations from that average. So there are a lot of ways this group visualization technique can evolve to support larger groups and display more interesting characteristics of how people engage with remote content together.

I started this talk with nonverbal cues in general, at a higher level, thinking about things like gesturing and facial expressions in addition to eye gaze, and I focused primarily on eye gaze for my dissertation, which I think has been an interesting way to start. However, I do want to explore those other nonverbal cues as well.
Gaze is just one of many, and we can think about the possibilities of incorporating a suite of nonverbal cues: where you're looking, what your facial expression can tell us about how you're feeling, when you start pointing at things, and what your body position might tell us about how you're engaging with the content. Gathering these nonverbal cues together and making them available in remote environments is, I think, a rich direction for this research. Taking some of the elements I've talked about today forward to other nonverbal cues, there are interesting ways to measure them and to measure their effectiveness in terms of how people communicate. We can also think about the more social implications of this work: whether people are able to develop rapport with each other, and what that looks like when you make these social cues available. And, building off one of the main focuses of my work, how should these nonverbal cues be designed? Direct, explicit representations of eye movements were not necessarily the best way to represent them; more abstract, contextually relevant design techniques matter, and those lessons can be applied to other cues. When you think about facial expression or heart rate or body position, what are the more abstract ways we can integrate those into the shared workspace that make them more effective and potentially don't overload people with information?

So this is an area I want to go into, and I plan to use my background in gaze visualization techniques to inform how I go about integrating other nonverbal cues.

There are a lot of applications here for the tools Microsoft works on. When we think about Skype and different meeting applications, integrating nonverbal cues into these environments can enhance the experience, especially when we think about how distributed teams work together: groups of co-located people plus a remote collaborator, or groups of remote people working with groups of co-located people. How can we share the nonverbal cues of the group with their remote collaborators? How can we make these remote experiences richer? Rather than just showing one view into the scene, could we provide more information about the group dynamics and how people are engaging with each other and with the information, to make it easier to collaborate remotely?

There are also applications for augmented reality. Most of the work I've talked about today has been computer-based and screen-based, but you can start to think about how gaze visualizations could be used in physical tasks. With more complex physical tasks, like assembling objects, these visualizations could help guide people through the process. Or think about physical tasks that are harder to describe, like playing sports: someone who's very good at golf might not have the words to describe how you should swing correctly, but where they're looking will probably provide insight into how they're able to do that kind of action.
So we can start to think about ways augmented reality can integrate these nonverbal cues to support remote collaboration on physical tasks, as well as things that require more bodily movement. There's space as well for other nonverbal cues to be included in rich remote environments that go beyond the screen. Those are things I'm really excited about for my future work, and I think they're things that Microsoft is doing really well here and can build on.

With that, I want to acknowledge the very talented undergraduate researchers I've advised at Northwestern, whose research and talent contributed to some of the results I talked about today, as well as my advisors, Darren Gergle and Mike Horn, for their support throughout my dissertation work, and all of you for attending. I'll happily answer any questions about the work.

You talked in those earlier studies a lot about the efficiency of the communication. Did you find any differences in terms of the output, or the quality? In terms of performance?

Yeah, so we looked at performance in the first study, the puzzle study, and did not see a time difference, but I'll pull up a slide that can address that question. I think that was attributable to some of the distracting characteristics. When we look at the second study, where we really see this difference, while again we don't see an average completion-time difference, we do see a coordination-period difference. What I mean by the coordination period is this: you have the total collaborative time, and then the time from when the first participant finds the object to when the second participant finds it. This coordination period, where I've found the object and now need to describe it to you, is where we see the impact of the visualizations; they're really helping with this tight coordination. So once I find something and need to explain it to you, we do see a performance increase. Overall, however, we do not see a performance increase, which could be due to the complexity of the task. It's a very difficult search task, and we're talking about tasks that take three minutes on average, so it's sometimes hard to get those performance results, but we do see it in that coordination period, and I think there's an opportunity to look at this in more depth. We did see the learning gains in the cloud example, where students were able to score higher on their tests with the gaze visualization, but I think there are opportunities, building off this, for a more longitudinal study: what does code quality look like in the pair-programming example if people use the tool longer term, as well as longer cooperative tasks? It's a good question.

What about the qualitative experience, like in the pair programming or in this test? What was the change in qualitative experience?

Yeah. When we asked participants about the gaze visualizations, these are some of the results from earlier, where participants found them useful as well as distracting. But we did dive a little deeper. The heat map was not a very effective visualization; it was visually very cluttered for them, and while it did signal where their partner was looking, it ended up being disruptive. A lot of participants also reported that the path visualization was distracting; however, they acknowledged its utility, because when they do this exercise without gaze visualizations, it's a lot harder for them to describe the location of the object. So there's this feedback around its usefulness for pointing. When we did the pair-programming study, we asked participants about their experience there too, and I'll pull up a couple of quotes from that as well.
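The coordination-period measure just described, splitting the total collaborative time at the moment the first partner finds the target, can be sketched as follows; the example timings are illustrative, not data from the study:

```python
def phase_split(find_times, task_start=0.0):
    """Split total collaborative time into the search phase
    (task start -> first partner's find) and the coordination period
    (first find -> last find), the window in which one partner must
    describe the target's location to the other."""
    first, last = min(find_times), max(find_times)
    return {"search_phase": first - task_start,
            "coordination_period": last - first}

# e.g. partner A finds the object at 42 s, partner B at 67 s:
print(phase_split([42.0, 67.0]))
# {'search_phase': 42.0, 'coordination_period': 25.0}
```

Isolating the coordination period this way is what lets the visualization's effect show up even when total completion time does not differ.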
They said they were able to get a sense of where their partner was looking, which they interpreted as a replacement for pointing, because they didn't need to confirm where their partner was looking. We did see people report a preference for the gaze visualization over having to rely on line numbers, so we do get this experiential difference when we think about the effort people had to use to refer to locations versus the subtle cue of where their collaborator is looking. We also had them fill out a survey. They reported that they felt their collaborator was focusing on the document more when they could see the visualization than when they could not, and that might have been a signal that when I see a gaze visualization, it's direct confirmation that you're looking at the document and engaging with it, which isn't available when I don't see it. So whether or not they were actually focusing more, these visualizations can provide a cue that your partner is looking at the screen and not, you know, texting or doing something else. So we do see that kind of perceived benefit.

Is there an option to maybe average over a time window, to smooth that out?

Yeah. With the programming example, we have the line numbers to account for that kind of subtle movement, and we've done some smoothing functions where you have to move far enough away from the previous point before we display it, which filters out some of the small movements. But you could think about a time window as well; I've focused mostly on distance. I think that would be interesting, as would more asynchronous collaboration, where you can use what's coming in the future to determine what information is important. That's a good point.

Maybe you said this earlier, but what was the primary input tool for all this? Cameras on the PC?

They were remote eye trackers. We used external eye trackers, which are attached to the bottom of the monitor.

Has anyone used this with autistic kids?

Eye tracking has been used to look at how autistic children watch videos, so you can distinguish autistic children from typically developing children based on where they're looking in a scene; you see maybe fewer fixations on faces. It hasn't been used in these more real-time settings. Eye tracking with children with autism has been used to understand their scene perception and reading behavior, but I think there's an interesting opportunity for improving personalized learning experiences, where you get this subtle cue about where someone is attending and then know more about what you need to share with them. It could be an alternative to other cues like facial expression. So it's a good point, and something I haven't looked at yet, but it could be an interesting opportunity.

If my collaborative partner can see where I'm looking, am I going to try to use my eyes as an input device?

Yeah, that's an interesting question. When we look at referential behavior, we do see some of that.
In the transcripts from the puzzle study, people say "this is where I'm looking," so it's clear they're looking at something and then saying, where I'm looking is where you should look. So there is that pointer behavior, which I think is apparent in the referential communication style. But you also get these more naturalistic interactions, because eye tracking is pretty non-invasive; you're not necessarily aware of it happening, so you get all the other parts of interacting too. I'm not always cognitively aware that I'm making pointing movements with my eyes; it happens when I'm making an explicit reference, but I'm not going to tailor my searching behavior, because you also have to use your eyes to process the visual space in ways that help you do the task. Beyond that, you can use it as a pointer.

That sounds problematic to me. This is something I would never do: if Cory and I were sitting shoulder to shoulder at a task, I would never say, "it's the thing that I'm looking at." I would point to it; I would move the cursor to it. That's just weird. And I wonder if it comes from having both the eye position and the cursor position represented, because then I have to disambiguate which cue you mean, or if it's just that you're trying to get too much mileage out of the eyes.

Yeah, I think there's an interesting point there: you're not going to make reference to where you're looking in co-located interactions, but Cory might use the information about where you're looking to help her. In co-located interactions you don't necessarily use it as a direct cue; the people around you use it to understand what you're attending to. I think that has been one of the strengths of looking at more abstract representations, because with the first example, the eye cursor, it is very much simulating a mouse, and in that way I'm using my eyes both to engage with the task and to refer to objects, and there's this interesting question of what was a communicative eye gesture and what was just me searching. Those are interesting questions for when we start to think about different ways to visualize. With the pair-programming example, the visualization isn't in your face as much, so you have a sense of "I know my partner is looking down here; I can check on them," and we have the color change to use as a signal.
So I think there are ways to design that out. With the shared-area visualization, we're able to do our own thing, and then when I say "look in the top right," you reactivate it and we can target it together. So I think there's a place for both to exist, but it does come down to the design, in the sense that we're not trying to replicate a cursor. It's a good question.

You've looked at gaze for reference between two different people. I wondered whether you've thought about applying it to conversational agents. Hypothetically, we might have an interest in conversational agents helping you, and would knowing where I'm looking inform that?

Yeah, so I think you can start to think about models of conversation. If we think about looking at what I'm talking about, then conversational agents can start to use that information to understand what I'm referring to. I think Sean Andrist, who was hired here last year, did an interesting study with a conversational agent instructing a person to build a sandwich, where the participant could see where the agent was looking and could use that information to refer to the objects together. So it can facilitate conversation in the same way it does in co-located interactions. I think we just have to have an understanding, like Gina was saying, of when I'm using my eyes to signal in conjunction with what I'm talking about versus when I'm using them more independently. But I think there's definitely a space for that.

If you're talking about extending this to other types of nonverbal communication, how do you think you can do that in a way that doesn't get overwhelming? Right now you're giving me one additional piece of information, and in the real world we're able to piece all this together over our years of development. How can you present this without completely distracting from the task I'm trying to do?

Right, and I think that's an important question, and something that has become apparent through this design process with gaze visualizations: how to not be disruptive. We did a study with Iris where we had people do their own visualizations and asked them what they were using them for, and a lot of people used color signaling. We did a collaborative editing task, and they said, you know, use yellow for highlighting and red for making changes.
So you can start to think about additions to the gaze visualization that can signal what I'm trying to do; you could even try to signal mood in that sense. There are potentially ways to add onto this cue subtly, through things like color change, that provide more of a signal without drastically adding to the space. I think there's also room, as with the pair-programming example where we used some unused space in the margin, to think about other areas of unused space. The last visualization, with the multiple students, also showed their position in the document in the scrollbar, so you can start to think about putting other types of information in the scrollbar, like, potentially, high-focus points where students are particularly engaged in certain areas. I think there...
2018-04-09