Cars, Computing, and the Future of Work: Specific Topics of Mutual Interest

[Show video]

Okay, we're going to get started; it's 1:05, so we're just a little bit late. We'll wait for people to come in while being mindful of everyone's time, so maybe all of you can sit down while we get started. Actually, first a quick announcement about the plans for the boards. Sure: we've gotten a lot of questions about this, so let's just take one second and give it up for Leah and Jill. We're getting a lot of questions about the boards, and we think they're fascinating. One of the things we're doing at the end of today is taking high-res pictures of all of this, and we're happy to share them back with you, since they're a good capture of your thoughts and a good opportunity if you want to keep driving the research collaborations forward.

Okay, great. With that, we're going to go ahead and get started. Just like the morning session, we have four presentations today: John Lee is going to kick it off, then Ece and Shamsi, and then myself. Then we'll again take a 20-minute break and break you out into groups, so don't forget that we have all of these post-its on the table. Please use them as freely as possible; we've put up some clean sheets too. All right, with that, our first speaker is John, and he's going to take it away.

Great, thank you. It's been a pleasure working with you today, and the rest of the conference has been amazing. What I'll be talking about is trust in automation. There's a paper I wrote with a graduate student about 15 years ago, and some of the material I'll talk about here is from that. Unfortunately, that paper, like most papers, has lots of problems, and my graduate students and I are working on an updated version that hopefully addresses all of them. The title is important: we're talking about trust in automation, but also trust through technology, meaning how you develop trust in people who are represented or communicated to you through technology. I think that's important. The research is sponsored by NSF, NASA, and Toyota's CSRC; there's also a collaboration with J.D. Power behind some of the figures in this talk, and I'd like to acknowledge them. These are my students, who make all of this possible. And this is my super cute dog. In the textbook we wrote, we have a section on his relationship with our Roomba, which is relevant here because sometimes the stakeholders you design for, the people whose trust matters, may not be the people you think. In the case of the Roomba, the dog is somebody they should design for, but I don't think they do.

What I'm going to talk about first is why trust matters: I think it has a powerful influence on behavior, particularly as technology becomes increasingly agentic and smart. Second, what is trust: it's a multifaceted concept, an attitude, and I think also a relationship; so it's a relationship as well as an attitude. Third, how much should we trust: more is not necessarily better, as we heard from some of Eric's experience, maybe overtrusting his Tesla Autopilot. We want trust to be calibrated with the capability of the automation, the trustworthiness of the automation. We also want it to be aligned with goals: is the goal of the automation the same as the person's? And then finally, who is trusted and who is trusting.

Who are the stakeholders beyond the direct users? Incidental users like pedestrians may not trust the automation. We heard a number of people concerned about whether they would be able to keep riding their bicycles. I ride my bike, I don't drive my car; I don't want to get killed, and I want to trust the automation to keep me safe.

Here's an example of trust in automation gone bad. This is from the NTSB investigation of the Tesla crash. It's a fairly mundane picture, a little scuff on the truck; the other one is too disturbing to show, because the crash took the top off the Tesla and the head off the driver. What's interesting is this illustration of trust in the technology: Autopilot was active during this gray period, about 37 minutes out of the 40 or so. The yellow points are visual warnings; the green is when the hands were on the wheel. I think hands-on time was about seven seconds total: touch the steering wheel, drop back down. It worked beautifully for 37 minutes, until it didn't. The person trusted the automation; overtrusted the automation.

So in terms of why trust matters: it guides behavior, as we see there, and it guides behavior in a way that I think is underappreciated. Don Norman wrote a book on the importance of emotion in how we relate to things, not just automation but design in general. Antonio Damasio has written a beautiful book on the influence of affect and emotion on decision making. Trust matters because it influences behavior, and it influences behavior as we relate to technology. As technology becomes more human, more agentic, it is more active and engaging with us; it might have a voice, it might have eyes. When it gets like that, we tend to trust it and respond to it as if it were a person; a lot of the work with Cliff Nass shows that.

Trust is active across a broad range of relationships; it's multifaceted, as I mentioned before. From the micro level, how we trust the defaults of our computer, the spell checking and so on; to a meso level, how we trust a brand (we believe Microsoft is out to protect us, is on our side with respect to privacy; that's trust); to a macro level: how we value money is trust. The paper in your back pocket is worthless; it's all about trust. As is democracy; democracy relies on trust. I thought it was interesting that Bill Gates was saying education is the core flywheel of society, and then he backed off and said, well, actually, trust might be more important. I agree with him.

So what is trust? This comes out of the collaboration with J.D. Power. It's a quantitative look at qualitative data: a network of topics from comments on a survey about people's trust in automated driving. I'm not going to go through it in detail (I've become fascinated with text analysis), but what I think is interesting here is the cluster linking trust with maturing, improving technology. It conveys the idea that it's not just about the technology now; people also trust in the future. They trust that it's going to get better, that it's going to improve, and that one day it will be sufficient. So that's one view of the complexity of trust.

This is another view of the complexity of trust, something I'm working on as we speak. It's a kind of word cloud, but based on word embeddings. I extracted these words from items for subjective ratings of trust: items from 16 different papers that present ways of measuring trust subjectively, with Likert scale ratings. These are all the words from those items, arrayed so that similar words are near each other, using word embeddings and then a mapping that spreads them across two dimensions. What you see here is a nice cluster of things related to trust: dependable, honest, reliable, sincere, timely, correct. And then, interestingly, over here are the foci of the trust: the brand, the robot, the technology.
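A minimal sketch of how such a word map could be built, assuming pretrained GloVe vectors loaded through gensim and PCA as the 2-D projection (the talk doesn't name the embedding model or projection method, and the word list below is illustrative, not the actual items from the 16 papers):

```python
# Sketch: place trust-scale words in 2D so semantically similar words sit
# near each other. Downloads a pretrained embedding model on first run.
import gensim.downloader as api
from sklearn.decomposition import PCA

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word embeddings

words = ["dependable", "honest", "reliable", "sincere", "timely", "correct",
         "brand", "robot", "technology"]
coords = PCA(n_components=2).fit_transform([vectors[w] for w in words])

for word, (x, y) in zip(words, coords):
    print(f"{word:>12}  ({x:+.2f}, {y:+.2f})")  # plot these to get the word map
```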
So, moving on to how much we trust. I thought this was a great slide from a presentation yesterday, a quote from your CEO on building trust in technology: "It starts with us taking accountability for the algorithms we make, the experiences we create." Completely agree with that. "And ensuring that there is more trust in technology each day." I disagree with that last phrase; I would edit it to "more trustworthy technology." You want to make sure the technology has integrity, and that is the important point: trust ideally should be calibrated with the capability of the automation, its trustworthiness. You don't want people just to trust your technology more; you want your technology to merit the trust. That's why I would change that phrase.
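As a toy illustration of what calibration means here, suppose trust and trustworthiness could both be scored on a common 0-to-1 scale (a strong simplifying assumption; the talk treats them as an attitude and a system property, not two numbers):

```python
# Toy sketch of calibrated vs. miscalibrated trust: reliance should track
# the automation's actual capability. Thresholds and scales are invented.
def calibration(trust: float, trustworthiness: float, tol: float = 0.1) -> str:
    gap = trust - trustworthiness
    if gap > tol:
        return "overtrust"   # e.g., asleep behind the wheel on Autopilot
    if gap < -tol:
        return "distrust"    # capable automation left unused
    return "calibrated"      # reliance matches capability

print(calibration(trust=0.9, trustworthiness=0.4))  # -> overtrust
```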

One way to calibrate is through transparency: you make it more obvious what the algorithms are doing. But there's a surface and a depth component to this. The depth component is how the system is actually behaving, the reliability and dependability of the system. The surface is what's literally on the surface: the colors you use, the font choices. Apparently, if your car is infused with lavender, you will trust the autopilot more. That's a surface feature, obviously, and a way of increasing trust without necessarily increasing trustworthiness; that can be dangerous.

Then there's calibration through control. This is something missing from that first paper that I think is really important: being able to control the automation and feel how it responds to you pushing it one way or another, exploring it through that.

This figure shows the problem with trust and trustworthiness. Trust is your attitude; trustworthiness is the capability of the system. Ideally, you'd be on the line of appropriate trust. Hopefully you'll be here, but you might be here, where you trust things much more than they merit. This is calibrated trust: you're in the Tesla, your hands are on your knees, your eyes are on the road, and you're ready to take over at any moment. Here is the reality (I could get a better picture, but that person is dead asleep in the car), trusting way more than is appropriate. And Linda is standing up, which means I have three minutes... one minute... two minutes.

Varieties of control: a really important point here. Obviously there's the pragmatic: you control to achieve a goal. It's also communicative: you control to signal to others. You also control to learn about a system: you tap the brakes to see if it's icy. But control also serves self-efficacy: control is what makes us feel like we matter in the universe. So control is really important; Springsteen put it maybe best, and in driving this really matters, because people feel like they want to be in control. On communicative control, there's a paper by Josh Domeyer looking at the way vehicles can signal to pedestrians through control, not just lights and arrows and verbal signals.

Who's trusting? I think this is really critical, because oftentimes we're just looking at the driver and some automated element, but really the driver is in a network of elements that need to be considered in total. It's not just the driver trusting the technology; it's pedestrians trusting the technology, those incidental users out in the environment. And it's not just the vehicle technology, but also the technology that's arranging the rideshares, and the people who are riding in the car with you. If you're a woman in Iran (talk to one of my students), she does not feel she would trust a lot of the people she might be paired with in the car. So who you get paired with, that network of trusted riders, could be really important.

With that, one thing to give you some food for thought: HAL in 2001: A Space Odyssey. Trust and capability started out well: appropriate trust in a highly capable system. And then Dave lost trust and unplugged HAL. But if you take the macro view, HAL was working for NASA, not for Dave. HAL was working fine; at the end he discovered that Dave was getting in the way and terminated Dave, or tried to, for the sake of the mission. So is that appropriate trust? Was HAL actually working the way it should? Did Dave lose trust when he shouldn't have? Maybe he should have just died and let the mission go on.
But with that, I think my time is up. Thank you.

Great, awesome. Okay, yeah, we clap. All right, the mic is going to go around; raise your hand if you have a question.

And then Jesse afterwards.

So, one of the things I feel like we focus a lot on is capabilities, and confidence in the capabilities of agents, or of automation in general. It was really interesting to see your previous slide, where the axes were capability and trustworthiness: HAL's capability never changed, it was highly capable, yet there were other aspects that changed the trust. And it was so great to see honesty as one of the words. We really focus on capability, but I also want to know that my autonomous vehicle is honest with me, that it tells me it made a mistake and it's not quite good at handling pedestrians; that it's fair, maybe reflecting my morals, or perhaps the ethics of a broader society; that it's benevolent. We really don't focus enough on that; we very often equate trustworthiness with capability. So I'm curious if you could comment a little bit more on that.

Yeah, great point, and as I mentioned, that first paper equates trust mostly with capability, and I think that's not a complete picture. There are multiple elements to it, but one that's really important is goal alignment. In the classic case of HAL betraying Dave, the goal alignment was imperfect, to say the least. So part of trustworthiness is capability, and part is the alignment of goals: HAL was still highly capable, still doing what it should with the driving, but the goal alignment was problematic. The same with pedestrians: the vehicles are not aligned with their goals. In fact, it's a game situation, where the driver and the pedestrian are negotiating, or competing, for the right of way. More generally, when you optimize, who are you optimizing for: the individual, the traffic stream, or the traffic and the pedestrians? So it's complicated that way, and I think as we move into these more networked systems, goal alignment becomes an element that complexifies things.

So Justin has a question, then Don. Before we get started: we have a few new people here, and we have post-its on the table. As you hear comments, since our goal is to generate research ideas, please jot down some notes and put them up, and if I see that you haven't written anything, well, I'm going to call on you. No, just kidding. So, Justin, you, and then Don, go ahead.

So, John, I want to hear about your idea regarding the concept of calibration. Without a gold standard there is no calibration, so I'm thinking about what the gold standard of trust is. Say previously you assume Tesla Autopilot works perfectly, but then suddenly there's an accident, so the person's trust should decrease. But then on the x-axis, where is the gold standard? We know we should not trust it completely, but where will the gold standard move, and to what extent will it move?

You're not going to let me off the hook with this, are you? We talked briefly before, and I thought I'd sidestepped that whole issue. It is really complicated. With trustworthiness, one element I didn't mention is that there's a time period to it; it's dynamic, it's not constant. With Tesla, it can be really good, and you can have your eyes off the road,
Then as we were talking before lunch that car moves, out and you're now the head of the queue the.

The capability of that vehicle has dropped in some sense, and it should demand your attention to the road. So there's a dynamic component that is really important to consider. I think the trick, the two parts that make this difficult to measure, are the time scale of trustworthiness and how you estimate it, how you quantify it. That's tricky. But then trust: how do you measure that? Part of that measurement is maybe through subjective scaling, and that's why I pulled together those 16 papers with different measures of trust. But also, maybe more importantly, behavioral measures of trust: we've got a couple of papers looking at how people respond, for example with vicarious steering, where you get a sense of how engaged they are. So, Don, did you have a comment?

Several. First of all, I really like that diagram, but actually I think on the right-hand side the two dots should be together, because the real question is who is viewing the diagram; when it says trust, it's trust by whom. Capability is capability of the device, and trust is trust by Dave. In that sense the two diagrams are correct, but if you now ask about trust from NASA's point of view, the diagram on the right should have the two dots together. In other words, maybe you need a third diagram, to demonstrate that point of view is critical in assessing trust.

The second point I want to make: you can't make a system that tells the driver what it is not capable of doing, because it doesn't know what it's not capable of doing. We have unexpected things that happen, and the whole point about unexpected things is, first of all, that they happen a lot, and second of all, that they're unexpected.

But I think there's a really good problem here, and I want to say this to get it in the record for the discussion period. There's the problem of trust and overtrust, and overtrust is probably the more difficult problem in many ways, because we work hard to cause people to trust, and as the Tesla example and the medical examples show, that's a problem. But how do you design the display? I would like to recommend not using the word trust, because it's a very loaded term; I don't know what the substitute is, but I'm going to say effectiveness. One thing we know: if you have a map and you show a dot that says where you are, but the system doesn't really know, suppose you instead show a dot that's shaded and blurred, where the radius is a function of the uncertainty. The same philosophy could be used for a lot of the information in the automobile. One thing people are trying: the car, often for test purposes, shows a picture of what it can see, with outlines around the objects it has identified. That's often far too complex for everyday people, but it's a start, and I wonder whether that's the way.

And the last comment, just for the record: in the famous Volvo accident, the car actually did detect the pedestrian, but the certainty fell below its threshold for reporting. If it could instead give us a certainty measure, one that's not disturbing (because this is going to happen a lot), showing "I think I detect something, but I'm not certain": something that indicates that, not in words, etc.
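Don's blurred-dot suggestion could be prototyped with a mapping like the one below; the constants and names are invented for illustration, and a real display would need tuning so the halo informs rather than alarms:

```python
# Sketch: render position uncertainty as dot size and opacity instead of
# reporting "trust" in words. All mapping constants are made up.
def uncertainty_dot(x: float, y: float, sigma_m: float) -> dict:
    """Draw parameters for a map dot, given 1-sigma position error in meters."""
    radius_px = 6 + 3.0 * sigma_m          # bigger halo when the fix is poor
    alpha = max(0.2, 1.0 - sigma_m / 50)   # fade out as uncertainty grows
    return {"x": x, "y": y, "radius_px": radius_px, "alpha": alpha}

print(uncertainty_dot(120.0, 48.5, sigma_m=20))  # a large, faint dot
```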
So this is for the further research discussion. Okay, I'm going to allow one more question, and then we're going to let Ece start.

Okay, so there were actually a couple of questions there. On the first one, I agree with you completely; this problem of interpretation, I think, has to do with goal alignment, which was missing from that first paper and which I hope to rectify. Then, on the point of uncertainty: one of my previous graduate students, Bobbie Seppelt, actually wrote a really nice paper on using sonification to indicate uncertainty, so background awareness supported through auditory cues of how the automation is understanding the world.

I'm not sure that's going to work when you want to play your Bruce Springsteen as you're driving down the road, but it's a start.

A quick question on trustworthiness: from the technical side, we can definitely make the software and hardware more secure and more robust, but how does that map back to what you were saying at a conceptual level? As you make the hardware and software more robust, how is that reflected?

Yeah. So I think that as you make the hardware and software more robust, that's going to increase the trustworthiness of the system. Then, hopefully, if that's represented well in the surface and depth features of transparency, and if you give people the right amount of control so they feel engaged with it and understand it, that should increase their trust to, hopefully, the level of capability that you've given it, that you've improved.

Or resilience, I would say. I would say it's supposed to be resilient, which I think is a good point. So resiliency, robustness, capability: words we should think about.

Well, I think this is a really great discussion that we'll continue as part of the breakout sessions when we talk about trust, because it sounds like a big topic. So let's give John a round of applause. Our next speaker is Ece, and she is going to be talking for ten minutes, I hope.

I'm going to try to be really quick; I know Linda manages the time herself. Hi, everybody. I work with Eric and Besmira and with Gagan, so I'm going to try to cover our joint work in this space, and I think this is going to be a bit of a shift, because I'm going to take more of an AI perspective. As someone trying to build AI systems: what is the role of humans in these systems, what does that mean for the self-driving car experiences we've been talking about, and what kinds of capabilities should we be putting into AI systems so that they can utilize the human in the loop, having a driver in the car, much better than current systems can?

To do that, I want to start with an overview of what a learning pipeline looks like, how people are building machine learning systems today, and this actually includes the self-driving car functionality as well. If you ask a machine learning person how they build these systems, they'll give you the picture that's at the top: I get my favorite data, I chunk it up into a training set and a test set, I optimize the parameters of my model, and I look at the accuracy; if that's better than the past accuracy, I'm happy. However, as we look into real engineering pipelines, especially the work going on at Microsoft, we see that humans are part of every step of the development lifecycle, and they are also the users of these systems. Humans are a big part of how these systems are trained and developed; they provide the objective functions, they tell the machines what they should be doing. And they are part of the execution of the system, because every time a Tesla gives a warning light and says "please help me," it is actually getting a human in the loop for reliability purposes.
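The "picture at the top" she describes is the standard supervised-learning loop; a generic sketch of it (scikit-learn with stand-in data, not the speakers' actual code) makes it easy to see that humans appear nowhere in it:

```python
# The textbook pipeline: split data, fit, compare accuracy, ship if better.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))

PAST_ACCURACY = 0.80  # hypothetical previous model's score
if acc > PAST_ACCURACY:
    print(f"ship it: {acc:.3f} beats {PAST_ACCURACY:.3f}")  # no human in the loop
```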
And finally, our real goal is not really getting the most accurate machine out into the field; our real goal is having the machine that provides the most value for the human, and that, I think, is what the purpose of all of it should be.

Let me make a quick case for why we think about the human in the loop so much. There are multiple reasons; they're quite general, but I think they apply to self-driving or semi-self-driving cars as well. First of all, unless AI systems are perfect, there are complementary strengths we see from humans and machines. We see this in medicine, but I think the car setting is really interesting to me, because people and machines have different sensors for perceiving the world. It's quite unlikely that something fooling the lidar of a car is also going to fool the person, and when a lighting condition goes bad and the human cannot really see much, it's not going to be the same problem for the machine. So we really want to see what that complementary strength looks like.

John actually mentioned the ethics and value-judgments question, and, no, the trolley problem is not my favorite problem. I actually feel it is a superficial problem that distracts us from the real problems. But at the end of the day, we do still make value judgments: every time an engineer puts an objective function into a system, they are making a value judgment. It doesn't look like the trolley problem, but they are making value judgments about how fast the car should be going, and about whether in this setting it's okay to override the traffic rules, because maybe people are not following the traffic rules as they are written in the books.

So all of those are value judgments that go into these systems. We also need people to debug these systems and figure out how they can be improved, and for data collection and so forth. But the main thing I want to talk about today is the role of joint execution for reliability. We know that the cars on the street today are not perfect; they actually fail a lot, and the only reason we can put them into the world today is our reliance on people as correctors of these systems, and on the virtuous feedback loop back to the companies: every time a human corrects the Tesla car, that's a signal the car, or the bigger company, can use to improve the algorithms for these systems.

So I want to talk about why these failures happen and what the reliance on the human is. With AI algorithms requiring a lot of data, we rely on platforms like this: this is AirSim from Microsoft, a simulation platform available for drones and also for cars. A lot of the car companies are using these kinds of simulation platforms and reinforcement learning to build the algorithms going into cars today. When you look at it, it is nicely lit; there's a street, the car bumps around here and there, and finally it learns how to drive. However, this is how the real world looks, and it's quite different from the simulation platform. This is the screen from the Uber car when, unfortunately, the car killed somebody, a pedestrian, during the drive. What we are seeing is a mismatch between the simulation platform and the real world. No simulation platform can capture the complexity of the real world, and because of this mismatch, our current algorithms have blind spots. They cannot really learn all the features that are important for functioning in the world, and when they get into a situation that is not represented in the simulation but occurs in the real world, they fail, and they have no idea that they are failing. Those confidence scores, the ability to signal a person, completely go out the window when there are these kinds of mismatches between the training platforms and the real world.

So what we did in this particular research paper, which is just one step toward giving computer systems the capability to know what they don't know, is use human data, human demonstrations and corrections, to teach machines what they don't know, build these maps of confidence, and then use them to hand off decisions to humans. Of course, right now this is all happening in small toy problems that we can run on our machines, not real self-driving cars, but I think this is one algorithmic step toward enabling machines to know more about their own capabilities.

But the problems in this driving space are not limited to machine blind spots; humans have blind spots too, which is why we are excited about the prospect of self-driving cars. What you see here is a human blind spot. So we should really think about where machines have blind spots and where humans have blind spots, and have algorithms that can reason about both together to really think about control.
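Only the final handoff step of that idea is easy to sketch; in the code below, confidence_map stands in for the model of "what I don't know" learned from human demonstrations and corrections, and the threshold is an invented placeholder:

```python
# Sketch: act autonomously only where a learned confidence map says the
# policy is trustworthy; otherwise hand the decision to the human.
def act(state, policy, confidence_map, threshold=0.95):
    """policy(state) -> action; confidence_map(state) -> value in [0, 1]."""
    if confidence_map(state) >= threshold:
        return policy(state)    # the machine acts on its own
    return ask_human(state)     # a blind spot: defer to the person

def ask_human(state):
    # Stand-in for the real takeover interface (warning light, chime, etc.).
    return input(f"Low confidence in {state!r}; your call: ")
```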
So in our follow-on work, what we looked into is how we can collect data from both the machines and the humans and bring it together, so that we can map out the space of complementarity: really figure out who is the more reliable (I know you don't like that word; I think robust was the word, right?) actor in a given situation, and manage the control that way.

The last point I want to make is that unless we can really make those kinds of algorithms work and build agents that have a good understanding of their own capabilities, we rely on humans, and we rely on their trust. When I talked to Eric about his experience with the Tesla, the way he described it is that he watches the car; he watches what the car can do and cannot do. Through that, humans build mental models of trust, and they say: on this street I can trust the car, I don't have to watch over it a lot; but I know this exit is problematic, and at this exit I should be watching very carefully. However, all of these cars, like any software, get updated all the time. That's one of the problems we have when humans are in the loop but our AI systems are not designed and optimized for having a human in the loop.

These objective functions we use when updating models have no consideration for human trust, no consideration of the human's mental models, and when that happens, an update can actually kill the mental model of the human. So there are a lot of new insights we should be putting into the development of AI systems for them to be human-aware, human-considerate: to reason about new capabilities, going beyond accuracy, that can sustain that partnership between the human and the machine.

We did a bit of work on this with Gagan, and we are continuing our collaboration right now, looking into the role of updates, machine learning updates, in human-AI collaboration. The expectation is: I have the blue agent, the human has learned about the blue agent; the green agent is a better agent; I move to the green agent, and together we get better. But if things are not compatible, if the updated agent breaks the mental model of the human, this is the situation we get, and we actually verified this situation with human experiments. What we can do is add a term for dissonance into the objective function of the machine learning model that penalizes these kinds of new errors, errors that break the mental model of the human, and with that we can actually get machine learning models to be compatible with human expectations. This is just one way to be more considerate, more human-aware, in the development of AI systems. And why is this important? Because unless we can get perfect AI systems, in safety-critical situations we have to rely on human trust; humans are our key to reliability. That is why we have to really think about the human side in the development of AI systems, and that requires thinking about the human from design through the development of the objective function, and thinking about the improvement loop built into our engineering practices.

So, what we are doing: the original objective function for machine learning models only cares about accuracy on a data set. What we are putting in is a term for dissonance that penalizes... I will get there, but I need to explain this first, okay. What this is saying is that whenever you make a mistake on something you were getting right before, something the human had learned to trust you on, I am giving you an additional penalty term. That's what the objective function does. And this compatibility score is watching exactly that: how many of the things the model was getting right before, it is still getting right now. So how much of the trust is kept or broken; it's like a percentage.
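A minimal sketch of that objective and the compatibility score, reconstructed from her description (the paper's exact loss may differ; lam is the knob that trades accuracy against compatibility, and sweeping it traces out the curve she discusses next):

```python
# Sketch: penalize "new" errors on examples the previous model h1 got right,
# since those are the trust-breaking mistakes. Inputs are numpy label arrays.
import numpy as np

def dissonance_loss(y, h1_pred, h2_pred, lam=1.0):
    err = np.mean(h2_pred != y)                          # ordinary error term
    trusted = h1_pred == y                               # where trust was earned
    new_err = np.mean(h2_pred[trusted] != y[trusted])    # trust-breaking errors
    return err + lam * new_err

def compatibility(y, h1_pred, h2_pred):
    trusted = h1_pred == y
    return np.mean(h2_pred[trusted] == y[trusted])       # share of trust kept
```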

And the y-axis is the accuracy of the AI algorithm. The algorithm is going to be most accurate if it has no compatibility with the past; it can just optimize over the whole data set. That's why those points are at the top. The three lines are different optimization functions, different dissonance terms we study in the paper, but focus on the blue one, because that seems to do the best. What you see here is that you get the most accuracy when your compatibility is low, but you don't have to sacrifice a lot of accuracy to get to a higher level of compatibility; you can continue along that curve quite a bit. If you really want to get a lot of compatibility, though, you have to start sacrificing accuracy. Our idea is that we want to give these kinds of graphs to AI developers, so that when they are going to update a system, they can look at the graph and say: this is the point I want to be at; I have to be at this accuracy, which means I will have this much compatibility with my past models, and maybe I'm going to have a strategy to communicate what has changed to my human user. And we can talk more about this at the break or something. Up there? Yeah, great, thank you very much. Okay, this is it, thank you.

All right, so we have some time for comments, discussions, questions. Anything? Okay, John.

Really great talk. You mentioned dissonance and the degree to which an update is compatible. How do you quantify that? What constitutes a mental-model-breaking change versus something that's just different, that might not be noticed and might not matter to the person?

That's a great question. We are taking a kind of simplification approach there. What we are doing in this paper is saying: anything you were getting right before, I assume the human has learned to trust you on, and if you start making mistakes on those instances, it's going to be really problematic, because I had trusted you, now the machine is making a mistake, and I'm not going to be aware of that mistake or able to correct it. However, in many settings in practice, people have personalized experiences, so they develop trust in different ways; their experiences dictate where they are going to trust more or less. So the better way of thinking about this in the future, which we don't know how to do yet because we really don't have computational models of trust or mental models, would be to think about the personal experiences of people, try to model what that trust looks like, and put that in as a component of backward compatibility.

I also want to remind people again to go ahead and put up your post-its; Andrew will walk around and take a look at them. So if you have post-its and you haven't put them up yet, please do. I guess no more questions? All right, then let's give one more hand to Ece. And our next speaker now is Shamsi.

Give me a few seconds while this gets set up. You know what, let's take this time to write some notes, some research ideas; get things down. Yeah, go for it.
Give people a couple of seconds to finish, their. Notes and we can start at a private point yes, okay. Ready. Let's. Go, okay. So I know that Linda, started her stopwatch, awesome. I already, lost two seconds, so. We. Are going to move from automation and Trust back, to work and we.

Already Had some very fruitful, discussions, during the first session in the morning and I'm, going to continue on that and kind. Of like talk about a couple of projects that we have done about, work, in the car or in the context, of when you're commuting. So. Oh yeah, it's, nice to have this now so. Again, getting. Things done is no longer confined to the desktop anymore because we're commuting, we have mobile devices that have increasing, capabilities. We. Can basically carry work with us everywhere, it's, not necessarily, a good thing it's just that we are starting to have that capability more, and, the. Key point that also came out today is that we are spending a substantial, amount, of time commuting and think, about the cities of today is that people are being pushed more and more out into, the suburbs which means that they're commuting more some. People have remote working capabilities, most people don't so that is kind of like lost time in productivity and so, we're thinking about can are there opportunities, to make use of that time and that's kind of the core of this, entire workshop. So. We. Know that driving is no longer a single attention, task and I just wanted to point out like for scenarios which. I had also briefly, brought up this morning so I'm not going to belabor over, this primarily. Manual cars continuous. Attention. Is driving, and we, could have opportunity, interleaving. Of other tasks, we have talked about this this. Morning connected. Car where is that you just have broader range of multitasking. Capabilities, because now you can talk to the internet and and the, cloud. Semi-autonomous. And autónoma so I know that someone had talked. It being more about self-driving cars versus non but there's this weird, spot where self-driving, cars are not always self-driving, and so, that's where driving becomes secondary and you have to interleave, just like paying attention to the road so that you're ready for take over and then finally autonomous, where, cars drive themselves you, have potentially, full use of the commute time but, there are other design, considerations. Around that so this is the Cinematheque environment, it's moving, and all. Of that so. We talked about this in the morning we all know that, engaging. In something is intentionally, challenging. And how do we design tests so that it can deal with that limited attention scenario. So. There is a flipside we. Know that we do a lot of mind wandering when you are driving because typically, we don't have anything else to do so driving, sometimes become so automated that you can allow yourself to think about different things but, it also means that mind-wandering, can negatively impact your, focus on the road and it's kind of like how you're able, to control your conscious thoughts that gets lost so. There has been research that shows that strategically-placed. Concurrent. Tasks during your driving can actually help people be more vigilant, and focus, on the road better so, there is some opportunity there we listen to music we, try to talk to other people so that we are alert and not falling asleep, so. There are also and I had hinted at this at in talk, this morning is that there are moments during the driving where we feel that we might be able to handle, things better. Than other times during driving that we know or at least we should know that we can't handle others tasks, and. 
So, thinking about new experiences for semi-autonomous and autonomous vehicles: what are some of the things we can do in the car? I like to think of this in terms of four aspects. One: what are the non-driving tasks we can safely do in the car? If I tell you, yes, go ahead and write your CHI paper in a car that is not self-driving, that is probably not the right thing to do; but we're already doing some of these things in limited-attention environments. Two: we now have assistants making their way into the car that are potentially going to help us with some of these things. But just because an assistant can help you doesn't necessarily mean the tasks are designed in a way that they should be presented to you in the car.

So I think there's a design opportunity there: the assistant that helps you at home, like Alexa or Siri, has an interaction style that is very different from what the interaction in the car should be, and that is another design opportunity. Third, we talked a lot about microtasks this morning: thinking about tasks that do not require sustained attention. How can we break a task down to a level where it makes sense for the user to do it, while also taking away the interdependency, where the next task I'm going to do needs this task completed before I can move on? And then finally, and this is also an area that is ripe for research, we're not only designing for drivers; sometimes we're designing for the passengers. My Highlander does not let the passenger interact with the GPS system when the car is driving. I'm not driving, so I should be able to do that, but it doesn't let me. And a driver in a self-driving car is kind of like a passenger, assuming it's fully self-driving. But again, it's a moving environment: how do we design for it, keeping in mind things like motion sickness, keeping in mind the resource constraints, and all of those things?

So I want to talk about two projects. The first one was not necessarily done with the car in mind, though it was motivated by it; with the second one, we really wanted to push the boundaries of thinking beyond just communication-level tasks.

This is interesting because, I know, this morning some people said: I don't want to carry on work all the time; I need to be able to detach from work at the end of the day. And people particularly look at the car as the place where they start disengaging from work and ramping up to home. So that was my motivation: thinking about how we use this time, as people transition from home to work and from work to home, to allow people to disengage from and reattach to work.

There is evidence in occupational health and organizational behavior research which shows that adequately detaching at the end of the day actually helps your productivity in the long term. And for reattachment, probably everyone starts the day by checking their email, getting caught up with the things they want to do, maybe going through their to-do list; you spend maybe half an hour trying to get ready for work. So we were thinking: can we move some of those actions off the desktop to the time when you are coming in to work?

We developed a conversational agent that asks just a couple of questions at the end of the day to help people disengage, and then brings that information back the next day as they are walking into work, to get them into that frame of mind. This agent was not used in the car; I will make that claim right away. Part of the reason is that we didn't want to run that study in the car; we wanted to see how effective these questions were, so it was set up so that people could use a Skype client to interact with the agent. But it could definitely be used in the car, and I'll show a video in a bit. The questions are super simple: What did you do today? What do you want to do tomorrow? How do you feel about work today, and how do you want to feel about work tomorrow? And the next day it just brings back that information and asks you what is the first thing you want to do to get there. So I am going to quickly switch to a concept video. (What is this... I did open it... I thought I could open two instances.)

"Hey Cortana." "Hi Alex. I know you had a big day today; how did it turn out?" "Honestly, pretty stressed. I had a few things to do today: I made edits to my paper, I finalized the design for my study, and I started the code base for the new project we're working on." "I might be able to help with some of those things. If you had to pick one or two things that you want to do first, what would they be?" "I'd love to get the bugs addressed by noon tomorrow." "Cool. I set a reminder to send the email to Kelly for tomorrow morning, and it looks like you have an hour free between nine o'clock and ten o'clock. Do you want me to block that out so you can work on the bugs?" "Sure, that'd be great." "Done and done. We can revisit the other things tomorrow."

So again, this is a concept video; it does not exist at this level, but it showcases what we are envisioning with this. And this is Alex, the intern who worked on this project, so we thought it would be nice to showcase him.

"Hey Cortana." "Good morning, Alex. Did you sleep well?" "Not too bad, and I'm ready to get started." "Let's do it. First off, remember to email Kelly when you get inside." "Anytime." "And now's the time you set aside for those bugs. If you want, I can hold any emails that aren't urgent until after ten o'clock." "Would you? Thanks."
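The exchanges above follow the simple two-session protocol described earlier: a few detachment questions in the evening, replayed as a reattachment prompt the next morning. A minimal sketch of that flow (the question wording is from the talk; the storage and scheduling here are invented simplifications):

```python
# Sketch: evening detachment check-in, morning reattachment prompt.
EVENING_QUESTIONS = [
    "What did you do today?",
    "What do you want to do tomorrow?",
    "How do you feel about work today?",
    "How do you want to feel about work tomorrow?",
]

def evening_checkin() -> dict:
    return {q: input(q + " ") for q in EVENING_QUESTIONS}

def morning_prompt(answers: dict) -> str:
    print("Yesterday you said:", answers["What do you want to do tomorrow?"])
    return input("What's the first thing you want to do to get there? ")
```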

Okay, so that showcased what we are looking at in terms of this experience. You might notice that it was totally speech based, with these short interactions, and it sounded like a conversation you would have in the car. We then wanted to take this a bit further and see whether we can push the boundaries of the things we can do in the car (and I see Gagan looking at me).

So again, this was an experiment, particularly motivated by the fact that we can take these bigger tasks and break them down into microtasks: are there parts of a bigger task, around PowerPoint or around document editing, that we could actually present in the car in a safety-aware manner? This is the simulator we used, and it's very conversational. And this, again, just to highlight: this is not what we intend people to do.

So when you think about non-driving tasks for the car: again, it's a speech-based interaction that we're looking at. Conversational agents are there, and they're getting better, but you could also embed awareness about safety, because a lot of the information these agents could use is coming from the car. What is the car's speed? What is the environment around it? We could use sensors to figure out what the cognitive load of the user is, and we could use all of that information to filter the type of task we even allow the user to get engaged in. So again, that's where microtasks come in, and we've discussed what we mean by microtasks: not broken down to the level where they no longer make sense, but things that people can quickly do without having to depend on tasks before or other tasks after. And alerts were the other option we were thinking about in terms of safety. I'm going to show another concept video, because I think it showcases what we were looking at.

"Nick is working on a presentation he is about to deliver in a meeting. Imagine Cortana keeping track of where he is in the presentation and helping him with the final touches as he drives to his meeting, while also making sure he drives safely." "Where were we?" "You were working on the motivation slide." "Perfect. Okay, I'm ready." "The title of the slide says Motivation. Do you want to add text or graphics to this slide?" "I found a picture and added it. There is a bicyclist to your right." "Do you want any other text on the slide?" "No, I'll speak over it." "Okay, that's done. Check this slide before your presentation." In case you didn't catch it, a picture of an autonomous car was automatically inserted there.

So again, we're nowhere near this, but in the future we could imagine our assistants being smart enough to do these kinds of things for us. One of the other things I did want to point out is that, if you notice, the interaction is very fluid; it allows the driver to pause.
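The safety awareness she describes, filtering which microtasks the agent may even offer based on signals the car already has, could be gated roughly like this (the thresholds, task labels, and load score are invented for illustration):

```python
# Sketch: gate microtask suggestions on driving context. cognitive_load is
# assumed to be an estimate in [0, 1] from wearables or in-car sensors.
def allowed_tasks(speed_kmh, maneuvering, cognitive_load, tasks):
    if maneuvering or cognitive_load > 0.8:
        return []                          # stay quiet entirely
    ok = []
    for task in tasks:
        if task["demand"] == "low":        # e.g., yes/no answers, thought capture
            ok.append(task)
        elif (task["demand"] == "medium"   # e.g., short dictation, to-dos
              and speed_kmh < 60 and cognitive_load < 0.5):
            ok.append(task)
    return ok

print(allowed_tasks(45, False, 0.3, [{"name": "capture to-do", "demand": "low"}]))
```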

It's completely speech based. Of course, there are questions around this: I am now working on my presentation, or even thinking about my presentation, so I'm visually starting to think about it, and how is that going to conflict with my driving? There are those kinds of scenarios we have to think about, and that's why designing work for the car is so interesting: how do we suggest these microtasks, how do we get a measure of what the cognitive load is going to be, and how is it going to interfere with the current task?

The research questions here, which I'm going to go over quickly: we wanted to see how the secondary task structure, that is, how the microtask is presented by the agent, impacts people's performance; and how the context support, the support about the road, influences the driver's safety needs as well as their need to be productive.

Okay, very quickly. Drivers seemed to be split on the task structure question. Some people liked the agent being very directive; they would answer questions like, "does this slide have a picture, yes or no?" Other people hated that; they would rather dictate something and say, "go and make it work." The good thing is that the drivers did not think they could create polished documents, nor did they think they should, but they felt that whatever thoughts they dumped would be useful to carry on with later. Many said that even though they might not do something from the office productivity suite, thought capture or creating to-dos would be a good thing to do in the car.

I'll skip over this. The implications for design are that support for safety is important in these environments; it's not your regular desktop environment, and neither is it your mobile phone environment, where you're moving, you're on the go. Tasks should interleave with driving, so tasks should be designed in a way that they can be easily engaged in, but also easily disengaged from. And only some tasks are going to be driver-friendly; trying to put everything into the car is not going to work.

And on that happy note, I'm going to open this up. Great, I think that's great. Does anybody have questions or discussion points? And don't forget to write. Quite a few, wow. Okay, let's start with Duncan.

Great. I really liked the PowerPoint-while-driving theme; it's a nice idea. For the scenario you presented there, a person driving to the meeting: to me, that's like standing up and going for a walk, getting some distance from the work. And it's interesting that you chose to focus in on edits being done; it was about adding content to the slides, the kind of stuff you would traditionally do at the desk. Whereas, I don't know, my personal reflection on how that usually goes down is that the talk is rehearsed in your head and you're reconstructing that high-level narrative (and I'm getting nods, so you agree). So it's interesting that the interactions there are all about "put this figure in, add some text here," whereas you could imagine an alternative where you're talking through, conceptually, what you're going to be talking about, the structure of it all, and things are being rearranged: the slides and cells are not being edited, but they're being rearranged conceptually, the overall structure of the talk. A different level of abstraction.

It's definitely that, and I think it boils down to where we felt, at that point, people would have more cognitive load. From my personal experience: I have used the car to actually rehearse talks, and those are also the moments when I have felt I had no awareness of my driving; you manage to get from point A to point B and have no recollection of how you got there, which is scary. But I think it's a good point, thinking about which things would not require continuous attention on a task that is not driving. If we go back to self-driving cars, I think rehearsal is actually a perfect scenario: it doesn't require you to visually engage in anything, so there's no onset of motion sickness; it's a perfect thing to do in those kinds of scenarios. But I think it's also person-dependent.

Andrew. So this is really cool, and I really like the fact that you have an assistant who's thinking about helping. I wanted to point out that data, both from experiments and from crash statistics, shows that having a passenger decreases your chances of getting into a crash. Is it because of the kinds of interactions that go on? Of course you don't quite know, but the fact is that having a passenger probably means talking to the passenger. So one question would be: how would you do this task if it wasn't Cortana, but you and I sitting there, you're driving and you're telling me what to do? Intuitively, it feels like you'd be pretty safe. So I'm really always curious about how we might be able to learn from those human-human interactions and improve.
I don't know what your thoughts are.

Yeah, I think one of the things that also came up in the previous talks is trust in the system. Right now I wouldn't trust Cortana or Siri or anyone, and I may or may not trust a passenger in the car; when you're thinking about manual cars, you always need to be vigilant yourself. But learning from those interactions between passengers and drivers: an alert passenger would point you to things on the road ("keep an eye on that"), or modulate the amount of cognitive load the conversation imposes. So there are definitely things to learn there. But there are also good points about systems: they can take measurements that maybe a human can't. If you're looking at wearables, at all the sensors that could be put in a car, awareness that there's traffic coming up or that the road has suddenly changed in terms of the density of cars: those kinds of information a passenger may or may not be able to pick up on, but a system might.

So one of the questions for me is what this does to our workload as a whole. I have done some of those scenarios with a human assistant: on the motorway, going through my email, quite regularly. And I find this, even with a human assistant who knows very well what I do, extremely tiring.

So I usually arrive feeling I have done a lot of work. I did this very often on longer motorway drives, around one hour, and I arrived at work feeling I had already worked for a number of hours. So for me the question, putting this into the context of driving, is: has anybody looked at what it does to our perceived workload? If you study these things in the lab, that's a different thing; it's very easy to study them there. My feeling is that at the beginning it seems to be working extremely well, but by the time you arrive... In contrast, when I have listened to music or a podcast, I feel quite relaxed when I arrive. So technically there's not really a big difference, but I find doing work quite taxing. Did anybody study that?

Yes. So a few of us, together with John, actually have a National Science Foundation project trying to look at exactly that: trying to understand how much of a workload it is to actually do work in the car.

But it's the whole package... Right, that's a good point. So, Linda, can you repeat what he just said? What Albert was saying is that it's not just about the moment-to-moment interactions; it's more about the big picture, and in the long term, what the effects are overall.

Right. So I would add one point to that, in case there is a misconception: I am not proposing that we do more work in the car. There are scenarios; if you look at the first one, that was actually helping people disengage from work, so that on the way home you're not really thinking about work anymore and you can start ramping up to getting home. The other is that sometimes there are work thoughts that are there automatically, and what we're looking at is whether there are effective ways of getting those thoughts captured, so that you are able to relax and listen to music rather than thinking, "oh, I have this meeting with Eric, and I have to think about all the points I need to make during that meeting." That's at least one of my motivations to think about how we can use the time in the car effectively; but definitely, thinking about the well-being of the person is another key part of the research that I do.

So, just to repeat, basically what he said was that once we have this feature and we're able to use it, our employers may actually demand that we use it, and is that a good thing or a bad thing from an ethical perspective? We're just going to take one more question. Flora.

Okay, it's Flora from RMIT in Melbourne. Thank you, Shamsi; I wish my drive scenario looked like that, it's very relaxing. But the thing is, when I think about what is really missing: there are a lot of tasks happening, especially in the first and last mile of the drive.

For example, let me give you a couple of scenarios. I know the traffic is building up and I'm going to be late, and I have to find a car park where I usually get my train to work. I know that if I'm not catching that train in five minutes, I have to write an email to my ten o'clock meeting saying I'm going to be late, and there will be repercussions along the way: I have to message my students, "Hey, I will have to cancel my meeting with you because my ten o'clock meeting is delayed," and so on. A lot of things are happening in my mind. These are micro-tasks, and I personally think we're still far away from the scenario of drafting a document in the car, but these are the low-hanging fruit we should be tackling.

So my question to you is: have you explored a taxonomy, the categories of micro-tasks that people might be thinking about while they're driving? We've explored something like this in our other projects. Is it a navigational task, say, where to find the closest car park? Is it a finding task? Is it searching for information? These kinds of little micro-tasks.

So this morning we discussed what people want to do in the car, and I believe some of these communication needs came up. The scenario you described is a perfect example of that: yes, I do need to send that email, because I am worried that I'm going to be late and I need to get this information out somehow, and I suffer from that repeatedly. So when we think of productivity tasks, it's not only creating content or managing content; it's also these communication needs, and how we can design for them to happen in a safe manner. And one thing, and I don't know if it has been discussed here, because I do work with a lot of road-safety experts: one way to actually measure how much attention you're paying to the road, to the driving task, is the way you drive. For example, if you start swerving around a lot, that's when your intelligent system maybe should prompt you with a question: "Are you okay, and what is bothering you that we can help with?" Absolutely.
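To make that swerving heuristic concrete, here is a minimal sketch, assuming lateral lane position is sampled at a fixed rate; the window size and threshold are illustrative guesses, not validated safety parameters.

import random
from collections import deque
from statistics import pstdev

WINDOW = 50             # most recent lateral-offset samples (about 5 s at 10 Hz)
SWERVE_THRESHOLD = 0.3  # metres of lateral deviation treated as swerving

positions = deque(maxlen=WINDOW)

def on_lane_position_sample(lateral_offset_m: float) -> None:
    """Feed one lateral-offset sample; prompt the driver if swerving is detected."""
    positions.append(lateral_offset_m)
    if len(positions) == WINDOW and pstdev(positions) > SWERVE_THRESHOLD:
        print("Assistant: Are you okay? Is there anything I can help with?")
        positions.clear()  # avoid re-prompting on the same episode

# Example: feed a noisy stretch of driving that should trigger the prompt.
for _ in range(200):
    on_lane_position_sample(random.gauss(0.0, 0.4))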
I hope it's okay; let's stop here and say thank you to Shamsi. I just want to make sure that you all get coffee so that we can take the break and then do the breakout. These are really great comments, so for those of you I did not get to, please write your comments on a piece of paper so that we can tack them up, and then we'll try to group people together to discuss.

So I'm going to go ahead and give my last presentation. All of you can keep an eye on me now and time whether I'm able to keep within ten minutes; I think I can, though. Oh yeah, that's true, right? No, no. Why didn't I think of that? I'm just kidding. I'm going to wrap up by talking about everything we discussed with regard to trust and using our time in the vehicle, and what happens over time when we use automation. So I'm going to talk about adapting to technology in our cars. I spend a lot of time looking at behavioral adaptation, which is basically what happens to the operator over extended use of a system and how their behavior may change based on that use. Oftentimes it changes in ways that were unintended by the person who designed the system. This change can be based on many things in addition to the situation and the context; it is often based on how much experience we have, how familiar we are with the situation, and our motive, why we're driving to begin with. What's very interesting is that as technology has evolved, technology is adapting to the limitations of the human, but we are also adapting to the limitations of technology; we are adapting to technology while technology is adapting to us. From that we see different types of implications, and there are actually many, but for this particular workshop I wanted to focus on the perception of safe driving: what drivers think is actually safe when, over time, they are able to do more and more things without getting into a safety-critical incident. The amount of non-driving activity while driving then seemingly starts to increase.

We actually saw this in a study where we looked at people texting and reading while they were driving, over three time periods. We separated the individuals into riskier and more conservative groups based on driving performance measures, such as how closely they follow the car ahead and their propensity to speed, and you can see that over time the riskier drivers were actually more willing to engage in these non-driving activities.
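For a sense of what that kind of grouping looks like in practice, here is a minimal sketch on made-up numbers; the measures, the median split, and the data are illustrative assumptions, not the study's actual method or results.

from statistics import median

# (driver_id, mean_headway_s, speeding_ratio, tasks_per_hour in periods 1..3)
drivers = [
    ("a", 2.1, 0.05, 1.0, 1.2, 1.3),
    ("b", 0.9, 0.30, 2.0, 3.1, 4.5),
    ("c", 1.8, 0.10, 0.8, 0.9, 1.0),
    ("d", 0.8, 0.25, 1.5, 2.8, 3.9),
]

# Risk score: shorter headway and more speeding give a higher score.
scores = {d[0]: d[2] - d[1] for d in drivers}
cut = median(scores.values())
risky = {did for did, s in scores.items() if s > cut}

for group, label in ((risky, "risky"), (set(scores) - risky, "conservative")):
    rows = [d for d in drivers if d[0] in group]
    trend = [sum(r[3 + p] for r in rows) / len(rows) for p in range(3)]
    print(label, "mean non-driving tasks/hour across periods:", trend)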
