The Ethics and Governance of AI Course: Class 5, April 3, 2018

So, we have a guest today: Iyad, who runs Scalable Cooperation here at the Media Lab. You've seen some of his slides in my presentation already, but we'll let him present on autonomous vehicles, the Moral Machine, and some work he's doing on labor, and then we'll roll into a conversation and a case, and have a conversation with Iyad about his work. So, take it away, Iyad.

Okay. You have to use this thing. Is it working? Perfect, okay. So I'm going to present together with Eddie, who is a postdoc in the group, and we'll talk about the Moral Machine experiment, but also a bunch of other studies we've been doing around it. Maybe some of them will be familiar to you, but I hope we'll also show you some new results, so that will hopefully be interesting as well.

I wanted to start with this picture, because you've probably all seen it already in this class. I'm a very big fan of this model of the things that regulate, and I actually drew Larry Lessig himself in the middle here just to emphasize that. I'm really interested in how, when we have new kinds of agents like machines, like autonomous cars for example, it's helpful to think in terms of these four regulatory forces. It's useful because if we do, then we recognize that it's not only the architecture of the system itself that matters (what the car is capable of doing, what sort of sensors it has), but also the law: liability, who's at fault when the car causes damage, and so on, which is the obvious one. But markets and norms matter too. Market forces, the incentives of the people who would buy these cars, matter in this case as well, and they shape what sort of cars get built. And norms: what sort of expectations do people have? These norms differ between countries, for example. I would also put human psychology under norms: how do people's cognitive biases shape the way we perceive these types of machines?

And what expectations do we form about their behavior? So, keeping that in mind, let's move on to discuss some of the results we've got. If you're interested in a broader overview, we've written a piece, more like an op-ed, published last year, on the psychological roadblocks to the adoption of self-driving vehicles, and we classify these into three major ones. Oops; as you can see, these are very important roadblocks. One of them is ethical dilemmas, which is a kind of fear of the unknown; then there's a psychological bias about the ability to trust something that is not you; and finally there's opacity, something you cannot really perceive or understand, something you don't have a mental model of. Let's hope this projector problem stays isolated.

I also wanted to say something about the way we regulate machines. In one way, it's a bit like regulating products. The way you regulate an AI, a machine like an autonomous car, or a robot that is adaptive is maybe similar to the way we regulate chairs or children's toys: you cannot use certain materials in children's toys because they would be toxic; there are standards for which age groups a toy can be sold to, because some have pieces small enough to be choking hazards; there are certain warnings you have to put on the product for consumers. So it's similar to that. But regulating AI systems is also, in a way, like regulating humans, and I would love to hear the legal opinion here as well, because these systems exhibit things that chairs or passive children's toys don't: things like autonomy, in that they make decisions on their own; intentionality, in that the thing takes an action for a certain purpose; and adaptation, the ability to learn behavioral patterns that were not programmed in beforehand. That's not something a table or a child's Lego would do, unless you program it with Scratch or something.

Okay, so with that background, let's move to the case of autonomous cars and think about the promised benefits. The proponents of this technology say it could in principle eliminate 90 percent, some say 94 percent, of today's accidents, and the reason is that most of today's accidents are caused by people, by human misjudgment or human error: people cannot react in time, or they don't assess the situation well because they don't see objects in their periphery, and so on. So it seems like a no-brainer that we have to adopt this technology. There are obvious technical hurdles to overcome before that happens, and there are also legal hurdles that smart people who teach tort law think about: who's at fault when something goes wrong, is it manslaughter if you kill somebody, and so on. But I think there are also psychological issues we have to take into account, and these fall under norms, the public-acceptance category.
Let's start with one of them. This is actually new research we've done, and it has nothing to do with ethics; it has to do with trust. When should we deploy autonomous cars? Should we deploy them when they are as safe as humans, 10 percent safer, 50 percent, 90 percent? There was a study published last year by the RAND Corporation in which they said, well, perfection is the enemy of the good: how many lives would we lose if we wait? There are two ways you can lose lives. One is that the technology isn't safe, that it's less safe than humans; the other is that you wait for it to be perfected and people keep dying in the meantime. So they ran a whole set of simulations over different trajectories of technological improvement and different trajectories of consumer adoption, meaning how fast people buy these things, and they found that over a roughly 30-year horizon, more lives would be cumulatively saved under the "Improve10" policy, meaning we allow mass adoption as soon as the cars are ten percent safer than the average human driver, than under "Improve75" or "Improve90," which mean waiting until 75 or 90 percent of today's accidents have been eliminated. In some cases the difference is more than half a million lives. Imagine that; that's a lot of people. That's a serious public-health decision that regulators have to take.
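To make the logic of that comparison concrete, here is a minimal toy sketch of the kind of calculation involved. This is not the RAND model: the improvement curve, adoption curve, and baseline fatality figure below are made-up illustrative assumptions.

```python
# Toy comparison of "deploy early" vs. "wait for near-perfection" policies.
# All numbers here are illustrative assumptions, not the RAND inputs.

BASELINE_DEATHS = 37_000   # assumed annual road deaths with human drivers
YEARS = 30

def av_relative_risk(year):
    """Assumed improvement curve: AVs start at human-level risk and halve it every 8 years."""
    return 0.5 ** (year / 8)

def adoption_share(years_since_allowed):
    """Assumed linear uptake once regulators allow mass deployment."""
    return min(1.0, 0.1 * max(0, years_since_allowed))

def cumulative_deaths(deploy_when_risk_below):
    allowed_year = None
    total = 0.0
    for year in range(YEARS):
        risk = av_relative_risk(year)
        if allowed_year is None and risk <= deploy_when_risk_below:
            allowed_year = year
        share = adoption_share(year - allowed_year) if allowed_year is not None else 0.0
        # Fleet-average fatalities: humans at relative risk 1.0, AVs at their current risk.
        total += BASELINE_DEATHS * ((1 - share) * 1.0 + share * risk)
    return total

for label, threshold in [("Improve10", 0.9), ("Improve75", 0.25), ("Improve90", 0.1)]:
    print(label, round(cumulative_deaths(threshold)))
```

Under these assumptions the earlier-deployment policy accumulates fewer total fatalities, which is the shape of the argument being described, not its actual magnitude.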

Okay. So we looked at this and we thought: that's great, but will people actually buy these things, even if you allow mass production at that threshold, only 10 percent safer than the average human? I'm going to ask you first; let's do a quiz to motivate the study. What number would you choose here: "I am a safer driver than X percent of all drivers in the US." What's X for you? Who is safer than 10 percent of drivers in the US? How many of you drive, actually? Okay. Who considers themselves safer than 50 percent of drivers? Safer than 70 percent? 90? Okay. All right, you're a reasonable bunch. But basically there is something called the better-than-average effect, or illusory superiority, a psychological effect in which people consistently consider themselves better than average. If you take people's estimates of their own performance, the median shoots up well above the midpoint. I'll show you some of those results. After we asked that question, we asked: "I would ride with a self-driving car that was, at minimum, a safer driver than X percent of drivers in the US." Clearly, if you consider yourself an above-average driver, you would expect the car to be above average before you'd be willing to ride in it, because otherwise you would be safer than the car. So the bias that shapes the perception of your own capabilities is also going to shape your willingness to adopt the technology. We framed the questions in a couple of different ways, and I just want to show you the results. This axis is the percentage of accidents eliminated: we varied it from zero, the current status quo, through eliminating 10 percent of today's accidents, then 50, 75, and 90 percent.

And this is how many people would be willing to purchase, or to ride in, a car that has eliminated that percentage of accidents. Rationally, you would think that if we all believed we were average drivers, we would all adopt the cars at the 10 percent mark. But we don't: only 8 percent of people do, because most people believe they are much better drivers. At 50 percent, where half of the risk is eliminated, only 22 percent of people are willing to adopt them. In fact, 35 percent of people want more than a 90 percent improvement in safety. So that's a problem. We then asked: maybe this has nothing to do with machines; maybe people are just unwilling to get into a car driven by another human being, so there's nothing special about the cars being self-driving; it's just "I don't trust other people to drive me, because I believe in my own skills." So we ran that comparison, and the results look like this. When it's another human driver, you still see the effect: people have higher expectations, and if Uber were to publish these statistics, people would demand Uber drivers who are safer than average, because they think they themselves are safer. But you can see there is a gap: with automated vehicles the threshold is pushed further, the requirement is much higher. For a driver, or a car, that is 90 percent safer than the average driver, there's a big difference: 91 percent of people are willing to get into a car with another person who is that safe, but only 66 percent are willing with the automated vehicle. I think that's interesting: it shows there is a real cognitive bias, one that has nothing to do with ethics, that is going to delay the adoption of autonomous vehicles, and how to overcome this bias is a question we're now asking. This chart shows the potential buyers of autonomous vehicles by safety level, trying to connect people's self-appraisal of their own driving skill to the decision. For example, people who demand a car that is 50 percent safer than the average driver have a median self-assessment of 75 percent: they think they are safer than 75 percent of other drivers. Actually, that figure is the median of the entire sample; as the safety requirement increases, the color distribution, which shows the distribution of those respondents' self-assessments, shifts, and the median gets much higher than 75 percent. So there is a correlation between how good I think I am as a driver and what I demand from others. I wanted to highlight this as an example of a psychological barrier to adopting a technology you don't understand, even when the numbers give a very clear indication that it is safer, that it is better than humans. Now, a different psychological barrier is going to be the ethical dilemmas: whose safety the cars will prioritize. And in this case, this is something that I think
Has already mentioned I'll go, through it very briefly but, basically we found that now. In situations where, the crash is unavoidable. The. Question is whose safety. Should the car prioritize. For. Example, should it prioritize, the safety of the. Greater. Number of people even. If and. Should it harm a bystander, on the pavement. Example to, save. More people or, should, it compromise. The safety of the person in the car to. Save the greater number of people and that's I think is. Si. An important, question because it. Shows you the limits of the not, just the norms but also the market in solving.

this problem. The paper we published a couple of years ago found that people would never buy cars that would sacrifice them, but they want everybody else to do so: they want all cars to minimize harm, but they don't want to buy such a car themselves. This has the signature of a tragedy of the commons: I want what's good for society, but I don't want to pay the small personal price to bring that outcome about; and if everybody thinks this way, then we certainly won't bring it about. We also asked people whether they would accept regulation ensuring that everybody opts into harm-minimizing cars, and people said: well, if you regulate it, I won't opt in. That's the difference between the traditional tragedy of the commons and the situation we have now, because people can say, "I'll just stick with my own car, thank you very much; I don't want to opt into a system in which a car can be programmed to kill me under certain conditions." So you can imagine that this exacerbates the issue we found before; it's another psychological bias that will deepen people's unwillingness, or hesitation, to adopt autonomous cars.

I also want to address something quickly. People say this is a contrived scenario: it makes no sense, most driving doesn't involve ethical dilemmas, these situations are astronomically rare. There has been a lot of criticism of that sort, and my answer is that the dilemma is an idealization, an abstraction, of something much more common, something that in fact happens every second or every minute. Here's an example. If a large truck is driving next to you, you probably instinctively move slightly away from it, and by doing so you may drift closer to the bike lane, which slightly increases the risk to a cyclist if one happens to be in that lane. In expectation, that leads to a different outcome than the opposite choice of giving the cyclist the benefit of the extra space. We're calling this the statistical trolley problem, because in a single instance nothing may happen if you stay closer to the cyclist; but over a million instances of this situation, over many, many cars programmed to behave one way, perhaps you get a passenger fatality, because you are increasing the risk of collision with the big truck. And if the cars are instead programmed to move further away from the large truck, then again, over a million instances, you may end up with a single cyclist fatality. So once more we have one versus five. From a policy perspective it's a similar trade-off: a trade-off between actual lives lost that is an immediate consequence of how the car is programmed. And you can imagine that if the cars are programmed to maximize passenger safety, you will clearly get the second outcome, because the passengers are the customers paying for the car.

Just to show you that this is not science fiction or a purely intellectual exercise, it also shows up in patents. This Google patent describes a situation in which the vehicle has to decide whether to overtake a truck that is stopped at the lights, and the utilitarian, or self-preserving, calculus is very explicit. This is an actual table from the patent filing, where a risk penalty is computed from the probability of different events and the risk magnitude of those events. You can think of the risk magnitude as the expected liability from an accident of a particular type: hitting a pedestrian who runs out might be a hundred thousand dollars in liability, but it's a very low-probability event, and the product of the two is the risk penalty. So the companies developing this technology are actually thinking about this sort of calculus in very explicit terms.
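To make that calculus concrete, here is a minimal sketch of how an expected risk penalty of that kind can be computed. The maneuvers, events, probabilities, and dollar magnitudes are invented for illustration; they are not the figures from the patent.

```python
# Illustrative "risk penalty = probability x magnitude" calculation.
# The events and numbers below are made up; they are not taken from the Google patent.

candidate_maneuvers = {
    "stay_behind_truck": [
        # (event, probability, risk magnitude in dollars of expected liability)
        ("rear_end_truck", 0.0001, 20_000),
        ("blocked_and_delayed", 0.3, 10),
    ],
    "overtake_truck": [
        ("hit_jaywalking_pedestrian", 0.00001, 100_000),
        ("sideswipe_oncoming_car", 0.0002, 15_000),
    ],
}

def risk_penalty(events):
    """Sum of probability-weighted magnitudes for one maneuver."""
    return sum(probability * magnitude for _, probability, magnitude in events)

for maneuver, events in candidate_maneuvers.items():
    print(maneuver, round(risk_penalty(events), 2))

# A planner of this style would pick the maneuver with the lowest penalty,
# which is exactly why the choice of magnitudes acts as a policy lever.
```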

Then, of course, people like Jonathan Zittrain would say: well, maybe you should increase this number a little and reduce that one; that's your policy lever. It's something that people working on regulation have been thinking about, but they've been pointing to the lack of public consensus, and also to the fact that there are no metrics to evaluate against. So with that, I want to pass the mic to Eddie, who is going to talk about the Moral Machine project and some of the very recent results we've got.

Thank you. All right. So, to promote this public discussion, we built the Moral Machine. Joi talked about it before, so I'll just review briefly what it is. It's a website that generates random moral dilemmas faced by a driverless car and asks you what you think the car should do in each one. For example, here we have a driverless car heading down the street, and four pedestrians step in front of it: one man, one woman, one large woman, and a dog. They are all crossing at a "do not cross" signal, so they are all jaywalking, and we assume the brakes have failed. The car will either kill those four pedestrians, or it can swerve into a barrier, killing its only passenger, who is a female athlete.
Then it asks you what you think the car should do. Each user who visits the website sees 13 scenarios of this type, and at the end it shows a summary of their results; here is an example from one user. This is one person's summary, not the aggregated results. It also shows you, for example, when it comes to saving more lives,

where you fit on the spectrum compared to other people, and the same for protecting passengers and upholding the law. Of course these summaries are just for fun, because each person only sees 13 scenarios, and that isn't enough to form any accurate judgment about one person; but when we get all of these numbers for so many people, we can start making statistical inferences.

So let's try this together; let's do a Moral Machine exercise now. Here we have a scenario with a driverless car that is empty, which is fine, it's a driverless car. There is a boy in front of the car: if the car continues, it will kill this boy. The car could instead avoid that by swerving to the other side and killing one man. Raise your hand if you think that in this scenario the car should swerve and sacrifice the man in order to spare the boy. How many people would? All right. I assume the people who didn't raise their hands would sacrifice the boy; whatever. This is what the users of Moral Machine converged to: 73 percent would have swerved, sacrificing the man in order to spare the boy.

Let's try another scenario; it seems like you're a little bit tired. Again a self-driving car, and it's empty. One man is in front of the car; if the car does nothing, it will kill this man, who is crossing legally. The car could avoid killing him by swerving to the other side and killing a male athlete who is crossing illegally, jaywalking. So if we let it continue it kills the man who is crossing legally, or it can swerve and kill the illegally crossing athlete. Raise your hand if you think the car should swerve and sacrifice the athlete. Oh, there you go. Again, this is what the users converged to: 72 percent would swerve, sacrificing the illegally crossing male athlete and sparing the legally crossing man.

All right, last one. The car can continue straight and kill one legally crossing man, or it can swerve and kill two illegally crossing men. Raise your hand if you think the car should swerve in this scenario and sacrifice the two illegally crossing men. All right, fewer people. Our users were more split here: 56 percent would stay straight and 44 percent would swerve.

Okay, let's talk a little about how we designed the website. First of all, this is the family picture of our characters. We have men and women, the elderly, executives, doctors, athletes, pregnant women, a homeless person, a criminal, dogs, and a cat. In each scenario we play with four main elements. The first is whether the car should continue straight or swerve, the difference between omission and commission. The second is whether the casualties are pedestrians or passengers; sometimes we pit pedestrians against pedestrians. The third, if there are pedestrians, is whether they are crossing legally or illegally. And the fourth is the type of the characters; in this case we have elderly characters on one side and young characters on the other. More systematically, we have the element of interventionism, stay or swerve; the relation to the AV, which is passengers versus
pedestrians, or pedestrians versus pedestrians; the legality, whether one side is crossing legally and the other illegally, or sometimes legality is not involved; and then the character types, for which we have six attributes: gender, male versus female; age, young versus elderly; fitness; social status, low versus high; humans versus pets; and more characters versus fewer characters. So we have nine attributes in total. We also translated the website into nine other languages, ten languages in total, to increase its reach. More than four million users have visited the website so far, and together they have contributed more than 40 million dilemmas like the ones you just saw. There is also a survey at the end where people can add more information about themselves.
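As a rough illustration of that factorial design, here is a toy sketch of how a random dilemma could be assembled from those factors. This is not the actual Moral Machine generator; the sampling scheme and character pool are simplified assumptions.

```python
import random

# Toy dilemma generator built from the design factors described above.
# It illustrates the factorial structure only; it is not the real Moral Machine code.

CHARACTERS = ["man", "woman", "boy", "girl", "elderly man", "elderly woman",
              "pregnant woman", "doctor", "executive", "athlete",
              "homeless person", "criminal", "dog", "cat"]

def random_dilemma():
    relation = random.choice(["passengers vs pedestrians", "pedestrians vs pedestrians"])
    dilemma = {
        "intervention": "stay the course or swerve",
        "relation_to_av": relation,
        "side_a": random.sample(CHARACTERS, k=random.randint(1, 5)),
        "side_b": random.sample(CHARACTERS, k=random.randint(1, 5)),
    }
    if relation == "pedestrians vs pedestrians":
        dilemma["legality"] = random.choice(
            ["A legal / B jaywalking", "A jaywalking / B legal", "no signal involved"])
    return dilemma

if __name__ == "__main__":
    for _ in range(3):
        print(random_dilemma())
```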

More than half a million people answered that survey. Users came from all around the world; this map just shows that at least one user came from each of these places. You might notice some empty spots, but if you put this next to a map of the Earth's lights at night, the empty places match the places without electricity, or at least without lights, so the coverage is a very close correspondence.

These are the main results we found, and let me explain how to read them. On the right side we have sparing humans and on the left side sparing pets; this bar shows how much people prefer sparing humans over sparing pets. We take all the scenarios that have humans on one side and compute the probability that people spared those humans; we take all the scenarios that have pets on one side and compute the probability that people spared the pets; and we subtract the two values. That gives a difference of about 0.6, which is the biggest difference. Next we can do the same with the number of characters: take all the scenarios with more characters on one side than the other, see how often people spared the larger group versus the smaller group, and subtract. Because we varied the difference in the number of characters from one to four, you can see that at a difference of four characters, say five versus one, the effect is even bigger than sparing humans over pets. We can do this for all nine attributes and compare them. As I said, species seems to be the factor people agree on the most; then comes the number of characters, and after that age, meaning many people approve of sparing young characters over the elderly. In this graph, the options on the right are always the ones preferred over the ones on the left.

Sparing the young over the elderly is interesting, and you might think it is driven by some subset of the young characters, maybe the babies, and that with different characters people would change their minds. So we went and tried every kind of character. Again, we take all the scenarios that include one particular character, say the baby stroller, and see how often people spared that character, and we compare that with the adult man and adult woman cases, subtracting again. Here we see the baby stroller is much more likely to be spared than the adult man or woman. We can do this for all the characters and order them. The four most spared characters are the baby stroller, the girl, the boy, and then the pregnant woman, all of which have some element of youth in them. If we look at the bottom of the list, aside from the dog, the cat, and the criminal, we see the elderly man and the elderly woman directly above them. So age seems to be an attribute that most people find acceptable, or even obvious, to use.
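A minimal sketch of that subtraction, assuming the responses are stored as one row per decision with a column for the character type shown and a column for whether it was spared. The column names and data are invented for illustration; the published analysis is more involved and controls for the other factors in the design.

```python
import pandas as pd

# Toy illustration of the "difference in sparing probability" described above.
# Column names and data are invented; the real analysis controls for the full design.

responses = pd.DataFrame({
    "character_on_side": ["human", "human", "pet", "pet", "human", "pet"],
    "spared":            [1,        1,       0,     1,     1,       0],
})

def sparing_probability(df, character):
    """Share of scenarios featuring this character type in which it was spared."""
    side = df[df["character_on_side"] == character]
    return side["spared"].mean()

delta_p = sparing_probability(responses, "human") - sparing_probability(responses, "pet")
print(f"Delta P (humans vs pets): {delta_p:.2f}")
```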
So let's contrast this with one of the sets of guidelines that have come out. Germany is probably one of the first countries to put out general guidelines for the ethics of autonomous vehicles. It formed a committee of experts, professors of law, ethics, and engineering, and a Catholic bishop, and this committee put out a set of guidelines.

Let's see how much overlap there is between the public and those experts. The first point: the most strongly endorsed preference is sparing humans over pets, and this is in agreement with what the experts said, namely that the system should be programmed to accept damage to animals if that means sparing humans. Next in line is saving more people over fewer, and this is also broadly in agreement with what the committee wrote, which is that programming to reduce the number of personal injuries may be justified. Third in the ordering is sparing the young over the elderly, and this is in disagreement with the committee, which said that any distinction based on personal features is strictly prohibited. Next, people were more in favor of sparing higher-status characters over lower-status ones, which also disagrees with the guidelines. Fifth is sparing the lawful over the unlawful, which is not in disagreement with one of the guidelines, the one saying that parties involved in generating the risk should not sacrifice the uninvolved parties; we can read that as saying that if the dilemma came about because of some jaywalkers, the non-jaywalkers should not be the ones sacrificed.

Next, we tried to see whether different populations have different preferences. We split our users by age into two groups, younger and older users, calculated the same nine numbers I showed you before for each subpopulation, and then subtracted them. What I showed you before was a delta P, the probability of sparing one thing minus the probability of sparing another; this is a delta delta P: the preference for sparing, say, humans computed among older respondents, minus the same preference computed among younger respondents.
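Written out, one simple way to formalize what was just described is the following; this is a sketch of the idea, not the exact estimator used in the published analysis.

```latex
\Delta P_a = P(\text{spared} \mid \text{attribute } a \text{ present})
           - P(\text{spared} \mid \text{comparison attribute present})
\qquad \text{(e.g. humans vs. pets)}

\Delta\Delta P_a = \Delta P_a^{\,\text{older respondents}}
                 - \Delta P_a^{\,\text{younger respondents}}
```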

For example, we can see here that older users are more in favor of sparing humans over pets than younger users are. We can also see that younger users are more in favor of sparing the young, which makes sense, and that younger users are more in favor of sparing the fit and the higher-status characters, which could also be read as aligned with their self-interest. We can do the same split on other attributes: education, more versus less educated users; male versus female users, where female users are more in favor of sparing females; and the same with income, political views, and religious views. In most cases there are not many differences.

Next, we can do this at the level of each country: we calculate the nine numbers for each country, construct a distance between every two countries, and use that to cluster them. We find three big clusters of countries. One we call Western, because it contains many Western countries; another we call Eastern, because it contains many Eastern countries; and the third we call Southern, because it contains Latin American countries and some former French colonies. We can also calculate the nine numbers for each cluster: for example, Eastern countries are more in favor of sparing the lawful, and Southern countries are more in favor of sparing females, compared with the other two clusters.

Then we can see how these findings correlate with other cultural measures. One is individualism versus collectivism: some societies are individualist, in the sense that they trust institutions and care about personal reputation, while others are more collectivist and care more about social connections. We see that the more individualist a society is, the more it tries to save the larger number of characters, because an individualist society holds that everyone counts equally, so more people is always better than fewer. Another measure is the rule of law, how strong the law is in a country, which also predicts people sparing the lawful more. I'm going to skip this. We can also compute a distance of each country from the US: if the US puts out guidelines at some point, it might make sense for the countries closest to the US to copy those guidelines, and less sense for the countries that are far away. And we show that this measure corresponds to, or correlates with, other measures computed by other people, such as genetic distance and cultural distance from the US.
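A small sketch of the clustering step, assuming each country is represented by its vector of the nine preference scores. The country names and numbers below are placeholders, and the choice of Euclidean distance with average-linkage hierarchical clustering is an illustrative assumption rather than the exact procedure from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Each row is a country's vector of the nine preference scores described above.
# Countries and values are placeholders, purely for illustration.
countries = ["A", "B", "C", "D"]
preferences = np.array([
    [0.6, 0.5, 0.4, 0.2, 0.1, 0.3, 0.2, 0.1, 0.0],
    [0.6, 0.5, 0.5, 0.2, 0.1, 0.3, 0.2, 0.1, 0.1],
    [0.5, 0.3, 0.2, 0.4, 0.3, 0.1, 0.4, 0.2, 0.2],
    [0.5, 0.3, 0.1, 0.4, 0.3, 0.1, 0.4, 0.3, 0.2],
])

distances = pdist(preferences, metric="euclidean")   # pairwise country-to-country distances
tree = linkage(distances, method="average")          # hierarchical clustering
labels = fcluster(tree, t=3, criterion="maxclust")   # cut the tree into at most 3 clusters

for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```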
And with that, I'll hand it back to Iyad.

Basically, the bigger picture here is that we have a new technology, autonomous cars, machines capable of what Joi calls kinetic autonomy, which I think is a really cool term. They are capable of physical harm, unlike some of the other systems you may have studied, which are capable of different kinds of harm, psychological or economic. But I do think autonomous vehicles can also lead to non-physical harm, things like economic harm, for example by replacing three million truck drivers in the US, and I don't know how many taxi drivers, Uber drivers, and other people who rely on the transportation sector to make a living. So I wanted to give you a very quick overview of some of the work we're doing that tries to move this discussion from thinking about very coarse categories of jobs, like "drivers will be automated" or "professors will or won't be automated," to really thinking about the skills that constitute those jobs. Automation automates specific skills that perform specific tasks; it doesn't take a job, or a person, and automate them wholesale. And we've

seen this before, because in some cases people are able to adapt. Sometimes a factory worker is completely automated away over time; in other cases a bank teller is only partially automated: the cash-handling tasks are automated and the tellers start doing other things, adjusting the composition of their work by shifting to other skills. What we wanted to do is map those skills and their structure, so that we can understand adaptation in the system. It seems that mainstream economics has not really gotten to the point where it can model these sorts of dynamics, so we resorted to other techniques. The motivation is: how can we quantify this idea of racing with the machine rather than racing against the machine? Racing with the machine presumably means developing skills that are complementary to the machine, rather than trying to outdo the machines at the things they are encroaching on.

The first thing we did was look at differential impact. We took some numbers people have come up with on the impact of automation on jobs and looked at them spatially, and we found that the impact will not be uniform: the places in blue are expected by AI scientists to experience a lower impact than the places in warmer colors. Then we found a very interesting trend: city size matters. The impact is a function of the size of the local labor market, which is how economists refer to a city's labor market, and smaller cities are, comparatively speaking, going to experience a greater impact. That's worrying, because there is already significant urbanization, people are moving to cities to find jobs, and this trend seems likely to accelerate in the presence of automation.

The second thing we started doing is using biological metaphors to think of the labor market as an ecology in which there are mutualistic relationships between different workers, relationships that operate at the population level and have potentially nonlinear dynamics. Let me explain what that means. This is a bipartite network showing the relationships between different species of bees and different species of flowers. Ecologists map these networks from data, by investigating which bees pollinate which flowers; the flowers optimize their nectar to attract certain kinds of bees. And there are interesting relationships here, because if two flowers rely on the same species of bee and that bee goes extinct, maybe because it cannot survive certain conditions or temperatures, then the problems can start cascading: that bee can no longer pollinate the flowers, the flowers die, another species of bee which relies on them for nectar also dies, and a cascade forms; though not every change creates a cascade. The way mathematical ecologists study this is by projecting these networks: they create a network of dependencies between the flowers,
a network that captures whether, if this flower goes extinct or dies out on an island, these other flowers might also be affected indirectly through the bees that pollinate them; and the same projection can be done for the bees. We took this model and started doing similar projections to map the dependencies between skills. An occupation is a bundle of skills: a job is basically a set of tasks that need to be completed, and workers provide the skills that complete those tasks. You can use this to project a network of skills that shows those dependencies, so that if a skill gets automated, we can think about similar kinds of cascades that may occur.
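As a sketch of that projection step: if you store the job-to-skill relationships as a binary matrix, one simple one-mode projection just counts, for every pair of skills, how many jobs require both. The matrix below is a made-up toy example, not the Department of Labor data.

```python
import numpy as np

# Toy job-by-skill incidence matrix (rows = jobs, columns = skills); 1 means the job needs the skill.
# The data are invented for illustration.
jobs = ["truck driver", "bank teller", "software developer"]
skills = ["multilimb coordination", "spatial orientation", "basic numeracy", "programming"]

B = np.array([
    [1, 1, 1, 0],   # truck driver
    [0, 0, 1, 0],   # bank teller
    [0, 0, 1, 1],   # software developer
])

# One-mode projection onto skills: entry (i, j) counts how many jobs require both skill i and skill j.
skill_projection = B.T @ B
np.fill_diagonal(skill_projection, 0)   # ignore self-links

for i in range(len(skills)):
    for j in range(i + 1, len(skills)):
        if skill_projection[i, j] > 0:
            print(f"{skills[i]} <-> {skills[j]}: co-required in {skill_projection[i, j]} job(s)")
```

The same projection applied to the rows instead of the columns gives the job-to-job dependency network mentioned later.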

Now, when we mapped this network, we first used the technique of César Hidalgo, who is in the next building and who did this same kind of network-mapping exercise for the products that countries export. The interesting thing he found is that that network has a core-periphery structure: there is a periphery of very simple products, raw materials like fish or vegetables or fruit, and in the middle there are complex products with many dependencies on other products. Countries start at the periphery and try to move toward the middle as they develop. We expected to see the same thing for skills: a core of important key skills and a periphery of specialized, somewhat obscure ones. But what we found is that we have two separate cores. Let's go back; this was striking for us, because why would you observe this? But then it made sense: there is a core of skills that are sensory and physical, and a core of skills that are social and cognitive, and they seem to operate separately, with a narrow bridge between them made of skills fundamental to all jobs, things like memorizing numbers, basic numeracy, and task switching, which everybody needs to do. Then, when we projected the network onto jobs instead of skills, we observed the same kind of polarization. And this is something we're still working on; a student in the group, Judy, is finding that this structure is getting more polarized over time as well, which I think is even scarier, because it means that if you live in one part of this space, it gets harder and harder to switch: there are no adjacent skills, so you have to learn everything wholesale.

And it matters, because if you are a person whose skills sit predominantly in the right-hand, sensory-physical cluster, you don't make a lot of money; this is the median household income for these jobs; and the more you move to the left, the more money you make. You can see that chief executive is very much on the left side. There is also no smooth transition between the two worlds: there are not many people with some skills on the right side and some on the left. What I think is scarier still is that not only individual workers exhibit this pattern, but also cities. Here I have three characteristic cities. One is Yuma, Arizona, which I believe is a farming city, and a small one, in line with the previous result. If you take all the employment statistics, break the workers down into their skills, add those skills up, and project them here to see what the city as a whole is skilled in, you see that it is skilled mostly in things on the right side, in the sensory-physical cluster, which, by the way, is also much more susceptible to automation, because it includes physical things like finger dexterity and manipulation. You can see Detroit here sort of straddling the two worlds.

By the way, these are median household incomes. And then, finally, we have New York, which people describe as a very resilient city, and it very much lives on the left side. So as we think about retraining truck drivers, whose jobs sit, to a large extent, on the right side, because they require a lot of sensory skills, multilimb coordination, spatial orientation, and things like that, and that work is clearly being automated, as we know from Musk and others, we have an issue: it is not obvious how you can reconfigure this network by retraining people. We hope to inform those kinds of strategies going forward. We're really interested in the short term, in what sorts of retraining programs could work, but also in the extreme scenarios people talk about, namely mass unemployment. I would say mass unemployment is roughly equivalent to an ecosystem collapse: the system just cannot sustain itself anymore. These are not our results; these are results from different kinds of perturbations applied to ecosystem networks. For example, this is a study by László Barabási, across the river, in which you start deleting nodes: you make certain species of flowers extinct, you play out the dynamics of the network, and you see how the network reconfigures itself and at what point it collapses, meaning the abundance of the species goes to zero because the system can no longer sustain itself. There are different points at which this happens; this is one particular parameterization. You can also do link loss, where you remove links between nodes rather than deleting the nodes themselves, and weight loss, where you change the abundance of certain things: all of a sudden you reduce the number of flowers, or, if these are skills, you inject a certain skill into the system by retraining a lot of people in it.
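As a sketch of what a node-deletion experiment of that kind looks like computationally, here is a toy version: remove one node at a time from a small network and watch how much of the network stays connected. The example network is invented, and the size of the largest connected component is just one simple robustness measure, not the ecological dynamics used in the studies mentioned above.

```python
# Toy robustness experiment: delete one node at a time from a small undirected
# network and measure the share of the remaining nodes that stay in the largest
# connected component. The network below is invented for illustration.

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "D"), ("E", "F")}
nodes = {n for edge in edges for n in edge}

def largest_component_share(kept_nodes, kept_edges):
    """Fraction of nodes in the largest connected component (simple breadth-first search)."""
    adjacency = {n: set() for n in kept_nodes}
    for a, b in kept_edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    unvisited, best = set(kept_nodes), 0
    while unvisited:
        frontier, size = {unvisited.pop()}, 1
        while frontier:
            frontier = {m for n in frontier for m in adjacency[n]} & unvisited
            unvisited -= frontier
            size += len(frontier)
        best = max(best, size)
    return best / len(kept_nodes) if kept_nodes else 0.0

for removed in sorted(nodes):
    kept_nodes = nodes - {removed}
    kept_edges = {e for e in edges if removed not in e}
    share = largest_component_share(kept_nodes, kept_edges)
    print(f"remove {removed}: largest component holds {share:.0%} of remaining nodes")
```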

So what we're doing now is applying similar methods to the network of skills, and the early results show that we can actually predict things from history, like recovery from the last financial crisis, better than some other models, so there is information in this network that we can extract. Once we have shown that we can predict the past, we want to predict the future and be speculative, but in a more intelligent way, about what might happen in these types of systems.

We wanted to end with a few questions for discussion; of course we'll leave it to the course instructors to run it, but I just wanted to throw these questions out. In relation to jobs, some questions I find interesting: Is it realistic to assume employment could collapse, or is that an insane question? Should we slow down technological progress? It provides lots of benefits, so how do we resolve the trade-offs: maybe we end up in a more unequal world if we move forward too fast, but maybe there is prosperity we give up if we don't. I like thinking in terms of trade-offs, because everything in life is a trade-off; you can't have it all, much as I wish we could. And if we did adopt something like universal basic income to support a world of mass unemployment, assuming that is where we're heading, could that introduce new kinds of social problems, or would we find other things to occupy us and keep us happy? The other thing I find fascinating is this question: if governments become powerful enough to police the world with drones and robots, why would they need to opt into the social contract? Are they compelled to, or could they just force us into a different kind of regime? I think that's an interesting scenario to think about as well.

We also have discussion questions related to autonomous cars more broadly, on the ethics side. When should we allow autonomous cars on the road? Should we take psychological barriers, this aversion to trusting the vehicles, into account? How do we overcome the aversion: do we just educate people, or is there something better we can do? Should we force people to adopt autonomous cars and ban them from driving? Also, why should any of the following parties be involved in regulation? What are the incentives, and

what are the moral hazards? Manufacturers, for example, want to sell cars; what does that lead to? It leads, for instance, to prioritizing the driver over pedestrians, unless they are somehow forced to do otherwise. What about policymakers: do they have incentives as well, and can there be regulatory capture of some sort? And the public: you've seen that public opinion conflicts with some of the German guidelines for autonomous vehicles, and in some cases it conflicts in a way you could argue is good; the public thinks you should prioritize kids, the guidelines say don't use age, and maybe saving kids is not so bad. But sparing people of higher status over homeless people is also a public opinion that conflicts with the German guidelines, and that seems like a bad sort of disagreement, one where you probably should overrule the public. So what role does the public have, and how do you combine these opinions, reflecting on the Moral Machine results? How can we resolve these disagreements between the public and policymakers? And to what degree should the differences between the country clusters be taken into account in formulating guidelines, or other sorts of processes, around these technologies? With that, I think we'll leave it, and we'll be part of the discussion, at least for the first part.

Okay, so we're going to go back to slide 8. We've had discussions around some of these issues in previous sessions, but I think this one on labor is something we can use as part of the case. You'll be the consultant coming to Detroit, where, let's say, the community is trying to ask some of these questions, and so we can ask you questions and have opinions about what we think. I think there are a couple of pieces: one is, what are you telling us in Detroit? What would you tell us to do? Well, as a consultant, it depends: if my goal is just to improve efficiency, the overall productivity of the city or of the factories there, then I would say, automate everyone whose skills are on the right. I guess one question I would ask you is, what is your prediction, and how confident are you about it, for what will happen to Detroit in five years if we keep the current course?

We're not there yet in terms of our results, I would say. I think it's super hard to make these singular predictions about a particular city. The point we want to get to is being able to run an ensemble of simulations of different scenarios, just like that study of the adoption of autonomous cars and how many people would die, and then say, on average, or here is the distribution of outcomes, and it seems that under this regime things go badly. So how would we use your tool? Is it a dashboard we keep watching as we move into this? How are you useful to us? At the moment the tool is mostly visualization; the next step is to add the simulation component. For that, we basically want to understand the consequences of different assumptions more explicitly: you say, well, I'm going to educate more people in these sorts of skills, then you run the model, and if the dynamics say the system will collapse anyway, then those policies aren't helpful. And we did a whole session on algorithmic bias and things like that, so we know an algorithm is only as good as its underlying data, and the outcomes are only as good as how we tune the algorithm. You're an algorithm using data, so there is a kind of meta question about accuracy which we can go down and tune: as a city council we can say, wait, we don't like the way you're collecting the data, we don't like the assumptions you're making. So first there would be an open conversation about whether your models and methods are wrong. I think that is a very good point to bring up, because I think today the way we run the economy is based on wrong models, and there are smart people who argue that this is worse than flying blind, because it gives you the perception that you have a model, that you've been thoughtful about it, that there is a theory and somebody won a Nobel Prize for it, and the model says raise interest rates or lower interest rates and that's the best thing for the economy; and then something completely random happens, something completely different, or there's a collapse, a recession you couldn't anticipate. And somehow, even though people blame the economists for the advice they gave based on their models, the system doesn't change: you just bring in different economists and they make the same kinds of mistakes. This is why I think it's good to think in terms of ensembles of simulations of different scenarios and then do something that minimizes average risk, rather than being extremely confident.
Yours is, like you mentioned to me once, more like a weather model, like predicting the weather. Yes. And you know the weather forecast isn't always correct, but you know that it's useful. Yeah. I remember at Davos there was this funny session; I was on the climate team a few years ago, right after the financial collapse, and the finance people were next to us, and we said, "ah, you guys didn't see it coming, your models were wrong," and they said, "you guys saw it coming but didn't do anything about it." So I think those are the two pieces: you've got models that are wrong, and then you have models that are roughly right, like some of our climate models, but we have no political will, or even a method, to intervene. And I think those are the two things we would face as a city: even if we knew what was going on, could we actually do something? But I want to start to involve people. I mean,

like the first question: is it realistic to assume employment could collapse? There's a multi-layer thing here. I don't know how many people here have economics backgrounds, but economists don't use these network models; they use different models; and yet economies do collapse. So do you want to frame the question as, what's the argument for or against it? Yes. There's this idea that economists like to study systems in equilibrium, and some systems are always out of equilibrium; this is actually a big debate in economics. I would say mainstream economics studies equilibrium models because it assumes the system is always at equilibrium, or will quickly get there, but there are people who say it's never there, and you just need the dynamics to land in a good place. That's what Martin Nowak at Harvard always talks about: people in evolutionary dynamics don't think in equilibria, they think of evolution as a search process. So you can't really model economics properly if you only think about equilibrium states; there are economists introducing these ideas from complex systems, but they've been somewhat marginalized, I would say. Anybody have an opinion?

I have a question, but it's actually not related to this first point; should I ask it now? Yeah, okay. It was actually about the Moral Machine presentation, and maybe it's a clarification question, or an expression of skepticism, perhaps from a person who comes from the humanities. I'm just wondering what the rationale, or the question underlying your survey and approach, is, because to me it feels like asking people whether they prefer to swerve or to hit, which of two alternatives they prefer, is an expression of a preference, or of a feeling; but should that be the basis for thinking about what is right in terms of regulation? Aggregating those reactions might not be the best way to think about what the law might need to do. I mean, it's a question. I guess I'm curious what you would put in its place; would it be problematic, if I understand the question correctly... are you saying that asking certain questions in itself frames the discussion in a particular way? I guess my point is that it simply gives us an answer that might not be useful for making policy. For example, I think in the next slide you had a question about how to reconcile what the public wants with what policymakers or judges or people in government might want, and I guess the difference is that people are expressing preferences, or ways of assessing alternatives, whereas policymakers, setting aside corruption or other considerations, are thinking about what the public as a whole might need or want. What is good for the public might not necessarily coincide with what a person filling out a survey is thinking when comparing two alternatives. Yeah. I guess this is a question that should go to the lawyers, because, clearly, how do they... that's a question I've asked
Yeah, I guess this is a question that should go to the lawyers; it's a question I've asked Casey before, actually: clearly the law is not a vote on every single issue, so what is it? But on the other hand, we're always told that the legislators are the representatives of the public. You gave me a very good explanation of this; maybe do you want to give it here?

I mean, the interesting tension that Elettra is raising is between individual decisions and collective ones: when is a collective decision the sum of individual decisions, and when is it something else, and what role might an individual be asked to adopt if the individual is responsible, as a mayor or a member of Congress might be, for the society at large? It's interesting to see that come up in the moral machine context as Elettra raised it, because to me it actually ties back to the labor question, so I'm going to use some privilege and just bounce it back to that for a moment without delving another click into the moral machine piece. When I think about the contribution of your very pretty, and that sounds somehow derogatory, I mean literally very pretty, visualizations, and I can imagine all the work that went into coding all this stuff, where does it fit in that map, and how do we generate it? I think the takeaway, especially with your analogy to the bees and the flowers, is that it's important to look out for the whole ecosystem, not just for a given member of society who was displaced because of the impact of an automated replacement. And that's such an interesting rejoinder, say, to Jason Furman's paper; my guess is Jason Furman isn't thinking about that at all, he's just thinking about atomized people who might be replaced and, all right, they need some job retraining, and don't give them basic income because of three microeconomic reasons. It's just such an interesting contribution that way, because if I'm thinking through the policy implications of this, the group is different from the sum of individuals. It might say that the town out west requires a form of policy intervention that New York doesn't, even though there will be people in identical jobs in both towns who are each losing them, but one might lead to a systemic collapse and the other wouldn't. So I think it's a very interesting policy contribution there, to be insisting on thinking of it as an ecosystem. And there's a certain irony that autonomous vehicles make it that much easier to live in places that are far from your job, so maybe that's another factor that scrambles the boundaries of the hive, and how much physical proximity is the right unit to talk about for a colony here or there.

I think it's important to remember that models are great until they're not. There are a couple of things that aren't in your model: we don't have augmentation creating new types of skills, and we don't have second-order effects. So with UBI, I was just doing the research for my article on it, and it's really interesting, because we don't know whether UBI actually encourages people; there's not a lot of evidence about what it does. Does it actually get people to go back and retrain, does it make people look for better jobs if they have crappy jobs? We don't really understand the second-order effects. So what's interesting is when you have enough of this stuff: I think the idea of thinking about it as an ecosystem is really interesting and innovative, but then the question is, do we have the right drivers, and do we have the second-order effects, enough so that we're actually modeling something accurately?

Yes, I think... okay, this conversation keeps getting more meta, but I like it. As an academic you've noticed maybe there's a moment in every class where someone says that. Yeah, at least the first part, I like it. So, you know, you've sensed my hesitation to make specific predictions about specific cities, and it's because I do think the main lesson here is that this model is wrong. First of all, it has roughly 200 skills, and it's based on specific data collected using a specific methodology by the Department of Labor, and it doesn't include all the skills; for example, software installation and plumbing installation are lumped together as one skill, it doesn't distinguish between those two. So it's a very coarse distinction between skills. There are people, of course, using natural language processing to create a more fine-grained model, which will also be biased in some other way. And I think basically the qualitative lessons from these things are much more important than the specifics. It's the idea that things are interconnected: don't just do localized policies and forget the second- and third-order effects. Until we have the science that can model all of these secondary effects, let's just stick with that broad lesson rather than take this specific model at face value.

And I think we all stay very reasonable about this too, until we become chairman of the Federal Reserve, right, or a full-time contributor to the New York Times, where we start being less surrounded by people who are poking holes in the argument. That's the problem: whenever I say that economics kind of keeps using GDP, the economists say, well, we don't use GDP anymore because we've gone beyond that, we know the limitations, but whenever you see them on a panel they keep using GDP. So I think the problem is they say, well, I have all these qualifications to the thing, but then once you present it...
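For readers who want to see the shape of the model being debated, here is a minimal sketch, on invented data, of the kind of skills-network idea described above: occupations represented as skill-weight vectors and linked when their profiles overlap. The occupations, skill names, weights, and threshold are hypothetical; they are not the Department of Labor data or the actual 200-skill model discussed in the conversation.

```python
# Illustrative sketch: occupations as skill-weight vectors, linked when
# their skill profiles overlap. Occupations, skills, and weights are
# invented, not the Department of Labor data discussed above.
from math import sqrt

occupations = {
    "truck_driver":     {"driving": 0.9, "navigation": 0.6, "customer_service": 0.2},
    "delivery_courier": {"driving": 0.8, "navigation": 0.7, "customer_service": 0.5},
    "dispatcher":       {"navigation": 0.5, "customer_service": 0.8, "scheduling": 0.9},
    "software_tester":  {"programming": 0.6, "debugging": 0.9, "scheduling": 0.2},
}

def cosine(a, b):
    """Cosine similarity between two sparse skill vectors."""
    shared = set(a) & set(b)
    dot = sum(a[s] * b[s] for s in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Keep an edge only when profiles overlap strongly enough; the threshold,
# like the coarseness of the skill list, drives how connected the network looks.
threshold = 0.5
names = list(occupations)
edges = [
    (a, b, round(cosine(occupations[a], occupations[b]), 2))
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if cosine(occupations[a], occupations[b]) >= threshold
]

for a, b, w in edges:
    print(f"{a} -- {b} (similarity {w})")
```

The coarseness problem is visible even in this toy: lump "driving" or "installation" into a single skill and very different jobs look adjacent, carve the skills finer and the network fragments, which is why the qualitative lesson about interconnection travels better than any specific prediction.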
