AI Now 2018 Symposium
Hello, everybody, and welcome to the third annual AI Now Symposium. Somehow we've gotten to three already. We are so excited to have you here tonight. It's been an extremely big year, and this is our biggest gathering. This is where we talk about what's been happening in AI, we acknowledge good work, and we map some paths forward. And we have a stellar lineup for you tonight, and they're going to be addressing three core themes: ethics, accountability, and organizing. Our first panel is going to be on facial recognition and surveillance. The second panel is going to look at the problems of rising inequality, austerity politics, and AI. And then finally, we're going to look at the intersection of research and organizing, and how these can work together to develop stronger accountability and address some of the vexing issues that we've been facing in 2018.

So, speaking of that, we're going to give you a tour of what happened this year. Now, AI systems have been increasing in power and reach against a pretty stark political backdrop. Meanwhile, there have been major shifts and upheavals in both the AI research field and the tech industry at large. To capture this, we have decided to do something a little different: we're going to visualize for you all of the stories that happened this year. Now, I want to warn you, this is a little overwhelming, and it's designed to be, because, well, it's been a pretty endless parade of events this year to keep us busy. But here goes.

In any normal year, Cambridge Analytica would have been the biggest story, but in this year it's just one of many. In fact, Facebook alone had a royal flush of scandals, including... let's go through them briefly, just as a sample: a huge data breach in September, multiple class actions for discrimination, accusations of inciting ethnic cleansing in Myanmar in September, potential
violations of the Fair Housing Act in May, and hosting masses of fake Russian accounts pretty much all year round. And we saw Facebook executives frequently summoned to testify, with Mark Zuckerberg himself facing the US Senate in April and then the European Parliament in May. But Facebook was by no means the only one. News broke in March that Google had been building AI systems for the Department of Defense's drone surveillance program, known as Project Maven. This kicked off an unprecedented wave of tech worker organizing and dissent. And then in June, when the Trump administration introduced the family separation policy that forcibly removed children from their parents, employees from Amazon, Salesforce, and Microsoft called on their companies to end contracts with ICE. Not even a month later, it was revealed that ICE had tampered with its own risk assessment algorithm so that it could only produce one result: 100% of immigrants in custody would receive the recommendation of detain; 0% would get release.

Meanwhile, this was a very big year for the spread of facial recognition. We saw Amazon, Facebook, and Microsoft all launch facial recognition as a service, offering plug-and-play models off the shelf. We also learned that IBM was working with the NYPD, and that they had secretly built an ethnicity detection feature so that you could search for people's faces based on race. And they did that using police camera footage of thousands of people in New York, none of whom knew that they would be used for this purpose. And all year we saw more and more AI systems being used in high-stakes domains, with some real consequences. Back in March we had the first fatalities for both drivers and pedestrians from autonomous vehicles. And then back in May we had a voice recognition system in the UK, which was meant to be detecting immigration fraud, accidentally cancel thousands
of people's visas. And then in July it was reported that IBM Watson was producing inaccurate and sometimes dangerous cancer treatment recommendations. Now, all of these events have been pushing a growing wave of tech criticism, which is focused on the unaccountable nature of these systems. Some companies, including Microsoft and Amazon, have actually made public calls for regulation of technologies like facial recognition, and that's a first, although so far we haven't really seen any real movement from Washington. Waiting on that one. So that's a tiny sample of what has been a hell of a year. So researchers like us, who work on issues around the social implications of AI, have basically been talking about the scale of this challenge that we now face. There is so much work to be done. But
there are also some really positive changes too. We've seen the public discussion about AI mature in some significant ways. Six years ago, when I was reading papers from people like Kate and Cynthia Dwork on the topic of bias and AI, these were outlier positions. Even three years ago, when Kate and I ran our first AI Now symposium (some of you may have been there), this was not a mainstream topic. But now it's pretty standard, right? There are news stories everywhere about the ways in which AI can reflect bias, and they provide us with just so many examples. Like Amazon's machine learning system for resume scanning, which was shown to be discriminating against women, to the extent that it was actually down-ranking CVs simply for containing the word "women's." Then back in July, the ACLU showed how Amazon's new facial recognition service incorrectly identified 28 members of Congress as criminals. A significant paper also showed that facial recognition software generally performs less well on darker-skinned women, and we are thrilled that the co-author of that paper, AI research scientist Timnit Gebru, will be joining us on stage tonight, along with Nicole Ozer, who drove the ACLU project.

So overall, this is a big step forward: people now recognize bias as a problem. But the conversation has a long way to go, and it's already bifurcating into two different camps. In column A, we see a rush to technical fixes for bias and a kind of solutionism. In column B, we see an attempt at ethical codes that will do the heavy lifting for us. In just the last six months, speaking of column A, IBM, Facebook, Microsoft, and others have all released bias-busting toolkits. These promise to mitigate issues of bias in AI systems using statistical methods to achieve mathematical definitions of fairness. Now, attention to bias issues from the technical community is necessary and it's important, but
they can't fix this problem alone, because at this point they're selling technical methods as a cure for social problems. They are sending in more AI to fix AI. You may recall that we saw this logic in action when Facebook, quizzed in front of the Senate, repeatedly
pointed to AI as the cure for its algorithmic ills. Now, the second response, column B, has been a more public turn to ethics and ethical codes for the tech sector. In part this is just a reaction to this big year: what should be built, what should we not build, and ultimately, who should be making these decisions? Google published its AI principles, Microsoft and Salesforce turned to ethics boards, and a crop of ethics courses emerged with the goal of teaching engineers to make ethical decisions. But a study published just yesterday called into question the effectiveness of these approaches. It showed that software engineers do not commonly change behavior based on exposure to ethics codes. And perhaps this shouldn't surprise us. The current focus on individual choice, thought experiments, and technical definitions is too narrow. Instead, more oversight and public input is required. Or, as Lucy Suchman, a pioneering thinker in human-computer interaction, put it: while I think codes are a good start, they lack any real public accountability. We're also delighted that Lucy is joining us on our final panel tonight.

So, in short, while ethics principles and bias-busting tech can help, we have a long way to go before they can grapple with the complexity of issues now in play. The biggest as-yet-unanswered question is: how do we create sustainable forms of accountability? This is a major focus of our work. AI Now was formed as an institute at NYU just under a year ago to address these kinds of challenges, and we have already begun looking at AI in large-scale contexts, beyond a purely technical focus, to include a wider range of voices and methods. So, okay, how do we do this? Well... Meredith.

We basically have been seeing five themes emerging from our work around accountability, so we thought we'd just give you a quick tour of these tonight. So, first of all, there's a lot to be learned by looking at the underlying material
realities of our AI systems. Last month we published a project called the Anatomy of an AI System. This is the result of a year-long collaboration between myself and the excellent Vladan Joler, where we investigated how many resources are actually required to build the device that responds when you say, "Alexa, turn on the lights." So, starting with the Echo as our example, we basically traced through all of the environmental extraction processes, from mining and smelting and logistics and container shipping, through to the data resources you need to build AI systems at scale, to the international networks of data centers, all the way through to the final resting place of so many of our consumer AI gadgets, which is buried in e-waste tips in Ghana, Pakistan, or China. And when we look at these material realities, I think we can begin to see the true resource implications that these kinds of large-scale AI systems really require for our everyday conveniences. But in doing this research, we also discovered something else: that there are black boxes on top of black boxes on top of yet more black boxes. It's not just at the algorithmic level; we're also looking at issues with trade secret law and untraceable supply chains. So I think this is why the planetary resources that are needed to build AI at scale are really hard for the public to see, but it's actually really important if we're going to develop real accountability.

Second, we are continuing to examine the hidden labor behind AI systems. Now, lots of scholars are working on this issue right now, including people like Lilly Irani and Mar Hicks, and we are delighted that tonight we have Astra Taylor with us, who coined the term "fauxtomation" for those systems that claim to be seamless AI but can only function with huge amounts of click-worker input. Because often, when we tend to think about the people behind AI, we
might just imagine a handful of highly paid engineer dudes in Silicon Valley who are, like, writing algorithms and optimizing feature weights. But this isn't the whole picture. As Adrian Chen recently showed in a documentary, there are more people who work in the mines of content moderation than people who work at all of Facebook and Google. So AI actually takes a lot of work, and as we're going to hear tonight, most of that goes unseen.

And third, we need new legal approaches to contend with increased automated decision making. That means looking at what's working and looking at what's needed, because
accountability rarely works without liability on the back end. Now, there have been some breakthroughs this year. We saw GDPR, which is Europe's General Data Protection Regulation, come into effect in May. New York City announced its automated decision systems task force, the first of its kind in the country. And then California just passed the strongest privacy law in the US. Plus, there are a host of new cases taking algorithms to court. We just recently held a workshop called Litigating Algorithms, where we invited public interest lawyers who are representing people who've been, say, unfairly cut off from Medicaid benefits, or who've lost their jobs due to biased teacher evaluation systems, and people whose prison sentences have been affected by skewed risk assessment software. This was kind of an extraordinary and positive gathering, because it really focused on: how do you build more due process and safety nets? And later tonight you're going to hear from Kevin De Liban, whose groundbreaking work is important in this space. We also published an Algorithmic Impact Assessment framework, which is designed to give the public sector more tools to critically inquire as to whether an algorithmic system is even appropriate to be used, and then to ensure more community oversight. Rashida Richardson, who is AI Now's very own director of policy research, will be talking more about this later tonight.

So this brings us to broader systems of power and politics, and to the topic of inequality. Because popular discussion of AI often focuses on hypothetical use cases and promises of future benefit, but of course AI is not a common resource, available equally to all. Looking
at who builds these systems, who makes the decisions on how they're used, and who's left out of these deliberations can help us see beyond the marketing. These are some of the questions that Virginia Eubanks explores in her book Automating Inequality, and we're really happy that she'll be joining us tonight. There's also a growing concern that the power and insights that can be gleaned from AI systems are further skewing the distribution of resources: that these systems are so unevenly distributed that they may actually be driving greater forms of wealth inequality. A new report from the UN published last week said that while AI could be used to address major issues, there's no guarantee it will align with the most pressing needs of humanity. The report also notes that AI systems are increasingly used to manipulate human emotion and spread misinformation, and that they run the risk of reinforcing existing biases and exclusion. Well, tonight we have the UN Special Rapporteur on extreme poverty and human rights, Philip Alston; he'll be joining us to talk about his groundbreaking work on inequality in the US.

So it's clear that if this last year was a big moment for recognizing bias and the limitations of technical systems in social domains, this coming year is a big moment for accountability. And the good news is, a lot is already happening. People are starting to take action; there are already new coalitions growing. We're really excited to be working with a wide range of people, many of whom are in this room right now, including legal scholars, ethnographers, journalists, health and education workers, organizers, and civil society leaders. Now, our research will always include the technical, but we're working to expand its boundaries. We're emphasizing interdisciplinarity, and we're foregrounding community participation and the perspectives of those on the ground. That's
why we're delighted to have speakers like Sherrilyn Ifill, the president of the NAACP Legal Defense Fund, and Vincent Southerland, the executive director of the Center on Race, Inequality, and the Law at NYU, each of whom has made important contributions to this debate. Because genuine accountability will require these new coalitions, organizers, and civil society leaders working with researchers to assess AI systems and to protect the communities who are most at risk. So tonight we offer you a very different type of AI symposium. We're really consciously including a wide range of disciplines and people from really different sectors, to try and support those new coalitions and build more accountability. That's a big reason why Meredith and I founded AI Now in the first place, and it really drives the work that we do. Because
AI isn't just tech. AI is power, it's politics, and it's culture. So, on that note, I'd like to welcome our first panel of the evening, which will be on facial recognition. It's chaired by Nicole Ozer; she's based at the ACLU of Northern California. And don't forget, you can also submit your questions for any of our panelists: just go to the Twitters and use the hashtag #AINow2018. And that's true of people in the room tonight but also everybody on the livestream. Hello, guys, we see you. So please send your questions in and we'll be sure to have a look at them. And now, on with the show.

Good evening, everyone. I'm Nicole Ozer. I'm the Technology and Civil Liberties Director at the ACLU in California. Since 2004, I have led the organization's cutting-edge work in California, working in the courts, with companies, with policy makers, and in communities to defend and promote civil rights in the digital age. Our team has worked to create landmark digital privacy laws like CalECPA. We developed surveillance ordinances passed in Santa Clara County and Oakland and in communities across the country. We've worked with partners like the Center for Media Justice and Color of Change to expose and stop social media surveillance of Black activists on Facebook, Twitter, and Instagram. And most recently, we have started a national campaign to bring attention to the very real threats of face surveillance. So tonight we are talking face surveillance, and of course I can't think of a more timely topic.

For some quick background: the ACLU has long opposed face surveillance. We've identified it as a uniquely dangerous form of surveillance and a particularly grave threat, because of what it can do to secretly track who we are, where we go, what we do, and who we know; how it can be so easily layered onto existing surveillance technology; and how both data incentives and societal incentives really combine
to make it ever more dangerous once it gets a foothold; and also how it feeds off and exacerbates a history of bias and discrimination in this country. Professors Woody Hartzog and Evan Selinger recently wrote: imagine a technology that is potently, uniquely dangerous, something so inherently toxic that it deserves to be rejected, banned, and stigmatized. It is facial recognition. And consent rules, procedural requirements, and boilerplate contracts are no match for that kind of formidable infrastructure and irresistible incentives for exploitation. This past winter, our ACLU team in California discovered that the future is now: that face surveillance, which had been thought of as a future technology, was now being quietly but actively deployed by Amazon for local law enforcement.
Amazon's Rekognition product promised locating up to a hundred faces in a picture, and real-time surveillance across millions of faces. And the company was marketing its use to monitor crowds and "people of interest," and wanting to turn officer-worn body cameras into real-time, roving surveillance. When we discovered the use of this technology, we were shocked to find that nothing was in place to stop it from being used as a tool to attack community members, to target protesters, or to be used by ICE, which we already know has been staking out courthouses and schools, and walking down the aisles of buses, to arrest and deport community members.

A coalition of 70 organizations came together this past spring, civil rights organizations, racial justice organizations, immigrants' rights organizations, to blow the whistle and to start pushing Amazon to stop providing face surveillance to the government. Our coalition call was quickly echoed by institutional shareholders, 150,000 members of the public (some of you may be some of them), hundreds of academics, and so far more than 400 Amazon employees themselves. The ACLU also reinforced the public understanding of the dangers of face surveillance by doing our own test of Amazon Rekognition. We used the default matching score that Amazon sets inside its own product, and that we know law enforcement has also used. The result: Amazon Rekognition falsely matched 28 members of Congress, and disproportionately falsely matched members of color, including civil rights leader John Lewis. The Congressional Black Caucus and numerous others have expressed deep concern and have been asking Amazon for answers, answers they largely have not gotten. The reality is that we only know the tip of the iceberg: how the government, particularly in the current political and social climate, is gearing
up to try to use face surveillance to target communities. And we don't know what companies, both large and small, are doing, and specifically not doing, to protect community members. So that brings us to tonight's very timely discussion. First of all, I want to thank AI Now. The ACLU is a founding partner of AI Now, and their research is helping to further inform some of this very important work. And I also want to thank them for the immense privilege of being here tonight to discuss this critical issue with Timnit Gebru and Sherrilyn Ifill.

Timnit is a research scientist at Google AI. She studies the ethical considerations underlying data mining and what methods are available to audit and try to mitigate bias. She's also a co-founder of Black in AI, where she's working to increase diversity in the field and reduce the negative impacts of bias in training data. Timnit's PhD is from Stanford, where she studied computer vision in the AI Lab under Fei-Fei Li. We also have Sherrilyn Ifill, the president and director-counsel of the NAACP Legal Defense Fund. Sherrilyn is only the seventh person in history to lead our nation's premier civil rights legal organization. For many in this audience, Sherrilyn needs no introduction; so many of us know and admire her as an acclaimed legal thinker, author, and true powerhouse in bringing to light and challenging issues of race in American law and society. So we are in for a wonderful conversation tonight, to help us all dig deeply into understanding the broader social context of face surveillance; exploring how it is not about solving a technical matter, but how decisions about the future of the technology, and how it is used and not used, matter so profoundly to the future of who we are as communities and as a country; and to focus on what power we have, can
continue to build, and will need to wield, to push back aggressively on threats to the safety and rights of communities. So with that, let's get started. I have the first question for Sherrilyn. So, face surveillance is a relatively new technology, but it isn't being developed in a vacuum. How do you think the threats of face surveillance fit within our country's history and practices of bias and discrimination?

Thank you so much. And I want to also thank AI Now for inviting me, and for just convening this extraordinary evening, and for recognizing the importance of us getting our arms around this critical issue. And I really thank you, Nicole,
for teeing up the first question in this way. Because I think much of our conversation about technology in this country happens as though technology, and AI in particular, is developing in some universe that is separate from the universe you and I all know we live in, which is rife with problems of inequality and discrimination. So here we are. It's 2018. It's four years after we all watched Eric Garner choked to death by the NYPD. It's four years after Michael Brown was killed in Ferguson. It's three years after Freddie Gray was killed in Baltimore. It's three years after Walter Scott was shot in the back in a park in North Charleston. It comes at a time of mass incarceration, a phrase that now everyone knows, when the United States incarcerates the most people in the world, the overwhelming percentage of them African American and Latino. It comes at a time in which we are segregated in this country, at levels that rival the 1950s, in our schools and where we live. It comes at a time of widening income inequality, some of the widest income inequality that we've seen in this country since the 1920s. And into that reality we drop this awesome power that allows us to speed up all of the things that we currently do, take shortcuts, and theoretically produce efficiencies in doing what we do.

So if we think about facial recognition technology just in the context that we most often talk about it, as a threat in the context of law enforcement and surveillance: I mean, who here thinks the biggest problem of law enforcement is, you know, that they need facial recognition technology? Why, as a matter of first principles, do we even think this is something that is needed for law enforcement? Why is this something that we would devote our attention to? Why would we devote our public dollars to the purchase of this technology, when we recognize all of the problems that exist within law enforcement? And so
what we do is we deposit these technologies into industries, into practices, into aspects of our society, into governmental institutions that already have demonstrated that they are unable to address deep problems of discrimination and inequality that lead to literally destroyed lives. Not just, you know, the killing of people, which is bad enough, but actual destroyed lives. We drop it into a period of racial profiling. We drop it into stop-and-frisk. We were part of the team that sued the NYPD for stop-and-frisk, you know, here in New York, and are diligently monitoring that consent decree, and we have a president calling us back to stop-and-frisk. And now we have a technology that purports to assist police in doing this kind of law enforcement activity.

When we combine it with things like the gang database in New York, which we've been trying to get information about: New York City has a gang database. There were about 34,000 people in that gang database. They have reviewed it and dropped some folks out, so I think it's down to somewhere between 17,000 and 20,000. So they obviously make mistakes, since they were able to drop out 10,000 people. The gang database is somewhere between 95 and 99 percent African American, Latino, and Asian American. So it's 1 percent white. And we have asked the NYPD to tell us the algorithm, if you will, the technique by which they put somebody in the gang database, and the process for getting out of it. Like, if I discovered I was in the gang database, how could I get out of the gang database? And they still haven't provided us with that information, and we just filed suit last week for their failure to comply with our FOIA request. So now try to imagine marrying facial recognition technology to the development of a database that theoretically presumes that you're in a gang, and that
your name pops up in this database, and we know, we're in the age of the internet, that even when you scrub me out of the database, it still exists somewhere. We are unleashing a technology that has the ability to completely transform forever the lives of individuals. We do work around employers misusing criminal background checks, and some of that work, you know, demonstrates that your arrest record stays with you forever, and you have employers who will not employ anyone who has, you know, an arrest on their record. We're talking about creating a class of unemployable people, a class of people who are branded with a kind of criminal tag. We have a number of school districts that have already invested in facial recognition technology, theoretically for the purpose of identifying students who are suspended, students
who might be a danger, students carrying one of the ten most popular guns used in school shootings, when they come in the door. But very often these are not students who are suspended, and so now we're going to portray students within the school in certain kinds of ways as well. So the context in which this technology is coming to us, to me, is a very chilling context. And yet we talk about facial recognition technology, and all of the other efficiency algorithms and AI technologies, as though they exist, were created, or could be evaluated separately from this very serious context I just described.

And in terms of the historical context as well: you know, I think about history and face surveillance, and I often think of the 1958 Supreme Court case NAACP v. Alabama, where the NAACP was able to maintain the privacy of its membership list, really in the heart of the civil rights movement. And in that case, the Supreme Court really recognized the vital relationship between privacy and the ability to exercise First Amendment rights, to be able to speak up and to protest. So, you know, in the current political context that we're in, how afraid are you, how worried should we be, about the impact of face surveillance on civil rights and activist movements more generally?

We should be deeply worried. We should. You know, when we hear the way this president or this attorney general talks about certain protesters and protesting groups, the creation of this category of "Black identity extremists," the idea that this technology can be mounted and used, you know, on police cameras, and that the police can be taking this kind of data from crowds of people who are coming out to exercise their First Amendment rights... We heard the way the crowds of people who came out to protest against the confirmation of Brett Kavanaugh were characterized by the president
and by some of the Republican leadership as a mob. And imagining that those kinds of protesters and activists would be subjected to facial recognition technology, in which they would presumably be included in some kind of database that would identify them with these kinds of words. Think about what this means for young people, who, you know... we're in a movement period in this country, in which young people are engaged and active and coming out and protesting, and now we want to monitor them. The recognition of NAACP v. Alabama, and the reason that the NAACP did not have to give up its membership list, was that the court recognized the possibility that the revelation of who these individuals were would subject them to retaliation within their local community, and would chill their ability, willingness, freedom to fully exercise their First Amendment rights. And I think facial recognition technology poses the same threat to activist groups and to others who want to be out in the public space. And at some point maybe we'll get to talk about that: so much of this is about implicating the public space, and the contested public space in this country, and the way in which we now want to privatize everything, because you do have to believe that if you step into the public space, you are automatically surveilled. And the last thing I'll say is just to go back to the point about, you know, us being so segregated: the idea of facial recognition technology is not just recognizing
a face; it's also evaluating that face, and evaluating who they think you are. We already know, for example, that police officers tend to assign five additional years to African American boys, that they see African American boys as being much older than they actually are. Who says that people who have grown up so segregated are actually in a position to evaluate faces, are actually in a position to tell people apart, are actually in a position to know whether someone is a threat or is dangerous? So once we go down this road without recognizing the way in which, frankly, in America, many people who would be using this technology are uniquely ill-equipped to evaluate the face of someone, to recognize and to differentiate between two Black people or two Latino people... You know, any woman who's been asked, you know, "Why aren't you happy? Why don't you smile more?" knows that the idea that somebody could just look at you and tell what your emotion is, what you're feeling, what your intention is, is simply not true. And so we should recognize that it's not just that, you know, it's going to click and say, oh, that's Sherrilyn Ifill. It's going to say more than that. It's going to try and evaluate, you know, what my intentions are in that moment.

So, for Timnit: I mean, you know, maybe we're out protesting. I think that many of us can sort of understand at baseline that face surveillance is different, because while you ostensibly can leave your cellphone at home and not be tracked, you know, we can't leave our face at home. But on a more complex level, you know, what do you think it is about face recognition that potentially makes it different, more dangerous, or risky than other AI technologies?

Well, I think actually you touched on many of the things I wanted to say. So the first thing is the fact that, you know, for example, you said that it doesn't just recognize your identity; it evaluates you. And let's
think about your emotions. Your emotions; I mean, Rana el Kaliouby, who started Affectiva, which is an emotion recognition company, talks about this. Your emotions are some of the most private things that you have. So even if it worked perfectly, that would be terrible: you could just walk around and people could almost see your emotions. But the fact that it also doesn't work perfectly, combined with the fact that people have what we call automation bias, where they trust algorithms to be perfect: that combination is also bad, right? Because I might be perfectly happy, and somebody's tool could say I'm dissatisfied, or something like that. And I just read recently (every day I learn something new about where these different automated facial analysis tools are being used) that a company, I forget the name, was using automated facial analysis instead of time cards. And then they were talking about the potential to do emotion recognition and aggregate it, so that if their employees are not satisfied, they can tell over time, and this is pretty scary, right? So it's a combination of things: the fact that there are some things that are very private to us that we just want to keep private; the fact that, as even my research with Joy Buolamwini showed, automated facial analysis tools have high error disparities for different groups of people; and the fact that, at the same time, people trust these tools to be perfect. I think those three things in combination are pretty dangerous.

So, speaking of that research: there's been a lot of talk about accuracy and inaccuracy, and about improving how these systems function overall. How does some of that conversation miss the bigger picture around face surveillance?

So I think that the fact that we showed high error disparities could start the conversation,
right? In just the same way that the ACLU could show that there were high error rates for some of our members of Congress, right? But that doesn't mean that you should have perfect facial recognition that is being used against mostly Black and brown people, like you said. So these two conversations have to happen in tandem. So, for example, for
emotion recognition: again, I'll bring up Rana, because she's the only person I've talked to about this. She talks about how she started Affectiva to help autistic kids, people with autism, and I've also talked to people who want to work with older people who have dementia and use some of this emotion recognition technology. Now, this could be something good, right? And in this particular case you don't want high error rates; you don't want disparities. So this conversation about accuracy should happen. Similarly, there are other computer vision technologies being used for things like melanoma detection, where again you don't want a very dark skin tone to cause the AI technology not to work, so that the condition gets misdiagnosed. So this conversation about accuracy should happen. But at the same time, it doesn't mean that the solution is just to have perfectly accurate face recognition and face surveillance being used everywhere, right?

And you alluded to the corporate space. The face surveillance coalition has drawn particular attention to government use, I think for good reason. But I wanted to explore with you all whether a distinction can really be drawn effectively between government use and corporate use, or whether the issues bleed and blend together in terms of civil rights and civil liberties. In particular, this has been on my mind this week; some of you may have seen the press about it. Facebook revealed this week that it thinks some of its photos have been scraped by Russian face surveillance firms. This was an issue that we were particularly concerned about at the ACLU when the Cambridge Analytica story broke, because we know that at around the same time that Facebook provided data
for third-party apps, it also started to change its privacy settings, and got rid of privacy settings for things like photos. So we've been very attentive to the issue that a lot of those public photos and other types of information could become a really rich space for scraping face surveillance data. So what do you all think? Can and should we be addressing the government and corporate issues separately, or do they necessarily blend together, and should we be looking at the bigger picture on these issues?

Well, I think on the privacy front they actually blend together. You know, this is part of the difficulty of this work: the pathway in is usually one or the other. So the pathway in is usually, this is a business. I'm running Amazon, or I'm running Facebook, and it's wonderful, and the owners and the shareholders are making lots of money, and people are using the technology for whatever reason they want to use it, and they're making a personal decision to use it. And that's supposed to cover a multitude of ills, because it's not the government. But we know that the relationship between corporations
and the government is often very close, and it gets particularly close in times of national security high alert, like post-9/11, when you had the telephone companies handing over data to the federal government for surveillance and so forth. So we know that there's always this kind of symbiotic relationship between corporations and the government, and the government relies on corporations to develop technology for the government to use for a variety of reasons. So I don't think there's some place where it's benign and some place where it's evil. The technology itself is like a monster that, once unleashed, is very hard to put back in the box. The problem, I think, is where government stops acting like government and starts acting like it's just another corporation, or another client of a corporation. The government's responsibility is to protect the public, and that's why this conversation about regulation is so important, because that's the government's role. It can't act just like another consumer of a product or client of a corporation; it is supposed to hold the public trust. And I think what we're seeing is the government falling down on that job: being so scared, being so tentative, buying this story that we have to leave these folks alone because they're the brilliant ones doing all this wonderful technological stuff, and if we regulate them we're going to smash their creativity. But I'm old enough to remember when we thought it was crazy that the government would require us to put on seatbelts. We thought it was just awful, and we didn't want to do it. I come from a very big family; my seat in the back of our car was on the floor. We sat kids on the floor that way. Yes, it's true. It even seemed safer
at the time, right? In a big car you could fit as many people as you had in your family. And my father was outraged when we had to put seatbelts on, because he thought it was like discrimination against people with big families and a big family car. Now you can't imagine getting in a car without one. But we create these bogeymen: who could interfere with the Ford Motor Company and tell them how to make their cars? So I think the government has to re-earn that trust, and not just be another client.

I'm so happy you brought this up, because we wrote a paper called Datasheets for Datasets, with Kate Crawford and a whole bunch of other people, and one of the things we did was include case studies, and the automobile industry was one of them. We were saying that it took many years to legislate that you have to have seatbelts, and even when manufacturers were doing crash tests, they did them with dummies with prototypical male bodies. So you ended up having car accidents that disproportionately killed women and children. We were drawing a parallel and trying to make some of these cases, so I'm glad you brought it up.

Right, and it's not the car itself; it's that there are certain uses that are very dangerous for society, and there have to be some interventions there. I always think of civil rights and civil liberties as protected either
by friction or by law. Decades ago it wasn't possible to monitor people the way face surveillance now makes possible, but the technology has advanced, and much of that friction, in terms of what the police can do, what the government can do, has been eviscerated. So the question is what types of protections are going to be built back up to protect the public and communities. We at the ACLU have called on Congress to pass a federal moratorium on face surveillance, to give us the time to really think through the implications. We have also been part of this large coalition pushing Amazon and other companies to stop providing it to the government, because of some of the threats and dangers we've talked about. As I mentioned, over 450 Amazon employees have themselves spoken out in writing to Jeff Bezos about this, and just today an Amazon employee called Jeff Bezos out in an op-ed. Yesterday, when he was on a panel, Jeff Bezos actually acknowledged that this tech could be used by autocrats, but suggested that the company really has no responsibility, and that instead we should leave it to society's eventual "immune response" to address the real threats to community members. Google, meanwhile, has new AI principles that specifically say it will not pursue technologies that gather or use information for surveillance in ways that violate internationally accepted norms or principles of human rights. So, with our last couple of minutes, for both of you to answer one or both parts: what do you think the responsibilities of companies are, what should they be doing, and what should lawmakers be doing?

Well, I'll talk specifically about companies, and maybe you can talk about lawmakers. I think what happened at Google is an example of the fact that corporations
consist of people, and people have values, and people can advocate; people can change the course of things. So I think people in corporations need to remember that they have these values and that it's their responsibility to advocate for change. I'll just leave it there.

I think the government has the long view, and it holds, in many ways, the responsibility of communicating history to corporations and other companies that are developing technologies, and of setting up the guardrails. So if we think about platforms like the internet, for example: these are public spaces that have been created. They're virtual, but they've been created. We know what has happened in the physical public space historically in this country; we know that most of our civil rights movement was a fight over dignity in the public space. That's a lesson to communicate as you think through how you're going to engage a new technology, and the same is true for facial recognition technology, if we think about racial profiling, for example. It's the government's obligation to recognize that those kinds of pitfalls exist, and to compel corporations to adhere to some kind of regulatory scheme that guards against what we know are the excesses of every system. We keep doing the same things over and over again; there are certain recurring themes in American public life, and racial discrimination is one of them. So the idea that we're going to create some new technology and not have to worry about racial discrimination is absurd, and the government has to take responsibility for that.

I think that's a really good segue into our audience question, which is: is it already too late? Aren't we already on camera everywhere we go? And, you know, Timnit, you talked about how companies are made up of people, and, I don't know, with everything that I'm
learning new things every day, like I said: Joy had talked to me about a company that's interviewing people on camera, doing emotion recognition on them, and then reporting their verbal and nonverbal cues to the employers, who are the company's customers. I didn't know that existed until she wrote an op-ed about it. So every day I'm learning something new, and I feel like we're
just unleashing this technology everywhere without guardrails, without regulation, without some sort of standard. I don't think it's too late, because people wear seatbelts now, right? It's become standard. So I don't think it's too late, but we have to move fast.

Yeah, and tens of thousands of people were killed in cars before there were seatbelts, and it wasn't too late when we finally got seatbelts. You just do what you have to do at the time that you can. But I do think we have to jump ahead of it, because what's dangerous about all of this is how deeply embedded it can become. That's part of why we don't know about it: it's so easy for it to slip in and embed itself in a variety of contexts, the employment context and the law enforcement context, and then it's hard to get it out. So that's why I do think there's a sense of urgency, that we have to move very, very quickly.

I agree. And just from what I've seen in the past six months, as the ACLU has been working in this coalition to blow the whistle on these issues and bring a lot of national attention to them: the fact that there has been such a great response, and that people have been moving with haste to really start to address these issues. Kate and Meredith talked about all that's happened in this past year, with so many people waking up to these issues. Of course, I work at the ACLU, so I never think it's too late. But there's a long arc of history, and I think that we as people can work together to influence that history and to make sure that civil rights and civil liberties are protected. Historically, there has always been an advance in technology, and it takes some time for protections to get put in place. But we should not just leave it to an immune response; we
have to actually push that response and make sure it happens, both within companies and by lawmakers.

You know, it makes me think about Tasers, in part because the Taser company is the same company doing a lot of facial recognition technology in law enforcement. When Tasers first came out, remember, they were supposed to stop the use of lethal force by police officers, and they were greeted as something that was going to be great, because now police officers wouldn't have to kill you. So it didn't get at any of the discrimination, or the use of excessive force, or the brutality, or any of that; it was just that you might not die. That was the theory, and even that's wrong. The Supreme Court just denied cert in a case we had in which our client was tased to death. But as terrible as that is, let's leave it to the side. It really is about the dignity issues; it really is about all of the issues that surround law enforcement that we recognize and talk about, and just switching the technology doesn't actually get at the problem. So I think that in that sense it is not too late, because we keep kind
of referring to this again and again, and we're really nibbling around what is the core issue, which is this: we've got very deep problems that we have to solve, and layering new and faster technologies on top of those problems ultimately doesn't get us there.

Oh, I want to say that the level of discourse about this, this year versus last year, is a pretty big difference. Meg Mitchell, who's somewhere here, I think, and I are trying to co-organize a fairness, accountability, transparency, and ethics workshop at a technical computer vision conference, and now people are starting to be excited by this kind of topic, right? Three years ago, if I mentioned this, even when I started partnering with Joy, it was very difficult for me to explain to people why this research was important, to explain what I was doing. So I think there's a glimmer of hope.

I think absolutely. Action and involvement by everyone in this audience, and hopefully everyone who is watching us, does make a huge difference. Our ability to change the narrative, to change the trajectory in the companies and among lawmakers, really is based on how many people speak up and work together on these issues. So I am optimistic. I think that we can really do some important work here, and the last six months have only reinforced that. So I want to thank our panelists so much for joining us tonight, and I want to thank you all for coming. And now it's time for our first spotlight of the evening, so thank you very much.

So now we have the very first spotlight session of the night. This is where we invite people whose work we admire and then punish them by asking them three questions in seven minutes. So it's very much a high-speed,
high-stakes game. But I couldn't be more delighted to be sitting here with Astra Taylor. Many of you know her: she's an author, she's a documentary maker, she's the co-founder of the Debt Collective. She also has a new film called What Is Democracy?, opening in the U.S. in January, just around the corner. And she also coined a really useful term called
"fauxtomation," in an essay in Logic magazine. So I thought we might start there and ask you: what is fauxtomation, and how do you find the "faux" in AI?

Yes, and we have to be clear that it's F-A-U-X, in the French way: fauxtomation. So, you know, I have been writing and thinking about technology issues and economic inequality through my organizing work, thinking about labor, thinking about debt. And I wanted to come up with a term that would name this process, the fact that so much of what passes for automation isn't really automation. So I think we can give a little definition, which is this: fauxtomation is the process that renders invisible the human labor needed to maintain the illusion that machines and systems are smarter than they are. You gave some great examples already in the introduction, right? We can think about all the content moderators and digital janitors who are cleaning the internet and making it a space that we actually want to be in. They're egregiously underpaid; they often have to work overseas. We can of course think of Amazon Mechanical Turk and its cheeky slogan; they kind of admit it, right, with "artificial artificial intelligence." But the same labor issues are at play. There are frequently articles exposing the fact that the digital assistant you thought was a bot was actually a terribly underpaid human being doing a mind-numbing task. So it's everywhere. And the moment that tipped me over the edge and helped me name this phenomenon: I was standing in line ordering my lunch, and I had talked to a human being and paid them cash, but the man in front of me was clutching his phone, and he was awestruck, and he said, "How did the app know that my order was done twenty minutes early?" And the girl looked at him, and she was just like, "I sent you a message." And it was the man's credulity
that was so willing to believe that it was a robot. He was so willing to believe that this all-seeing artificial intelligence was overseeing his organic rice bowl that he couldn't see the human labor right in front of his eyes. And we do that all the time, because we're not curious about the process; we're so ready to devalue and underestimate the human contribution. And I think that's really dangerous.

So where is this coming from? Who really has the most to gain from this kind of myth-building around perfect automated systems?

So, you know, automation is a reality; it happens. But it's also an ideology, and the point is to separate those two things, right? To be very up front about when the ideological component is coming into play. So who benefits? The bosses benefit. You know, I open the piece by saying that somewhere right now an employer is saying to a broke-ass underling, "Someone, or something, is willing to do your job for free." So this idea of inevitable human obsolescence helps employers. And I found the perfect villain in Ed Rensi, who was the CEO of McDonald's and who was so angry at the Fight for $15 movement. He helped take out an ad in The Wall Street Journal, and he wrote comments for Forbes, and he was basically saying, if you people ask for fifteen dollars,
oh, robots are going to replace you. And then he wrote another piece some months later that was like, it's happened, and he cried some crocodile tears. But when you watch the video he was promoting, of how these troublesome workers have been done away with, what you see is not anything we should dignify with the term automation; it was just customers doing the work for free, inputting their orders into iPads. That's not automation.

Newark Airport, right? I'm going to buy it there. Exactly, I'll be doing it.

Right, this is fauxtomation. And so I think we need to replace this line that robots are taking our jobs, and so I've made a less catchy but more accurate slogan, which is: capitalists are making targeted investments in robots to weaken workers and replace them.

If somebody can make that catchy, let them even co-brand the revolution.

One of the things I really love about the work that you have done for so long, and particularly in this essay, is that you really walk us through the history of automation, but you center the feminist history of domestic labor. Tell me, what can we learn from those histories?

Yes. So, you know, I think we are led astray when we look to science fiction sometimes, right? We see this robot future where everything is done for us. And I'm saying that instead of looking at that, the people who can give us insight are actually socialist feminists, because women have a long history with supposedly labor-saving domestic technologies. There's a great book called More Work for Mother about how these labor-saving devices actually just ramped up the cult of domestic cleanliness; these tools actually created more and more work. But the socialist feminists offered a deeper insight than that. They raised the question: what is work, right?
And they observed that capitalism grows and sustains itself by concealing work and not paying for as much of it as possible; capitalists don't want to pay the full value of work. So you can picture two assembly lines. One is the assembly line of the factory, or the coffee shop we're going to, where we're involved in monetary exchanges. The underlying assembly line is all the work that is done to reproduce daily life and to make the workers who can then work those jobs for wages. Women have always been told their work doesn't matter and that it doesn't deserve a wage, right, because it's uncompensated. And so I think there's something there for us, because we're told there's going to be this future where there's
no work for humans to do. And this insight was made really powerful to me in a lecture given by Silvia Federici, who is this amazing scholar and who features very prominently in my film What Is Democracy?. A grad student, while we were talking about reproductive labor and its value, very earnestly said, "OK, but aren't we heading to a future where there will be no jobs?" You know, sort of à la Marx's reserve army of labor, this image that we would all just be sitting there with nothing to do, on the margins. And Silvia's response was really bracing. She just said, "Don't let them convince you that you're disposable." Right? Don't believe that message. And I think there's a really valuable point there, because if the automated day of judgment were really near, they wouldn't have to invent all of these apps to fake it.

As you can imagine, I want to point out that not only did you manage to walk from McDonald's to Silvia Federici, but you did it in exactly seven minutes.

Next up we have a panel on inequality, the politics of austerity, and AI, and it's going to be chaired by Vincent Southerland of NYU. Please welcome him and our panelists.

Good evening, everybody. My name is Vincent Southerland. I'm the executive director of the Center on Race, Inequality, and the Law here at NYU School of Law, and part of the Center's work focuses on the implications for racial justice and inequality that can be found at the intersection of algorithmic tools, the criminal legal system, and the other systems that govern our lives. I also serve as the criminal justice area lead for AI Now. It's because of both of those roles that I'm really thrilled to be part of tonight's conversation with our panelists, both of whom are at the forefront of work being done on AI and automated decision systems at a time of rising austerity,
inequality, and, quite frankly, turmoil. So please join me in welcoming Virginia Eubanks, a professor of political science at SUNY Albany and author of the breakthrough book Automating Inequality, and Philip Alston, who is a professor of law at NYU School of Law and the UN Human Rights Council's Special Rapporteur on extreme poverty and human rights.

So, Philip, let me start with you. Your UN report on extreme poverty in the United States was the first in such a venue to really include AI in a conversation about inequality. Why was it important for you to do that in this report?

Well, my focus is on the human rights dimensions of these issues. In the AI area, we're well accustomed to talking about inequality, relating it to the impact of automation, the gig economy, and a range of other issues, but then the human rights dimension comes in. Very often I tend to see it in something of macro-micro terms. If you're looking at inequality, it's a macro focus: what sort of major government policies, or other policies, can we adjust in order to improve the overall situation? But if you take a human rights focus, then you're really going down to the grassroots. You're looking at the rights of the individual who is being discriminated against, usually for a whole range of different reasons, or who has simply been neglected. And I think one of the problems is that there's neglect on both sides. The AI people are not focused on human rights; there's a great tendency to talk about ethics, which of course is undefined and unaccountable, very convenient. And on the human rights side, there is a tendency to say this stuff is all outside our expertise, and not to really want to engage with it. So in my report on the United States at the end of last year, I made a big effort to try to link the issues of inequality, human rights, and the uses of AI.

Great.
And so, a question for both of you; Virginia, maybe you can start with this one. How would you respond to Philip?

Oh, I can't top Philip. This report was so important, and I think it's so crucial, particularly to the movement organizing in the Poor People's Movement, to have this kind of vision of the United States, with 43 million poor and working folks who are really struggling to meet their needs day to day, and who are finding that these new tools often create more barriers for them rather than lowering those barriers. And one of the things that I think is so important about what Philip just said is that we often, particularly
in my work in public services, we often see these tools get integrated sort of under the wire, because we see them as merely administrative changes and not as consequential political decisions. But we absolutely have to reject the narrative that these things are just creating efficiencies, just optimizing systems. We are making a profound political decision every time we say, for example, "We have to triage; we don't have enough resources, so we have to make really hard decisions, and these tools will help us make those decisions." That is already a political choice, one that buys into the idea that there's not enough for everyone, when in fact we live in a world of abundance and there's plenty for everyone. So I think that's so important to point out.

That's good. So what does that look like on the ground? What types of things are you seeing