Creating Community-Centric Tech Policies | Jennifer Lee

Hi everyone, my name is Jennifer Lee, and I lead the technology and liberty policy work at the American Civil Liberties Union of Washington, or the ACLU of Washington for short. The ACLU is a non-profit organization and the biggest public interest law firm in the U.S. We work in the courts, in legislatures, and in communities to defend and preserve the individual rights and liberties guaranteed to everyone in the U.S. by our Constitution and our laws. We work on many intersecting issue areas, including healthcare, immigration, technology, and gender justice, to name a few.

Our overall objective in our technology and liberty work is to protect people's rights in the face of new and powerful technologies, and to make technology accountable to people, particularly the people and communities that have always been disproportionately affected by surveillance. We do this work in a few key ways. First, we advocate for community-centric policies and laws that create safeguards around data and technologies, or, where appropriate, we advocate to stop the use of certain technologies altogether. Where existing laws are unjust or have been violated, we conduct strategic litigation with our legal team to remedy harms. Lastly, we convene the Tech Equity Coalition to organize with communities and individuals working to hold technology accountable, and to lift up the voices of communities that have been historically disempowered in decisions about if and how technologies are deployed.

In this presentation, I'll first share a little bit about the Tech Equity Coalition. Then I'll discuss some of the policies and laws we've been working on in our state legislature and our local governments. Lastly, I'll share some ways we can push for equity in the way technology policies are designed, deployed, and used.

To start: the Tech Equity Coalition is a group convened by the ACLU of Washington that aims to center the voices of communities that have been historically disempowered in
decisions about technology. It is composed primarily of local organizations working to advance racial and social justice. The Tech Equity Coalition also includes individuals who are privacy advocates, technologists, artists, and researchers, all of whom bring tremendous value to our collective advocacy. We have grown the coalition to over 80 organizations and individuals, and together we've done some really amazing work. The coalition has testified both in the state legislature and in city and county councils. We've drafted letters, published op-eds, hosted webinars and workshops, and contributed to the development of bills, toolkits, and so much more.

A key part of our advocacy is emphasizing that the harms caused by surveillance, privacy invasions, and unchecked technologies are neither new nor surprising. In our conversations with lawmakers, and in discussions like these, we emphasize that at its core, surveillance has been and will always be about power and control. Listed here are just a few examples of how certain technologies and technology policies have been used throughout history. It's important to ask questions about who has the power to watch and police whom, with what tools, and for what purpose; who is affected, and who is not. We work with the Tech Equity Coalition to raise these questions to policymakers, tech workers, academics, and the general public, because much like anything designed by humans, technology isn't neutral. Technology is not some valueless tool that can be used for good or evil. Every technology, by design, reflects a set of value choices made by people, and often by people in positions of privilege and power. There is reasoning behind why and how technologies are built and deployed. So when we talk about how to build and regulate new, powerful AI and data-based tools, it's important to recognize that there is a long and ugly history of technologies being designed and deployed to target, surveil, and harm those most vulnerable in our society. Technologies,
however rudimentary or advanced, have always disproportionately affected communities of color, religious and ethnic minorities, sexual and gender minorities, and other marginalized communities. What has changed is that the institutions that oppress and brutalize the lives of Black, Indigenous, and other marginalized communities are now equipped with tools truly unprecedented in their surveillance power, such as facial recognition, location tracking, drones, and other AI-based tools. A key difference between these new technologies and the technologies of the past is that the new ones are powered by massive amounts of data. These tools have powerful surveillance capabilities and can even make important decisions about people's lives. If these increasingly powerful and unaccountable technologies continue to be built and deployed without adequate consideration of their impacts on communities, they will continue to exacerbate the structural racism and inequities already present in our society. It is with that kind of framing and context that we approach the policies we draft, support, and oppose.

So what are the policies that we are drafting, supporting, and opposing with the Tech Equity Coalition? Three of the key issues we are working on are data privacy, AI-based automated decision-making systems, and face surveillance technology.

First, on the topic of data privacy: the U.S. does not have a comprehensive federal data privacy law, and at present our personal information is collected, used, and shared, often without our knowledge, much less our consent. New technologies are making it easier for corporations and the government to learn about the most intimate aspects of both our online and offline activities, from where we live and work, to what religion we practice, to what we purchase, what we read, and with whom we associate. With more and more of our lives moving online, we face increasing vulnerability and threats to privacy. We are increasingly required to share personal information, or be surveilled, as a consequence of participating in public life and accessing basic goods and services. It's often impossible to clearly understand what information is being collected about us and how it's being used, and in many cases people have little alternative but to give up their privacy in exchange for valuable services. This lack of transparency and accountability allows companies to collect as much data as they can about all of us and to use it as they wish, at a profit.

This is concerning because privacy violations and misuse of personal information can lead to a wide range of harms, such as unfair price discrimination; domestic violence, abuse, stalking, and harassment; and discrimination in areas such as employment, health care, housing, and access to credit. These harms disproportionately impact people with lower incomes and people of color, subjecting them to outsized surveillance and data-based discrimination. Through the weaponization of our personal information, companies and governments can even more easily identify, control, discriminate against, and oppress marginalized communities.

In Washington, we have been embroiled in a privacy battle since 2019, the first year that a weak, industry-backed privacy bill was introduced. This bill does not require opt-in consent to collect, use, and share people's information; it prohibits people from holding companies accountable for
privacy violations; it is riddled with loopholes and exemptions; and it prevents local jurisdictions from passing any stronger laws. This bill has been opposed by consumer rights organizations, privacy advocates, and civil rights groups like ours, and it has been supported by big tech giants including Amazon, Google, and Microsoft. I'm so happy to share that, due to a tremendous advocacy effort with the Tech Equity Coalition, we actually managed to kill this bad industry bill for the third year in a row, preventing the passage of a bill that would have only entrenched a status quo that benefits companies but not people.

How did we do it? With the Tech Equity Coalition, we positioned this fight as one between people and big tech, because that's exactly what it was. That the Tech Equity Coalition consists of so many different organizations and individuals, who can speak to so many different ways in which privacy, surveillance, and technology issues affect people's lives, allowed us to connect with lawmakers on both sides of the political spectrum. For hearings, we organized testimony panels with former big tech workers, parents, workers' rights organizations, racial justice groups, immigrant rights advocates, and so many others, effectively painting a picture of why we need meaningful privacy protections and cannot settle for anything less.

This year, we collaborated with the Tech Equity Coalition to introduce a privacy bill called the People's Privacy Act, which requires affirmative opt-in consent to collect, use, and share people's data. It allows people to bring companies to court if they violate people's privacy rights, and it allows local jurisdictions to pass stronger laws. It also provides strict standards for biometric information, bans surreptitious surveillance, and bans the use of face surveillance technologies in places of public accommodation like restaurants, parks, and schools. We will continue working to pass strong privacy protections in Washington, we will continue to oppose bad privacy laws, and we hope to see
strong privacy protections emerge in other states and at the federal level.

The second key issue we are working on is AI-based automated decision-making systems. Automated decision-making systems are increasingly affecting our lives, even though we don't always know it. Every day, people are denied healthcare, over-policed, kept in jail, and passed up for jobs because of scores assigned by computers. Sometimes these systems simply function as calculators, helping government officials apply complex rules to individual circumstances, with results that could be explained and challenged if wrong. But we're seeing increasing use of systems with secret algorithms that even the government agencies themselves don't have access to, let alone understand.

Many of these automated decision-making systems are created using complex mathematical formulas based on statistical regression analysis or machine learning, a type of artificial intelligence. To put it simply, these formulas are created by using computers to look for commonalities in large data sets. For example, an HR software company might create hiring formulas based on a computation of what successful or unsuccessful job candidates have in common. Once the computer determines a secret algorithm based on all the data available, that algorithm may then be applied to future job applicants to score their fitness for a role. But because these algorithms identify people who are similar to those who have previously been hired, they simply reinforce and exacerbate any bias that shaped past hiring decisions, without any visibility or opportunity for correction. And not all biases are obvious, because sometimes characteristics that aren't directly associated with gender, age, or race can still indirectly reflect them. For example, an algorithm may discriminate against candidates who live in certain zip codes, which could indirectly affect Black and brown candidates or candidates with lower incomes. When an algorithm is used for job applications, the
hiring manager may never know why certain applicants are being suggested or rejected, and whether those reasons have anything to do with their ability to do the job. Because these systems rely on data reflecting historical discrimination, their decisions will often discriminate. For example, predictive policing systems determine where law enforcement should deploy resources, but use data that reflect the historical over-policing of neighborhoods of color. Faulty risk assessment systems have recommended different sentences for the same crime based on race. And in the employment market, algorithms that screen applicants often replicate and reinforce the lack of diversity in many different sectors.

Even when automated decision-making systems are neither secret nor algorithmic, they can still perpetuate bias, because they are based on flawed inputs. For instance, when Washington courts decide whether to release a defendant pre-trial, they sometimes use openly published risk assessment tools. However, because these tools use criminal history as a major input in the decision process, the results reflect the racism inherent in our criminal justice system, from over-policing to over-prosecution to over-incarceration.

We are working to pass legislation that would require any government agency using AI-based automated decision-making systems to make decisions about people, including decisions about our health care, housing, loans, sentencing, and bail, to make transparent their use of such tools. This bill would also prohibit the government from using machines to discriminate, and it would empower people to hold government agencies accountable if they discriminate by machine. This year, the bill passed out of the Senate policy committee but didn't make it out of the fiscal committee. Over the interim, we will be gearing up to pass this bill in the next session. I'm really excited by the progress we've made on this bill, because currently there is no law banning discrimination via algorithm, even though
this type of discrimination is definitely occurring and will continue to increase if we do not pass strong laws.

Lastly, we are working to pass bans on facial recognition technology at both the state and local levels. Facial recognition technology must be banned not only because it fuels discriminatory surveillance, but also because it jeopardizes everyone's privacy and civil liberties. With this technology, government agencies can track individuals' movements and contacts without their knowledge or consent, chilling free speech and free association, undermining press freedom, and threatening the free exercise of religion. People cannot meaningfully consent to being face surveilled, and the use of this technology harms our collective privacy and democracy. Government agencies should not be allowed to deploy racist, anti-Black face recognition technology in communities that are already over-surveilled and over-policed.

At the state level, we are advocating for a face recognition moratorium, a temporary ban on government use of face recognition technology. We have introduced face recognition moratoria since 2019, and this year we supported SB 5104, sponsored by Senator Hasegawa, which would place a five-year moratorium on government use of face surveillance technology. This bill would require a robust and meaningful community-driven conversation to decide if, and not just how, face surveillance should be used. At the local level, we are working to pass the first county-wide face recognition ban in the U.S. This ordinance would ban government use of face recognition technology in King County, which is the largest county in Washington state and home to the headquarters of both big tech giants Amazon and Microsoft.

Now that I've shared some of the policies we are advocating for with the Tech Equity Coalition, I'd like to talk about some ways in which we can all push for equity in the way technology and technology policies are designed, deployed, and
used.

First, I think we need to ask some key questions, including: Who has the power to decide whether and how to deploy certain technology? What kinds of regulations are different communities calling for? With what lens, and for whom, are terms such as "cost" and "benefit" designed? And what would technology and technology policies look like in a world where those most disempowered in our society wielded the ultimate authority in making those decisions?

Second, it's important to recognize that historically marginalized communities are the experts on the impacts of technology and surveillance. Our goal must be to bolster the ability of communities to share their expertise and exercise decision-making power. Too often, the community engagement processes created by corporate, governmental, nonprofit, and educational institutions do not serve to equip communities with decision-making power, but rather function, purposely or inadvertently, to co-opt community voices and legitimize decisions that have already been made. Task forces, community engagement meetings, and outreach processes that ask community members to draft lengthy reports, provide feedback on or create recommendations, and repeatedly share their lived experiences often demand time and energy without also giving communities meaningful decision-making influence. Community expertise in decision-making processes is often undervalued, while academic, technical, and legal voices are elevated as the only experts and given authority, even when those expert voices do not come from impacted communities and contradict community expertise. Without articulating the specific objective of ensuring community decision-making power, community engagement processes can function, even unintentionally, as a perfunctory and performative means to shield the status quo. So before we embark on a community engagement process surrounding technology and technology policies, we should question the norms and assumptions inherent in that process. We
should urge our peers, colleagues, and employers to ask questions like: Have communities already vocalized their support for, or opposition to, the technology in question, and if so, how will that feedback be considered in decision-making? What authority do historically marginalized communities have in deciding if, and not just how, a system or policy is implemented?

Lastly, in order to build community decision-making power, we need to focus on changing power structures within the different contexts in which we operate. Whether we are academics, artists, technologists, educators, lawyers, organizers, or policymakers, we must continuously practice sharing institutional and personal power with historically marginalized communities. Everyone must take on the role of uplifting the voices of those historically disempowered, and of ensuring that such voices have the most weight in deciding if, and not just how, technologies are deployed.

Now that I've gone through the policies we've been advocating for with the Tech Equity Coalition, here are three key takeaways. In advocating for tech policies, we must center the voices of people who are disproportionately surveilled and over-policed. Intersectional coalition building is critical in advocating for strong, people-centric tech policies. And lastly, we can and should all be advocates in the different contexts in which we operate. I'm delighted to participate in this great program, and please feel free to reach out if you're interested in learning more. Thank you so much.
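The zip-code proxy effect the talk describes for hiring algorithms can be made concrete with a toy sketch. This is not from the ACLU's work or any real system; it is a minimal illustration with invented zip codes and numbers, using a simple frequency model, roughly what a regression with zip-code dummy variables would learn.

```python
# Hypothetical sketch of proxy discrimination: zip code stands in for a
# protected attribute. All zip codes and counts below are invented.

from collections import defaultdict

# Historical records: (zip_code, skill_score, was_hired).
# Applicants from both zips are identically skilled, but past (biased)
# decisions hired far more applicants from zip "98101".
history = (
    [("98101", 7, True)] * 80 + [("98101", 7, False)] * 20 +
    [("98118", 7, True)] * 20 + [("98118", 7, False)] * 80
)

# "Training": learn each zip's historical hire rate.
hired = defaultdict(int)
total = defaultdict(int)
for zip_code, _skill, was_hired in history:
    total[zip_code] += 1
    hired[zip_code] += was_hired  # bool counts as 0 or 1

def predicted_hire_score(zip_code):
    """Score a new candidate by the historical hire rate of their zip."""
    return hired[zip_code] / total[zip_code]

# Two new candidates with identical skill get very different scores,
# purely because of where they live: the model reproduces past bias.
print(predicted_hire_score("98101"))  # 0.8
print(predicted_hire_score("98118"))  # 0.2
```

Skill never enters the learned rule at all, which mirrors the talk's point: the hiring manager seeing only the scores has no way to know the difference has nothing to do with ability to do the job.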
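The predictive-policing feedback loop mentioned in the talk can also be sketched as a toy simulation. The setup and numbers are my own assumptions, not data from any real deployment: two neighborhoods with the same underlying crime rate, a historical arrest record skewed by past over-policing, and a policy that sends patrols wherever past arrests were recorded.

```python
# Toy simulation (all numbers invented) of the feedback loop: patrols go
# where past arrests were recorded, and new arrests happen where patrols go.

def simulate(rounds=5, total_patrols=100):
    # Both neighborhoods have the SAME true crime rate, but the record
    # starts skewed 9:1 by historical over-policing.
    arrests = {"A": 90, "B": 10}
    for _ in range(rounds):
        round_total = arrests["A"] + arrests["B"]
        for hood in arrests:
            # Patrols are allocated in proportion to recorded arrests.
            patrols = total_patrols * arrests[hood] / round_total
            # New arrests scale with patrol presence, not with crime,
            # so the skew in the record feeds on itself.
            arrests[hood] += patrols * 0.5
    return arrests

result = simulate()
print(result)  # neighborhood "A" keeps an arrest record ~9x "B"'s
```

Because arrests track patrols rather than crime, the initial 9:1 disparity is carried forward indefinitely even though the neighborhoods are identical, which is exactly the "data reflects over-policing" problem the talk raises.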

2021-06-14 10:44
