Reimagining Risk: A Conversation about Risk Assessments, Technology Design and Inequality

Alejandro Roark: Hello, and welcome to HTTP's Connected Communities Digital Forum. My name is Alejandro Roark, and I'm the Executive Director of the Hispanic Technology and Telecommunications Partnership. HTTP is a CEO roundtable of 16 of the country's oldest and largest Latino civil rights organizations, who work in coalition to promote access, adoption, and the full utilization of technology and telecommunications resources by the Latino community across the United States. Through our community engagement, congressional education, and by serving as a national voice for Latinos in tech and telecom policy, HTTP member organizations work to support the social, political, and economic advancement of over 50 million Americans of Latino descent by facilitating access to high-quality education, economic opportunity, and effective health care through the use of technology tools and resources.

This virtual briefing series is dedicated to exploring the intersection of ethics, technology, and public policy by engaging scholars, community thought leaders, civil rights leaders, and policymakers from diverse backgrounds to give greater context to the lived reality of Black and brown communities in our increasingly digital world. Today's special edition of our Connected Communities Digital Forum is presented in partnership with DC AI Week. AI Week is the nation's only week-long tech festival dedicated to artificial intelligence, bringing together thousands of C-suite leaders from the government, tech, and education communities across the nation with the sole focus of understanding artificial intelligence and its power to revolutionize the world around us. Before we begin, I want to invite all of our friends tuning in live on our website to join the conversation on Twitter by following @HTTP_Policy and using the hashtag #AIWeek.

Our conversation today explores how risk assessments, technology design, and public policy each have a role to play in the way that AI systems are developed and deployed, and how redefining the way we consider risk helps to center equity as a distinguishing feature and outcome of American innovation. We are joined today by Andrea Arias, attorney in the Federal Trade Commission's Division of Privacy and Identity Protection; Mona Sloane, senior research scientist at the NYU Center for Responsible AI; Vincent Le, technology equity legal counsel for The Greenlining Institute; and Bertram Lee, counsel for media and tech at The Leadership Conference on Civil and Human Rights. Thank you all very much for being here with us today.

Mona, I would love to start with you, because we have all heard about the mysterious black box that powers AI algorithms and gives machines sight. As a senior research scientist at the NYU Center for Responsible AI, and as a sociologist working on inequality in the context of AI design and policy, do you mind taking us inside the black box and sharing your perspective on how predictive analytics make decisions?

Mona Sloane: Yes, of course, Alejandro, and thank you so much for the invitation. I'm honored to be on this panel with Vincent, Andy, and Bertram, and I can't wait to delve into the conversation. As a sociologist, what I really do is look at what makes society tick. So when we talk about having a candid dialogue about artificial intelligence, I want to start with a question: what does that have to do with how we organize society at this point in time? I think the most important observation here is that AI has truly become integral to the way in which we stratify and organize society, create and recreate hierarchies, and organize our social lives.

To wrap our heads around how and why that is, I want to offer two observations for this conversation. The first, and this really is somewhat of an old chestnut at this point, is that we should be a bit clearer about what AI means, and this is actually a very current and relevant policy question, which I'm sure my co-panelists agree with. There are many takes on what AI is, but let's focus on predictive analytics specifically for the sake of this conversation. Predictive analytics are technical systems that analyze data in order to make a prediction that serves as the basis for making a decision. That decision can be made autonomously or with a human in the loop. For example, predicting what the next word in a sentence will be as you type into your phone and, based on that prediction, suggesting what the next word could be; we call this autocorrect. This is a matter of calculating the probability of an event occurring. In other words, we are dealing with statistics, or math, when we talk about AI as predictive analytics.
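To make the autocorrect example concrete, here is a minimal editorial sketch, not from the panel, of next-word prediction as probability estimation. The tiny corpus, the bigram counting, and the `suggest` helper are illustrative assumptions, not a description of any production system.

```python
# A minimal sketch of "predictive analytics" in the autocorrect sense:
# estimate the probability of the next word from counts in a small corpus,
# then suggest the most likely candidates. Corpus and names are assumptions.
from collections import Counter, defaultdict

corpus = (
    "i am running late today "
    "i am running a meeting today "
    "i am reading a report today"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word: str) -> list[tuple[str, float]]:
    """Return candidate next words with their estimated probabilities."""
    counts = following[word]
    total = sum(counts.values())
    return [(w, c / total) for w, c in counts.most_common()]

print(suggest("running"))  # e.g. [('late', 0.5), ('a', 0.5)]
print(suggest("am"))       # e.g. [('running', 0.67), ('reading', 0.33)]
```

The point of the sketch is only that the "decision" is a probability calculation over past data; the stakes change entirely once the same mechanism decides loans, bail, or benefits, as the discussion below makes clear.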
What is important to remember is that as we deploy these systems as part of organizing society, we're seeing a bigger and bigger impact. These systems are deployed in the context of loan decisions, college admission decisions, benefit decisions in the public sector, whether you get a job, whether you're released on bail, and many more. So the stakes have become much higher than autocorrect. There is actual algorithmic harm that can occur, and this harm is distributed unequally across society, much more affecting communities who are already being discriminated against. There are plenty of examples at this point. We have seen algorithms, or AI, dictating that fewer resources be distributed to sicker Black patients than healthier white patients in the healthcare sector. We've seen buying bulk food being classified as abnormal shopping behavior, causing the loss of benefits, in the context of automated fraud detection. We've seen the word "women" on a CV causing automated hiring tools to send anonymized CVs to the bottom of the barrel, as it were. And more.

As we can see, and this is my second point, there is a significant impact and therefore a significant risk related to these systems, particularly in the context of public agencies using them. As public agencies take to AI to rapidly solve issues under conditions of strapped resources, and let's not forget we're still in the middle of a pandemic, they can actually end up exacerbating the inequities I just spoke about, even though they are tasked with mitigating them. Unlike private companies, whose remit and responsibility is primarily toward shareholders in the context of profit generation, public entities must consider the entire population when delivering a solution. So the use of AI in the public sector is quite different: the public use of an algorithmic decision-making system has different requirements than a private-use product. It is acceptable for a private company to create a product that addresses the needs of, for example, 80 percent of its target market. However, if this product is translated to public use, addressing the needs of 80 percent of your constituency is unacceptable. It is also likely that the 20 percent who are not addressed will be from underserved minority groups, as we just discussed.
So this gap is rarely considered. We have seen biases manifest in federal COVID-19 funding allocation algorithms favoring high-income communities over low-income communities due to historical biases baked into the training data. So really, we're seeing this risk tied to the way in which we reimagine and rebuild the economy past the recession. If we don't want these systems to scale up and exacerbate the inequities in our society, we need guardrails, and I think we're in agreement on that, as regulators on both sides of the Atlantic grapple with that task; the European Commission just released its proposed regulation for AI last month, and we could talk later about its crown jewel, the classification of so-called high-risk AI systems.

I want to throw in five points of conversation here, which are all grounded in ongoing research and projects that I'm doing. Number one, I think we need distributed accountability mechanisms, and as part of developing those, we need to talk a little bit more about product liability regimes. Second, we need to mandate rigorous AI impact assessments prior to the development of an AI system, particularly systems that are used to assess people. Third, and similarly, post-deployment we need socio-technical audits, and I'm happy to talk about what I mean by that, that holistically assess a system beyond the simple question of whether or not it worked as intended. Fourth, we need innovation in public procurement to establish those accountability mechanisms, impact assessments, and audit mechanisms, and also cultures. And fifth, I really think we need to invest in capacity building and knowledge sharing for public servants and those who want to have a career in the newly forming space of public interest technology. I'm going to leave it at that. Thank you so much.

Alejandro Roark: Of course. Thank you, Mona, for that grounding for our conversation; those are all great points for us to think about and a great jumping-off point. Bertram, I know that you've spent a lot of time thinking through how to advance the interests of historically marginalized communities in tech and media policy, and I know that you have some specific thoughts about the term "risk," and about our framing of risk assessments to achieve either corporate or social outcomes. Do you mind sharing a little bit about your work at the Leadership Conference and how we can begin to redefine risk?

Bertram Lee: Absolutely. Thank you, Alejandro, and thank you to HTTP for hosting a conversation, for DC AI Week, on artificial intelligence and its impact on communities, particularly historically marginalized communities such as people of color, women, immigrants, and LGBTQ individuals. I'm Bertram Lee, a policy counsel at The Leadership Conference on Civil and Human Rights, a coalition of more than 220 national organizations working to build an America as good as its ideals, and one of those ideals is to make sure that new technologies further, not hinder, civil rights protections.

I think one of the ways to frame AI and its impact on marginalized communities is just to think about how AI currently affects a number of traditional civil rights issues and spaces, including but absolutely not limited to education (Title VI of the Civil Rights Act), employment (Title VII of the Civil Rights Act), credit (the Fair Credit Reporting Act and the Equal Credit Opportunity
Act), housing (the Fair Housing Act), and criminal justice, where there are significant Sixth Amendment issues with the current use of AI within the criminal justice system. So one of the ways in which we think about AI is: where do we get to compliance? I think that is the issue that vexes many within the civil and human rights community: how do we get to a point where we know that the AI used in these traditionally protected civil rights areas is compliant with, if not the exact letter, then the spirit of civil rights law? How do we make sure that's transparent? How do we make sure that companies know their obligations under civil rights law and don't hide behind any variety of business reasons to engage in non-compliance? That's point number one.

Point number two is that the Leadership Conference has worked on these issues since 2014. In 2014, the Leadership Conference, along with a number of partner organizations and coalition members, put together the Civil Rights Principles for the Era of Big Data. Those principles were updated in 2020; you can find them linked not only on my Twitter but on the Leadership Conference page. The key idea of those principles is to make sure that technologies work for marginalized communities just as well as for the rest of society, and that these technologies do not discriminate based on protected characteristics, which is a core element of how we've interpreted civil rights law since the 19th century, if you really want to look at the civil rights acts that were passed in the wake of the Civil War.

And lastly, there are a number of issues we're concerned about as it pertains to technology and how we think about risk. One of the ways you can think about risk is: who is it risky toward? When you're talking about what the risk is, is it a risk for the company, or is it a risk for an individual? What we've been talking about as it pertains to risk, and what I think the larger conversation misses, is that there have been risk assessment tools for, say, whether someone may not pay back a loan, or whether someone released back into society will commit another crime or not show up in court. But we don't talk enough about the risk to the people who are put into that system, and the risks they face from not being able to be out. For instance, on pre-trial risk assessments, if you are not allowed to go out and continue to go back to your job while you are still within the criminal legal system, that has an economic impact for generations. It impacts not only the individual but the individual's family and the individual's children, and potentially their children's children, if they do not find themselves out of the criminal legal system within the proper context. So we don't think about the risks that these algorithmic technologies pose to marginalized communities, and we need to change that.

And lastly, I think we need to think about accuracy, and we need to think about bias, within the context of how these things are working for communities that are not only protected by law but also protected by the social contract that we have.
We have a collective contract with one another that society works fairly, and America is building an America as good as its ideals. It is hard to change an individual's standpoint about race, or class, or the inherent biases that we all come with; that's hard to change for an individual. That is much easier to change for a computerized system, and that is something I think we can hold people to a high standard for. And when we're talking about these contexts, it's important to say that there is a significant human cost, not only costs to individual lives, but taxpayer dollars and even corporate dollars. The less diverse a corporation is, as McKinsey's study from a few years ago found, the less money it makes. There's a cost to engaging in these kinds of discriminatory algorithmic practices that we're just not talking about. Within the context of risk, it's a risk to everybody to continue engaging in algorithmic practices that are not only biased but inherently unfair, and that's something we need to think about moving forward.

Alejandro Roark: Absolutely, thank you so much for that. Mona, I actually want to go back to you, because Bertram raises a really great point: a lot of the way these automated systems are developed today, they really are focused on understanding or mitigating risk. But I think it's important for us to underscore: risky for whom? Who are they protecting? And what are the processes or social assumptions that inform the development of this technology? I wonder if you might have some insight into the current state of play for the development of these technologies.

Mona Sloane: Oh, that's a big question; let me try. I think what you said there is really important, which is the question of what kinds of assumptions are baked into the technology, which is a different question than asking whether there is simply bias in the data set that trains these systems. When we ask what kinds of assumptions about society and certain communities are baked into the system, we can ask questions about what the system is optimizing for, who it is optimizing for, and in what way. For example, if you're optimizing for the reduction of fraud in a public benefit system, your model will optimize for that and will try to find as many fraud cases as possible, which is likely to cause harm for communities who are already discriminated against. So that's one thing.
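As an editorial illustration of that optimization point, the sketch below uses made-up fraud scores and group labels to show how a flagging threshold chosen purely to catch as much fraud as possible also raises the rate at which legitimate claimants are wrongly flagged, and how unevenly those errors can fall across groups. Every number, score, threshold, and group label here is an assumption for the sketch, not data from any real system.

```python
# A minimal sketch (synthetic data, illustrative only): a benefits-fraud model
# tuned to catch more fraud (lower flagging threshold) also flags more
# legitimate claimants, and those errors can land unevenly across groups.
cases = [
    # (risk_score, is_fraud, group)
    (0.92, True,  "A"), (0.81, True,  "B"), (0.74, False, "B"),
    (0.69, False, "B"), (0.66, True,  "A"), (0.58, False, "B"),
    (0.55, False, "A"), (0.47, False, "B"), (0.41, False, "A"),
    (0.33, False, "A"), (0.28, False, "B"), (0.15, False, "A"),
]

def false_positive_rate(threshold: float, group: str) -> float:
    """Share of non-fraud cases in `group` that get flagged at this threshold."""
    legit = [score for score, fraud, g in cases if g == group and not fraud]
    flagged = [score for score in legit if score >= threshold]
    return len(flagged) / len(legit)

for threshold in (0.8, 0.6, 0.4):
    fpr_a = false_positive_rate(threshold, "A")
    fpr_b = false_positive_rate(threshold, "B")
    print(f"threshold={threshold:.1f}  FPR group A={fpr_a:.2f}  FPR group B={fpr_b:.2f}")
```

The choice of what to optimize, and where to set the threshold, is exactly the kind of assumption the panel argues should be surfaced before deployment.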
The other thing is that when we talk about risk, and risk for whom and to whom, then when we look to the corporate side we can say we've essentially got two streams. First, there is reputational risk for companies, and we've seen that, for example, in recent months with Google and the firing of Dr. Timnit Gebru and Dr. Margaret Mitchell. There are significant reputational risks associated with the deployment of these technologies when it comes out that they are actually discriminating against communities; that is a risk to the corporation. I will say, though, that flagging risk is labor that rests on the shoulders of those who are experiencing the harm; only when harm occurs and is flagged by these communities does it become known to the corporation. The other stream is regulation. Corporations understand regulation as a risk, and we've seen a fair bit of that happening with the European Commission having just put out its proposed regulation, and I would love to hear Andy's thoughts on that from the FTC side. But that's another risk stream we are seeing. So there are all these moving parts. I think we're at a point where we need to really sit down and wrap our heads around that, and as Bertram has said, we need to bring folks to the table and define a workable definition, and we also need to develop socio-technical literacy. We need folks to understand that these are not just technical systems, but systems where technology and society work on each other; it's not just one way. I'm going to leave it at that; I hope this was a sufficient answer.

Alejandro Roark: Yes, of course. I think that's a great segue to Andrea. Andrea, we know the speed of innovation continues to transform our lives at a rapid pace, and the Federal Trade Commission, whose charge is to protect consumers, has been working to understand and define consumer harm in our new digital reality. I know that you were one of the key drafters of the FTC's big data report, and I wonder if you could share some of the main consumer protection issues raised by algorithms and maybe provide some insight into the legal tools that the FTC has at its disposal to curb that consumer harm.

Andrea Arias: Sure. Hello, everyone, and thank you for having me today; it's a pleasure to be here. Before I begin, though, I have to disclose that today's comments are my own and do not represent those of the Commission or any one of its commissioners. That aside, I'm really glad that we're here today discussing the impact of AI on multicultural communities, not because I'm glad this is happening when it's harmful and not beneficial, but because these are serious issues that require serious attention.

As you all know, while the sophistication of AI and machine learning technology is new, automated decision-making is not, and we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers. Very briefly, for those of you who do not know, the Federal Trade Commission is a highly effective independent agency with a broad mission: to protect consumers and maintain competition in most sectors of the economy. On the consumer protection side, our matters range across a variety of issues, from student debt relief scams to various types of health advertising and many others. But today we're here to talk about privacy and data security, and in particular AI.

The FTC has been the primary federal agency charged with protecting consumer privacy since about 1970, believe it or not, with the passage of the Fair Credit Reporting Act, as Bertram alluded to earlier. From the growth of the internet to the mobile device explosion to the arrival of IoT and artificial intelligence, we have continuously expanded our focus on privacy to reflect how consumer data fuels these changes in the marketplace. Over the years, the FTC has brought many cases alleging violations of the laws we enforce involving AI and automated decision-making, and we have investigated numerous companies in this space. For example, the Fair Credit
Reporting Act and the Equal Credit Opportunity Act, also known as ECOA, enacted in 1974, both address automated decision-making in financial services, and we have been applying these laws to machine-based credit underwriting models for decades. We have also used Section 5 of the FTC Act, which prohibits unfair or deceptive practices, to address consumer injury arising from the use of AI and automated decision-making. So I thought it would be helpful to give you a little bit of an understanding of these laws so that we can understand how they apply in this space.

First, let me take Section 5 of the FTC Act, which prohibits unfair and deceptive practices. How does this work in the AI space? In appropriate circumstances, the Commission could challenge the use of a discriminatory algorithm as either unfair or deceptive, or both. If, for example, a company used an algorithm that generated biased results impacting consumers' access to a service and failed to take reasonable steps to correct that bias, the FTC could use its Section 5 authority to challenge those practices as unfair. Or if a company represented on its website, user interface, privacy policy, or elsewhere that it used an algorithm to achieve fair, unbiased results when in fact it systematically discriminated by race or other characteristics, the FTC could use its Section 5 authority to challenge those representations as deceptive.

Now, the Fair Credit Reporting Act, or FCRA, is also relevant to algorithmic discrimination where the algorithm affects the content of a consumer report. The FCRA applies to companies known as consumer reporting agencies, or CRAs, that compile or sell consumer reports, which contain consumer information that is used or expected to be used for credit, employment, insurance, housing, or other similar decisions about consumers' eligibility for certain benefits and transactions. Under the FCRA, and I won't get into too much detail, there are several requirements that CRAs and furnishers of information have to implement. So let's say a CRA relied on an algorithm to generate credit reports without maintaining reasonable procedures to ensure the maximum possible accuracy of those reports; the FTC could challenge that conduct as a violation of the FCRA. And we have done so, for example, in our most recent case against RealPage in 2018, a company that deployed software tools to match housing applicants to criminal records in real time or near real time. We alleged they violated the FCRA by failing to take reasonable steps to ensure the accuracy of the information they provided to landlords and property managers.

Finally, we have ECOA, which is just a small piece of a variety of federal equal opportunity laws, including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Fair Housing Act, and the Genetic Information Nondiscrimination Act; there's a slew of laws we at the FTC enforce. ECOA prohibits credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance. How does that work in the AI space? If, for example, a company made credit decisions based on a consumer's zip code, resulting in a disparate impact on particular ethnic groups, the FTC could challenge that practice. Or if a lender refused to lend to single persons, or offered them less favorable terms than married persons, because an algorithm indicates that single persons are less likely to repay loans than married persons, the FTC could challenge that as disparate treatment.
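To illustrate the zip-code example in editorial terms, here is a minimal, hypothetical sketch of proxy discrimination: a rule that never looks at a protected attribute can still produce very different approval rates across groups when the feature it does use is correlated with group membership. The zip codes, scores, thresholds, and group labels are all invented for illustration and are not drawn from any FTC matter.

```python
# A minimal sketch (synthetic data, hypothetical zip codes) of the proxy
# problem: a facially neutral rule keyed to zip code can still produce a
# disparate impact when zip code correlates with a protected class.
applicants = [
    # (zip_code, group, credit_score)
    ("10001", "group_1", 700), ("10001", "group_1", 640), ("10001", "group_2", 710),
    ("10002", "group_2", 690), ("10002", "group_2", 650), ("10002", "group_1", 705),
    ("10002", "group_2", 660), ("10001", "group_1", 720), ("10002", "group_2", 645),
]

def approve(zip_code: str, credit_score: int) -> bool:
    # Facially neutral rule: a higher score bar for the "riskier" zip code.
    bar = 650 if zip_code == "10001" else 700
    return credit_score >= bar

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a[1] == group]
    approved = [a for a in members if approve(a[0], a[2])]
    return len(approved) / len(members)

print(f"group_1 approval rate: {approval_rate('group_1'):.2f}")
print(f"group_2 approval rate: {approval_rate('group_2'):.2f}")
```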
Very briefly, and I think Alejandro alluded to this: to complement our enforcement actions, we also engage in a variety of policy initiatives. As Alejandro noted, in 2016 we released our report "Big Data: A Tool for Inclusion or Exclusion?", which examined the big data industry and gave companies key advice on how to reduce the opportunity for bias when using AI, and I hope in our conversation we get to talk a little bit more about that today. In 2018, we held a hearing regarding the consumer welfare implications associated with the use of algorithmic tools, artificial intelligence, and predictive analytics as part of our hearings on competition and consumer protection in the 21st century. In those hearings we heard from dozens of participants about the potential benefits of these technologies, which can lead to advances in medicine, education, health, and transportation, but which can also lead to significant risks, including algorithmic bias resulting from operators' assumptions and imperfections in data sets. And then more recently, as I think everyone here has seen, we published two blog posts, "Using Artificial Intelligence and Algorithms" in 2020 and "Aiming for truth, fairness, and equity in your company's use of AI" in 2021, to provide guidance to companies on how to effectively use AI without having a disparate impact on multicultural communities. I encourage everyone to read them; all of our reports and blog posts are free, just go to ftc.gov and you'll find them there. So, to briefly wrap that up, I hope we have more conversations about some of the guidance in all these reports and policy initiatives we've put out. We really do believe that the FTC's law enforcement actions, studies, and guidance can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms, and I'm happy to chat more about the specific guidance the FTC has given these companies on how to effectively use AI.

Alejandro Roark: I very much appreciate that, and yes, I think at least the panelists who are here with us today have read that blog post; we're definitely receptive to it, and it's definitely making the rounds. So I'm happy to loop back around and talk about some of the specific guidance that you offer as part of that new framework. But I want to go to Vincent real fast, because, Vincent, over the last decade algorithms have really replaced, as has been mentioned, decision makers at all levels of society. Judges, doctors, hiring managers: they're all shifting their responsibilities onto powerful algorithms that promise more data-driven, efficient, more accurate, and even more equitable decision-making. However, we know that today poorly designed algorithms are also amplifying systemic racism by reproducing patterns of discrimination and biases of the past. So my question to you, and to your work at The Greenlining Institute, is: is there a fix to algorithmic bias, and if so, how can we build more equitable automated decision-making systems?

Vincent Le: Yeah, thank you, Alejandro, and thank you for inviting me and for having me on here with everyone else. I'll say real quick: I'm Vincent, legal counsel with The Greenlining Institute. We were formed to combat the practice of redlining, which is
the illegal practice of denying services to communities of color on the basis of race or related characteristics. One big focus of that work is making sure we don't recreate redlining in algorithmic systems, and how that impacts economic opportunity: in banking, housing, healthcare, finance, and education, AI is making all these decisions. So one of the big focuses for us at Greenlining is building in what we call algorithmic greenlining. AI is really good at optimizing for, say, profit, or finding out who's committing fraud, but it's not really good at optimizing for shared societal outcomes: how do we limit polarization, how do we help the climate, how do we make sure the racial wealth gap closes? One of the key original goals of this work was to build that into systems, to create the incentives and the societal structures where companies are incentivized to internalize those externalities, to think about, when they're optimizing their algorithm, how it is impacting society. How can we maybe make two percent less profit but have ten percent better outcomes for the communities we serve?

That's one of our big goals, and we do that in a lot of different ways. We do a lot of legislative work where you can get more state investment dollars if you are directing investments to the communities worst hit by climate change; that's an algorithm called CalEnviroScreen that Greenlining helped develop in California, which has directed billions of dollars to communities worst hit by the impacts of pollution. In building that system, we made sure we included metrics and data that really captured the worst impacts of redlining in terms of quantifying who we are going to give money to and which communities are going to get funding. So that's one solution, and that was the original goal of our work at Greenlining, but now it's very much focused on algorithmic bias, and it deals with a lot of what Mona brought up: how do we make sure we build in the systems for accountability, and how do we make sure there's liability when you are creating disparate impacts? Another part is that I do think the FTC is on the ball, but a lot of other agencies aren't, and just one agency is not enough to capture all the aggregate bias, to investigate, to create the culture of compliance that Bertram was talking about. So what we want to do is build capacity at state, local, and federal organizations to tackle disparate impact, to require impact assessments if you want any federal money or any public procurement of an AI system, and then eventually to begin to shift the legal standards around what counts as a disparate impact and what counts as a reasonable business necessity that justifies discrimination and the outcomes we consider unfair for communities of color.

Alejandro Roark: Absolutely. So, Bertram, I'd love to loop you back in, because I feel like we're really getting to the question of how we ensure that our long, hard-fought, and settled civil rights laws are clearly translated and enforced in emerging digital tools, and I wonder if you might offer some perspective on how we accomplish that.
Bertram Lee: I mean, it's through the regulatory process. These laws have been on the books for decades, and they've changed and evolved over time. For instance, the EEOC's four-fifths rule was instituted in the 1970s, and that's really the basis of a lot of hiring algorithms right now, but I don't necessarily think that rule is equitable, and I don't know whether that rule is correct for the current algorithmic space we work in, so I think there are open questions about that. To add another example, in housing, during the last administration, Housing and Urban Development put out a disparate impact rulemaking that specifically mentioned algorithms. Agencies have jurisdiction over these spaces.

So there are a few ways to think about AI regulation, and many ways to go about it. One way is to think about it as regulating all algorithms, which is the European model I've heard about; another is more like the FAA model, or the medical device model, where you take specific portions of AI and specific use cases and look at whether the system is working the way it says it's going to work. For example, there is a complaint against HireVue, whose technology claims it can assess a job applicant's suitability through its facial recognition system. Whether the algorithm does what it says it does is yet to be determined; I don't know where the FTC is on that, and this isn't to single out HireVue, but whether the algorithm does what it says it does is an open debate. Within the regulatory structure, though, there are already laws on the books that apply if you have a disparate impact. We've said "disparate impact," but let's define it: disparate impact means that a practice that is neutral on its face has a disproportionate impact on a protected class. That's something to keep in mind as we're talking about this, because this isn't to say that there is intent behind it; intent is not necessary for disparate impact analysis. The point is that the actions you take have a disproportionate impact on protected classes and that there is a less discriminatory means and manner by which to do the same thing. Then it goes back and forth over business necessity, but that's a different legal debate.
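For readers unfamiliar with the four-fifths rule Bertram Lee mentions, here is a minimal editorial sketch of the conventional adverse-impact check: compare each group's selection rate with the highest group's rate and flag ratios below 0.8. The applicant counts below are made up for illustration; this is a rough screening heuristic, not the full legal analysis of disparate impact or business necessity.

```python
# A minimal sketch (synthetic numbers) of the four-fifths (adverse impact
# ratio) check used as a rough flag for possible disparate impact in hiring.
selected = {"group_1": 48, "group_2": 18}   # applicants advanced by the tool
applied  = {"group_1": 100, "group_2": 60}  # applicants screened

rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

Whether a check this simple is still the right yardstick for modern algorithmic hiring is exactly the open question Bertram raises above.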
But it's important to keep in mind that we have the regulatory tools to do this. Something Vincent brought up that's really important, though, is that we have not seen agencies do a lot to regulate AI in the current context, and there is a lot of space to do more. We look forward to working with these agencies to build up not only their AI expertise but how they think about AI within the context of civil rights, and this goes from regulation to rulemaking across the board, across a litany of different spaces.

And this builds to the second way to think about the regulatory process, which is that in every single area we want different AI regulations. Education may look different than housing, which looks different than employment, which looks different than the criminal legal system, which looks completely different than health care. All of those areas have different data sets, and all of those areas have different contexts for how the law has interpreted not only what disparate impact is but also what discrimination looks like; financial services has a different bar than Title VII. That's where the law is, but again, how do we get to compliance with where the courts have been and how they interpret discrimination? I think that's the more reasonable and feasible approach, because these industries and corporations already have a relationship with these regulators. Ultimately, as much as I would like to dictate to companies what to do with their algorithms and how not to be discriminatory, the reality is that the best way to move forward is to make sure there's some buy-in with these companies and make sure they know that you can actually make more money by not discriminating, which is something we've all said, but it's very hard to adjust; it's like making a right turn in an aircraft carrier for a lot of these corporations, and it's even harder for government. So those contexts are really important when we think about how we engage in that policymaking. The Leadership Conference has done a lot of work on this: we have principles that we've put out on a number of occasions, and we engage in comments and in the regulatory process to push these things forward. There are a number of ways to think about these things and a number of ways to move forward, but we do need regulation, and honestly, the industry is craving regulation, because all it takes is one court case for the whole industry to have to modify how it engages in certain practices. We'd rather work with industry to try to get them to compliance, because this is the current law. It's not a new law that needs to be passed; it's not what we should be doing or would like to have done; it's not some moral idea of where fairness comes from. It's the law. How do we get everyone to a place where we are transparent about the laws with which folks need to be compliant, and also about how they're being compliant? I think the regulatory process is the best way to go forward on that.

Alejandro Roark: And Vince, it looked like you wanted to jump in there; did you have something to add?

Vincent Le: Yeah, just to Bertram's point, that's what we're trying to do in California with legislation we're working on that would require anybody trying to sell AI to the government to disclose whether they've done a disparate impact analysis. The original idea of that bill is to make companies disclose: have we done this test, and how likely are we to be compliant with all of these laws that already exist? And California as a state would be able to spend its significant purchasing power on the
products that have been more tested and have gone through the audits. We're trying to create a system now where we encourage companies to do that, and that's just the first step. What I see as a better way to deal with disparate impact is to have companies disclose it on the front end, rather than having an FTC investigator come to you and look through all your data. If you have an AI system, you can audit it on the front end and say: this is the disparate impact of our system among these protected characteristics, and this is our reasonable business necessity that justifies that disparate impact. Just put that out there up front, and then you can sell that product to any private entity or government entity, and it's there; you've proven it on the front end. That's where I would like to see us go, but we're starting small, trying to incentivize companies to even do these impact assessments. And I will say one thing: I was on a call with a lot of companies about this bill, trying to get industry on board, and we had a lot of industry folks come in and say, "Yeah, we want regulation." When I said, "Okay, well, let's do this," they said, "Well, if we measure a disparate impact, then we're liable for it." And I said, "Well, yes, that was the point." So we have some ways to go, but I do think regulation is also the way to create those business incentives I was talking about earlier.

Alejandro Roark: So Andrea, I'd love to come back to you, because I'm really curious to see, from your perspective, how well the FTC's current enforcement tools address potential bias against race, gender, and other protected designations within emerging algorithmic decision-making or targeting tools. And, as we're coming to the top of the hour, could you also share some of the guidance that you've recently issued to companies that are developing these technologies?

Andrea Arias: Yeah, I'm actually going to take your question the other way around, because I wanted to jump into what Vincent and Bertram were talking about, which is that we're talking a little bit about a carrot-or-stick model. We're trying to incentivize companies by telling them why they should be doing these things, but sometimes we need to use the stick, and I think the blog posts that we recently put out are that stick: if the carrots are not working, we're going to tell you the sorts of things you should be thinking about, because if you don't, then we're going to use the tools that we have at our disposal now to bring an enforcement action against you. So let's talk about some of the recommendations that we've made so that companies can avoid the stick from the FTC. We did put out those two blog posts, and the 2016 big data report, which I recommend all companies looking at this space really think about, because I think it has some good guidance and some minimum standards, though I don't want to call them formal standards, for what companies should be doing when they're using AI to make some of these decisions.

First, companies need to start with the right foundation. I think Moana... Mona, sorry. I have a four-year-old and obviously we watch a lot of Moana, so that's on my brain. Mona, you
mentioned pre-deployment, and I think that's exactly what starting with the right foundation comes down to: what should companies be doing before some of these systems are deployed? Some of that is set out in our 2016 big data report. How representative is your data set? You should be asking yourself that. You should be asking: does your data model account for biases? Really be testing it on the front end, as has been mentioned. How accurate are your predictions based on the big data? This is post-deployment, as Mona mentioned; you need to be testing even after you've deployed and thinking about some of the predictions being made by those models. And finally, does your reliance on big data raise ethical or fairness concerns? Again, post-deployment, really be thinking about the effects of some of these uses of AI on a variety of communities. By asking these questions, a company will be able to determine whether its data sets are missing information from particular populations, those "data desert" questions, and as such may yield results that are unfair or inequitable to a legally protected group.

Second, companies should be watching out for discriminatory outcomes. How can a company reduce the risk of becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It's essential to test those algorithms both before you use them and periodically after that, to make sure the algorithms don't discriminate on the basis of race, gender, or another protected class.

Third, companies should embrace transparency and independence. Vincent, I think you were mentioning a little bit of this before, but as you develop and use AI, you should be thinking about ways to embrace transparency and independence, for example by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection. We've actually seen, in a variety of ways, how this has led to finding some of these biases even when companies themselves had been testing and not finding them.

Fourth, companies shouldn't exaggerate what their algorithms can do or whether they deliver fair or unbiased results. Say an AI developer tells its clients that its product will provide 100 percent unbiased hiring decisions, but the algorithm was built with data that lacked racial or gender diversity; that may eventually lead to deception or discrimination, and an FTC law enforcement action under Section 5 of the FTC Act.

Fifth, companies should tell the truth about how they use data, or face a potential deception action from the FTC.

Finally, companies should aim to do more harm than good. Oh my goodness, wait a minute: companies should aim to do more good than harm. If a company's model causes more harm than good, that is, in Section 5 terms, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition, and I know that's a mouthful, then basically make sure that you're doing more good than harm, because otherwise the FTC can challenge your use of that model as unfair.
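Picking up Andrea Arias's first question, "how representative is your data set," here is a minimal pre-deployment sketch that compares group shares in training data against a benchmark population and flags large gaps. The shares, the benchmark, and the five-point tolerance are editorial assumptions for illustration, not FTC guidance or real figures.

```python
# A minimal sketch (made-up proportions) of a representativeness check:
# compare group shares in training data against a benchmark population and
# flag large gaps before deployment. Benchmark and tolerance are assumptions.
training_share  = {"group_1": 0.72, "group_2": 0.18, "group_3": 0.10}
benchmark_share = {"group_1": 0.60, "group_2": 0.25, "group_3": 0.15}
TOLERANCE = 0.05  # flag gaps larger than five percentage points

for group, benchmark in benchmark_share.items():
    gap = training_share.get(group, 0.0) - benchmark
    status = "underrepresented" if gap < -TOLERANCE else (
        "overrepresented" if gap > TOLERANCE else "roughly representative")
    print(f"{group}: training {training_share.get(group, 0.0):.0%} "
          f"vs benchmark {benchmark:.0%} -> {status}")
```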
So, obviously, we have a lot of tools: we have Section 5, we have the FCRA, we have ECOA, and obviously we are committed to vigorous enforcement of the laws in our charge. But these laws may not always apply to the use of biased algorithms in some circumstances. For example, when the company controlling an algorithm is behind the scenes, which it often is, separated from the consumer by a service provider or other relationship, there may be no misleading statement or omission that the Commission can challenge using our Section 5 deception authority. Similarly, a company whose algorithm determines how healthcare resources are allocated may not be covered by the FCRA if the algorithm is not used to make eligibility determinations for individual consumers. In addition, even if the FTC Act, the FCRA, or ECOA were to apply, and we were to find an instance of algorithmic bias to be deceptive or unfair or somehow in violation of the FCRA or ECOA, an enforcement action may not adequately deter companies from engaging in these practices. The FTC Act, for example, does not give the Commission the authority to impose civil penalties or to issue rules related to privacy or data security. These and other limitations, such as limited jurisdiction, since there are some companies that simply do not fall within the FTC's purview, hamper our ability to deter unfair or unjust practices with respect to privacy and security, which may relate to algorithmic bias. This is why we have urged Congress to enact comprehensive privacy legislation that would give the FTC the ability to seek civil penalties for first-time violations, make rules under the Administrative Procedure Act, and exercise jurisdiction over common carriers and non-profits. Obviously, we'll continue to enforce the laws that we have, but there are some gaps where I think legislation and Congress could really do a lot of good rather than harm, just like I mentioned before.

Alejandro Roark: So, Mona, I'd like to come back to you, because I know that you've done a lot of thinking specifically about what types of technical, policy, or regulatory frameworks we need to establish in order to give the people building these systems better tools to achieve more equitable outcomes. Do you have any reactions to some of the efforts of the FTC?

Mona Sloane: I'm going to keep it a bit more general and just connect to what everybody has said, including what Andy just said, and add on to the not-a-mountain but not insignificant list of things we should do, which is to expand our view a little bit when we talk about audits specifically. Audits can be a tool to continually manage risk and harms post-deployment, and as part of an audit, as part of what I earlier introduced as a socio-technical audit, I think we need to find ways to ask questions again about the assumptions that are baked into these technologies. That includes asking whether or not we actually want improved accuracy for certain technologies. Do we actually want to improve accuracy for technologies that are deployed against communities of color, for example? Do we actually want to improve technologies that we know are based on eugenicist theories? I'm going to give you an example and stay with the hiring tool example. If we preclude questioning the underlying assumptions and politics of any given technology, then we really open up a space for anything to happen, and we open up a space for companies to set their own standards in terms of what an audit means. There is, as I said, a very well-known, well-documented, and well-criticized
history of physiognomy, and this history shows that bodily features and abilities bear no significance for either personality or ability, let alone future job performance. Science that claims otherwise is not science; it is essentially eugenicist, which means that technologies built on that assumption not only perpetuate but also scale eugenicist thought and practice. This is a different issue than asking whether or not your data set is biased, or whether or not your algorithm works as intended, and this is really where we run into these kinds of questions. Which leads me to the second point, which again is literacy. How can we bring together the calls for regulators, policymakers, and decision makers to be more technically literate with the need to be more socio-technically literate: where do these technologies come from, and what do they mean? And how do we also bring that literacy to the people who create technologies? I'm a professor at an engineering school, a social scientist who teaches engineers, and I can tell you the future generation of engineers really is hungry for that literacy. They are waiting in the wings to do things differently and better, and we have to get our act together and create pathways, educational opportunities, and jobs to make that happen.

Alejandro Roark: Absolutely, thanks, Mona. We're reaching the top of the hour, and I want to end the conversation with this question, to everyone on the panel, as closing remarks: what is our time horizon to set up and implement these new inclusive AI frameworks before we've reached the point of no return, before we have gotten to a place where there are more systems creating harm than there are creating good or redressing that harm? Vincent, I'm happy to kick it off with you.

Vincent Le: I will say there's that Chinese proverb: the best time to plant a tree was 20 years ago; the second best time is now. I definitely feel like we could have started 10 years ago; we could have been where the EU is now, so we're starting from a little bit behind. These harms are already happening, so I'm not going to say there's a time horizon to prevent that, because it is already happening. But I would want to see a lot of movement in the next two years to start establishing a comprehensive privacy regulator, whether it's at the FTC or another agency, and getting all of the other agencies that each have their own domain, housing, employment, health care, really empowered to do this work, to get the funding to do this work, and to hire the right people to do this work.

Alejandro Roark: What about you, Bertram? Any closing remarks?

Bertram Lee: I agree with Vincent: we could have done this 20 years ago easily, but the second best time is now. I think the point of no return is going to be in about five years. Even within the past decade, if you look at the Michal Kosinski paper that framed the Cambridge Analytica story, the level of accuracy that companies had about the personal characteristics of marginalized communities was at 95 percent back in 2010.

I can only imagine what that statistic is now, and I can only imagine what's going to be possible five years from now. We're competing against the exponential growth of the technology industry and of algorithmic intelligence, of artificial intelligence, I should say. So I think that if we do not do something soon, it is going to put marginalized communities even further behind than they already are. And so, again, the best time to engage in this conversation is yesterday.

Alejandro Roark: Thank you. Andy, Mona, closing thoughts?

Andrea Arias: I'm not sure I can comment on a time horizon; I unfortunately left my crystal ball somewhere in the garage, so sorry about that. But I can comment on the fact that the FTC has been paying close attention in the past, we are looking closely now, and we're ready to provide guidance when appropriate, but also to put our enforcement tools to use when appropriate, whether they're the existing tools or new tools that we develop in the future. We take our mission to protect consumers in light of changing technology very seriously, and we'll continue to do so.

Alejandro Roark: Thanks so much.

Mona Sloane: Yes, I'm just going to second all of that and say, yes, the time was yesterday. And I'm going to add that I really think the time now is for harmonization, for a dialogue with other countries and with the European Commission, to really create a space in which these global technologies are discussed on a global level. That's one thing. The other thing I want to really underline is that we're dealing with an incredibly powerful industry here, and that's one to reckon with. If we don't look at it that way as well, where do these algorithms come from, who do they generate profit for, and at what scale, we're not going to see the whole picture, and we might need Andy's crystal ball.

Alejandro Roark: Well, thank you all again for a great conversation. Unfortunately, we are out of time, but I encourage our audience to please follow Andrea, Mona, Bertram, and Vincent on their social media; I'll make sure to put their information in the video description below. And please also make sure to follow HTTP on our social channels at @HTTP_Policy to be the first to hear about our upcoming conversations presented as part of our Connected Communities Digital Forum. Until then, please continue to tune in to the conversation, speak out, and stay connected, because the future needs our help. So until next time, have a good one.

