AI Fairness and Bias

Welcome to our session on AI fairness and bias. We're really excited to be talking about the issues around AI that aren't so glorious and exciting as so much of what else is going on at Sight Tech Global. Here we're talking about some of the wider impacts that the increased use of AI actually has on people with disabilities. Our two panelists have spent a lot of time documenting some of those impacts, so let's start with Lydia. Can you share a little of the research that you've been doing, and give us some examples of how AI and machine learning have had a less than favorable impact on people with disabilities?

Thanks, Jim, so much for asking that. Disabled people experience such widespread discrimination in society that we have a name for it: we call it ableism. Ableism is what happens when prejudice and bias against people with disabilities meet systems of power that reinforce and perpetuate those biases and prejudices against disabled people. And those prejudices are built deeply into algorithmic decision-making systems. One of the areas in which this shows up is the context of public benefits. Many disabled people, for reasons directly related to disability, rely on benefits to move through the world: because we face record-high levels of unemployment, precarious employment, and underemployment; because we are disproportionately likely to experience homelessness; and because we may have very specific physical and mental health care needs that are not readily met through existing service systems. We might rely extensively on access to public benefits in a number of ways: to cover housing, to cover necessary services and care, and to cover the gaps where employment and the lack of supportive and accommodating work environments leave us. And in public benefits, what we've seen in the United States is that thousands of disabled people have been affected by state governments' increasing adoption of benefits determination systems that are driven by algorithms. Those algorithm-driven determinations tend to result in cuts to people's benefits across the board, and there are different ways this happens.

One of the contexts we've been writing about more recently is Medicaid, where people who receive Medicaid benefits receive care hours that allow them to stay at home and live in the community, to keep work if they have a job, and to receive whatever care they need to be able to live a meaningful and supported life, at least when care is provided correctly, by caregivers and support workers who respect you, and when you have adequate funding that subsidizes or outright covers the cost of that care for you. What we've seen is that algorithm-driven benefits determinations cut people's access to those forms of care. Someone who previously needed perhaps 56 or 70 hours of attendant care per week, to do everything from eating, to taking medication, to turning or repositioning so they don't get bed sores and can engage in different activities, and everything else imaginable, is now being told, well, you're only approved for 30 hours of care a week, or you're only approved for 25 hours of care a week. That number might seem arbitrary to a non-disabled person who doesn't understand what it means to rely on such services, but for disabled people that cut can result in dangers to health and to safety, as well as a severe deterioration of one's quality of life. And even worse, if your services have been cut so drastically that you don't know for sure when you're going to eat, or be able to use the bathroom, or have support to go out into the world and visit the store or meet with friends (in a non-pandemic, of course), or do the other things that people like to do to live life, then you might actually fear that you would have to go into an institution or a congregate care setting to receive the very same care and support that you should have been able to, and were able to, receive at home, in the community.

Those cuts to care really evince how fundamentally flawed our benefits system is: one that relies on an entitlement system that we don't fully fund; one where you have to be able to prove that you're sufficiently disabled, or disabled in the right way, to receive the right services; one that relies on non-disabled people's judgment of what your needs are and non-disabled people's beliefs about your ability to communicate and express your needs; and, lastly, one that places people in a deep bind, where we're often forced to choose between accepting some level of services that may be inadequate and unhelpful, just to be able to stay free and live in the community, or risking going into an institution and technically, on paper, receiving more services, but being subjected to an infinitely more abusive and potentially neglectful environment. And those cuts, which can affect all people with disabilities who rely on Medicaid-type services, will of course end up harming disabled people who are low-income, or people of color, or queer or trans the most, because we're the least likely to have access to additional resources: financial resources, supportive family or community members, or even just the mental and cognitive energy to be able to do something about it.

Wow. Well, pushing people towards institutionalization seems like a very retrograde motion compared to all the disability rights activism of the last 30 or 40 years. Jutta, you've spent a lot of time working on this issue internationally as well as here in the US. Can you talk to us more about what your research has found about the impacts of AI on issues like fairness and bias?

Yeah. So anything based upon population data, data about people, is going to be biased against people who are different from the average, the majority, or the statistical norm. And this existed before we had artificial intelligence: anyone who would listen to predictions about all women, all men, all teenagers, et cetera, would see some inkling of that pattern. Because if you have a disability, the only common thing you have with other people with disabilities is difference from the average or the typical, to the extent that things don't work for you. And so when it's not decisions about things like natural language processing, whether it's standard speech or detecting things that are average within the environment, but decisions about you as an individual, based upon your behavior, your looks, how you act, what you've done in your life, your history, it's going to be biased against people with disabilities. And there are many things currently happening that already show this particular bias.

You were asking earlier about one example that was my first experience of great alarm, which was in working with automated vehicles. This was back in 2015, when automated vehicle learning models were just emerging, and I had the opportunity to test an unexpected situation, knowing that data-driven systems like automated vehicles depend upon a whole bunch of data about what typically happens within an intersection to predict whether they should stop, move through the intersection, or change direction. I introduced a friend of mine who pushes her wheelchair backward through the intersection, very erratically, but she's very efficient; a lot of people who encounter her in an intersection think she has lost control and try to push her back onto the curb. All the learning models of these automated vehicles chose to drive through the intersection and effectively run her over. That worried me somewhat. They all said, don't worry, we're going to train these; these are immature models, they don't have enough data yet about people in wheelchairs and intersections. However, when I came back to retest it, after they had fed these systems with lots of data, lots of images of people in wheelchairs moving through intersections, what happened shocked me even more: the learning models chose to run my friend over with greater confidence, because the learning models showed that the average person in a wheelchair moves forward, and so when a car encountered my friend in an intersection, the assumption was that it could proceed, because she would not be going backwards into its path. That triggered real alarm about the implications of this behavior in all sorts of things. And since that time, and this was five years ago, more and more of these instances have popped up.
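As a rough, hypothetical sketch of that failure mode (not the actual models Jutta tested, and with invented numbers): a system that estimates a wheelchair user's crossing motion from past observations of mostly forward-moving wheelchair users becomes more confident, not more correct, about the outlier as the training data grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_crossing_motion(n_observations):
    """Estimate forward displacement (m/s) of a detected wheelchair user,
    using only past observations, almost all of which move forward."""
    observed = rng.normal(loc=1.2, scale=0.3, size=n_observations)
    mean = observed.mean()
    # Half-width of a 95% confidence interval on the estimated typical motion
    half_width = 1.96 * observed.std(ddof=1) / np.sqrt(n_observations)
    return mean, half_width

for n in (50, 500, 5_000, 50_000):
    mean, hw = predicted_crossing_motion(n)
    print(f"n={n:>6}: predicted motion = {mean:+.2f} m/s +/- {hw:.3f}")

# The friend in the anecdote is moving at roughly -1.2 m/s (backwards).
# More data never brings the prediction any closer to her; it only shrinks
# the uncertainty around a prediction that is wrong for her, so a planner
# relying on it proceeds with greater confidence.
```

The point is not the arithmetic; it is that averaging over a population can only become more certain about the average, never about the person it has effectively never seen.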
One of the things that has been concerning is security systems, where, for people with disabilities, anything anomalous that is detected by the system is going to be flagged as a threat: whether it's moving through an airport security system and not meeting the expectations someone would have for an average traveler, or, most recently, this year, the COVID situation with respect to tests and exams in schools.

What has been rolled out at many universities is remote proctoring: billions of dollars have been spent, and thousands of universities are using proctoring systems that use artificial intelligence and data to detect who is cheating. The flag of cheating comes up if you do anything unusual: if you gaze and refocus somewhere that isn't at the screen; if you have strange movements with your hands and they're not on the keyboard or the mouse; if anyone comes into the room; if there's a vocalization that could be interpreted as someone speaking to you. Any of those unusual things flags you as someone who's cheating, and these types of exams are used to make very, very critical decisions about people's lives. So that pattern occurs, and there are so many other examples where anything unusual, anything that isn't average or typical, or that doesn't have to do with the majority, is flagged as a threat.

On the other hand, there are all of the optimization techniques. Optimization: basically, artificial intelligence amplifies, automates, and accelerates whatever happened before, because it's using data from the past. So what it's trying to optimize is what was optimal in the past, what was optimal with respect to average or normative performance. And so in a hiring or recruitment situation, if someone like you has never performed that job before, you're never going to be chosen. If you as a student are applying for a highly competitive position in an academic department, and there is no data that a student like you has ever performed well, then you're not going to get an opportunity, et cetera. So it's biased against anything that isn't average, anything that isn't the majority, anything that is unusual. And there's a silver lining to that, which I don't want to...

Well, I'd like to get back to that silver lining, Jutta, but I think one of the common complaints about the use of AI is that it reinforces existing biases in society, right? No brown person can get hired for this job because our algorithms were trained on a body of white employees, and so that university never showed up, whatever it might be. But I think what you're highlighting is that there are fresh harms that come from this that aren't just existing biases against people with disabilities, or ableism; it's that too, but it's also these other things. So, Lydia, do you have some examples of where one is just traditional bias against people with disabilities being reinforced, and one is a novel thing, where they've come up with a new way to disadvantage people with disabilities?

Oh, you'll need to unmute.

This is Lydia; apologies for that. Despite us having a tech conversation, we're inevitably going to do something that's tech-foolish. You know, I want to push back on that a little bit, and I want to put it out there that it's not so much that algorithmic discrimination creates totally different forms of discrimination, but rather that algorithmic discrimination highlights existing ableism, exacerbates and sharpens existing ableism, and shows different ways for ableism that already existed to manifest. So it's not so much a new type of ableism as a different manifestation of ableism.

Let's take two of the examples that Jutta was just bringing up. In the context of algorithmic or virtual proctoring, the software might flag you as suspicious because your eye gaze is not directly on the computer, or because your movements of the mouse or the keyboard aren't what the program recognizes as typical movement: perhaps because you use a sip-and-puff input, or because you use eye-tracking software to create input, or because you have spasms as a person with cerebral palsy, or any number of other examples. That software is designed on the idea that there is one normal way to learn, one normal way for people's bodies to be configured and to move, one normal way that people's bodies look. And if you are abnormal, and as Jutta pointed out, the one thing we all share as disabled people is that we are non-normative in some way, perhaps multiple ways, then if that idea is embedded into the algorithm, it produces the discriminatory effect where disabled students are more likely to be flagged as suspicious by that algorithm.
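To make that mechanism concrete, here is a minimal, hypothetical sketch: an off-the-shelf anomaly detector fit on "typical" test-taker telemetry will flag the student whose gaze pattern and input rate are non-normative, regardless of whether anything dishonest is happening. The features and numbers are invented for illustration and are not taken from any real proctoring product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry a proctoring tool might log per minute of an exam:
# [seconds of gaze away from the screen, keyboard/mouse events]
typical_students = np.column_stack([
    rng.normal(loc=4.0, scale=2.0, size=1_000),     # brief glances away
    rng.normal(loc=110.0, scale=25.0, size=1_000),  # steady typing and clicking
])

detector = IsolationForest(contamination=0.02, random_state=42)
detector.fit(typical_students)

# A student using eye-tracking input and a sip-and-puff switch: much more
# off-screen gaze, far fewer conventional input events. Nothing here indicates
# cheating; it is simply far from the training data's notion of "normal".
disabled_student = np.array([[28.0, 9.0]])

print(detector.predict(disabled_student))  # [-1] means anomaly: flagged
```

The detector is doing exactly what it was asked to do; the harm comes from equating "far from the statistical norm" with "suspicious".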

Take another example that Jutta alluded to, where you reinforce an existing bias. If you've trained your hiring algorithm's data set on your existing employees, and your existing employees were majority non-disabled, majority straight, majority cisgender, majority male, and majority white, then yes, it will begin to attach certain factors or characteristics that are more associated with the resumes of people who are not non-disabled straight white men as being less likely to be successful, or less worthy of being considered for hiring: whether that is because someone's name is Black-coded, because someone's name is feminine-coded, or because somebody has a longer gap on their resume, which in turn might have been caused by repeated discrimination and an inability to get hired, perhaps because of ableist discrimination. That is a self-perpetuating and self-fulfilling prophecy: because you have never been able to get hired before, that long and increasingly longer gap on your resume might now be flagged as a reason to automatically screen you out, if that's what the algorithm has been trained to do, and that reinforces the existing ableism.
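A minimal sketch of that feedback loop, with made-up features, labels, and numbers: a model trained only on who was hired in the past learns that an employment gap predicts "not hired", so the people the old process screened out keep getting screened out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000

# Invented applicant features: years of relevant experience and length of the
# longest employment gap (years). The historical hiring label is biased: gaps
# were penalised regardless of skill, so the label encodes that bias.
experience = rng.normal(5.0, 2.0, n).clip(0)
gap_years = rng.exponential(1.0, n)
skill = experience + rng.normal(0.0, 1.0, n)      # what we would want to hire on
past_hired = (skill > 5.0) & (gap_years < 1.0)    # historical, biased outcome

X = np.column_stack([experience, gap_years])
model = LogisticRegression(max_iter=1_000).fit(X, past_hired)

# Two equally skilled applicants; one has a long gap, caused by, say, years of
# being screened out or of unmet access needs.
applicants = np.array([[6.0, 0.2],
                       [6.0, 4.0]])
print(model.predict_proba(applicants)[:, 1].round(3))
# The second applicant scores far lower purely because of the gap feature,
# so the historical exclusion is reproduced and amplified at scale.
```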
And just one last example on that point: when we think about predictive policing and algorithmic law enforcement, those will not only reinforce the existing racism and classism and other forms of structural oppression that we already know exist within the prison-industrial complex, mass criminalization, and mass incarceration, but they will do so in ways that might appear to be new. It's not that the bias or the oppression is new; it is that the tools are new. So when we think about how disabled people are affected in this way, for me the conversation isn't about a new kind of ableism. It is about a new set of tools that exacerbate the existing ableist ideas: that someone who is well behaved will have a record that looks a certain way; that someone who is intelligent and able to academically excel will move a certain way and will communicate and express their thoughts a certain way; that somebody who is able to live independently will be able to check a certain number of boxes on a sheet; or that if somebody needs a certain type of support, someone else, a nurse or another professional, will be able to look and decide for themselves what kind of support that person needs, rather than believing the person about themselves, about what it means to live in the world, to live a life authentically and with support that we choose, to learn without fear of being surveilled or made a suspect, and to move out in public without fear of being criminalized or automatically labeled suspicious, which of course is always going to fall hardest and worst on disabled people of color, and particularly Black and brown disabled people. When we talk about ableism in that way, it helps us understand that algorithmic discrimination doesn't create something new; it builds on the ableism and other forms of oppression that have already existed throughout society.

Well, thank you, Lydia, for explaining that it isn't necessarily new, but rather a new manifestation of ableism. It blew my mind that there was hiring software that rated whether or not you actually got to talk to a human based on your facial movements. Many people with visual impairments aren't necessarily trained to move their face the way ableist people expect them to, and they may never get an interview. Jutta, I know that you work a lot with product designers and with people with disabilities. Can you give us some ideas of what we can do about this, both as individuals and as people building products designed to help rather than hurt?

Yeah. So I want to answer your question, but I also want to take the conversation a little bit further, because there's a lot of buzz at the moment about AI ethics and the issues with artificial intelligence, which is so necessary. But one of the worries I have about framing the particular issues that people with disabilities have with artificial intelligence in the same vein as other social justice AI ethics efforts is that many of the ways in which the bias or the discrimination is tested, determined, flagged, or identified depend upon a certain set of criteria that are not workable when the issue is disability, or discrimination because of disability. What do I mean by that? When we look at the bias detection systems that test algorithms to see whether discrimination is happening, what is done is we identify a particular justice-seeking group and its bounded characteristics, and then we compare how the algorithm performs for that group with how it performs for the general population. If there is a distinction between the two, then we say, okay, here's a problem; this is discrimination. But there is no bounded data set of characteristics for people with disabilities, and so it's very difficult to prove discrimination, also because of the opaqueness of the artificial intelligence systems, et cetera.
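For reference, the kind of check Jutta is describing usually looks something like the sketch below, here with hypothetical audit data: compare the model's positive-outcome rate for a bounded, labelled group against everyone else. The whole procedure presumes a clean group label, which is exactly what disability resists.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical audit data: a binary protected-group label and the model's
# binary decision (e.g. "approve benefits" or "invite to interview").
group = rng.integers(0, 2, n)                  # 1 = member of the labelled group
decision = rng.binomial(1, np.where(group == 1, 0.45, 0.60))

rate_in = decision[group == 1].mean()
rate_out = decision[group == 0].mean()
print(f"group rate = {rate_in:.2f}, others = {rate_out:.2f}, "
      f"ratio = {rate_in / rate_out:.2f}")

# A ratio well below 1 (for instance under the common 0.8 "four-fifths"
# guideline) is treated as evidence of disparate impact. None of this works
# without a bounded, well-defined group label, and there is no single label
# that captures everyone who is disabled or the many distinct ways they
# differ from the statistical norm.
```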

The other area that we talk about is representation: there isn't adequate representation of people who are Black, of people who speak a particular language, of people who have particular cultural norms within the data set, and so we talk about improving the representation, adding additional data. But even if we have full proportional representation of people with disabilities within the data set, because people with disabilities are tiny minorities, are outliers, there is usually not another person who has exactly the same sort of distance from the average that you have and who could represent you within a data set. That doesn't happen, so representation will not address this. And even if we remove all of the human bias, the attitudinal, ableist bias, from our algorithms (of course it enters via the data, but it also enters via the people who create the algorithms), we're still going to have an issue here.

I was talking earlier about the silver lining. This same issue actually hurts all sorts of things that we're doing with artificial intelligence, especially the more critical decisions that are being made and the products that companies are developing, because it points to a terrible flaw within AI: artificial intelligence cannot deal with diversity, or complexity, or the unexpected, very well. You can say, okay, something anomalous is happening, but there's no way of interpreting it, because the AI system depends upon previous data, big data, large data sets. What is this anomalous thing that's happening? What is this threat at the periphery of our vision, this unexpected thing? We're now in COVID, and COVID came about as a weak signal. There will be other weak signals, unexpected things that are not based upon data that we have about the past, because AI is all about the past; all data is about the past, right? Data is something that has already happened, not something that might happen. So what does it mean for companies not to address these particular flaws within artificial intelligence? It means that companies are not able to develop new, innovative approaches, and they are not able to detect flaws very well, or in an intelligent way, or in a way that they can actually address; everything is either a threat, or it is anomalous and should be eliminated. So disability is a perfect challenge to artificial intelligence, because if you're living with a disability, your entire life is much more complex, much more unexpected, much more entangled, and your experiences are always diverse; you have to be resourceful.
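As a small illustration of the representation point, again with invented data: even when outliers are included in a data set in proportion to their numbers, the nearest "similar" example to a given outlier typically remains far away, so adding more data does not give the model anyone much like them to learn from.

```python
import numpy as np

rng = np.random.default_rng(11)

def nearest_neighbour_distance(X, point):
    return np.linalg.norm(X - point, axis=1).min()

for n in (1_000, 10_000, 100_000):
    # A population clustered around the norm, plus a widely scattered 5%
    # minority of outliers, kept at the same proportion as the data grows.
    majority = rng.normal(0.0, 1.0, size=(int(n * 0.95), 10))
    outliers = rng.normal(0.0, 8.0, size=(int(n * 0.05), 10))
    X = np.vstack([majority, outliers])

    average_person = np.zeros(10)
    outlying_person = np.full(10, 12.0)   # far from the norm in every dimension

    print(f"n={n:>6}: nearest neighbour for the average person "
          f"{nearest_neighbour_distance(X, average_person):5.2f}, "
          f"for the outlier {nearest_neighbour_distance(X, outlying_person):6.2f}")

# The average person always has close neighbours; the outlier's nearest
# neighbour remains far away even as the data set grows, so "proportional
# representation" does not make the model's picture of them much sharper.
```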
So we're down to the last couple of minutes, and I want to make sure we get to this. This is Jim, by the way. Lydia, you spend a lot of time worrying about public policy and about how it affects people with disabilities. Do you have recommendations on what we should do? Because it sounds pretty challenging: all these different ways that the technology doesn't help the cause of individuals with disabilities and their unique individuality, and also the community of people with disabilities. So if you could unmute, it would be great to hear from you.

This is Lydia. You know, there are two different major angles to come at this from: one is what we as disabled people need and want, and the other is what the users and vendors of AI technologies are trying to do and trying to accomplish. And the first thing that I recommend to everybody at all times, whether you're a policymaker, or you're in R&D for a company that is creating a new AI tool, or you're in acquisitions for a private company that wants to start using a hiring algorithm, or you're in acquisitions for a state government agency that wants to start using a benefits algorithm, is: listen to, center, and follow the lead of actually disabled people. If blind people are telling you not to create software that prioritizes eye gaze and eye movement, then listen to blind people when they say that. If we as autistic people are telling you not to use software that attempts to flag students as dangerous based on social media posts that are not actually about threatening violence, then listen to us when we talk about that. Listen to me: when I was in high school, I was falsely accused of planning a school shooting. It was horrifying.

Right. So listen to us, and let our perspectives and our priorities guide and lead those conversations. But on the other end of that, if you are a vendor, or a person who is trying to use or acquire an algorithmic tool for whatever purpose it might be, it's incumbent on you to be very deliberate and careful about what your tool is actually aiming to accomplish, whether your tool complies with appropriate legal guidelines and requirements, where your tool might go astray or run afoul of those guidelines, and, lastly, how your tool can be used in as limited a capacity or application as possible, so as to protect people's rights maximally. And that includes explaining it and making sure people understand it.

That sounds great, Lydia. Jutta, in our last minute, do you have a few final thoughts to share?

Yeah. Disabled people are the best stress testers, and they're the primary people who are going to come up with the resourceful new ideas, and the choices and options, that we need to get out of this crisis and to do much more inclusive, supportive, and innovative things.

Thank you very much, Jutta and Lydia, for illuminating this key area of how AI has bigger impacts on people with disabilities. And let's go forward and listen to the rest of the Sight Tech Global conference.
