Disability Community Hearing on Artificial Intelligence PM Session


>> Good afternoon, Madam Chair. You may begin the hearing. >> Katy Kale: Thank you. Good afternoon, and welcome to the U.S. Access Board's artificial intelligence virtual hearing for the disability community. My name is Katy Kale. My pronouns are she/her, and as a visual description, I am a white woman with long light brown hair.

I'm wearing a black top under a coral blazer, a very bright coral blazer, and I'm wearing dark-rimmed glasses. I'm sitting in my office in front of the American flag and the General Services Administration flag. Thank you for joining us today. I'm a federal board member of the U.S. Access Board, as well as the Deputy Administrator of the U.S. General Services Administration, or GSA. I am proud to serve as the Chair of the U.S. Access Board, as well as to chair this hearing. The Access Board's mission is to create access for all. The Access Board is an independent federal agency dedicated to accessible design for people with disabilities. The board is comprised of 25 individuals,

13 of whom are appointed by the President and the other 12 of whom are representatives from various federal departments. I would like to acknowledge my fellow board members who have joined us here today for the hearing and thank them for making time. I would also like to take a moment on behalf of the board to thank today's presenters and all of those in attendance for being with us today as we discuss artificial intelligence in relation to the disability community. So a few housekeeping notes as we begin. The hearing is being recorded. American Sign Language

interpretation and real-time captions are being provided. All attendees will remain muted with your cameras off until you are called upon to speak. You may then use the chat feature to communicate with the host if you need additional assistance. For all board members, presenters, and those who have preregistered to offer public comments, please take time now to ensure that the name listed on your Zoom screen is your full name so that we can easily identify you and provide you the permissions needed to speak during the hearing. On the screen, you'll find the agenda for today's hearing. To go over the agenda: after my welcome remarks, we will

begin the hearing with Alison Levy, Director of the Office of Technical and Information Services for the Access Board. She will provide some foundational background on our AI series. After Alison, we will hear from a wide range of presenters on the uses, benefits, risks, and barriers for people with disabilities regarding AI. Following the presentations, Access Board members and staff will be able to ask panelists questions. Afterward,

there will be time for public comments from those who have preregistered to provide public comments. Let us begin by welcoming Alison. >> Alison Levy: Thank you, Katy. It's a pleasure to kick things off with a little background information. Let's start off with the first slide. First, for those of you who are new to the Access Board, I'm just going to provide a little information about who we are, what we do, and our roles and responsibilities. We were established to develop accessible design guidelines and standards, including those under the Architectural Barriers Act, the Americans With Disabilities Act, Section 508 of the Rehabilitation Act, and others. We also provide technical assistance and training on these standards and guidelines, and we enforce the Architectural Barriers Act standards, which apply to federal facilities and federally leased facilities. Next slide, please.

So last fall, President Biden issued an executive order on the safe, secure, and trustworthy development and use of artificial intelligence, with the intention of maximizing the benefits and minimizing the harms of emerging AI technologies. To that end, the Access Board was tasked with several asks. Next slide, please. Three items. First, we were asked to solicit public participation and conduct community engagement. Second, we were asked to issue technical assistance and recommendations on the risks and benefits of AI, including biometrics as a data input. And last, we're hoping to provide people with disabilities better access to information and communication technology, as well as transportation services. As a result of those asks, we developed a memorandum of understanding with two key partners, two national, nonprofit, disability-focused organizations: the Center for Democracy and Technology, otherwise known as CDT, and the American Association of People With Disabilities, AAPD. You'll hear from them

in just a few minutes, but I wanted to fill you in a little bit and bring you up to speed on where we are with our outreach efforts. Today's session, or this afternoon's session, is the third in a series in which we're hearing from different folks in the community about AI risks and benefits. We kicked off our series with a foundation of information on artificial intelligence, to help reach all of our communities and establish a foundation of understanding of artificial intelligence. The second was a hearing hosted this morning for the disability community, where we heard some wonderful presentations which highlighted risks and benefits associated with the use of AI in the disability community. And then that brings us

to this afternoon, where we are hearing from a second group of people. We're very excited about the panelists coming up. And our next session will be on August 22nd, where we'll hear from practitioners in the artificial intelligence field, as well as federal agencies, about what they're doing in the AI space. So with that, let me kick things back to

Katy for the next introduction. Thank you. >> Katy Kale: Thank you, Alison, for that foundational information. I'm excited that we will now transition to the panel presentations part of the hearing. Our panelists will be sharing general AI benefits,

AI risks, and touch on the current research,  employment, and other related AI topics. We ask   that all panelists keep their presentations to  around eight minutes, and if they have slides,   they should be prepared to share their screen at  the time of their introduction. As a reminder,   please keep your cameras off until  it is your time to present. We ask   that all others remain muted with cameras off. Okay. So we now welcome Henry Claypool, Technology  

Consultant from the American Association of People With Disabilities, and Ariana Aboulafia, Policy Counsel for Disability Rights in Technology Policy at the Center for Democracy and Technology, for their presentation on MOU partner insights on AI for people with disabilities. Henry and Ariana, you're welcome to begin. >> Ariana Aboulafia: Good afternoon. I'm Ariana Aboulafia. I'll give a brief visual description. I am a young-ish person with sort of past-shoulder-length brown curly hair. I have gray-rimmed glasses and I'm wearing a black T-shirt and a gray blazer. So I'm going to be speaking very briefly today just about our partnership a little bit. I am the Policy Counsel for Disability Rights in Technology Policy

at the Center for Democracy and Technology, and I lead our disability rights work. I am so grateful for the partnership with AAPD and with the Access Board, and for our work on Executive Order implementation to engage the disability community and those interested in understanding how to ensure that AI is deployed in a way that is helpful for people with disabilities and, to sort of paraphrase the Executive Order, in a way that increases benefits and also minimizes and mitigates the harms for people with disabilities. So this is our third, I guess, session in this group of sessions on AI and disability. We had a disability community session earlier this morning, and we also had a session a couple of weeks ago that was entitled A Foundation on AI and Disability. And that one is on the Access Board website.

And I wanted to give a quick little recap of some of the things we spoke about there. So we gave definitions of AI, because it's really important to sort of couch these conversations in what it is that we're talking about, what it is that we mean when we talk about the impact of AI and algorithmic systems on people with disabilities. So we defined AI and machine learning and generative AI. We discussed some of the uses of AI. We mentioned that, you know, oftentimes federal agencies will use AI in ways intended to make things more streamlined and faster and easier and sometimes even less biased, and that, unfortunately, that is not always the outcome. We mentioned some of the ways in which people with disabilities are using AI, sometimes in

assistive technologies. And we also mentioned how AI can impact people with disabilities right now. We talked about how artificial intelligence tools are being incorporated into all sorts of different systems that people with disabilities interact with every single day. That can include education and employment. That can include healthcare and benefits determinations. And we mentioned some of the ways in which those sorts of tools, when used in those contexts, can have discriminatory outcomes and problematic outcomes for folks with disabilities.

What you're going to hear in today's session is hopefully more about some of the ways in which people with disabilities use AI, some of the benefits, some of the risks, some of the potential concerns, and hopefully some ways to recommend mitigation of those concerns. These listening sessions provide the Access Board and AAPD another opportunity to center the experiences of people with disabilities and to understand the impact of AI and emerging technologies on our community, because it's an opportunity to foster dialogue with federal agencies and with industry on the use of AI. And it gives us the opportunity to continue to raise awareness of how disabled people experience AI, because AI tools are being used right now, which means that some of the benefits and also some of the concerns are happening right now.

But these tools will only continue to be incorporated into more and more aspects of our everyday life. So it's really important that we hear from the community now, that we start hearing from the community, that we continue to hear from the community as to how folks are interacting with and experiencing these tools. Our next-to-last session is going to be on August 22nd, where we look forward to hearing from federal agencies and other practitioners, and then we'll have a final session in mid-November with the Access Board to release a report that will hopefully support future work on AI from both within and outside the disability community. So we are really looking forward to this hearing and to continuing our partnership with the Access Board and with our community as well. And thank you so

much. I will kick it back over to you, Katy. >> Katy Kale: Thank you, Ariana and Henry, for your work on this very important topic, and thank you for your presentation. So next we are going to welcome Erie Meyer, Chief Technologist at the Consumer Financial Protection Bureau, CFPB, on biometric data, people with disabilities, and the impact on credit ratings. Erie, you are welcome to begin when you're ready. >> Erie Meyer: Thank you so much, and good

afternoon. My name is Erie Meyer, as you heard. I'm Chief Technologist at the Consumer Financial Protection Bureau. I am a white woman with brown hair. I'm sitting in my home in a floral chair in front of a map and a lamp. I'm so glad to be here today. My agency regulates consumer finance. These are things like credit cards, but also debt collection and student loans, and there are a huge number of things that we regulate or oversee that impact communities such as the one we're talking about today. So I want to start with the really good news, which is that there's actually no exception in the law for fancy technology. That

means that the rights that you have won in sometimes very long fights are still yours. That means that the obligations of companies to follow the law don't go away just because they're using AI or fancy technology. And the federal government is building more technical capacity to make sure that companies are following that law. So I want to give you a couple of examples of places where the CFPB is working hard to make sure that that is the case today, in special areas of interest for the community of folks with disabilities. The first is around housing. Federal regulators have levied allegations of potential collusion in rental markets around the price of rent. One of the other things we've also seen are

allegations around the use of automated valuation models for the pricing of homes. These technologies have challenges even in traditional settings; with the addition of advanced technology, it's critical that firms understand how the technology is being used, how the data that is training the models is selected, and how those models are unleashed on the public. Our agency is working hard to make sure that these methods are not used to evade the law; that your homes are not rated as less valuable because of the color of your skin; that the cost of your rent is not going up because of illegal conduct. Another area is around data brokers and fraud. We are working on a regulation around the Fair Credit Reporting Act that says if a firm is collecting data about you and using that data to make a really important decision about your life, they need to follow the law. So when this law was passed 50 years ago, I think folks traditionally thought of the credit reporting agencies. Many people know about your

credit report or sort of the big three. But in  2024, the data economy looks different. So our   job is to make sure that firms, regardless  of, again, how fancy their technology is,   are really following the law. You may have seen allegations levied   by the Department of Justice against several  data brokers for knowingly selling the data   of people with diminished capacity  to firms that were known fraudsters.   Those are the types of fact patterns we're  very interested in ensuring don't proliferate   or expand, and we're really looking forward to  our forthcoming rule making on data brokers to   address things like fraud or potential threats to  our national security, but also to make sure that   personal health data and biometrics are not used  to decide what opportunities are available to you,   what options you have when it comes to getting a  job or finding a home, anything where you would   need a credit report or a background check  in order to get access to those services.  

Additionally, our agency has done work around chatbots and potential violations of law. I think you all may have had the experience where you're trying to get help from a company, or even just a straight answer, and you can get caught in sort of a doom loop. You ask a question, you ask for help, and you get passed from person to person, or in a web chatbot you ask a question, ask for help, and you get an answer that doesn't make sense. Maybe it's robotic. The firm might want you to think it's simply a hallucination. But when you're relying on a firm to give you a straight answer so

you can access your rights, it's much more serious than that. So my agency has put out a paper around the use of chatbots, both generative AI, cutting-edge technology chatbots and also very simple ones, to say that the existing rights that everyone has in the United States when it comes to getting a straight answer from a financial firm remain in place, despite any fancy technology being used. The last thing I'm going to mention, and then I'll turn it back over, is that my agency accepts consumer complaints on these topics. If your bank, if a debt collector, if a credit reporting agency, if your student loan servicer isn't giving you a straight answer, if you're concerned about having meaningful access to the services you are due, we are very interested in hearing from you about what has happened and how we can help. At consumerfinance.gov/complaint or on our phone line at (855) 411-CFPB, we have folks ready to help. You can

file online. You can file on the phone, I believe, in 160 different languages. We have materials, including videos in American Sign Language, and other products to increase the accessibility of our complaint system, and it's critical we hear your stories and where you're stuck, because it is our obligation to ensure that these laws are followed. So thank you for the time today. I'm excited to hear more and learn more. Thank you. >> Katy Kale: Thank you, Erie, for that presentation and all of that great information. We will now welcome Lydia Brown,

Director of Public Policy, National Disability Institute. They will be presenting on AI Risks for People With Disabilities. Lydia, you may begin. >> Lydia Brown: Hello. This is Lydia X. Z. Brown, pronouns they/them. I'm a young

East Asian person with short brown hair and glasses. I am wearing a dark green and blue top, and behind me there is a fake background that shows wall-to-wall and floor-to-ceiling bookcases in a light-filled room. People with disabilities have long experienced discrimination in all sectors of life, resulting in disparities in outcomes economically and otherwise, including a poverty rate that is twice that of non-disabled people, as well as unemployment rates that are about double those of non-disabled people, disparities that are greatly exacerbated when data is disaggregated to account for racial and gender-based differences alongside disability. So it is no surprise that the presence of AI and

algorithmically enabled decision-making systems can further exacerbate and amplify those existing disparities. This is for a number of reasons. One is reliance upon data that represents existing inequities and disparities. If an algorithmically driven tool, for example, is making decisions about job candidates, about housing, or about a credit determination, and that tool is relying upon existing data, it will replicate existing inequality. Conversely, if the data that a particular tool is using is actually unreliable to begin with, if the data is not necessarily accurate, if the data is not necessarily representative of the full range of experiences of disability, of disabled people, and of the full diversity of the disabled community, then even an otherwise well designed system will be relying on unusable and unreliable data to make its determinations or its assessments.
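A minimal illustration of the replication mechanism described here, using an invented toy dataset; this reflects no real agency's system. A model fit to past decisions that were already skewed simply learns to repeat the skew:

# Toy sketch: a "model" that matches biased historical approval rates
# reproduces the bias. In this invented history, equally qualified
# disabled applicants were approved at half the rate of others.
history = [
    {"qualified": True, "disabled": False, "approved": True},
    {"qualified": True, "disabled": False, "approved": True},
    {"qualified": True, "disabled": True,  "approved": True},
    {"qualified": True, "disabled": True,  "approved": False},  # past bias
]

def learned_approval_rate(records, disabled):
    # What a rate-matching model "learns" for each group.
    rows = [r for r in records if r["qualified"] and r["disabled"] == disabled]
    return sum(r["approved"] for r in rows) / len(rows)

print(learned_approval_rate(history, disabled=False))  # 1.0
print(learned_approval_rate(history, disabled=True))   # 0.5, inequity replicated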

The purposes for which AI-driven tools might be used vary widely as well, and generally fall into the categories of assessment and evaluation, of prediction, or of decision making, which often operate together. And in all of those realms, reliance upon data that either replicates existing inequality or is unreliable will result in further inequitable and potentially harmful and discriminatory outcomes for disabled people. Another way in which AI tools can be particularly dangerous for people with disabilities is if they are not designed with the needs and the functioning of disabled people in mind. In particular, if a particular tool requires input from or engagement of a user in a way that assumes that all people communicate in certain ways, that all people's bodies move in certain ways or have certain functions, that all people receive and act upon sensory input, whether visual, auditory, tactile, or otherwise, in the same way and with the same mechanisms for accessing a device or accessing a software program, then those programs will inevitably end up discriminating against people with disabilities, because in the worst-case scenario, they will be entirely inaccessible to a disabled person, and in the best-case scenario, they may tend to exclude people with disabilities or at least make it much more difficult for people with a range of disabilities to interface with and engage with that particular tool. AI-based technologies further have the potential to harm people with disabilities when they are supporting public policy aims that are not in line with the goals of people with disabilities and the civil rights and independent living movements and the self-advocacy movement led by disabled people. So for instance, if a state agency adopts an algorithmic tool with the explicit

aim of reducing the number of individual people who are recipients of public benefits programs, that is a goal that is explicitly at odds with a disability rights movement that aims to ensure that all people with disabilities are equipped with the support and the services that they need to be able to participate in society. And given the currently high rates of poverty and unemployment for people with disabilities, rates that are disproportionately high for our community and higher still for those who face multiple forms of marginalization, that is a goal that will inevitably result in a greater proportion of people with disabilities being removed from a public benefits program. If the public policy aim surrounding the use of a particular tool is not in alignment with goals of community integration, civic participation, and engagement in the mainstream economy, then that tool is posing a very real and immediate risk of harm individually, as well as at a macro level for the disability community. Further, AI tools that tend to have a discriminatory impact on disabled people will entrench discriminatory practices and policies in society. If it becomes more acceptable, for instance, to use an algorithmically determined method of choosing who will be considered for a job, for access to a rent-stabilized unit or a subsidized housing program, or who will be able to maintain access to publicly funded healthcare benefits and long-term services, then it may become acceptable in the public mind to allow existing inequities to grow. Without addressing

the underlying public policy inadequacies of current research, of current practice, of current services, and of the ways in which disabled people are able to live their everyday lives, not just interface with systems, the AI tools that we develop will not exist outside of that context. And that ultimately is what is necessary for policy makers to understand, even if they don't always know what is happening, because the average policymaker cannot be expected to be an expert on every aspect of the disability community, and certainly not on AI and other algorithm-driven tools. But policy makers do have a responsibility to know that the policies that they are working to develop and implement are done with the needs of the people who have the most to lose, who are most vulnerable, and who are most marginalized front and center in mind. They have a responsibility to ensure that the policies that they adopt will not result in reinforcing, perpetuating, and amplifying inequity and injustice. And so AI tools that we have public funding for,

public support for, or public procurement of, in particular, should be held to a high standard of accountability, and should be held to standards of accountability that reflect back the priorities and needs of the communities that are directly affected by them, whether that is in relation to credit decisions, in relation to access to housing, in relation to access to employment, or in relation to access to public benefits programs, as well as a myriad of other possible applications for algorithm-driven tools. It makes sense when building and designing policies to collaborate with people who are from impacted communities. And both technologists and policy makers have a great deal to learn in that respect from members of the disability community, who are well represented here today in this discussion. AI tools exist in context, and that means that to the extent that there

are many potential opportunities for benefits for disabled people from the growing applicability of AI tools to everyday life functions, it is all the more important for us to be cognizant and conscious of the great risks that occur with the existing utilization of AI tools, as well as with the continued development of AI tools, while we are still working to create policy that works for human beings, while we are still working to create policy that makes sure that people have access to the workforce and education and housing and healthcare. And the work, the time and attention, that we spend in relation to AI needs to reflect that at the end of the day, it is human lives that matter, and the risks of AI are enormous. And if we work to address them now, we'll be better positioned to benefit from the potential opportunities that AI can offer to make the world more accessible and inclusive for people with disabilities.

I'll turn my time back over. >> Katy Kale: Thank you, Lydia. We will now hear from Theo Braddy, Executive Director, National Council on Independent Living, on AI benefits and impacts on independent living. Theo, you may begin. >> Theo Braddy: Great. Let me get started. Again, my name is Theo Braddy. I want to talk a little bit about myself. My pronouns are he/him/his. My video description: I'm a Black male,

bald-headed, with a salt-and-pepper beard and glasses, blue glasses. I'm a C4 quadriplegic who uses a complex motorized wheelchair. I've been a person with a disability since age 15 due to a football accident. I ran a Center for Independent Living for 31 years, and I am currently Executive Director of the National Council on Independent Living, as well as a member of the National Council on Disability. So let me briefly talk to you about a number of things, actually, that Lydia mentioned in regard to the benefits and opportunities for people with disabilities in regard to AI. And so I have a few of those things to share with you. And keep this

in mind: AI is moving so quickly that probably by the time this presentation is over with, there's going to be some advancement. So mine is definitely not inclusive of everything, but I want to point out a few things. Assistive technology. Speech recognition. Right? Tools like Apple's Siri and Google's systems and Amazon Alexa help individuals with mobility disabilities control devices and access information using voice commands. So it's very important. Text to speech and speech to text. Software such as Dragon NaturallySpeaking assists

individuals with motor disabilities by converting spoken words into text and vice versa. There's mobility and navigation. Autonomous vehicles. It's going to be a game changer. Right? Self-driving cars like those being developed by Waymo and Cruise and other companies will provide transportation options for people with physical disabilities. And again, I say it's going to be a game changer. It's something that's happening right now and certainly in my lifetime. Navigation apps. Apps like Aira and Be My Eyes provide AI and human assistance to help individuals with visual disabilities navigate their surroundings. Very key. Screen readers and keyboard navigation

allow people with mobility disabilities to also navigate and communicate. Augmentative and alternative communication. If anybody knows Bob Williams, he's doing a lot of great work with CommunicationFIRST on this. AI-powered devices help individuals with speech disabilities communicate more effectively. Speech assistance, with predictive text, assisting in forming

sentences. Real-time translation. AI tools like Google Translate facilitate communication for those who are deaf or hard of hearing by providing real-time translation of spoken languages into text. Very useful to people. Home automation. Very important. We saw this even more so during Covid. Smart home devices. AI-driven devices like smart thermostats, lights, air purifiers, and security systems were very beneficial in helping people with mobility disabilities manage their homes on a daily basis. This stuff

is advancing so quickly. I encourage you all to stay current and use these kinds of smart devices. The Lotus Ring, something fairly new, was presented at the National Council on Independent Living conference: a wearable ring that controls objects in your home by pointing at them. Again,

technology is moving so fast. Health and wellness. Again, wearable devices like smart watches and finger rings can monitor a person's health and alert them when potential health issues come up. Again, this is something I'm constantly using myself,

and I encourage other people to use this kind of technology. Telehealth services. AI-enhanced telehealth platforms allow remote consultation, reducing the need for logistics, you know, getting back and forth. We all know that some people with disabilities cannot access affordable, accessible transportation and can't get to a doctor's appointment like they should, and again, Covid-19 proved this to be very valuable, and those things continue post-Covid.

Educational tools. Personalized learning. AI can tailor educational content to meet the unique needs of students with disabilities, providing adaptive learning experiences. Accessibility features in education software can offer features such as closed captioning, audio descriptions, and customized interfaces to improve, again, accessibility. Even in the workplace, we see AI taking effect, with tools that remove biases in hiring. All right? And

one of the things that we've got to be very careful about, and I think Lydia mentioned it, we've got to be very careful that these AI tools are not being developed based on ableist thinking, so that they create another barrier that people with disabilities have to face on a day-to-day basis. These applications help with task automation and making things more accessible in the workplace in regard to assistive AI and employment. Lastly, I want to finish up by talking to you about something I can personally testify to in my own life: during emergency situations, dropping my phone or falling, but still being able to contact someone using voice-activated commands. I can personally testify that it worked in my life. The pandemic was a very difficult period of my life, and I'm sure for many others, but I was still able to use AI assistive devices to do my grocery shopping, my banking, and so on. Household devices. I use them for my lights, my thermostats, my air purifier, and my heater. Heat and cold can be very devastating to a person with a physical disability,

and I was able to use that AI to ensure my own safety. Then with cameras. I use Ring video cameras to see who is at my door. I was able to lock and unlock my door and speak to people before I even let them in. And we also know, during Covid-19 or any other kind of emergency, how important it is to access information in real time. AI allows that to happen. All right? Air quality. We talked

about that already in regard to air purifiers. We know how air quality can affect a person with a disability. So you can also use that. And again, I mentioned telemedicine. My doctor was rarely available. I couldn't come out to see her, but when I brought her in on video technology, she was right there. Even my pharmacy was based on automation, and they could send my medicine. Socializing during that stay-at-home period, we all know about that, how important Zoom calls were. We needed to be entertained, and again, we used AI for that purpose. And not only that, we virtually visited our family members to ensure that they were also safe and could also socialize. And right now, I'm working at home,

like many other people are working at home remotely, based on technology, AI technology. And we saw post-Covid that employment for people with disabilities increased simply because of AI technology. And again, we're talking about Alexa. We're talking about the Amazon Echo, the Fire TV Cube, Fire Sticks. All of these things are voice activated, which allows you to function in your home safely

and securely. Apps can enable a number of skills, such as morning routines. Right? Taking my medicine. Sometimes it becomes very important for people to be able to take medicine on time, and often people forget; these routines, once set up, can help you. And I can't even tell you how many times I've used Audible books and Kindle books to read when I'm not in a sitting position, as well as Facebook Portal, which can follow you around as you take care of business. Smartphones.
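As an aside, here is a minimal sketch of the kind of scheduled medication-reminder routine described here, using only the Python standard library; commercial voice assistants configure these routines through their own apps rather than through code like this.

import datetime
import time

REMINDER_TIME = datetime.time(hour=8, minute=0)  # 8:00 a.m., every day

def seconds_until(target_time):
    # Seconds from now until the next occurrence of target_time.
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), target_time)
    if target <= now:
        target += datetime.timedelta(days=1)
    return (target - now).total_seconds()

while True:
    time.sleep(seconds_until(REMINDER_TIME))
    print("Reminder: time to take your morning medication.")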

All of these things are very important, vital. Right? I wish I had stock in Amazon, because I use them all the time. DoorDash and Instacart. My wife doesn't even shop anymore. She makes me do it. All right? These are the things that AI is doing that enhance our lives. It used to be that AI and technology like this was an afterthought. People would develop this

stuff and they would say, oh, well, this is going to benefit people with disabilities. Now big businesses and companies are intentionally looking at what these things could do to enhance the lives of people with disabilities. So again, I say by the time this presentation is over with, there's going to be some technology that is going to improve the lives of people with disabilities, because it makes sense and it's good for business, and people every day are contacting us to say, can you test this or can you test that. Autonomous motorized wheelchairs are being developed. And these are the kinds of things from AI that are going to benefit all types of people in the near future. So this was not completely everything, so just look forward and become educated in regard to all the things, technology-wise and AI-wise, that can benefit the lives of people with diverse disabilities. Thank you. >> Katy Kale: Thank you, Theo. I especially liked, at the beginning, when you said that by the time your presentation was done, there probably would be more AI uses out there. I think that's a great

reminder of how fast everything is moving. All right. Next we're going to hear from Robin Troutman, Deputy Director, National Association of Councils on Developmental Disabilities, on the AI impacts for people with developmental and intellectual disabilities. Robin, you may begin. >> Robin Troutman: Thank you so much,

everyone. And I just spilled my water. Give me just a moment. I'm Robin Troutman. I use she/her pronouns. I'm the Deputy Director of the National Association of Councils on Developmental Disabilities, or NACDD. For visual description, I am a white woman and I'm wearing glasses, a black top, and a beige jacket. NACDD works across the 56 U.S. states and territories to ensure that people with intellectual and developmental disabilities can live in the community of their choice and lead a self-directed life. I'm excited to join you today for this important hearing on artificial intelligence and the potential impacts on people with lived experience. Thank you to the U.S. Access Board for inviting NACDD today.

A little story. Back in January, I attended a conference for meeting and event planners as part of a professional development organization that I am a member of. They had three keynote speakers at different times during the event on the main stage talking about AI, how AI is here whether we are ready for it or not, and how AI should be seen as a tool and not a replacement. And the biggest takeaway from all three, in the exact same words: AI is not going to take your job or replace a human, but the person who uses AI as much as they use Word, Excel, or Canva will take the job or advance faster and further than those who do not. But what does that mean for people with intellectual and/or developmental disabilities? We have already seen that advancements in technology help people with intellectual and developmental disabilities live more independently, as Theo said, be able to work more efficiently and effectively, and be able to hang out with their friends wherever and whenever they want. But these advancements only help those who can access them. We still have a large digital divide in our country and globally, where more rural and

poorer areas do not have access to stable, high-speed internet, which many of these tools and technologies require. According to the World Health Organization, there are more than 2.5 billion disabled people who will need one or more assistive technologies by 2030. However, the WHO, the World Health Organization, also states that almost 1 billion of those people cannot access the products. So we have some work to do. As my colleagues have mentioned, since the Covid-19 pandemic we have seen an increase in applications like Zoom or Teams, in AI-generated captions, website accessibility overlays, and other tools, and in people including the word accessibility in their Diversity, Equity, and Inclusion conversations, as they should. But these decisions are being made not by the people who need to use them, but by technology experts, as they should be, at the same time, because they are experts in coding and programming. But people with intellectual and developmental disabilities are experts in their needs, in what works for them and what does not work for them.

If we're able to take a line from the hit musical Hamilton, people with intellectual and developmental disabilities and other disabilities need to be in the room where it happens. They need to be included on research and development teams, included in user experience teams and demos, and in impact or evaluation teams. AI is only as unbiased as the data and algorithms it relies on. We need to ensure that AI is not only intelligent, but also ethical, inclusive, and aware of human diversity. Anne-Marie, I'm going to get her last name wrong, Anne-Marie Imafidon is a British computer scientist and social entrepreneur, and she states

that when a wider range of perspectives informs how new advancements such as AI are designed and used, the resulting technology is more equitable and beneficial for everyone. This isn't just a computer science thing. Understanding society is crucial when releasing technology into the world. For example, similar to what Lydia mentioned in their presentation, if you're designing an app for jobseekers, you need to ensure you have people actively seeking employment involved in user testing, and have people with disclosed disabilities involved in that user testing. STEM, science, technology, engineering, and math, and STEAM, science, technology, engineering, arts, and math, need to represent an inclusive cross-section of society to create the best outcomes possible. People with disabilities, and especially those with intellectual, cognitive, developmental,

or neurodivergent disabilities, must be included in the STEM and STEAM workforce. By including people with intellectual and developmental disabilities in the AI, computer science, and information technology workforce, and, let's be honest, all areas of employment, it is less likely that there will be bias in the AI algorithms. Currently, algorithms are consciously or unconsciously discriminating against people with disabilities. AI systems learn from vast datasets, which often reflect societal biases. If these datasets predominantly feature able-bodied individuals, the resulting algorithms may fail to accurately interpret or serve people with disabilities.
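As a rough sketch of why representative data matters in the way described here, assuming a hypothetical screening model and invented evaluation records with a self-reported group field: an aggregate accuracy number can hide a much higher error rate for one subgroup.

from collections import defaultdict

def error_rate(records):
    errors = sum(1 for r in records if r["prediction"] != r["label"])
    return errors / len(records)

def error_rate_by_group(records, group_key):
    # Break the same metric out per subgroup instead of in aggregate.
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)
    return {group: error_rate(rows) for group, rows in groups.items()}

# Invented records: model predictions, true labels, self-reported group.
records = [
    {"prediction": 1, "label": 1, "group": "no disability"},
    {"prediction": 0, "label": 0, "group": "no disability"},
    {"prediction": 1, "label": 1, "group": "no disability"},
    {"prediction": 1, "label": 1, "group": "no disability"},
    {"prediction": 0, "label": 1, "group": "speech disability"},  # missed
    {"prediction": 1, "label": 0, "group": "speech disability"},  # false alarm
]

print("overall error:", error_rate(records))               # looks modest: 0.33
print("by group:", error_rate_by_group(records, "group"))  # 0.0 vs. 1.0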

Developers might not fully consider the needs of people with disabilities, leading to products that are inaccessible. For instance, as Theo mentioned, voice-activated systems might struggle to recognize the speech patterns of individuals with speech difficulties. In addition, AI-driven hiring tools might screen out candidates with gaps in employment history due to medical conditions, or autonomous vehicles might not recognize wheelchairs as obstacles, posing safety risks. To combat ableism in AI, it is essential to adopt a more inclusive and ethical approach to AI development. AI systems must be trained on diverse datasets that include representation of people with disabilities and other intersectionalities. We must involve

people with disabilities in the design and testing phases of AI development, and there must be ongoing training and education, as well as listening, communication, and transparency. This is not to say there aren't good tools out there currently. Microsoft's AI for Accessibility projects, for example, include AI tools that help blind or low-vision people navigate the world and apps that assist those who are deaf or hard of hearing. In addition, there is Voiceitt, which is a speech recognition app designed to understand nonstandard speech patterns. The impact of AI on the intellectual and developmental disability population is still relatively new and hard to determine, but if we are mindful of including people with intellectual and developmental disabilities early in AI processes, then we can see it be used to improve the quality of someone's life, increase employment opportunities, and create a greater understanding of people with disabilities, all of which makes for a more inclusive society. Thank you.

>> Katy Kale: Thank you, Robin. We will now hear from AnneMarie Killian, Chief Executive Officer of TDIforAccess, Inc., and Jeff Shaul, software developer, GoSign AI, on the research and impact of AI on sign language interpretation. AnneMarie and Jeff, you may begin. >> AnneMarie Killian: Hello. First, I'm just testing to make sure you can hear the voice interpreter. Good. You can hear. Okay. Perfect. So first of all, I want to introduce myself. I'm AnneMarie, CEO of TDIforAccess. And I am with my

partner, Jeffrey Shaul, who is cofounder of GoSign AI, and I am going to do my image description. I am a white woman, middle-aged, with brown hair, wearing black glasses and a brown shirt. And before we continue, I just really want to commend the U.S. Access Board for organizing this important event. As we've seen here, in the comments and presentations shared today, it's clear AI's presence in our daily lives requires that we collectively work together to ensure safeguards and no undue harm to people with disabilities. Today we are presenting on emerging technology for language access, representing the advisory group on AI and sign language interpreting. Our advisory group's primary goal is to develop inclusive language and access standards for all communities, including sign language users. As for our goals, we act with our counterparts

to ensure interpreting standards and access for all communities, including sign language users. We act as counterparts to the interpreting SAFE AI taskforce; we align and collaborate together with interpreters of spoken languages. We've been working with significant resources through #DeafSafeAI; reports are available and you can see the guidance. We study, we do the research, and we set up the policies and our findings to ensure that safe AI has standards. Engagement and activities: to promote and implement these standards, we engage with the community through various activities, including webinars, symposia, and workshops all over, to gather the data for our findings. And it's clear that our mission and our messaging is to bring

technology that provides equitable access for all disabilities. And I would like to transition to Jeff, who will expand a little more on the technical considerations and motivations at work. So I'm going to turn it over to him to discuss the efforts and highlight the benefits of AI. Jeff? >> Jeff Shaul: Thank you, AnneMarie. And I just want to make sure everybody can see me. Perfect. Okay, thank you.

Thank you, AnneMarie. My name is Jeffrey Shaul, and I'm a software developer for an AI company called GoSign AI, specializing in signing captions and data gathering. My image description: I am a white male in my thirties. I have short brown hair

and blue eyes and a little bit of stubble, and I'm wearing a lavender collared shirt. All right. So one of my favorite quotes, by a famous scientist and scholar, is: the future is already here; it's just not evenly distributed. We've seen this happen many times throughout history. For example, the telephone. It became common in the early 1900s. The at-large population reaped the benefits, while the Deaf and Hard of Hearing community was left out until the 1960s, when the TTY arrived. There are countless other examples throughout history. So with emerging technology, we hold great potential, but we have to be assertive in ensuring that it's evenly distributed among the communities. And that new technology is here

in artificial intelligence. And that word is so overused. It's mostly a buzzword for marketing. I prefer to think of it as a model, a model that detects patterns in new data using prior examples. Imagine if you asked a kid to draw a picture of a human. What is the kid going to draw? Probably a stick figure. They haven't been exposed to art and training; they haven't seen all the examples of the art. So the kid's internal model is not competent for that. It's not complete. The same goes for sign language and the model for recognizing

and translating and creating and generating. How do we make sure that it's complete? There are so many different considerations that need to be made. First of all, the datasets. They need to be inclusive. Sign language is just so diverse across the nation and the country.

People sign the same word many different ways. So the datasets need to be inclusive and include all different classes of signers. You know, gender, age, sex, race. It doesn't matter. So imagine if the model training was only on college-age applicants and volunteers. It wouldn't perform as well when encountering senior citizens. Just as English speakers have accents, there are many signers that have accents as well.

For example, in Maine, their sign for the city of Portland is this. It's the same sign as the color purple in other areas. So it's a conflict. So the model needs to know that nuance, and there are many more examples like that. So the model is only as good as the data that it is exposed to.
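One way to make the inclusive-dataset point operational is a coverage audit before training. This is a hedged sketch with made-up field names and data, not GoSign AI's actual pipeline:

from collections import Counter

# Invented metadata for training clips; a real dataset would carry
# richer tags (region, age band, sign variant, recording conditions).
clips = [
    {"age_band": "18-30", "region": "Northeast"},
    {"age_band": "18-30", "region": "Northeast"},
    {"age_band": "31-50", "region": "South"},
]

def coverage_gaps(clips, field, expected_values):
    # Count clips per value and report values with no coverage at all.
    counts = Counter(clip[field] for clip in clips)
    missing = [v for v in expected_values if counts[v] == 0]
    return counts, missing

counts, missing = coverage_gaps(clips, "age_band", ["18-30", "31-50", "51-64", "65+"])
print(counts)
if missing:
    # A model trained on this set would likely underperform for these
    # signers: the college-volunteers problem described above.
    print("no training clips for age bands:", missing)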

There are many other considerations as well, like reliability, you know? How do we make sure that it is robust? And, you know, what if the power goes out? What if the internet drops? And lastly, how do we evaluate and make sure that we have the checklist to guarantee that the model is inclusive and safe and effective? And how do we communicate that with the public? Those considerations were on our minds when the advisory group and the taskforce developed policies that, quote/unquote, automatic interpreting by artificial intelligence, AI, should follow. So now I will explain the principles that we developed with feedback from the signing community. First of all, user autonomy. It means that they should be able to decide

to use it or not. We need to empower the user to be able to make their own decision. Secondly, the model should improve the safety and well-being of all users. They should feel empowered, not technically trapped. They should not feel that they're stuck in a box. Third, the model's strengths and weaknesses

should be transparent and clearly communicated to everyone so that people can make informed decisions. And fourth of all, it's very important that providers should be accountable for the errors and harm that are incurred by the models that they provide. If you want to learn more, please refer to the QR code and the link to read the guidance document that establishes and justifies these principles. And you can learn more about why we've established the principles and support them as well. So that is what the advisory group is all about. We call upon the U.S. Access Board to take action that aligns with its strategic plan. I pulled out a few of the specific

objectives from that plan. First, your first objective is technical specifications for these technologies. You know, today the technical specs for the new models are not mature. They are underway and being developed, and the specs are getting ready to sample, but the models are already out there, in the wild. So it's really important to follow design equity principles and to include the Deaf and Hard of Hearing community in every part of every step of the design process. Second, we have been engaging with the community,

the public, and the deaf community through webinars, symposia, and workshops. And they have provided effective learning about users' wants and needs and concerns. So we call on the Access Board to make similar efforts. It's a great start. Third and finally, we're here to make one very important point. There is a disparity in access to technology between the Deaf and Hard of Hearing community and the population at large. We need to work together to make sure that these amazing new technologies are evenly distributed. Thank you for listening. >> Katy Kale: Thank

you, Jeff. And thank you, AnneMarie. Okay. Now we are going to hear from our final presenter, Melanie Fontes Rainer, who is the Director of the Office for Civil Rights at Health and Human Services, on access to healthcare and agency services for the public. Director Fontes Rainer, you may begin. >> Melanie Fontes Rainer: Sure. And thanks for having me today. My name is Melanie Fontes Rainer. My pronouns are she/her/ella.

I'm a mixed-race Mexican American with light skin. My hair is black and pulled back in a bun. I have on a white jacket and bright red lipstick today. So for everyone, we put up the slide, or maybe we can hold off on putting it up for a second. We wanted to talk a little bit about some of the things that our office has been working on. So we are the civil rights office for healthcare and human services, which means it is our job to advance, promulgate, work on, effectuate, implement, and enforce civil rights. As part of that, we do that through rulemaking,

policy guidance, and law enforcement. So we have a unique role in the space, and we've done a lot of work already that we're excited to talk about today. So one of those roles is Section 504 of the Rehabilitation Act of 1973. This is a law that prohibits discrimination on the basis of disability in programs that are funded by HHS, the Health and Human Services agency that is our national office, or in programs conducted by those agencies. So recently,

we updated our rules for programs or activities that are funded by this department. And through that rulemaking, we actually took on this issue. So in that final rule, tools that use patient information like age or health condition and other factors to estimate the benefits of specific healthcare, also known as value assessment tools, often involve algorithms. Those may not be used to discriminate on the basis of disability. So what does that mean? In healthcare, we know these types of tools, value assessment tools, are used to advise or determine the value, the dollar value, of a particular health service, for example, such as a drug, a device, or a surgery. And the outputs of these tools often inform care decisions, such as whether your health insurance company will cover it in the first instance, whether a state Medicaid agency will cover that service, and under what circumstances they might cover, meaning pay for, that service. They might be used to contain costs, so to try to make costs lower, and they might be used for quality improvement efforts, but we also know from our experience that sometimes they might lead to discriminatory outcomes against people with disabilities when they place a lower value on the life of a person with a disability compared to the life of a person who does not have a disability.
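To make the mechanism concrete, here is a deliberately oversimplified, hypothetical value-assessment calculation; real tools are far more complex, but the discriminatory pattern the rule targets, a condition-based quality-of-life discount, can be this simple at its core.

# Hypothetical, oversimplified value-assessment tool. Two otherwise
# identical patients; the tool applies a lower "quality of life"
# weight because of a disability diagnosis, so the same treatment
# scores as less valuable for that patient.
def treatment_value(life_years_gained, quality_weight):
    return life_years_gained * quality_weight

value_without_disability = treatment_value(life_years_gained=5, quality_weight=1.0)
value_with_disability = treatment_value(life_years_gained=5, quality_weight=0.6)

print(value_without_disability, value_with_disability)  # 5.0 vs. 3.0
# Using the lower score to deny care on the basis of disability is
# what the Section 504 rule discussed here prohibits.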

So we know, for example, we've seen this before. It's not new. Right? We saw this during Covid-19, for example, when different treatments were denied to people with disabilities. And we know, for example, that if two patients have nearly identical characteristics, but one has Alzheimer's disease, a value assessment tool may assign a lower value to that person's overall health because of that illness, assuming they have a lower quality of life. And so we know that a provider might then use that tool to deny that person a ventilator or some other treatment, which would violate the law and is discriminatory. So in short, this rule would prohibit that, which we think is really important, because again, while artificial intelligence, clinical algorithms, whatever you want to call it, these things are new, they're also not new in healthcare. We've been seeing algorithms and

value assessments being used for a long time in healthcare. And in fact, my office has done enforcement work, and we know that they can discriminate on the basis of race, age, and disability. Right? And we know that we, as human beings, are intersectional. We're not just a woman. We're also a woman who is Mexican American. We're not just a person with a disability. We're a person with a disability who has a gender identity or pregnancy status, et cetera, et cetera. And so in addition to that rule, we also extend the rule and make very clear that it applies in the child welfare context. This is important, because again, we know, we've seen algorithms be used in the human services space to deny parents with disabilities placement with foster kids, to deny kids placement with their own parents because of a disability, and that's why this rule, which has both this value assessment tool provision and this child welfare provision, is so important for disability.

I'm glad that this rule has gone forward. We're continuing to implement it. But also, it's really important, because while AI is new, a lot of these tools, predictive analytics, have been used in the healthcare space to make benefits decisions and social services decisions for a long time. The other rule my staff worked on that is really important in this space is Section 1557 of the Affordable Care Act. Section 1557 is part of the Affordable Care Act. It's a civil rights provision, and it literally says nondiscrimination in health programs and activities, and that applies across race, color, national origin, sex, age, and disability. And so unlike Section 504, which is just focused on disability, 1557 has all of these other provisions and accounts for this intersectionality between these protected classes. In addition, 1557 gives us jurisdiction

over insurance, which is something that 504 does not, and we all know algorithms sometimes can be used for prior authorizations, medical management techniques, and ways in which you may have restrictions on how you access prescription drugs, healthcare services, and otherwise. So under this new rule, those recipients of HHS funding, so Health and Human Services funding, may not, must not, discriminate against an individual through the use of a patient care decision support tool. That includes artificial intelligence tools, clinical algorithms, flowcharts, eligibility tools, risk prediction tools, value assessment tools, and more.

And to clarify, we know clinical algorithms are like a step-by-step guide that healthcare professionals use to make decisions about patient care, and those steps are guided by specific symptoms, specific diagnostic information about that patient, that person, test results, and other medical information inputs to the algorithm. And those algorithms then determine, again, whether something is covered, whether a particular treatment is needed or should be different. And again, we know that those things don't always reflect the conversations with providers, and we know that they oftentimes may have input processes that might treat someone differently because of a disability. So we think that there is promise in the use of these tools to reduce health disparities and increase access to healthcare, but we also want to make sure that providers of healthcare use these tools responsibly, so that they can be ethical, and make sure they're not driving more harm and not inadvertently discriminating through bias. So through this rule, tools that have an elevated risk of discrimination, which might be a tool that has a lot of variables with age, origin, sex, or disability, that
