Federal Agency and Industry Practitioner Hearing on Artificial Intelligence


>> The time is now 2:00 p.m. Hearing chair Elver Ariza Silva, would you please begin the hearing?
>> Elver Ariza Silva: Indeed. Welcome to the U.S. Access Board's artificial intelligence virtual hearing for the disability community. My name is Elver Ariza Silva. I am a public member

of the Access Board appointed by President Biden in 2022. I serve as the Vice Chair of the board and I live here in the Washington, D.C. area. The Access Board's mission is to create access for all. The Access Board is an independent federal agency dedicated to accessible design for people with disabilities. The board is composed of 25 individuals, 13 of whom, like me, are appointed by the President. The others are representatives from various federal departments. I would like to acknowledge my fellow board members who have joined today's

hearing and thank them for being here today. I would like to take a moment on behalf of the board to additionally thank all of today's presenters and those in attendance for being with us today as we discuss artificial intelligence in relation to the disability community. A few housekeeping notes as we begin. This hearing is being recorded. American Sign Language interpretation and realtime captions are being provided. The hearing will be posted to our website and YouTube channel in the coming days. All attendees will remain muted with their cameras off unless they are being called upon to speak. You may use the chat feature

to the host if you need assistance. For all board members, presenters, and those who preregistered to offer public comments, please take the time now to ensure that the name you are listed under on Zoom is your full name so that we can easily identify you and provide you with the permissions needed to speak at the hearing. On the screen, you will find the agenda for today's hearing. After my welcoming remarks, we will begin the hearing with Alison Levy, Director of the Office of Technical and Information Services for the Access Board, who will provide some foundational background on our artificial intelligence (AI) work and the Executive Order on artificial intelligence. After Alison, we will hear from a wide range of presenters from federal agencies, as well as industry practitioners, on AI and accessibility. Following the presentations, Access Board members and staff will be able to ask panelists questions. Afterwards, there will be a time for public comments from those

that have preregistered to provide public comments. Let us begin by welcoming Alison Levy. Alison?
>> Alison Levy: Thank you, Elver. Good afternoon. I'm happy to share some background information with you, starting with the next slide. For those of you who are not familiar yet with the U.S. Access Board, we have basically three primary roles and responsibilities. First, to establish accessible design guidelines and standards

under the Architectural Barriers Act and the Americans with Disabilities Act, in addition to Section 508 of the Rehabilitation Act, among others. Second, we provide technical assistance and training on all aspects of accessibility for both the built environment and the digital environment. And third, we enforce the Architectural Barriers Act, which applies to federal government buildings. Next slide. So back in October of 2023, President Biden issued an Executive Order on artificial intelligence. Within that Executive Order, the Access Board was tasked

with a few things to help with accessibility of  artificial intelligence. Next slide, please.   So our three tasks include the following. First,  we were asked to solicit public participation and   conduct community engagement to learn a little bit  more about what folks are feeling and experiencing   about their use of artificial intelligence. Second, we've been asked to issue technical   assistance and recommendations  on the risks and benefits of AI,   including use of biometrics as a data input. And third, we're working to help provide people   with disabilities access to information  and communication technology, as well   as transportation services. Next slide, please. To help us with this endeavor, we partnered with  

two national nonprofit disability organizations.  They are amazing and they are great team players   as we move forward in our communication with  the disability community, AI practitioners,   as well as other federal agencies. Those two  organizations are the Center for Democracy and   Technology and the American Association  of People With Disabilities, otherwise   known as AAPD. We engaged in this memorandum of  understanding, otherwise known as an MOU, back  

in May to really help us connect with the disability community. Next slide, please. One of the key outcomes of this partnership is that we're working through a series of five web-based sessions, including hearings. The first hearing was hosted on July 9 of this year, specifically for the disability community, and actually the first session is not listed here. The first one was a level setting. It was basic information about AI to help people with disabilities better understand AI and to level set a basic understanding of this technology.

The second one was two iterations with the  disability community. We hosted a morning   and an afternoon session on July 9th. Next we're hosting today's session for   AI practitioners and federal agencies to  share what they know about best practices,   pros and cons of artificial intelligence. And finally, our goal is to host our last  

iteration in November to share our findings and recommendations on the use of AI and accessibility for people with disabilities. To view any of the previous sessions, please visit our new U.S. Access Board artificial intelligence web page. The link is provided on this slide. We'll also pop it in the chat box. But if you visit our home page, just look for the link

to our AI hearing information and you'll find a wealth of resources there that will continue to evolve as we move forward with this effort. Now I'll turn things back over to Elver to introduce our next panelist. Thank you for joining us today and we look forward to continuing to support you in this endeavor.
>> Elver Ariza Silva: Thank you,

Alison, for that information. We will now transition to the panelist presentations part of the hearing. Panelists will be sharing perspectives on current research, current [audio skipping] AI and other related AI topics. We ask that all panelists keep their presentation to around eight minutes. They should be prepared to share their screen at the time of their introduction. As a reminder, please keep your cameras off until it's your time to present. We ask that all others remain muted with cameras off.

For the first presentation, we now welcome Mr. Zach Whitman, Chief AI Officer of the General Services Administration, GSA, on GSA approaches to AI and accessibility. Zach, you may begin.
>> Zach Whitman: Thank you, everybody. Really appreciate the opportunity to speak with everyone this afternoon, especially regarding GSA's consideration of AI and how that intersects with our accessibility practices. One convenient thing about my role as Chief AI Officer is that I'm also the Chief Data Officer, and in that structure at GSA, we run the accessibility and Section 508 program, so we're closely involved in how we can best leverage the latest technologies to improve accessible features, not only for our public services, but also for our internal team as well. We're really committed to this and having that synergy between the technology

and the 508 office has been a really beneficial relationship that we've been able to bridge. Now, as we look at AI and its potential future, we see this as a transformative technology for some of the solutions that previously may have been a little difficult or prohibitive in our ability to offer. We're seeing a democratization of these capabilities with these general services, applying them to different applications in the work context, but also in our public service offerings. That opens up a wide aperture of potential for us. So we see this opportunity, one, with excitement, but also, one, with caution. We don't want to rush into

things too quickly. We want to make sure that the advances that we're making are real progress and not doing any disservices as we roll out the new technology, given its relative infancy. Some of the things we're looking at specifically at GSA are things like realtime captioning and transcription services. We know that AI driven tools that are currently commercially available can do realtime captioning for videos or meetings in live events, but we also know that some of these things don't have the accuracy that would be required for our workplace or to offer to the public, and so taking a measured approach in how we evaluate those services is really critical.

Second would be an AI powered screen reader. This  is a new possibility for us to be able to not only   offer more context around the presentation  on the screen, but also provide insight as   to graphical information. We see examples of  how AI can start to interpret graphics in more   meaningful ways than simply representing  that this is a specific type of graphic,   but what that graphic means. And then that, again,  is an opportunity for us to start to lean into   this technology and provide a richer experience. We're also looking at automated image and video   descriptions. Now, this, again, is going into  how we are going to provide better services for   either our video presentations or our more static  assets that need to be enriched at scale. We have  

a lot of digital infrastructure that needs to be  looked at in certain contexts to ensure that the   images are properly vetted and categorized  in a way that is truly accessible. So this   is another efficiency that we could potentially  see that would improve the quality of services.   We're also making sure that we are considering  what it would mean to have predictive text or AI   based writing support be made available to  our workforce. We understand that a lot of  

folks can have challenges in writing, whether from cognitive load or a motor disability making it harder to create or to author text, and we wanted to make sure that predictive text and AI writing support can be a value add to those services. Now, when we talk about the AI that is currently available, there are a number of different applications and we could keep going into them, but one thing I would want to call out is we believe that AI, with regards to accessibility, needs to be opinionated. What I mean by that is oftentimes we see standards like WCAG trying to outline three basic ideas.

The first one would be: what is the issue that it's observing? Where is the issue? And then, how do you fix the issue? Now, when we're talking about things like HTML, it's critical that we identify these problems, and AI can be of service in identifying issues, but it doesn't necessarily mean that we have a prescriptive answer as to how best to solve the issue. And that's what I mean by the opinionated concepts that need to be present in our AI solutions that are going to be supporting us in writing, specifically code in this case.
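To make the what, where, and how-to-fix distinction concrete, below is a minimal, hypothetical sketch (not a GSA tool) using only the Python standard library. It can reliably report that an img element is missing alt text and where it appears, but choosing accurate alt text, the fix itself, still requires human or carefully reviewed AI judgment; the sample HTML fragment is invented.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> elements with no alt attribute: the 'what' and 'where' of the issue."""
    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            line, col = self.getpos()
            self.findings.append((line, col, dict(attrs).get("src", "?")))

# Hypothetical page fragment with an unlabeled chart image.
html = '<p>Budget trends</p><img src="/charts/fy24-budget.png">'
checker = MissingAltChecker()
checker.feed(html)
for line, col, src in checker.findings:
    # The tool can say what is wrong and where; a meaningful fix (alt text that
    # conveys what the chart means) still needs informed review.
    print(f"line {line}, col {col}: <img src='{src}'> is missing alt text")
```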

Sometimes the solution might not be quite so obvious to an AI assistant, which can identify where the issue might be but might not understand how best to solve that issue. This is one of the biggest shortcomings we see with accessibility testing tools: they mostly do a good job of finding these issues, but the exact fix sometimes requires subjectivity to ensure that the quality of that solution is meeting the needs of the public. A good example would be these publicly available

LLMs. They are largely populated and trained on publicly available data, public websites. Now, one thing that we have all come to understand is that the data that's used to train these models is public and, therefore, lacks curation, so we don't necessarily ingest into the system the absolute best solutions for accessible websites, as an example. We want to make sure that GSA plays a role in improving the corpus of data that these public LLMs are trained on by providing high quality accessible solutions and code so that better results can be made available through these general tools. I believe that the government should be a source of delivering high quality accessible content through our websites. And

so one of our main focuses going forward is making sure that we are doing everything we can to improve the quality of our digital experiences. One of the things we're currently exploring is using AI to convert PDF holdings into HTML. We have a lot of PDFs on GSA.gov and we'd like to convert them into accessible formats. That will take a series of the different products and capabilities that I talked about earlier to do effectively. Not only that, though. We want to make sure that the

quality is increasing as we deliver the services, along with the processes that we're going to put into place to ensure that there is quality assurance in the outcome and the output of these automated systems. We are also investigating meeting transcription services and document summarization. These are, again, in the early days because we want to make a very deliberate decision as to how we move forward and not move too quickly and potentially impart some biases, which we would not have constrained without proper testing. So we're taking it very slow. And lastly, we're looking at AI visual interpretation tooling. This would be for graphical interfaces that have

charts and maps, where an interpretation layer on top of the visualization would be assistive in trying to make sense of what the document is trying to say. Also, we're trying alternative routes to gather new ideas. We hosted a hackathon recently which had several accessibility submissions specific to AI that were used to assist folks in trying to either understand content that was on federal websites or to make better submissions or make forms more accessible. In fact, the second place winner of that hackathon was one that took jargon-heavy language that was hosted on federal websites and made it into clear, plain language through an AI interpreter. So we're really happy to see new ideas coming forward on that front. And lastly, I'll close with the fact that we're making a heavy

investment into the U.S. Web Design System in terms of making sure that any new advancements we make, especially in the domain of AI, in the sense of AI interfaces that are specific to the web, are accessible. And also, I'll call out an upcoming accessibility forum hosted by arXiv that's held over a couple of days, I think actually in September. It's going to be dealing specifically with AI and accessibility, which I think is a really exciting event that will be hosted by arXiv out of Cornell. Anyway, that's my time. I really appreciate everyone's attention. Please let me know if you have any further questions in follow up. Thanks.
>> Elver Ariza Silva: Thank you so very much, Zach.

For our second presentation, we welcome Megan Schuller, Legal Director of the Bazelon Center for Mental Health Law, on the promises and perils of AI for people with mental health disabilities. Megan, you may begin, please.
>> Megan Schuller: Good afternoon. My name is Megan Schuller. I'm the Legal Director of the Bazelon Center for Mental Health Law. I use she/her pronouns. I am a white woman in my forties with mostly blond hair, wearing a dark green shirt and gold necklace. I'm coming to you from Acadia National Park, home of the Wabanaki Nation. Thank you to the Access Board and AAPD for the opportunity to

speak to you all today about the impact of AI on people with mental health disabilities, including both the promise and the perils. The Bazelon Center for Mental Health Law has been fighting for over 50 years to protect and advance the civil rights of adults and children with mental health and developmental disabilities and the right to live with autonomy, dignity, and opportunity in welcoming communities supported by law, policy, and practice. As mentioned, my focus today is to speak on the impact of AI on people with mental health conditions in particular, including serious mental illness. It is an often overlooked population in the growing discussions of AI policy and regulation. AI, using that term broadly, is now being used for very high stakes decision making, from who gets a job or a loan, or is held in jail, or keeps custody of their child, often with the stated purpose of reducing systemic and unconscious bias. Whatever

you feel about it, AI is impacting everything  we do and it's not going away. It's growing and   expanding and absent standards and regulations,  so is AI bias and what our partners at CDT call   technology facilitated discrimination. To make this concrete, I want to talk   about pretrial sentencing tools. Courts are now  routinely using predictive algorithms to make   decisions in courtrooms. Algorithms use pools of  information to turn data points into predictions,   whether that's for online shopping, hiring  workers, or to make bail decisions at the   point of arrest. One such popular widespread  pretrial sentencing tool gets information from the   arrestee, feeds it into a computer algorithm, and  that outputs a risk score meant to quantify the   likelihood that the person will commit a crime or  fail to appear in court. High risk people equals  

jail and low risk equals bail. A widely read ProPublica report on such a tool found significant racial bias. It inaccurately predicted that 45% of Black arrestees would reoffend who did not, while the false positive rate for white arrestees was less than 24%. In addition to the horrifying racial bias, note how inaccurate the tool is for everyone on a decision that decides who gets incarcerated, yet these tools are still widely used. Now, to unpack why software that does not know the race of the person would produce such discriminatory and biased results, we should start by looking at the software and how it works, but the courts and judges using these tools generally do not have access to how the software is making its predictions or what the score is based on, because the company that created the tool claims it's proprietary. Think about the due process and constitutional implications of that. Now, despite this black box problem, we do know

the answer to the question of why. The answer is proxies. The tools rely on numerous proxies for race that reflect the societal disparities and institutional racism all around us. And they not only replicate that racism; they increase it. We see similar biases and proxies in the criminal legal system across race and disability. A review of a few key numbers helps explain why. By one report, people with disabilities account for 30 to 50% of incidents of police use of force. Federal government estimates have found that people in need of mental health support are 20 to 50% of the people shot and killed by police. Black Americans are over three times as likely

as white Americans to be killed by police. And one study found that Black people with mental health disabilities were more likely to be incarcerated than any other racial group. Now consider a pretrial sentencing tool that calculates your risk of reoffending or fleeing based on factors such as your age at first arrest and prior misdemeanor convictions. Those are going to disproportionately identify Black people and people with disabilities as high risk. The same is true of other factors used in these tools due to well documented disparities and biases, both racial and disability based, in employment and housing.
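To make the false positive disparity described above easier to inspect, here is a small, hypothetical sketch that computes group-wise false positive rates for a risk tool; the records are invented and the groups are generic labels, not real data or the actual tool discussed.

```python
from collections import defaultdict

# Hypothetical records: (group, flagged_high_risk, actually_reoffended)
records = [
    ("Group A", True, False), ("Group A", True, True),
    ("Group A", False, False), ("Group A", True, False),
    ("Group B", False, False), ("Group B", True, True),
    ("Group B", False, False), ("Group B", False, True),
]

counts = defaultdict(lambda: {"false_positives": 0, "did_not_reoffend": 0})
for group, flagged_high, reoffended in records:
    if not reoffended:                        # only people who did NOT reoffend
        counts[group]["did_not_reoffend"] += 1
        if flagged_high:                      # ...but were still flagged high risk
            counts[group]["false_positives"] += 1

for group, c in counts.items():
    rate = c["false_positives"] / c["did_not_reoffend"] if c["did_not_reoffend"] else 0.0
    # An accurate, fair tool would show similar, low rates across groups.
    print(f"{group}: false positive rate {rate:.0%}")
```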

Now, the child welfare context provides another  concrete example of the risks of these predictive   decision making algorithms for people with  disabilities. Many child welfare agencies in the   United States are considering using these tools to  screen reports of neglect and make child custody   and placement determinations. In fact, several are  already using them. The Allegheny family screening   tool used in Pittsburgh, Pennsylvania, and  surrounding areas was specifically designed to   predict who is likely to have their child removed  from their custody in the next two years. Well,   based on historical data, the correct answer  is BIPOC parents and parents with disabilities,   not because their children are at greater  risk of abuse or neglect, but because of   racial disparities and well documented biases  against parents with disabilities in the child   welfare system. So by identifying these groups,  the software is working correctly. That's the   right answer to the question. And now the tool is  going to identify all people in those categories  

as high risk and increase the likelihood that  they'll have their children taken away from them.   The problem begins with the question asked. Why  are we not asking which children are at greatest   risk of abuse and neglect? Well, because the  algorithm cannot predict that, and yet it's   still being used. Now, to answer the questions  presented to it of who's going to have their  

children taken from them, the algorithm has used a trove of detailed personal data collected from child welfare history, birth, Medicaid, substance use, mental health, jail, and probation records, among other government datasets. This tool has all the same proxy problems as the pretrial sentencing tools. But this tool has also included the fact of having a mental health diagnosis as a risk factor, grouping together a wide range of mental health conditions with no individualized analysis. It also includes the fact of treatment for mental health as another risk factor, and the same for a past history of substance use and treatment. Treatment is specifically held against you. Based

on one ACLU report on this tool, your risk score can increase by three full points on a scale of one to 20 based on the overt disability factors alone, never mind all the proxies. The Americans with Disabilities Act requires that decisions be individualized. It requires that people with disabilities be given an equal opportunity to obtain the same result, gain the same benefit, or achieve the same level of achievement as provided to others. And it prohibits using criteria that tend to screen out people with disabilities, or methods of administering a program that result in

discrimination. Now, consider how the algorithms we just discussed comport with these requirements. And to be clear, there is no exception in the ADA or other civil rights laws for AI. So this brings me back to where we started. Many of the tools discussed have been held up as a way to reduce human bias and disparities in public systems wrought with bias and discrimination. AI is not going anywhere. How do we address and reduce the perils while also pursuing and realizing their promise? Some of the tools that pose the greatest threat also present the greatest promise. What

if we could actually use them to reduce bias and discrimination in the child welfare system and in public benefits? We must first understand the very real impact of these tools, stop them from being used behind a curtain and with impunity to strip BIPOC and disabled communities of their rights, and then ensure the people most impacted are involved in developing and implementing these tools and solutions, if we are truly to use AI for good. To be clear, this is not a context where we should be moving fast and breaking things. Those things are people's lives and their families. But

if an algorithm was carefully and thoughtfully developed, deployed, and implemented by and with impacted communities, Black and brown communities, people with disabilities, LGBTQIA+ communities, to actually reduce bias in these high stakes systems, that is a world worth imagining. Thank you for the opportunity to speak to you today.
>> Elver Ariza Silva: Thank you so much, Megan. For our third presentation, we are going to hear from Nathan Cunningham, Senior Policy Advisor of the Office of Disability Employment Policy within the U.S. Department of Labor and project manager of

the Partnership on Employment and Accessible Technology, or PEAT. Nathan, you may begin, please.
>> Nathan Cunningham: Thank you so much. Good morning, everyone, from Seattle, Washington. My name is Nathan Cunningham. I am a white man in my early thirties wearing a blue plaid shirt

and a blazer, and I use he/him pronouns. It's a pleasure to give remarks at the U.S. Access Board hearing on behalf of the U.S. Department of Labor. I am a Senior Policy Adviser in the department's Office of Disability Employment Policy, or ODEP. Our Assistant Secretary, Taryn Williams, the head of my agency, is a federal member of the U.S.

Access Board, and through our collaboration, our agencies are able to advance our shared mission to promote equity and access for people with disabilities. So thank you for inviting me and for all the panelists' remarks today. I also want to say I really appreciated the wealth of knowledge that disability advocates shared during the hearings on August 8th. Definitely learned a lot. I have low vision and I believe it's necessary for those of us with disabilities to make sure that new technologies work for us as well. So I'm glad that these voices are involved in the hearings. AI is a prime example of this. So my agency, ODEP,

influences federal and state policies to make AI fair and inclusive in the context of employment. We create resources for employers, AI experts, and workers to learn how to use AI in inclusive ways. And we look at AI from two main perspectives. So first, as the previous panelist described, AI holds extraordinary promise and potential, in our case for improving opportunity and access for workers. It can even enhance accessibility and support reasonable accommodations. Think

of applications like computer vision or meeting  transcription, as one of the panelists discussed.   However, there's a second part of this angle.  Depending on how people develop and use AI,   this technology runs the risk of undermining  labor rights and causing bias and discrimination   against disabled jobseekers and workers. At  the Department of Labor, we are committed to   empowering our nation's jobseekers and workers. So  protecting worker rights and well being helps make   employment safer, healthier, and more inclusive.  These goals are critical during a time of rapid  

innovation in the case of AI, when advanced  technologies are reshaping how people work.   Earlier this year, the Department of Labor  released a set of principles for developers   and employers to promote worker well being  when they use AI. These principles offer a   North Star for inclusive AI governance. And here  with the Access Board, we know from the world of   Accessibility that robust governance efforts  are critical to help people follow policies   like section 508 of the Rehabilitation Act. And I will say we need similar guidance to   help people understand and follow inclusive  AI practices. So a similar standard that  

lays out people's responsibilities to enact inclusive AI policies at an organization, whether from the employer side, the vendor side, or even the worker side. New tools powered by AI are in the spotlight because they can impact so many aspects of work. AI can help people carry out job tasks, take meeting notes, provide chat bot services, process large datasets, scan applicant resumes, and even match people to jobs. Many of us are encountering AI all the time without realizing it. And that's part of the problem here. Many of us have talked about transparency as a key issue. Transparency is one of the central pillars of

inclusive AI governance. So jobseekers and  workers need to know when AI is present,   how it affects them, what accommodations they can  request, and even how they can opt out. Being Able   to call in a real person or a human for oversight  and assistance is not a separate function. This   is a key element of inclusive AI systems. AI is technical, but also very social. People   build AI systems and decide how to use them.  Unfortunately, the people most at risk from these  

systems are often left out of the conversation. But not today. At ODEP we address this issue head on. Our mission is to develop policies and practices that increase the number and quality of employment opportunities for people with disabilities. AI is one issue that crosses many of our policy areas, from education to employment to workforce training. Through this work we fund an initiative called the Partnership on Employment and Accessible Technology, or PEAT. PEAT works

toward a future where all technology is born accessible so disabled workers can succeed in their careers. And for many years, PEAT has influenced accessibility practices and policies for traditional forms of technology, such as computers and websites. As technology evolves, we have also created resources that give employers practical guidance on disability inclusion when they choose to adopt AI. These resources guide employers through each step of choosing inclusive technology, implementing it, and training staff on best practices. To create more useful and robust materials, we collaborate and partner with a range of policy makers, disability advocacy organizations, technology companies, employers, AI experts, and researchers. One of our first public resources on this topic was in 2021, the AI and Disability Inclusion Toolkit, found on our website, and it lays out for employers steps they can follow if they're interested in using AI in recruitment and hiring. This is one of the biggest use cases that we're

seeing in the employment context right now and one that is deemed high risk generally, and it's tricky, because according to data from the Society for Human Resource Management, I think 92% of employers are procuring these AI tools from vendors, so there may be a lack of understanding about how the tool functions and what its risks are, especially to disabled workers. So there are responsibilities that we're laying out for employers to ask the right questions, to put in place the right governance practices so that these tools are not discriminating, because as the previous panelist said, the presence of AI does not negate employers' responsibilities regarding nondiscrimination. We have also put out resources on the promise of AI. So there are some disability led startups that are using AI in exciting and innovative ways to

match people with disabilities to jobs, to train people with disabilities to fill in-demand jobs, and we have this research on our website as well. In 2023, we also put out a series of articles on how automated surveillance can create barriers for workers with disabilities, so moving out of the hiring and recruitment context, looking at the issues with ability based monitoring of workers to inform productivity, advancement, and even termination decisions. There are many accessibility challenges with these automated tools that employers should be mindful of. We are also working on new policy resources with federal partners overseeing AI risk management. These upcoming materials will guide employers that use AI tools in recruitment and hiring to follow concrete steps to maximize the benefits of this technology and better manage risks. And this will be a much more robust policy resource that we're aiming to launch this fall.

Ultimately, employers can use AI to reduce bias and even open the virtual door to recruit more workers with and without disabilities. Disability led startups are leveraging AI to advance inclusive hiring and major hiring platforms are advancing inclusive strategies in their own networks. To learn more about our resources, I encourage you to visit our website at www.PEATworks.org. In closing, I want to reiterate that we all have a

duty to decide when and how to use new technology. Not all uses of AI are inevitable. Many are good. Some require more attention. I hope the work I shared today can help with how to address the risks and benefits of AI for disabled people. I want to thank the American Association of People With Disabilities and the Center for Democracy and Technology for their partnership with the Access Board on this issue. And I look forward to hearing from the rest of the panelists.
>> Elver Ariza Silva: Thank you so very much, Nathan.

Now for the fourth presentation, we welcome Sarah Decosse, Assistant Legal Counsel of the ADA and GINA Division within the Office of Legal Counsel in the U.S. Equal Employment Opportunity Commission. Sarah, you may begin, please.
>> Sarah Decosse: Thank you so much. Let me just share my screen. Thank you so much for this opportunity. My name is Sarah Decosse. I'm a disabilities attorney with the Office of Legal Counsel at the EEOC. I use she/her pronouns and I am a white woman wearing a blue scarf and a blue sweater. I'm going to address AI and other algorithmic decision making tools in employment for people with disabilities. And

I am delighted to follow on Nathan's presentation, which is very apt in terms of what I'm going to be discussing. I'm going to go into a little more detail about just how the ADA raises concerns about the use of AI in the workplace. So first, I just want to discuss a few of the types of tools that we're seeing. There are many and they are moving into the workplace rapidly. A few of those are video interviewing, chat bots, resumé readers, productivity monitors, keystroke counters, and gamified tests. In addition, there is a very large group of tools that are called wearables. These are AI driven devices that literally attach to someone's body

and they can perform many different tasks. They're  designed to perform many tasks. So some examples   of wearables include eye tracking glasses, driver  fatigue detectors, posture and limb strength   trackers, workplace wellness monitors, movement  disorder detectors, and things like social   behavior monitoring. For example, monitoring body  language and tone of voice. Or there are several   devices that use headbands, headsets, or helmets  to detect emotion, attention, or mental focus,   and they also can perform things like EEGs. So where do we see potential concerns arising   about the ADA? We see them arising in three  different categories and because time is very   short today, I'm just going to quickly  run through those categories. The first,   and Nathan brought this up, is the failure to  provide reasonable accommodations. There are many   different circumstances in which individuals with  disabilities may need an accommodation to interact   productively with an AI tool. So for example,  if an individual with a visual impairment cannot  

navigate a hiring process because the algorithmic tool is not fully screen readable, they may need an accommodation that will allow them to do so. Or if an individual with limited manual dexterity cannot complete a timed knowledge test that requires the use of a keyboard, even though they're well qualified for the position, they may need an accommodation to allow them to perform in a way that will be assessed accurately. So we have a number of recommendations and promising practices for employers. As a note, just as I was starting, I put in the chat links to our two technical assistance materials that address AI and the Americans with Disabilities Act. One is more or less focused towards employers. The other provides several tips for employees and applicants. So I hope that you will look at those. They go into much more depth about what I'm discussing today. So what are some promising practices with respect

to ensuring that people who need reasonable accommodations when they're interacting with AI tools can get them? First, making sure that job applicants and employees know that reasonable accommodations will be made available. Making sure that HR staff know how to recognize requests for reasonable accommodations. And ensuring that alternative test formats, and alternative formats for any kind of process, are available. And of course, those same premises would apply when a third party is undertaking the hiring or employment task for an employer. The second category of potential ADA violations in employment relates to the ADA's limitations on employers seeking individual information about someone's disabilities or their medical status, their medical information. Congress recognized that allowing employers to get this information might subject individuals in the workplace to discrimination on the basis of disability, and that's why these particular three provisions I'm going to mention apply not only to people with disabilities, but to everyone in the workplace. So there could be potential concerns if AI tools,

for example, make disability related inquiries. In other words, they ask questions that are likely to elicit information about someone's disability. Employers are also prohibited, and again, this is in most circumstances, as there are some exceptions, from collecting information that

qualifies as a medical examination under the ADA. And similarly, if employers collect medical information, the ADA requires them to keep it confidential with very limited exceptions. So what are some promising practices for employers so that they can avoid either inadvertently collecting this type of information or failing to respect the ADA's requirements to keep medical information confidential? Some promising practices would be communicating with any vendors who might be selling these tools to ensure that the tools do not ask questions that may elicit information about disability, to ensure that the tools do not engage in medical examinations unless something like a request for a reasonable accommodation has been made, and similarly, to ensure that any medical information collected remains confidential and is only used for appropriate purposes, according to the ADA. The third category of potential violation is

what the ADA calls screen out of a qualified  individual with a disability. The ADA prohibits   the use of selection criteria that deny  employment opportunities to qualified   individuals with disabilities, whether the screen  outs are intentional or not. For example, if an   employer uses a chat bot that is trained to reject  applicants with significant employment gaps,   individuals who may have such gaps, because  of their disability, may be screened out,   even though they're qualified to do the job. Or an individual may have a disability that   affects their speech, and a video interviewing  tool may rate them poorly, because they do not   speak as quickly as other candidates. Again,  even though they're able to do the job.  
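As a purely hypothetical illustration of the screen-out pattern just described, the sketch below applies a facially neutral resume-screening rule that rejects any employment gap over a fixed threshold; every name, field, and threshold here is invented, and the point is only that a disability-related gap is treated like any other gap unless the process is designed otherwise.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    qualified: bool            # meets the actual, essential job qualifications
    employment_gap_months: int

def naive_screen(applicants, max_gap_months=12):
    """Hypothetical screening rule: advance only applicants whose gap is under the threshold."""
    return [a for a in applicants if a.employment_gap_months <= max_gap_months]

pool = [
    Applicant("Applicant 1", qualified=True, employment_gap_months=3),
    # Gap caused by disability-related treatment; the rule cannot tell the difference.
    Applicant("Applicant 2", qualified=True, employment_gap_months=20),
]

advanced = naive_screen(pool)
print([a.name for a in advanced])  # Applicant 2 is screened out despite being qualified
```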

So here, too, we have a number of promising practices for employers to reduce the risk of screening out qualified individuals with disabilities. Among those promising practices, it would be helpful to clearly state that reasonable accommodations are available, and to give all applicants and employees as much information about the assessment tools as possible in advance of the individual beginning the assessment. We've noted that sometimes people don't know what they're about to be asked to do and may not recognize that they need an accommodation until they're already midway through the assessment, which makes it a little bit more difficult.

Further, it's important to make sure that the assessment tools reflect actual essential job qualifications so that those tools only measure qualifications that are truly necessary for the job. And of course, those qualifications should be measured directly, not simply through correlations that may prove to be inaccurate. And employers may wish to directly ask software vendors questions such as whether or not the user interface can effectively interact with people with as many disabilities as possible and whether the materials are available in alternative formats should they be needed. It's important to note that there are occasions when an employer and an AI vendor may both be responsible under the ADA. So when AI algorithmic

decision making tools are designed or implemented by a vendor and the vendor acts as the employer's agent, the vendor, in some circumstances, may have the same legal obligations as the employer to job applicants and employees under the ADA. Some quick final notes, as I don't want to go too long. First, just to note that the EEOC is very interested and has put a great deal of energy into advancing our work on clarifying not only the ADA implications of AI, but also looking at other equal opportunity laws and the impact that AI has on other protected factors that are covered under those laws. These are just some of the references to some of our initiatives on that front. And finally, I just want to make reference to those two technical assistance materials that appear on this slide with hyperlinks that we can share with you, as well as a very recent settlement just two weeks ago that we made with respect to individuals who needed accommodations of effective screen readers to apply for jobs and

were not provided those accommodations. There is also a link to our library of disability related resources, which we welcome you to visit. So thank you again for this opportunity. I'm delighted to answer questions later.
>> Elver Ariza Silva: Thank you so much, Sarah. We appreciate it very much. Now our next presenter, Josh Mendelsohn, Acting Chief of the Disability Rights Office at the Federal Communications Commission. Josh, please proceed.
>> Josh Mendelsohn: Hello, everyone.

I am Josh Mendelsohn and I am the Acting Chief of the Disability Rights Office of the Federal Communications Commission. I am a white, bald, middle aged man with a salt and pepper colored beard. I'm wearing a black suit with a green shirt. I would like to thank the Access Board for hosting this hearing today to cover this very important issue. The FCC has had a few ongoing initiatives that have involved the use of artificial intelligence, which has an impact on individuals and people with disabilities, and I would like to talk about three specific areas today. The first being modern communications or access to communications. The second being video programming and the third being emergency

access or emergency access communications. First of all, communication access or modern communications covers a wide range of applications and uses ranging from telephones to telecommunications relay services. This also includes hearing aids, as well as interoperable video conferencing platforms, like what we are using right now at this moment. And all of these have been impacted by recent FCC actions involving artificial intelligence, otherwise known as AI. I am going to start by talking about robo calls and robo texts. This is becoming a huge issue for almost everybody who has a phone, whether it be a landline or a mobile phone. Many individuals are being deluged by these

robo calls and robo text messages. And the FCC is very well aware of this issue. We are also aware of the use of artificial intelligence on both sides: by those bad actors who are using AI to send even more robo calls or make them more realistic, and also the use of AI to inhibit, reduce, and prevent robo calls and robo texts. Regarding the use of artificial intelligence in robo calls, we have recently been soliciting comments on accessibility and taking it into consideration in defining those technologies, particularly on how to avoid discouraging the development of beneficial AI tools to detect and block unwanted and fraudulent calls and text messages. So how can AI be used to improve the ability of people with disabilities to communicate with called parties? Such as the ability to revoke consent to future calls and messages, and to work effectively with telecommunications relay services and to generate or translate those messages. We have also been seeking comments on steps that we can take in order to ensure that important accessibility tools, such as voice altering technology for individuals with disabilities, are not negatively impacted by our rules, such as the TCPA rules regulating calls using artificial or prerecorded voices.

We recently, earlier this month, held a vote to adopt an NPRM, or notice of proposed rulemaking, for further rules and sought comment on the positive use of AI to improve access to the telephone networks for people with disabilities, especially those individuals who are using artificial or AI generated voices or synthesized voices to make these calls. Another area that we've been looking into and that we have been regulating is telecommunications relay services, which enable people with disabilities, people who are deaf, hard of hearing, or speech disabled, to be able to use a telephone network to make calls to people who are not deaf or hard of hearing on the other end. One example of telecommunications relay services is internet protocol captioned telephone service, otherwise known as IP CTS, in which a person who is deaf or hard of hearing will be able to speak for themselves and then the other person on the other end of the line will hear that speech, what the person is saying, and then when they wish to speak back, what they say would then be typed and sent as a text message back to the person who is deaf or hard of hearing. Six years ago, the FCC ruled that IP CTS could also be provided on a fully automated basis using only ASR, or automatic speech recognition, which we, at times, have referred to as a type of AI. This has been used to generate captions without the participation of a

communications assistant or, otherwise, a relay agent, which could serve as an intermediary. Recently, just last month, we adopted a new compensation plan for IP CTS, or IP captioned telephone service, providers using different rates. One would be a higher compensation rate for service that used communications assistant captions, that is, a human who would be producing the captions, and a lower rate of compensation for ASR only captions. That way, we sought to reduce the incentive to provide only the lower cost ASR caption service when captions by a human intermediary would be better in some circumstances or preferred by consumers. Recently, just several months ago, a consortium of consumer groups filed a petition requesting that the FCC initiate a rulemaking to require IP CTS providers using ASR technology to also provide consumers the option to switch to a communications assistant at any point during an IP CTS telephone call. Now, we currently have a number of IP CTS providers and all of those providers are able to

provide IP CTS using ASR, but only a few of those providers also provide the option to switch to a communications assistant. The petition states that ASR being used as a part of IP CTS frequently misinterprets speech with accents, dialects, or patterns which deviate from standard American English, or when used to recognize speech in environments with background noise. We are currently in the process of soliciting comments and replies; comments are due September 3rd and replies are due September 16.
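One concrete way to assess the concern raised in that petition is to measure word error rate separately for different groups of speakers against human-verified reference transcripts. The sketch below is a generic, hypothetical illustration, not an FCC methodology; the transcripts are invented and the grouping labels are placeholders.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical caption samples: (speaker_group, human_verified_reference, asr_output)
samples = [
    ("group 1", "please call me back after three", "please call me back after three"),
    ("group 2", "please call me back after three", "please tall me back after free"),
]

for group, reference, hypothesis in samples:
    # A large gap between groups would support offering a human communications assistant.
    print(f"{group}: WER {word_error_rate(reference, hypothesis):.0%}")
```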

Another form of telecommunications relay services is known as IP Relay, in which a deaf or hard of hearing person or a person with a speech disability would type a message and then that typed message would be spoken to a person on the other end; then that person on the other end of the line would be speaking and what that person speaks would then be transcribed into text, which the TRS user or IP Relay user can then read. We currently have two applications by current IP CTS providers to also provide IP Relay using speech synthesizers and ASR. Another area that's been impacted by AI is interoperable video conferencing systems or video conferencing services, just like this platform, such as Zoom, Teams, WebEx, and other types of platforms. We at the FCC are seeing an increase in the use of AI and ASR on these types of platforms. And the FCC has already ruled that these platforms must be accessible to people with disabilities. So we see AI being used to generate automated captions, also AI summaries of conversations, and also being used for transcription services. They also can be used to

automatically designate signers as speakers. One emerging area that we are seeing the use of AI have an impact on is automated ASL interpreting services, otherwise known as AIS, in which AI is being used to recognize sign language and translate those signs into speech or text. And the same is true in the other direction. We are seeing speech and text being translated to American Sign Language or other signs using an avatar or photorealistic personas or figures.

We recognize that this holds a lot of promise for the telecommunications landscape, and for other contexts as well. The use of AI is also emerging in video programming. Take the use of audio descriptions, for example, which interpret what's being shown on the screen for those who are blind or have low vision. We're seeing AI be used to generate these types of services and audio descriptions, even though we recognize that there are some complaints

regarding the quality of these audio descriptions and the quality of the speech. ASR is also being used to generate closed captions for video programming more and more in recent years. However, we're also seeing complaints regarding the dictionaries that are being used by these services. Some are not up to date or do not have the accurate dialect or vocabulary that's necessary for those contexts.
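One common mitigation for out-of-date caption dictionaries is a post-processing pass that maps known misrecognitions to current domain vocabulary before captions are displayed. The sketch below is a generic, hypothetical illustration, not an FCC requirement or any provider's actual pipeline, and the correction table is invented.

```python
import re

# Hypothetical correction table for terms an out-of-date ASR dictionary keeps missing.
DOMAIN_VOCABULARY = {
    r"\bsection five oh eight\b": "Section 508",
    r"\bweb cag\b": "WCAG",
    r"\bip c t s\b": "IP CTS",
}

def correct_captions(caption_text: str) -> str:
    """Apply domain-specific corrections to raw ASR caption text."""
    corrected = caption_text
    for pattern, replacement in DOMAIN_VOCABULARY.items():
        corrected = re.sub(pattern, replacement, corrected, flags=re.IGNORECASE)
    return corrected

raw = "captions must meet web cag and section five oh eight requirements"
print(correct_captions(raw))  # -> captions must meet WCAG and Section 508 requirements
```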

Now, there is a lot of promise when it comes to new next generation televisions, or ATSC 3.0, in terms of how AI can be used to generate overlays of information when it comes to signed information or automated sign language or captions, various graphics, or, you know, presentations of graphical information on the screen. And I recognize that I'm almost out of time, so I would like to encourage people to visit our website, www.FCC.gov/accessibility. There you'll be able to find more information about our recent initiatives involving AI and people with disabilities. The FCC is committed to the use of AI in order to

enhance accessibility for people with disabilities in modern communications and video programming, as well as for emergency communications.
>> Elver Ariza Silva: Thank you so very much, Josh. Now in our next presentation, we will hear from Rylin Rodgers, Disability Advisor for Microsoft Accessibility, on AI and accessibility at Microsoft. Rylin, you may begin, please.
>> Rylin Rodgers: I'm the Disability Policy Director at Microsoft. I'm a middle aged white woman with brown and white hair wearing a variety of blue patterns today. I'm excited to join this conversation and build on some of the

great resources already shared. At Microsoft, we  believe accessibility is a fundamental right. It   breaks down barriers and opens doors to a  more equitable future. We're committed to   ensuring that people with disabilities  have access to accessible technology.   Our approach takes on four different areas.  We're really interested in the challenges  

people are facing and we understand that they are complex and there's no single company or sector that can solve them. So we're committed to working with our global ecosystem of people with disabilities, partners, and customers. We really work together in four key areas: developing inclusive technology, working with disabled people as the talent that drives the [inaudible] forward, modernizing public policy to ensure access to the fundamental right of accessibility, and accelerating awareness and connectivity through partnerships. It's key that these pieces work together and are grounded in the needs of disabled people.

I want to take a step back to our conversation  around AI, because we've spoken a lot about the   term and what it means and the Access Board  is responding to the Executive Order on AI,   but holistically, it's important to understand  the history and that there are many types of AI   that are currently driving accessible features in  products and systems. AI has existed since 1956.   We've had multiple waves of advancement  from machine learning to deep learning,   and to our current age of generative  AI. I think this is an important point   of clarification when we're talking about risk,  opportunity, and regulation to be as inclusive   and clear as possible about what the risks are  and what types of technology we're targeting.  

At Microsoft, we have a long history of really thinking about accessible and responsible AI. We're committed to advancing AI through ethical principles, and I think it's important to call out that within those principles, fairness and inclusivity are key components. This has been an ongoing practice for multiple years, and our accessible AI principles are regularly updated and reviewed and used in all of our product development. I encourage everyone to view those principles, guidelines, and toolkits on our website so you can learn more about our approach and consider it. The other part that's helpful is we've taken

some of our learnings and have been sharing them in terms of a blueprint for governing AI, sharing what's possible and how we can think collectively about ensuring privacy, security, and accessibility in this new world. We find ourselves at this moment of generative AI, which really is proving to be a paradigm shifting innovation. There's an opportunity to create a

And I think that's a really important place  to start. Our partners at GitHub have really   been leading the way in terms of using AI  in coding modeling. We've seen two massive   steps forward in these efforts. One ensures  that new code is built accessible by design   as AI can acquire it and check is it going  forward. And the other, as it's transformed   the access to this practice of coding and other  parts of technology development, to a wider range   of disabled technologists. It's been critical  to drive forward the future of innovation.  

I want to take a minute to talk about what I think is a bit of the elephant in the AI room, and that is the foundational models and the reality that they're built on the world as it exists. I frequently say that the world as it exists is racist, sexist, and ableist, and that really gives us an opportunity to take a moment and think about what that means in terms of the models and how we need to be addressing it now and in the future. We are very clear that data libraries need disability data to empower representation and to protect against ableism. We're also aware that historically, data libraries underrepresent people with disabilities, disability experience, and disability expertise. We've been tackling this in a variety of ways. As previously mentioned, there's quite a lot of learning in the space of translating language access in the space of AI, particularly in a variety of sign languages, in an effort to get to more equity in terms of language access for people. We have ongoing research activities at Microsoft and

a blog really outlining where we see it as possible and what still needs to be done to make sure that those practices are culturally inclusive and meet the needs of consumers. The other part of the learning is that it's not just about filling gaps and creating new data. It's about making sure that we're testing and modeling and really correcting the current data to prevent harm. This includes a critical need to test for disability bias in all parts of the design process, the need to tune the foundational models to more accurately represent who we all are and how we all interact in the world, and again, that ongoing commitment to filling the data gaps. I'd like to point to one example of that commitment to fill the data gap. It's one of my favorites. It follows so well after our previous speaker, because it really is

around the challenge of getting to accuracy for non typical speech in AI models. But it's also one of my favorites because it's about partnership. We are going to get to more inclusive datasets faster if we work together. So the Speech Accessibility Project is a project across researchers and all of the major tech companies working with disabled people to gather high quality, representative speech samples.

