How to perform an accessibility audit
>> MALCOLM MEISTRELL: Hello, and welcome, everyone. My name is Malcolm Meistrell, and I am a Membership Services Coordinator with IAAP. Thank you for joining us for today's subject matter expert webinar, How to Perform an Accessibility Audit. Before we begin, we
have a few general housekeeping items to go over. Closed captioning is provided. To enable closed captioning, select the CC icon at the bottom of your Zoom screen. The StreamText links for English, French, German, Spanish, and Swedish will be posted in the Chat as well. American Sign Language interpretation is also provided. Microphones are muted to prevent any background noise or disruptions, and we ask that everyone please post your questions in the Q&A. The Chat will also be monitored
for general dialogue and technical issues. And today's webinar will be recorded and available in our webinar archives, and we will send out a copy of the recording and presentation to everyone who registered. And now I would like to turn today's program over to our presenters, Giacomo Petri and Michele Lucchini.
>> MICHELE LUCCHINI: Thanks for the introduction, Malcolm, and hi, everyone. As presented by Malcolm, today's session will focus on how to perform an accessibility audit. In order to do some context setting, let me introduce the company that Giacomo and I come from. We have more than 1,000 engagements and offices in New York, Austin, and Italy, where we are broadcasting from. We support a number of
different industries in the space of retail, travel and hospitality, technology, education, financial services, and healthcare in building their accessibility program. A couple of words also about Giacomo and myself. I am Michele Lucchini, Vice President of Delivery. I started UsableNet 23 years ago, and I am really glad for the opportunity that we have today to present our vision of how to perform an accessibility audit. Together with me today, I have Giacomo. Giacomo is the Director of our Accessibility Auditors. And with that said, I think we are good to start. So let's start reviewing together the agenda. We will start focusing on some foundational concepts, just to set the background of what we are talking about. What is the environment that relates to an accessibility audit? Trying
to define its complexity and how articulated it is. Then we'll discuss the preliminary steps we take with companies when approaching an accessibility audit, in terms of needs, goals, and objectives. We'll review together what we should expect from an audit. And we'll then dig into probably the core of the session, which is the difference between an in-use audit and a code audit. And then we'll definitely leave some space for Q&A, for questions and answers. So let's start with some foundational concepts.
In order to better identify the topic, we need to start from the basics. And when we talk about an accessibility audit, the basis is the Web Content Accessibility Guidelines in their current de facto standard version, 2.1 AA. I don't want to go into crazy detail, but it is important that we first understand the structure. The guidelines are organized into principles. There are four: perceivable, operable, understandable, and robust. There are 13 guidelines in total, and each guideline has a set of success criteria to be met. And we are talking about 50 success criteria in total for AA, which, as I said before, is currently the standard when we talk about digital accessibility. The success criteria are organized in three levels -- A, AA, and AAA. The reason we are presenting this structure is to highlight the complexity of the way the Web Content Accessibility Guidelines are organized. If we think about all the success criteria and techniques that an auditor must adopt in order to verify whether something is satisfied or not, the way these are organized is the only practical way to make them usable for an accessibility expert producing a report. So accessibility, digital accessibility, is not necessarily something very easy. It is something that requires an articulated process because of its natural complexity. As a second foundational concept, it is important to refine the language and the semantics. So
what is an accessibility audit? In these 20 years of experience, we have seen the word "audit" being used to represent a number of different deliverables. So today we'll give you our point of view on what an accessibility audit is and why we believe it is what we think it is. An audit is an assessment of a digital property against a standard. As we said, this standard is the Web Content Accessibility Guidelines, in its version 2.1 AA. As we are doing an assessment that is going to measure conformance against the guidelines, this implies that we need to verify all the success criteria across all the guidelines. Which immediately means that an automated tool alone is not enough to validate the entirety of the guidelines. It is necessary to combine assistive technologies during our tests. By assistive technologies, we mean all the tools, software, and hardware that people with different abilities rely on. Examples of assistive technologies could be screen readers, screen magnifiers, and so on and so forth. When it comes to doing an audit, there are a
set of preliminary steps that we highly recommend considering. First of all, the goals. If I am a company that feels the need for an accessibility audit, it is important that I am very clear about the reason why I need one. Do I need an accessibility audit just because I need to know where I am, what the health of my digital properties is from an accessibility point of view? Or do I need an audit in order to generate a Voluntary Product Accessibility Template, also known as a VPAT? That is a common document, often required as an official assessment, to share with other companies the accessibility status of a digital property. Or do I want an audit that highlights exactly which violations exist, because I have in mind a program to remediate all the accessibility violations and improve the conformance level of my digital property? Obviously, these goals can be combined; they are not mutually exclusive. But it is important to start with the reason why I need an audit.
Because the reason drives, in many cases, the approach for the audit. Do I need a quantitative audit? By this, I mean do I need a high volume of pages to be audited? Or do I need a qualitative audit, selecting just a representative sample of my website so I can focus on the common violations and better define a remediation strategy that matches my goal? Do I need a large scope to be audited, or a representative sample? To some these might seem trivial questions. But I can guarantee that we have seen many companies struggle because they were not clear about their goals. Often they have been overwhelmed by the results
of the audit. Or the audit was not enough to support the actions they planned to take after receiving it. When we talk about accessibility, another important aspect is identifying actors and languages. Accessibility speaks different languages. This means that the way I talk about accessibility to a developer, perhaps indicating what actions need to be taken at the code level, is different from the way I discuss accessibility with a stakeholder, where I may need to stay at a higher level, discussing conformance, legal risks, strategies, and planning. It is important that we recognize the different languages so our message is consistent and appropriate across different teams. Because something you have probably already learned is that accessibility cannot be just one team's responsibility. It involves a lot of different actors and
departments in your organization. So a question we might want to ask, and ideally answer, when we are facing an accessibility audit is: who is going to analyze the audit and make decisions? We need to consider that when we invest in an audit, we have at least two expectations. The first is that the information contained in the audit will be presented in a way that fulfills our needs. Does the report satisfy the goal that drove the need for the audit? Is it clear to understand? As I said before, accessibility can be potentially complex. So we want an audit result that is clear to understand. Understand in order to do what? In order to take actions. And actions can be a very broad term. An action can be very specific, regarding what to remediate and what not to. An action can be how I do my release planning, starting to incorporate some accessibility remediation. So do I have information in my audit on prioritization, for example? These are all related to the languages and actors involved in an audit process. So what should we expect from an accessibility audit? First of all, I want an audit to be comprehensive. I want an audit to report conformance. And
I want an audit to provide parameters to determine who needs to do what, so to identify responsibilities, facilitate and potentially drive prioritization, and support the definition of a remediation strategy. I know I have emphasized multiple times the importance
of a plan or a strategy, and this is fundamental when we talk about accessibility. Because simply iterating -- designing and developing as I always have, followed by accessibility testing, then going back to the design and development table, remediating, and repeating that cycle -- will make accessibility unsustainable and too expensive for everyone. So the ideas behind a strategy plan are all focused on trying to learn from an initial remediation, which can be originated by an audit, and on trying to define an accessibility program so that accessibility is sustainable. In other words, what can I learn from an audit, beside the specific remediation, to transform accessibility from a project into a requirement for all the teams involved in the digital property I am focusing on? When we talk about conformance level -- and
here you see just an example of one of our audits. You have, for example, the indication of the conformance level, a legal measure: how many success criteria have passed, and how many have failed. Then you have the classification of the issues by check type. Which issues can be found and discovered in a fully automated way? Which can be found by mixing automation and manual testing? Which can only be found manually? Just to set some expectations, less than 5% of the success criteria of the Web Content Accessibility Guidelines can be tested fully automatically. And of course, this probably resets the expectations of many people about how much we can rely on automation when we do Web accessibility.
The reason we believe the information you are seeing here is important is, for example, that it lets managers or the legal team measure the potential exposure. The legal industry often relies on test automation to check websites and potentially send a claim. So if I know how many of my accessibility violations can be found simply by using an automated tool, that might help with a prioritization strategy. Or, like I said before, it could help me measure my legal exposure and my risk.
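To make the idea concrete, here is a minimal sketch of how a check-type classification can be turned into an exposure measure. The issue records and their field names are hypothetical, invented for illustration; they are not UsableNet's actual report format.

```javascript
// Hypothetical audit issue records, classified by how they can be detected.
// "automated" issues are the ones a scanning tool (and thus a plaintiff's
// automated scan) would surface; "mixed" and "manual" need human review.
const issues = [
  { id: 1, criterion: "1.1.1", checkType: "automated" },
  { id: 2, criterion: "1.3.1", checkType: "mixed" },
  { id: 3, criterion: "2.4.6", checkType: "manual" },
  { id: 4, criterion: "4.1.2", checkType: "automated" },
];

// Group issues by check type and compute the share a scan alone would find.
function automationExposure(issues) {
  const byType = {};
  for (const issue of issues) {
    byType[issue.checkType] = (byType[issue.checkType] || 0) + 1;
  }
  const automated = byType.automated || 0;
  return { byType, automatedShare: automated / issues.length };
}

console.log(automationExposure(issues).automatedShare);
// 0.5 -> half of the open issues are visible to a simple automated scan
```

A team mitigating legal risk might fix the `automated` bucket first, precisely because those are the issues an outside scan would flag.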
Another parameter we believe is fundamental is a classification by responsibility. I mentioned before that accessibility is not a one-team focus. There are many departments involved. There are issues at the code level that are the responsibility of the development team to resolve. There are other issues at the design level, where it is up to the design team to find a solution. And many others can be related to the content -- not the design, not the structure of the site, but pure content. With this classification, it becomes immediately clear who the main actors are, who the doers of the remediation are, and it greatly simplifies the distribution of the work across our teams. I mentioned before the importance of parameters to define a prioritization. Two parameters that we use and recommend are severity and complexity, where severity measures the impact of an issue on the final user experience, and complexity is a parameter that indicates
how complex the resolution of that violation might be. So, for example, a company might decide to concentrate on the low-hanging fruit, the issues with an easy complexity, or on high severity. It is all part of the strategy. And when it comes to what the doers, the people who actually need to remediate, should receive from an audit: they need the audit to be actionable, with a clear indication of what needs to be done. As accessibility is very often a new topic for the majority of people involved in an accessibility remediation for the first time, an audit should also provide education. Do not underestimate the importance of understanding why something has been flagged as a violation of the guidelines. Just focusing on resolving something without actually understanding why that thing is an error has consequences, most likely some regression in the future. If the team doesn't understand why something has been reported as an error, that error will most likely be reintroduced in the following releases. You should expect an audit to include instructions on how to resolve issues. You should expect an audit to be a tool that accelerates your remediation and education process. A couple of examples, again drawn from our audits just to demonstrate what we believe an audit should be. One of the actionable items is, for example, the ability to identify components: recurring elements across the audited scope, so it is easy to organize the work and identify the most recurring issues, which in this example are potentially in the two components, header and footer, that are present on every single page. An audit should provide the list of all the issues present on a page. And again, the details here are not fundamentally important; we will review a couple of details with Giacomo in a few minutes. But listing everything matters -- the comprehensiveness with respect to the guidelines that I mentioned before. An audit should provide education, having a description of the issue and the solutions.
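As an illustration of the severity and complexity parameters described above, here is a small sketch of a low-hanging-fruit ordering. The issue names and the numeric 1-to-3 scales are made up for the example; they are not the scales from an actual UsableNet report.

```javascript
// Hypothetical audit issues with illustrative 1-3 scales
// (severity: impact on the end user; complexity: effort to fix).
const issues = [
  { name: "custom widget keyboard support", severity: 3, complexity: 3 },
  { name: "decorative image not hidden", severity: 1, complexity: 1 },
  { name: "missing alt text", severity: 3, complexity: 1 },
  { name: "color contrast", severity: 2, complexity: 1 },
];

// "Low-hanging fruit first" strategy: highest user impact first,
// and among equally severe issues, the easiest fixes first.
function prioritize(issues) {
  return [...issues].sort(
    (a, b) => b.severity - a.severity || a.complexity - b.complexity
  );
}

console.log(prioritize(issues).map((i) => i.name));
// ["missing alt text", "custom widget keyboard support",
//  "color contrast", "decorative image not hidden"]
```

A team could just as easily sort by `complexity` first if the goal is quick wins regardless of impact; the point is that the two parameters make the trade-off explicit.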
So people can learn while they work on the remediation. And where the issues are particularly complex, the audit should provide instructions from the person who conducted the audit on the recommended solution. Now we want to spend some minutes focusing on the difference between an in-use audit, an audit that originates from using a digital property -- in our example, an audit generated by the activity of using a website, browsing it with a variety of assistive technologies -- versus an audit that also includes a validation of the code, the HTML, CSS, and JavaScript, which are big drivers of the final accessibility and conformance of a website. Giacomo, do you want to take over? >> GIACOMO PETRI: Sure. What does an in-use audit mean? An in-use accessibility audit consists of an assessment that exercises a website or product against the accessibility requirements -- in our case, against the Web Content Accessibility Guidelines. It involves performing a series of tasks, including using assistive technologies and simulating user behaviors, to reproduce real-world user scenarios. The metrics used to measure the status of the website are in-use metrics, such as time taken, the number of mistakes made, and so on. The primary goal of an in-use audit is to
identify accessibility issues that arise during actual usage of the website or product so it identifies issues in use in a sort of empirical way. It is important to note that an in-use audit may not always pinpoint the exact cause of an issue. While it might highlight the presence of an accessibility problem, it may not provide detailed information on why the issue exists or the underlying reasons for its failure to meet the Web Content Accessibility Guidelines. Directly connected to this point, although an in-use audit can present accessibility issues, it might not provide explicit solutions or recommendations for addressing each specific issue. The focus of this type of audit is primarily on identifying barriers and gathering empirical data which can serve as a foundation for further investigation. In summary, an in-use audit is an evaluation
approach that examines a website or digital product in action, using assistive technologies to identify accessibility issues. However, it may not capture all the accessibility barriers, and it often does not provide information about the source of these issues or their solution. In this slide, we showcase a concrete example of an in-use test conducted using VoiceOver and Chrome. The slide displays a product details page from a retail website, specifically highlighting the quantity drop-down. When interacting with it using VoiceOver, the element is announced as "1, quantity, menu pop-up collapsed, button." All the information required to understand the element appears to be correctly configured: the label, quantity; the current value, 1; the element type, menu pop-up button; and even the state, which is collapsed. Everything seems appropriately conveyed to assistive technology users. This observation gives the impression that the element's accessibility implementation is remarkably well executed.
Likewise, when testing the same component with a different combination of technologies -- in this case, NVDA and Firefox -- NVDA announces the component as "Quantity combo box collapsed 1." The information presented may vary slightly between technologies; each may announce the element in slightly different ways, but users are familiar with their respective technologies, including the terminology and the order of information provided. In this case, the input label is clear: NVDA says quantity. The element role, or type, is identified as combo box. The state is indicated as collapsed. And the current value is appropriately presented as 1. Great. Again, everything appears to be working as expected. The quantity component seems to meet our requirements. Now let's see what a code assessment brings to the table. In our UsableNet code audit process, the in-use audit represents the last step, specifically designed to validate and confirm the findings identified during the code assessment. During an accessibility audit conducted by UsableNet, the auditors follow a structured approach using the UsableNet audit platform. The audit starts by first performing a comprehensive code assessment, which entails analyzing the code base against the WCAG, the Web Content Accessibility Guidelines. This assessment aims to identify the root causes of accessibility failures and determine the most effective and robust solution to address each specific issue. The code assessment is a crucial step in the audit process, as
it enables the auditors to pinpoint the source of accessibility barriers and provides valuable insights for the remediation efforts. By identifying the cause of the problems and proposing appropriate solutions, the code assessment significantly accelerates the remediation phase, ensuring that accessibility improvements are implemented efficiently and effectively. In the upcoming slides, we have included screenshots of our UsableNet platform to demonstrate its capabilities and highlight what we consider necessary to provide during an audit and what represents a real game-changer while performing the remediation. In these screenshots, we will revisit the same previous example of a retail product details page featuring the quantity drop-down. This time, the page has been recorded and analyzed using UsableNet AQA. Previously we had evaluated the quantity component and considered
it to be in good shape. However, the screenshot presented now shows the true state of the quantity component when assessed using UsableNet AQA. AQA identified an issue, indicated by the description "form control doesn't have an accessible name." Now let's delve into the various sections of the UsableNet AQA platform and explore them in detail. One important component of the platform is the dynamic and interactive preview of the webpage. This preview visually highlights the specific accessibility issue, which is, in this case, the quantity drop-down. The feature provides a comprehensive visual representation of the issue, allowing users to easily identify and understand the accessibility concern within the context of the page. Another crucial component is the navigable DOM tree, which allows auditors and developers to traverse the elements, providing a comprehensive understanding of the page structure. Additionally, we have a top panel that offers developers access to more detailed information about the current element and its accessibility properties. This includes a code inspector, where the code can be reviewed; CSS properties; an assistive technology preview, which emulates the behavior of a real assistive technology by filtering the page by specific element types; and so on.
Upon interacting with the code view, it becomes evident that the quantity drop-down lacks an accessible name. This is due to the quantity label being neither explicitly nor implicitly associated with the drop-down element. Specifically, there is a div element containing a span element with the quantity text inside, followed by a select element without any accessible name. Finally, the issue detail section provides a precise description of the issue, including its severity, a metric for understanding how impactful the issue is for the end user; its complexity, an estimate of how difficult the remediation of this specific issue will be; and the responsibility, which highlights the teams involved in the remediation phase for this specific issue. It also offers a detailed explanation of the issue, along with potential solutions to address it. To summarize, during the in-use audit, the quantity component appeared to function correctly, but AQA identified an accessibility issue. This might lead someone to say: okay, even though you are saying that the quantity input is currently failing from a WCAG standpoint, it doesn't seem to be affecting the end user at all. Both VoiceOver and NVDA users were able to recognize and understand the element properly. To address this observation, let's take a look at the screenshot demonstrating the behavior when using JAWS with Chrome. JAWS announces "combobox 1," and that's all, where combobox indicates the role and 1 represents the current value. There is no information about the accessible name of the element. When using JAWS, it becomes difficult or even impossible to understand the purpose of the input. In summary, it is important to emphasize that while an in-use evaluation may not always reveal certain issues, they can be detected through a code assessment. This highlights the significance of conducting a code assessment
as part of an accessibility audit. Okay. We only have 20 more slides left, and then we are done. No, I am joking. I promise, this is the last one. And then we'll open the floor for questions and answers. I see there are some questions. Why did I include, again, the initial screenshot of UsableNet AQA, which showcases all the features we have previously discussed? Upon reviewing the previous slides, I would like to shift the focus to the remediation process. The code audit is intended to accelerate the remediation phase. The responsible party for remediating the accessibility failures has all the necessary information conveniently consolidated in a single dynamic view. This
is what you should provide within an audit, which basically reflects what Michele said previously. We should have something that is actionable by the teams responsible for the remediation. We should have something that allows us to educate these teams from an accessibility standpoint. And then something that provides instructions on the cause of the issue and how to solve it. This kind of feedback provided during an audit plays a significant role in accelerating the remediation phase and acts as a game-changer for the next steps. >> MICHELE LUCCHINI: Great. Thanks, Giacomo. I see we have a few questions that I think are very helpful for better explaining the core of this section. I have a response I already provided in the Chat in writing. There is a slide, slide 24, that I am just going to show again very quickly. We mentioned the importance of simulating
user behaviors. And the question was: isn't simulating user behaviors more about usability testing than conformance testing? Both are important, but an audit is typically more about conformance to the Web Content Accessibility Guidelines, isn't it, and not necessarily about usability? Yes, the question is absolutely meaningful, and it highlights the importance of language, right? Actually, when we say "simulating user behaviors," we mean a different thing. We mean the importance, during an audit, of interacting with the page. Just analyzing a page statically might not be enough, because there might be issues that only arise when I try to interact with the main navigation bar or a drop-down.
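To show what "interacting with the page" can surface, here is a toy model of a disclosure-style navigation menu that an auditor might exercise with the keyboard. The object shape and key handling are invented for illustration; a real audit would drive the actual page with a browser and assistive technology.

```javascript
// Toy state model of a disclosure navigation menu. A static look at the
// markup would not reveal how it behaves; walking through key presses does.
function createMenu(items) {
  return { items, open: false, activeIndex: 0 };
}

// Simulate a key press the way an auditor would during an in-use test.
function sendKey(menu, key) {
  switch (key) {
    case "Enter": // expand the menu
      menu.open = true;
      break;
    case "ArrowDown": // move to the next item, wrapping around
      if (menu.open) {
        menu.activeIndex = (menu.activeIndex + 1) % menu.items.length;
      }
      break;
    case "Escape": // collapse and reset, as a conformant widget should
      menu.open = false;
      menu.activeIndex = 0;
      break;
  }
  return menu;
}

// Walk through a typical sequence: open, move down, dismiss.
const menu = createMenu(["Home", "Shop", "Help"]);
sendKey(menu, "Enter");
sendKey(menu, "ArrowDown");
sendKey(menu, "Escape");
console.log(menu.open); // false -> Escape correctly collapses the menu
```

If Escape failed to collapse the menu, only this kind of step-by-step interaction would catch it; nothing in the static markup would.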
So the importance of using the page as if we were a real user. That was the meaning of simulating user behaviors. And I hope that makes sense. Ray asked a couple of great questions. I will start with the first one. Reading the WCAG, the Web Content Accessibility Guidelines, can be overwhelming, as they are complex. What are some techniques you have seen used to simplify how the guidelines are presented, to make things easier for those performing accessibility audits? Such a great question. I'll start with a completely useless answer. It has been 20-plus years
that we have worked on accessibility, and we still learn every day. And yeah, they are complex. I think the technique is to make sure that at least some of your auditors are also experts in Web technology, so they have a development background. That helps a lot in creating that understanding around the whys: why something is an issue, and what the solution could be. The complexity of the Web Content Accessibility Guidelines is also related to the fact that they are quite old. They might not be very relevant to the technology that we use today. So you kind of have to deal with it, and if the goal of the audit you are conducting is, as I assume, to report issues across the entirety of the guidelines, it is what it is, unfortunately. And I would like to see if Giacomo has any input, as this is really his core focus. >> GIACOMO PETRI: Yes. As Michele mentioned, the audit phase requires knowledge. Knowledge in terms of standards, so the auditor needs to know the WCAG standards. And then it requires a development background, because understanding whether the current solution, the current implementation, is affecting accessibility may require technical knowledge. If you think about the example we have previously seen
and reviewed together, the label was visually present. The quantity label was there. But from a technical point of view, it was not properly associated with its corresponding input. And that was then reflected when using JAWS: the input didn't have an accessible name and was not understandable by people using that assistive technology.
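The association Giacomo describes can be sketched with a much-simplified version of the accessible name computation. The element objects below are a toy model, not a real DOM API, and this checks only a few of the many naming mechanisms the real algorithm considers.

```javascript
// Simplified accessible-name check for a form control. The real computation
// (the accessible name and description algorithm) considers many more cases.
function hasAccessibleName(control, labels) {
  if (control.ariaLabel) return true;      // aria-label attribute
  if (control.ariaLabelledby) return true; // aria-labelledby reference
  // Explicit association: a <label for="..."> pointing at the control's id.
  return labels.some((label) => label.htmlFor === control.id);
}

// The broken markup from the example: the visible "Quantity" text lives in
// a <span>, which contributes nothing to the select's accessible name.
const select = { id: "qty", ariaLabel: null, ariaLabelledby: null };
console.log(hasAccessibleName(select, [])); // false -> JAWS announces only "combobox 1"

// One possible fix: an explicit <label for="qty">Quantity</label>.
const label = { htmlFor: "qty", text: "Quantity" };
console.log(hasAccessibleName(select, [label])); // true
```

This is why the in-use tests with VoiceOver and NVDA could look fine while the code was still non-conformant: those two happened to derive a name anyway, but nothing in the markup guaranteed one.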
>> MICHELE LUCCHINI: Thank you, Giacomo. Ray posted another question. Our team has used tools to provide an accessibility score, but we know that is not complete. What advice would you give to help companies benchmark where they are on accessibility, and what gaps they need to fix? Yeah, another great question, Ray. The problem with accessibility at the moment is that it relates to conformance, which is a binary measure. Either you conform to the guideline or you do not. It is one or zero; there is no middle way, right? And consider that, because of the way it is structured, you have to guarantee that a single success criterion is satisfied across the entire site. This highlights one of the biggest gaps we have in trying to measure accessibility. If I don't have a proper alternative for an image on my site, I need to mark success criterion 1.1.1 as failed, even if just one of my 1,000 images on the website does not satisfy it. And I would do the same if 999 images did not satisfy that same success criterion. So here we see how weak relying on conformance alone is, because I lose that criterion because of only that one image. At UsableNet, we try to use a very empirical
measure, trying to define the distance between where you are and satisfying that criterion. That is, let's say, a more delivery-oriented measure, a kind of score that maybe makes more sense. In terms of benchmarking, that's another good point. One way we are helping companies, besides what I already said, is working a lot on comparing audits or tests over time, so showing progress over time. That also includes, as part of the equation, comparing user testing sessions -- sessions more focused on involving the disabled community in executing specific tasks -- and seeing how the behavior, or other parameters you can measure during user testing, evolves over time. And I hope that answered your question. Another question is: do you consider a browser plugin scanning tool automated or semiautomated? Yeah. There are many, and they are very popular. We have built our own, which is connected to our UsableNet AQA platform. I like the idea of, of course,
using a plugin that facilitates a lot, in particular for the development or content team, a, let's say, initial scan. But -- and I am trying to use the right word -- honesty and understanding about the potential of the tool is key. You get the best results when you know exactly how much that plugin can give you in terms of the overall accessibility needs. You know that it covers a portion. That's great, for example, for building a program to mitigate the legal risk: let's make sure that the majority of our pages don't have any issues that can be identified automatically. But do not rely on it alone. We agree that they are excellent tools when it comes to doing a check while you work, or before committing the code to another environment. Also, consider tools or solutions that allow you to add tests as part of your deployment pipeline. There are tools that
can be called using APIs, and you can better integrate them in the deployment process so you can have at least a, let's say, high-level -- let's call it in this way -- check before you promote your code across your environments. Another question: Are there special considerations required when reviewing mobile applications? Such a great question. I start saying that if we just think about the potential that we have on the Web technologies, so the Web code, in terms of making a website accessible, they are on another level compared to the smaller number of things that you can control on a native app. This means that one of the biggest drivers when it comes to accessibility of a native app is the user experience and user interface. What we could -- could go under the design umbrella -- I know I am not very precise in this -- is a big, big driver for accessibility. Then at code level
you have, of course, aspects like defining accessible names of components and defining reading orders, which you can do. But there is less, let's say, that you can do working on code than on the Web. So for native apps, our recommendation is to rely on the automated tools already offered by the main platforms, like Accessibility Scanner on Android or Accessibility Inspector on iOS, and then on manual testing activities. And, I mean, this topic alone is complicated enough that it would require an entire session. There are also a lot of differences between fully native applications and applications that, let's say, present Web content through a wrapper. Because the
behavior and the way these applications interact with assistive technology, such as the screen reader natively installed on your device, are very different. How do you respond to clients who wonder why you need an audit when you could just put a JavaScript overlay, like accessiBe or UserWay, on their site? That's another great question. I would probably go back to the original slides and the first questions we got from Ray. Accessibility is complex and requires a deep, let's say, manual, human-based activity. Overlays provide what is probably a superficial resolution. They are good at addressing a few points, a few items. But when it comes to providing a solution that conforms to the guidelines, that is where the gaps become more evident. In particular -- and again, this is the opinion of people who have been doing this
job for two decades -- often overlays and widget-based solutions are trying to provide users with alternative ways to do what they already do. There are solutions that let you activate a screen reader, but blind users want to use their own screen readers -- the ones that took so many hours to learn -- not another screen reader they don't know how to interact with. There are widgets that let you change the color palette or the screen colors, but all our operating systems now already include, in their basic settings, ways to adapt your screen to better present information based on your visual impairment. So often the settings on that one website
provided by the widgets might collide with the settings you define on your system. So as you probably understood from our words, we are a little bit old school, and we want to resolve the root of the issue. Yeah, it takes time. It is more complex. It costs more money. But in our opinion, that's the only way you can really do accessibility. And ten years ago we offered something along those lines ourselves: together with our platform for performing accessibility remediation -- remediation implemented by developers, not automatic -- we did offer ways to increase the text size or change the colors. We removed that because it is no longer needed. How much time on average do you use for a
webpage audit, single URL? I mean, anything between four and ten hours, depending on the complexity of the page. And for native apps, we are more or less around the same threshold -- I would say within 80 hours for a mobile app audit. Also consider that for a mobile app there is not a really solid guideline, because you need to apply the Web Content Accessibility Guidelines, which were designed for the Web, not native apps. So you need to rely on the official guidance on how to apply them to native apps. Plus we implemented a set of
guidelines to facilitate the auditing process internally. How do you manage third-party apps if they are not compliant with WCAG? Yeah, that's another great question. It might not be a great answer, but let's start from the contracts. Start by asking for a VPAT, for example -- asking your third parties for reassurance on the quality, from an accessibility point of view, of what they produce. Because ultimately, it will be your responsibility when you publish it on your website. Another option is to inform your users, in your accessibility statement page, that there might be challenges because of the adoption of certain widgets or third-party components.
It will not resolve the issue, but at least it will help users understand what is happening when they find difficulties using your website. Then there is a question around the coverage of automated testing: you mentioned only 5% of accessibility success criteria can be fully detected with automated testing.
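The split the question refers to -- checks a tool can make on its own versus checks needing a human -- can be sketched in a few lines. This is an illustrative, minimal Python example using only the standard library; the rule labels and helper names are not from any specific audit tool. It detects whether an `img` has an `alt` attribute (fully automatable) and queues everything else for human review of meaningfulness:

```python
# Minimal sketch of a semi-automated alt-text check (illustrative only,
# not any vendor's tool). A script can prove an alt attribute is absent;
# it cannot judge whether present alt text is meaningful.
from html.parser import HTMLParser

class AltAuditParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.findings = []  # (status, src, note) tuples

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "?")
        if "alt" not in attrs:
            # Fully automatable: attribute is absent -> definite failure.
            self.findings.append(("fail", src, "missing alt attribute"))
        elif attrs["alt"].strip() == "":
            # Empty alt usually marks a decorative image; a human must
            # confirm the image really is decorative.
            self.findings.append(("review", src, "empty alt: decorative?"))
        else:
            # Attribute present: only a human can judge meaningfulness.
            self.findings.append(("review", src, f"check alt text: {attrs['alt']!r}"))

def audit_alt(html: str):
    """Return one finding per <img> in the given HTML fragment."""
    parser = AltAuditParser()
    parser.feed(html)
    return parser.findings

if __name__ == "__main__":
    sample = ('<img src="a.png">'
              '<img src="b.png" alt="">'
              '<img src="c.png" alt="Company logo">')
    for status, src, note in audit_alt(sample):
        print(status, src, note)
```

Only the first case is a verdict a machine can reach alone; the other two are exactly the "partial automation" the presenters describe, where the tool narrows the work down and a person finishes it.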
Which tools do you use for automated testing? How many WCAG success criteria can potentially be detected by automated testing? We were using the most popular tools you can think of, and then we decided to implement our own technology. That decision was mainly dictated by the need to support our auditing team with a much more robust platform, in particular in generating reports, and then we also developed all the automation part. Just to complete, let's say, the context around that 5%: 5% are the success criteria that can be tested completely automatically, meaning the automated test fulfills the techniques you need to apply in order to verify whether something is an issue or not. There are another 20% to 25% that can be partially tested automatically, and then you might need a manual intervention in order to verify. For example, all the aspects
of the guidelines which relate to the meaningfulness of something. So the automated check can spot whether an attribute is present or not, and then a human being is going to determine whether, for example, that alternative text for an image is meaningful or not. >> GIACOMO PETRI: So just to add something here, if we think about the 1.1.1 success criterion, for example, a missing alternative text can be detected automatically, so 1.1.1 is semiautomated. But the test must be completed manually. So the same success criterion
might be tested partially automatically and partially manually. >> MICHELE LUCCHINI: Thank you, Giacomo. I would like to answer all the questions. Maybe, Malcolm, you can help me save them and I can answer them in writing. I showed our contacts earlier, and also my personal email -- feel free to reach out with any additional questions you might have. I hope this was useful and deep enough to show what, as a company or as an auditor, you should either expect or provide to your customers. And thanks to the IAAP for
the opportunity, and thanks to everyone who provided support in organizing this session. >> MALCOLM MEISTRELL: This is Malcolm. I want to thank Michele and Giacomo for such a great presentation. Like you mentioned, there are still a lot of questions in the Chat, but I will be saving those and sending a copy over as well. And Michele's email and the link to the UsableNet website were posted in the Chat as well. And we'll be following up with
a post-webinar email to everyone who registered with a copy of the presentation slides and the recording. So, once again, I want to thank our presenters for such a great presentation today, and our support team for their lovely work with the sign language and captions. For upcoming IAAP webinars and events, we have on the 13th and 14th of June the IAAP EU Hybrid Accessibility event, and that's going to be online as well as on-site. And
then we have on June 21, as part of our Digital Accessibility series, Ensuring Your Secure Digital Assets Are Accessible. And on the 27th of June, Embracing the Ethos of Accessible Built Environments in Emerging Economies - Stepping it Up. And that is part of our Built Environment webinar series. And just a quick reminder for anyone who is not yet a member of IAAP: we invite you to join a network of over 5,500 accessibility professionals in over 100 countries. And IAAP members get discounts on live webinars and
complimentary access to our library of over 100 archived webinars and our Connections networking platform as well. Anyone interested in a membership can sign up at www.accessibilityassociation.org or email info@accessibilityassociation.org to see how you can become a member. Once again, thank you very much for joining us, and I hope you have a great rest of your day.