Hi everyone, my name is Josh McClure from Deque, and I'll be moderating today's session, Scoring the Accessibility of Websites, brought to you by Jared Smith and Christoph Rump. I'm going to take care of a few housekeeping things before handing it over to Jared and Christoph. Firstly, this session is being recorded, and it will be hosted on demand for registrants immediately after the session finishes. Furthermore, slides for today's sessions are available on the webpage. If you require live captions for today's sessions, you may access those in the video player or in the StreamText transcription link on the session page. Lastly, we'll try to save the last 10 minutes or so for a Q&A in today's session, so please post your questions in the Q&A section located next to the video stream. And with that, I'm going to go ahead and hand it over to Christoph.

Hi all, my name is Christoph Rump, but you can call me Chris. I'm a test engineering manager from the company Accenture. I'm male, in my mid-to-late 30s, with very short blonde hair, wearing a blue shirt, and I'm very happy to be here today together with Jared from WebAIM. He will introduce himself later, after I've shown the agenda. As already said, we will now talk about the topic of scoring the accessibility of websites, but first a few more things about myself. I'm located in Switzerland, in Zurich, originally from Germany, from the city of Cologne. Here in Zurich it's already quite late, around 9:00 PM. Next to accessibility testing and overall quality engineering, I'm involved in agile delivery and coaching, specializing as a delivery lead for projects in these areas. Just a bit more personal information: I love being in nature, in the mountains, snowboarding, hiking; that's one of the reasons why I came to Switzerland. I also love art and painting, and I recently started a sound installation, trying out other things. I'm also very interested in live streaming, and today I actually skipped one workshop for this presentation, but I'll be able to see the recording afterwards anyway.

Now let me give you a brief overview of the agenda. Jared will start first with the introduction. Then, in the second chapter, we will cover very briefly the WCAG automation coverage. Jared will speak in chapter 3 about the difficulties of automated scoring. Then we will talk about the AIM scoring methodology in chapter 4.
Then, in chapter 5, we'll speak about findings and conclusions, before we go to the question part. With that said, I will hand over to Jared now for the introduction.

Okay, thank you, Christoph. I'm also very excited to be here. I'm Jared Smith, the associate director of WebAIM. WebAIM is the Web Accessibility In Mind project; we're based at the Institute for Disability Research, Policy and Practice at Utah State University, and WebAIM functions as a non-profit consultancy and training group. We help people with accessibility; our mission is to educate and empower organizations to build and maintain accessible environments for those with disabilities. I'm a middle-aged Caucasian man; I used to have hair before the pandemic, but less so anymore. Utah State University is located in the mountains of northern Utah, where I live, and like Christoph, I love to spend time outdoors with my family.

As an introduction to what we're going to be talking about today: Christoph and I were first introduced about 18 months ago and have had some really invigorating and interesting discussions and collaboration over the last year and a half on this. Developing and implementing this scoring methodology really was a collaborative process between WebAIM at Utah State University and Christoph and his team at Accenture, and our goal was to better assess the accessibility of websites using the Web Content Accessibility Guidelines.

As we started to have conversations, we had a lot of questions and things we wanted to tackle better, and these are a few of the problem statements that we came up with. One was that automated accessibility data, we recognize, is often insufficient to actually effect change with our clients, with Accenture's clients. You can get a lot of data, you can get numbers, but sometimes that doesn't actually best cause people to implement better accessibility. We also know that manual testing information and data can be more effective, but it's also very difficult and very expensive; it requires expertise and a lot of time to do manual accessibility testing. We also realized that accessibility test data, whether automated or manual, is very often descriptive but not overly prescriptive. In other words, it can tell you what's wrong, what the problems are, and maybe the depth of those issues on a website, but it often doesn't provide very good guidance or prescription about how to start to address those accessibility issues, how to prioritize, and how to make the best impact on the experiences of those with disabilities who are accessing that web content. Another question we had was about the Web Content Accessibility Guidelines and conformance testing and how that doesn't always align to human impact: we can have failures, and we can count failures and issues with conformance and with automated testing, but it's sometimes difficult to know how that aligns with the actual end-user impact for those with disabilities. So we asked ourselves: could we create a methodology that would provide automated data combined with manual testing, and then provide some measure of human accessibility impact, and put this together in a way that might be useful for our respective clients, and perhaps to others? That has really led to the creation of our AIM, our Accessibility Impact Methodology.
We'll talk a little bit later about how part of that is a scoring that's normalized to the WebAIM Million, which is an annual analysis of the home pages of the top 1 million websites. This is a methodology that we know is not perfect; it's still being refined and evaluated, but we have found great value in it for our constituents. Our intent in presenting this is not to promote our methodology as the only solution, but really to have a discussion around the challenges of accessibility scoring and to present some possible ways to address some of those challenges. With that introduction, I'll throw it back to Chris to present a little bit more on WCAG and automated data.

Thanks, Jared. Let's speak very briefly about the Web Content Accessibility Guidelines and test automation coverage. Standards and guidelines provide measures for documenting accessibility; I think that's all common knowledge, what's written here on the slide. We have certain legal and financial risks, of course; we have corporate social responsibility and other country-specific guidelines. Then we have WCAG, the worldwide standard, with the four core principles: perceivable, operable, understandable, and robust. What's important for us on this slide is the coverage: for the guidelines, we have four principles, 13 guidelines, and in total 78 success criteria. Next slide, please.

Speaking now about accessibility testing: this is the practice of measuring web and mobile app usability for users with disabilities, and here we are typically looking at three different categories when speaking about tools for testing. First we have automated tools, then we have semi-automated tools, meaning tools with human interaction, and then we have manual testing. Starting with manual testing, as you know, we typically use a screen reader, color contrast tools, keyboard navigation, and so on; we use tools as accelerators to check manually whether the criteria are fulfilled or not. For the semi-automated tools, we have listed a few here: we have the WAVE tool from WebAIM, specifically the WAVE extension, the browser plugin; then we have the AIM methodology, which we'll cover later in our presentation; and then we have a lot of different open-source tools. Just as examples: NVDA, which is a screen reader; colour contrast analyzer tools; the ANDI bookmarklet, the Accessible Name and Description Inspector, which also supports doing several checks and gives guidance to the tester; and a few more listed here, such as the HTML visual validator, or the Web Disability Simulator, which lets you experience how people with different disabilities perceive a website, for example by simulating color blindness or low vision. Then, for the third category, the automated tools, that typically means pure page code analysis; we have just listed two, the WAVE API and Google Lighthouse, but of course there are many more out there.

Looking at the test automation coverage, meaning how many of the WCAG criteria you would be able to test automatically, semi-automatically, or manually: you would typically say that around 30 percent can be tested completely automatically, another 30 percent semi-automatically, and for the remaining 40 percent you would need manual testing.
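To make the automated category a bit more concrete, here is a minimal sketch of querying the WAVE API for a single page. The endpoint and parameter names follow the public WAVE API documentation as I understand it, but treat the exact field names (and of course the placeholder API key and URL) as assumptions rather than a definitive integration:

```python
import requests

WAVE_ENDPOINT = "https://wave.webaim.org/api/request"  # public WAVE API endpoint

def wave_summary(page_url: str, api_key: str) -> dict:
    """Request a WAVE analysis for one page and return the category counts."""
    params = {
        "key": api_key,     # your WAVE API key (placeholder)
        "url": page_url,    # page to analyze
        "reporttype": 1,    # statistics / category counts only (assumed report type)
        "format": "json",
    }
    response = requests.get(WAVE_ENDPOINT, params=params, timeout=60)
    response.raise_for_status()
    data = response.json()

    # The JSON groups results into categories such as "error", "contrast",
    # and "alert"; we keep just the counts, which feed the scoring later on.
    categories = data.get("categories", {})
    return {name: info.get("count", 0) for name, info in categories.items()}

# Example (hypothetical key and URL):
# print(wave_summary("https://example.com/", api_key="YOUR_KEY"))
```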
As you might know, this is the more conservative way of looking at the coverage; there are newer approaches to automated accessibility testing that look at coverage differently. I'm pretty sure you've, for example, seen the reports about the tool set from Deque, where an automated coverage of around 57 percent can be achieved, and I also heard from some colleagues of mine in Costa Rica who are promoting a similar, more automated framework that achieves a higher automation coverage. And now I will hand back over to Jared for the next topic.

Yes, so I will be talking about the difficulties of automated scoring, and Christoph's slide is a great introduction to that, because there are so many different ways that we can measure accessibility. Even when we look at tool coverage, what does that percentage mean? Is it success criteria? Is it human impact? Is it individual failures and techniques within WCAG? Is it a number of issues throughout a site? There are a lot of questions here, and these are some of the things that Christoph and I and our teams together were asking about automated scoring and how these things might work.

As I introduced before, we know that automated accessibility test data and results do not always align with end-user impact. There's a great example of this: Manuel Matuzović created a website that was the most inaccessible site possible with a perfect Lighthouse score. He generated a website that had 100/100 scores across performance, accessibility, best practices, and SEO, and it was totally inaccessible and just awful when it came to the end-user experience. That's not a hit job on Lighthouse or any other accessibility tool; it's just a reality of automated accessibility testing that it can't always tell end-user impact. The tools are just looking for patterns of accessibility. But it does beg the question of what that score means: what does a 100 percent score or an A grade mean when it comes to accessibility test results? We know that a lack of detection of a failure does not mean that something is accessible. Tools can detect failures; they're pretty good at that, at least at detecting some failures, but they aren't very good at detecting whether something actually passes or is accessible. Something as basic as alternative text is a good use case: tools can detect instances of missing alternative text, or obviously very poor alternative text, but they can very rarely do a good job of telling a tester whether that alternative text is equivalent to the content of an image.

So anytime we're presented with accessibility scoring, I think it's important to ask the question: what is the denominator? 100 percent of what, and how is that determined? Is it automated test results? Is it WCAG conformance? Is it somehow end-user impact? How do we come up with this score? Ultimately there is going to be some level of arbitrariness in that, or at least maybe a favoring of specific disability types.

This slide shows three different detected errors in the Web Content Accessibility Guidelines, with maybe a suggestion that they're all equal. This is not what I'm suggesting, that these are equal, but it illustrates some of the questions posed when it comes to automated data. For instance, let's say you have one WCAG 2.1.1 failure, which is a keyboard accessibility failure; it's defined in WCAG as being a Level A failure.
That's going to be very impactful for keyboard users. Then maybe we compare that to a WCAG 3.1.1 failure, language of page, which requires that the document language is identified.
It's also a Level A failure in WCAG, but what is the impact of that? Interestingly, the impact of language of page is most often nothing; it has almost no impact, except when it does. When you happen to have a multilingual screen reader user, perhaps, whose default language is different from the page, and the page language is not identified or is misidentified, then suddenly the impact becomes significant, meaning the entire page content might be rendered entirely inaccessible. So how do you weight, say, a keyboard accessibility issue against a very rare but potentially very significant issue like language of page? Are they equivalent, or is one more impactful than the other?

And you might compare that to, say, a WCAG 4.1.1 failure, which is parsing, requiring certain valid HTML constructs, also a Level A failure. You could, for instance, have a hundred parsing failures in a web page that have absolutely no impact on the end-user experience. You certainly can have parsing issues that do impact the user, but many, or I would argue probably most, of those parsing issues generally do not have an impact on the user, at least unless they also cause failures elsewhere. So are those parsing errors one percent as impactful as a language-of-page failure or a keyboard failure? Ultimately, as you start to assign weightings or scorings to these things, you have to make some of these decisions about what that impact might be for the user and how different failures are weighted in comparison with other failures. Those weightings, I think, are okay, but we really need to think through the process of how they're being determined and whether there are maybe biases informing those weightings. It's a tricky question, and even the Web Content Accessibility Guidelines themselves arguably place disproportionate emphasis on screen reader users versus, say, those with cognitive and learning disabilities, where there's not very much in the guidelines that covers those impacts, yet we know they can be very significant for users with those disabilities. So there are a lot of things that could go into the consideration for accessibility scoring that are not even in the accessibility guidelines at all.

As we started to consider this and what it might mean, one thing we know is that the typical home page has about 51 automatically detectable accessibility issues, or WCAG failures, at least based on the data that we have in our WebAIM Million analysis of the top 1 million websites. That gives us a pretty good benchmark, a pretty good set of data across a very wide swath of the web; it gives us insight into what is actually happening out there on the web when it comes to automatically detectable accessibility issues. So we can look at the number of issues, and certainly a detectable issue is usually going to have a negative impact on users, and that, at least for our methodology, is a big part of the scoring consideration. We might also consider error density. Error density is essentially the number of errors divided by page weight, or lines of code, or page elements, or some other measure of the size, volume, or amount of content within a web page, and that's also, I think, really valuable.
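As a rough illustration of what error density could look like in practice, here is a tiny sketch using page elements as the denominator; the helper and the numbers are made up for illustration and are not WebAIM's actual calculation:

```python
def error_density(error_count: int, element_count: int) -> float:
    """Errors per page element: one possible density measure."""
    if element_count == 0:
        return 0.0
    return error_count / element_count

# Two hypothetical pages with the same number of errors:
small_page = error_density(error_count=2, element_count=10)    # 0.2   -> very dense
large_page = error_density(error_count=2, element_count=1000)  # 0.002 -> sparse

print(f"small page: {small_page:.3f} errors/element")
print(f"large page: {large_page:.3f} errors/element")
```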
The premise is that users may tolerate more errors if there's more content. Or, to put it another way, if you have a page that has, say, 10 elements and two accessibility errors, that's different from a page that has maybe a thousand elements and two accessibility errors; the user may tolerate those two errors on, say, Facebook, as opposed to a very basic information page. One of the difficulties with error density, however, is that if you want to improve an error-density accessibility score, it's usually easier to make your page bigger than it is to fix the accessibility errors. We have seen that in our WebAIM Million analysis: over the last four years of this analysis of a million home pages, we've seen about an eight percent increase in page elements every year, so the web is getting significantly bigger and pages are getting much more complex, but we've only seen about a one percent decrease in detected errors on average per year. In other words, if you just looked at error density, it would look like the web is getting much, much better; in reality, it's only maybe getting a little bit better, but it's getting much, much bigger. So that's the error density problem, but we did consider error density in our methodology.

Another consideration is content value: how would users value the page, and are they more tolerant of accessibility issues on pages that are of more value to them? That becomes very difficult to measure. How do you know how an end user might appreciate the content on a page, and thus perhaps be willing to muddle through additional accessibility issues? It's an interesting question for scoring. Now, we know that manual testing solves most of these difficulties and questions; it can provide great insight into these things, but we also know it's very time-consuming and expensive. So these were a few of the things that we considered in our formulation of this methodology. Karl Groves has a great article on this, titled "So You Want an Accessibility Score"; I would just refer you to that. He explores these and many other dilemmas and questions about accessibility scoring. So we set out to maybe address at least some of these questions and dilemmas. Okay, back to Christoph.

All right, we are now at chapter four, so we'll start speaking about the AIM scoring methodology. This is basically the methodology we derived out of our research, and it consists of four elements: we have the site crawling and preparation, then we have the automated accessibility score, then we have the manual impact score, and in the end we have the AIM score. For the data crawling, and basically for the preparation of the analysis, you typically define the scope, so of course the website you want to test, and you need to identify sample pages; what we had chosen were four sample pages for the later manual testing stage, and we figured through our analysis that four pages is a very good sample size because it provides significance while minimizing time. For the automated analysis we are using the WAVE API, so we then have the number of page errors of that website, we have the error density, and we also have the alerts, which are likely to be errors.
Through the analysis, we looked at these three values and thought about how we could weight them against each other. The first idea was to look more at the user impact of these, and basically, I think, Jared came up with an approach of taking sixty percent of the average page error count, thirty percent of the average page error density, and also taking into consideration ten percent of the page alert count. In the future we definitely want to fine-tune that a little bit more; that's just a very first model we came up with, and of course, as we do more tests and more practical pilots in the future, we think we can also approximate these automated values to the actual manual testing we're performing. Then, because we typically have a lot of defects per page, where one page could have zero defects but could also have 1,000 or many thousands of detectable errors, we came up with a normalization: we took the WebAIM Million, aligned the scoring there in deciles, and then came up with an automated score from 1 to 10 out of this alignment in deciles.

Then there's the manual impact score; I think that's the most interesting part in the end. Here, trained testers are guided through a manual testing process, and as Jared already said, manual testing solves a lot of difficulties, but it's very time-consuming and expensive, so our goal was really to focus on a very time-efficient check. We came up with the goal of having the overall manual testing process for one tester take only one hour, and that made it possible for us to add the end-user impact of the various accessibility issues which had been detected in the automated process. Here, in the end, the tester also comes up with a score from one to ten. Then, out of the automated score and the manual score, we have the AIM score; at the moment we're taking them 50/50, so we're just building an average out of both scores. Next to the AIM score, we are also producing a report which shows a bit more detail; as an example, the tester is also able to put in some comments about the test results and the testing they have done, and that provides a little more information for the person who then reads the score and the report, to make it a little better understandable. Jared will now briefly show us an example of the AIM score and the report.

Sure. This is just our current representation of this; it is something that we're providing as a service, but again, we're not here to promote this particular service, rather to talk about how these things might come together and provide use and value. Our AIM scoring, again, is comprised of the automated accessibility score and the manual impact score from human expert testing, combined to generate the final accessibility impact, or AIM, score. This is a sample report of results; we did this on the NASA website. We did the automated testing and tested just over 15,000 pages on nasa.gov, found 230,000 total accessibility errors, for an average of 15.3 errors per page. As we compare the errors, the error density, and the potential or likely errors, and align those with the WebAIM Million, we get a score of 7.1 out of 10. In other words, compared with web pages generally, nasa.gov gets a score of about 7.1, so better than average, but we know that average is pretty bad: average is 51 errors per page.
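To pull the pieces described above together, here is a minimal sketch of how the AIM roll-up could be computed. The 60/30/10 weighting and the 50/50 blend come from the talk, but the decile cut points, the helper names, the example inputs, and the order of operations (normalize each metric to a decile, then weight) are my assumptions, not the actual WebAIM Million cut points or implementation:

```python
import bisect

# Hypothetical decile cut points for each metric; in practice these would be derived
# from the WebAIM Million distribution (the numbers below are placeholders).
ERROR_CUTS   = [2, 5, 9, 14, 20, 28, 40, 60, 95]      # avg errors per page
DENSITY_CUTS = [0.5, 1, 1.5, 2, 3, 4, 6, 9, 15]       # errors per 100 elements
ALERT_CUTS   = [5, 10, 16, 24, 34, 46, 62, 85, 120]   # avg alerts per page

def decile_score(value: float, cuts: list[float]) -> int:
    """Map a metric onto 1-10, where 10 is the best decile (lower values are better)."""
    return 10 - bisect.bisect_left(cuts, value)

def automated_score(avg_errors: float, avg_density: float, avg_alerts: float) -> float:
    """Weighted blend from the talk: 60% errors, 30% error density, 10% alerts."""
    return (0.6 * decile_score(avg_errors, ERROR_CUTS)
            + 0.3 * decile_score(avg_density, DENSITY_CUTS)
            + 0.1 * decile_score(avg_alerts, ALERT_CUTS))

def aim_score(automated: float, manual: float) -> float:
    """Current 50/50 average of the automated score and the manual impact score."""
    return (automated + manual) / 2

# Illustrative only: 15.3 errors/page echoes the NASA example; density, alerts,
# and the manual score of 7 are made up.
auto = automated_score(avg_errors=15.3, avg_density=2.5, avg_alerts=20)
print(round(aim_score(auto, manual=7), 1))
```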
So we do have to consider that, even though this maybe looks like an okay score, there are still 15 errors detected per page. There's a brief overview of what those top accessibility errors are. Oops, accidentally clicked the link there, okay.

Next comes the manual accessibility impact score; again, this comes from manual testing, and Christoph will show you the items that we're testing in just a moment. We do that on a sample of four pages; our research has shown that a sample of four pages of a site provides a pretty good representation of the accessibility of the site. It's not perfect; we had to balance time and effort with providing something that might be useful. Expert testers go through and provide the scoring; it's based on testing 10 aspects of accessibility through a guided process, as well as providing a bit of a holistic score from that tester regarding the accessibility of that page. There are also notes from our testers about those aspects of accessibility, so the person who gets this report can read the end-user feedback about that. And finally, we have our ultimate AIM score, which is an average of the previous two; it gives a sense of the accessibility of that page in comparison to other pages and based on that manual testing. Then there are details about the actual accessibility issues detected through that process, to help prescribe and guide users as to what is happening on their site and the things they might start to address to improve accessibility. So that's just a quick overview of the methodology that we have. I'll go back to Christoph now to talk about our actual manual testing process.

Yes, so let me quickly go through the 10 questions we have. We created this manual testing questionnaire where we identified the most impactful and readily testable criteria; of course we fully recognize that these are not comprehensive, but we figured that these have a very high end-user impact. Let me briefly read through them. Question one is about the document's defined language; two is about missing or poor alternative text; three is about empty links and buttons; four is about the impact of labeled or unlabeled form inputs; five, the impact of low contrast; six, the accuracy and brevity of the page title; seven, movement and animations; eight, the presence and visibility of keyboard focus; nine, the impact of keyboard accessibility barriers; and ten, support for page reflow and responsiveness. Additionally, the tester also records an overall page accessibility impact score. That score is a little more subjective, but its intent is to provide a human measure of end-user impact, and we found that the average of the 10 question scores is actually very similar to the overall page accessibility impact score. At the moment we're not differentiating between the weightings of these different questions, but that would be one point we would happily discuss at the end of the session. And clearly there are things missing from this list; these were some things we found could be readily tested in a fairly minimal amount of time.
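For illustration, the manual impact score described above could be captured with a structure like the following; the question keys are paraphrased from the talk, while the field names, the rating-scale direction, the equal (unweighted) averaging, and the example ratings are assumptions for the sketch rather than the actual AIM questionnaire format:

```python
from statistics import mean

# Paraphrased from the ten questions in the talk; here rated 1 (severe impact)
# to 10 (no impact). The exact wording and scale direction are assumptions.
QUESTIONS = [
    "document language defined",
    "missing or poor alternative text",
    "empty links and buttons",
    "labeled / unlabeled form inputs",
    "low contrast",
    "accuracy and brevity of page title",
    "movement and animations",
    "presence and visibility of keyboard focus",
    "keyboard accessibility barriers",
    "page reflow and responsiveness",
]

def manual_impact_score(ratings: dict[str, int], overall: int) -> dict[str, float]:
    """Average the ten question ratings (currently unweighted) alongside the
    tester's holistic overall rating, as described in the talk."""
    per_question_avg = mean(ratings[q] for q in QUESTIONS)
    return {"question_average": round(per_question_avg, 1), "overall": overall}

# Hypothetical ratings for one sample page:
example = {q: 7 for q in QUESTIONS}
example["low contrast"] = 4
example["keyboard accessibility barriers"] = 5
print(manual_impact_score(example, overall=6))
```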
Now, when we look at the automated testing data, there's pretty good coverage across the Web Content Accessibility Guidelines when we look just at the guideline level: 11 of the 13 guidelines have at least some component of data from the automated testing to provide insight. These could probably be expanded; this is not full coverage of every success criterion or every possible failure, but we have some fairly broad coverage when it comes to automated testing. When we add in the manual testing, what we really wanted to focus on was deeper coverage, primarily focused on the end-user impact, and this chart just shows that we got deeper into 10 of those 11 covered guidelines. What that means, for instance, is that automated data would give you information about alternative text; our manual testing process looked at the quality of alternative text and the end-user impact of both good and bad alternative text. It shows that our intent was to get deeper and to provide value and focus with a measure of end-user impact. We didn't get broader coverage when it comes to the guidelines, but we got deeper coverage, looking at additional things and hopefully a useful measure of end-user impact from that manual scoring, which, when put together with the automated test data, gave us our AIM score, which we hope is useful.

Now, chapter 5: findings and conclusions. We have applied the methodology in first practical pilots, where we were able to perform a fine-tuning of the scoring. The very first was the Accessibility Index Report; we started with that in late 2020, and back then the goal was to provide an accessibility ranking of 100 large European websites, but we had to decrease our sample size drastically, down to 30, and that forced us to rethink the ranking and also the approach to defining an accessibility score. That's basically where we then came up with the questionnaire, and as previously said, we tried to make it as efficient as possible to minimize, of course, the manual effort. We did that with testers from WebAIM and Accenture; we found that testers rated the sites better on average than the automated score, and we had a very high intra-class correlation coefficient, which added great credibility to the manual testing process, and we also had high levels of inter-rater reliability.
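As a side note on the inter-rater reliability just mentioned, agreement between testers could be checked with an intraclass correlation computed over the per-page manual scores. This sketch assumes a table of ratings and uses pingouin's intraclass_corr; the tooling, data shape, and numbers are my assumptions, not how the team actually ran their analysis:

```python
import pandas as pd
import pingouin as pg

# Hypothetical manual impact scores: three testers rating the same four sample pages.
ratings = pd.DataFrame({
    "page":   ["home", "products", "contact", "random"] * 3,
    "tester": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "score":  [7, 5, 6, 8,   6, 5, 7, 8,   7, 6, 6, 7],
})

# ICC across testers; higher values indicate the testers agree on the page scores.
icc = pg.intraclass_corr(data=ratings, targets="page", raters="tester", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```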
Then, for Johns Hopkins University, that work was done by WebAIM in collaboration with the university, and they created different dashboards. One was for vaccine websites, where they found that there are actually notable barriers, which was a huge concern, and in the end the research had a massive impact on awareness of accessibility. Another was about university disability inclusion, with an analysis of the top 50 NIH-funded universities, where they also found a disparity across universities. And there was also the Supplemental Nutrition Assistance Program, SNAP for short, where they could also show that there are significant accessibility barriers, and this program is typically used by families with disabilities, so again a major concern. Now, back to Jared.

So we've been implementing this, we're collecting data, and we want to refine this over time, and we'll continue to implement and ask questions. I'd love to hear your questions about this methodology, or poke holes in it; I'd be happy to hear that and see if, together as a community, we can maybe refine this. A few things that we have found: we do feel that it is providing something useful, though admittedly incomplete. We're not intending to measure all aspects of accessibility, but to provide some measure with minimal cost and effort. Our implementations, we feel, have been successful; they have been informative, and they really have helped to promote improvements to accessibility in those entities to which we have provided these data. But we need more tests, a larger sample size, and more feedback on this methodology.

Some of the primary questions we have are: can this methodology be expanded to provide weightings for error types, or by WCAG criteria, or maybe by something else? We know that's probably possible, and every time we go there it poses some of those questions that I presented before, and some of those dilemmas of favoring or bias, and some real difficulties, but we're hopeful that maybe with more data we might be able to explore that further. We also consider future versions of WCAG: with WCAG 3.0, the accessibility evaluation approaches are shifting from a pass/fail model to more of a scoring approach, and how that might inform our methodology, or vice versa, is a really interesting question as the guidelines shift more toward a measure of end-user impact. There's also the question of whether these data and the manual test data can be used to help inform broader accessibility issues. In other words, can automated data and a very minimal set of manual test results tell you about other problems that might be present on a website? Can you continue aligning those data to help better address accessibility issues that maybe were not directly tested but are likely present because of the things that were tested? And ultimately, our hope and our question is how these things might better effect accessibility change to improve the experiences of those with disabilities on the web. With that, we will say thank you, and thank you to Deque for hosting and having us, and we'll see if there are any questions.

Jared and Christoph, thank you so much for this wonderful session. There are many questions; I do want to say that was great content, so everybody who has contributed and asked questions, thank you very much. I'm going to pick through, and I do apologize if we can't get to everything here, but we will start off with one of the trending topics, it seems: does WebAIM have any plans to implement color contrast testing for shadows and other, we'll say, CSS or textual-based content like that?

Yeah, I'm presuming the question is in regard to WAVE, and that's an interesting challenge from an automated testing perspective; we'd love to tackle some of those things. I will probably stay in the context of this methodology: while WAVE can detect many contrast issues in text, one of those manual testing components was the impact of contrast failures that may extend beyond those that can be automatically tested, for instance with drop shadows and complex filters and background images and things like that. So that, I think, highlights some of the power of this methodology, combining the automated data with the manual test data: we can help fill in some of those gaps where the testing tools maybe can't fully test, and allow a human tester to detect those things. Non-text contrast issues, for instance those in images, could be identified by that human tester and then given a rating or score that helps inform the overall score, plus documentation about what those are to help guide remediation of those issues, which you probably can't get from fully automated test data alone.
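Since the answer above leans on contrast, here is a small sketch of the contrast-ratio math that both automated tools and a human tester would apply to a sampled text/background color pair; the formula follows the WCAG 2.x definition, while the helper names and the example colors are just illustrative:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance per the WCAG 2.x definition (sRGB, 0-255 channels)."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors; WCAG 2.x AA expects >= 4.5:1 for normal text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Illustrative pair: mid-gray text (#777777) on a white background, roughly 4.5:1.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))
```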
Yep, excellent. One of the other questions here: Wilco asks, how do you prevent drift of the score over time as automation improves?

Oh, that's a really good question; maybe I'll see if Christoph has an answer to this too. I think one part is that the alignment of the data to the WebAIM Million provides a bit of a moving benchmark; we are updating that analysis every year. And hopefully there will be shifts, right? As our testing gets better, as we refine the manual methodology, hopefully we're getting a better score. That does pose a problem, though, for comparisons of scores across time when your benchmark is changing. Ultimately, I'm not too concerned about that; I'm more interested in giving people something they can take right now, know what the issues are, and start to make their sites better. Christoph, did you have anything you wanted to add on that one?

Not really, I totally agree. I mean, our methodology is also subject to change, right? We're trying to improve, to do the fine-tuning also on the different weightings, which is still one of the open questions we have. So I'm also not really concerned; I'm rather looking forward to tweaking and fine-tuning further and getting this in line.

That's great. Amanda asks: you indicated manual tests on about four pages per site; is there a reason, like a percentage of pages or page types? I think the question ultimately is, why four pages?

It's a very fair question. It really was a balance of getting some depth plus a balance of time; we wanted to scope it to about an hour. That certainly could be expanded if we felt additional manual effort was justified and available for that test. In our research, we have found that when we do very broad automated accessibility scans and then test a sample of about four pages, there is pretty good alignment between those results, so we do know that four is a pretty good proxy, not perfect, but an okay proxy. And we are not just choosing four random pages; typically our approach is the home page, two significant content pages, and then one randomly selected page out of that broad sample that's automatically tested.

Yeah, so you want a representative sample of that site, which can be very challenging, right? I'm sure applications that are very dynamic and require a lot of user input and interaction might have different pages or form factors.

Yes, and we do want something that's representative; we are choosing pages that are likely most impactful, you know, the home page and major content pages, primarily to provide some guidance on fixing the pages that are most likely to be accessed by users.

Absolutely. And one thing to keep in mind: accessibility is always a work in progress. It's not something you do once; it's not something you test and then you're done. So consider starting off with a specific journey that is going to impact the end users the most, and then build from there.

And that is one reason for the methodology: it could be replicated and redone readily, just to track that progress over time. It's not intended to be just a one-time thing; if that's all it is, that's great, but it could be redone over time to give you a sense of how things are improving over time. Yep.
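Going back to the four-page sample described a moment ago, here is a small sketch of how such a sample could be drawn from a list of crawled pages; the selection rule (home page, two significant content pages, one random page) is from the talk, while the function itself, the idea that the assessor supplies the significant pages, and the example URLs are assumptions for illustration:

```python
import random

def choose_sample_pages(home_page: str, significant_pages: list[str],
                        crawled_pages: list[str], seed: int | None = None) -> list[str]:
    """Pick four pages for manual testing: the home page, two significant content
    pages (chosen by the assessor), and one random page from the crawled set."""
    rng = random.Random(seed)
    remaining = [p for p in crawled_pages
                 if p != home_page and p not in significant_pages]
    random_page = rng.choice(remaining) if remaining else home_page
    return [home_page, *significant_pages[:2], random_page]

# Hypothetical example:
sample = choose_sample_pages(
    home_page="https://example.com/",
    significant_pages=["https://example.com/products", "https://example.com/contact"],
    crawled_pages=[f"https://example.com/page/{i}" for i in range(100)],
    seed=42,
)
print(sample)
```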
Absolutely. Okay, we have time for one question left here. One of the themes that I'm seeing in some of these questions is: is this testing methodology something that is open and available for use? Christoph, do you want to take that one?

Yeah, I mean, we're right about to figure out how we want to evolve here in the future. There are some parts we still have to automate first, for example some of the data crawling; we're also planning to make some changes to the overall user interface. The idea is certainly first to get some more practical experience to fine-tune that scoring, because in the end it can sometimes be challenging to bring the score out to the wider world and have companies saying, okay, they have a score of, I don't know, seven out of ten, because someone ran a test and tried to make it look official. So I would say we're not there yet, but of course the plan is to bring it out to the public, right, Jared?

Yep, correct. We want some feedback and wanted to have this discussion; the idea is to make this or similar methodologies more available.

Absolutely. Well, it does seem that that is something the community is very interested in, so, audience, definitely keep an eye out for that; I'm sure Jared and Christoph will be providing some stuff out there in the wild. With that, we are out of time. I really appreciate Jared and Christoph joining and presenting, and we really appreciate all of you out there taking the time to watch and give your feedback and input. This is a wonderful community; we're so thrilled to have you here. A virtual round of applause for Jared and Christoph. Thank you very much.
2023-09-05