Biometric Technology Rally 2020 Results and Findings


[Arun] Alright, good morning. My name is Arun Vemury and I'm with the DHS Science and Technology Directorate's Biometric and Identity Technology Center. First of all, we would like to thank you for joining us today to learn more about some of the results of our 2020 Biometric Technology Rally. We definitely appreciate you taking the time to learn more about our efforts. It was one of the more unusual years for running a rally, and I really want to celebrate our team that helped to execute the rally this year.

On the line with us in this video, I'm joined by my colleagues Jake Hasselgren, John Howard, Yevgeniy Sirotin, and Jerry Tipton. I'll go ahead and get us started and then hand off to each of my colleagues to brief different parts of the deck. We'd like to ask that you hold questions until the end. You can put them into the Q&A or the chat box and we will take them at the end of the webinar. And again, we look forward to your questions. We look forward to explaining some of the

things that we saw at the rally, explaining some of the results if there is any confusion, and making sure that there is greater awareness and transparency on various issues. So let's go ahead and get started with the next slide. Today during this webinar we are going to give you an overview of the 2020 Biometric Technology Rally.

We will provide a quick intro on the Biometric and Identity Technology Center and how we have used technology rallies up to now, cover some of the unique things about the 2020 Biometric Technology Rally and how we adapted it for the COVID-19 pandemic, and then talk a little bit more about the system providers and the people who participated in the tests. My colleagues will provide additional information about how the acquisition systems were tested and how they performed, and how the matching systems were tested and performed. Then we'll go through the conclusions and answer any questions you may have. One of the things before we get started here, actually, I think it's on my next slide, so I'll wait for a moment.

So who are we? What are we doing? The Biometric and Identity Technology Center is a group of science and engineering subject matter experts with strong expertise in biometric and identity technology. We provide core research, development, test, and evaluation capabilities for DHS and DHS stakeholders, domestically and internationally. Our goal is to facilitate and accelerate understanding of the use of these technologies: how they work, what their capabilities are, what their limitations are, and how certain risks and issues could be mitigated.

We've focused on sharing knowledge across different missions and stakeholders to help facilitate learning, to address the issues that often come with the learning curve, and to help organizations learn how to use these technologies effectively, securely, and fairly, so that it doesn't take as much time or difficulty to implement these capabilities in an effective way. To do this we support cross-cutting best practices and cross-cutting solutions to drive operational efficiencies.

We provide objective test and evaluation capabilities across the department for anybody who has interests and needs. The goal is to make sure that people have a fair and clear understanding of how the technologies work, where the challenges are, and honestly, what can be done to help address them and make them better. We also work actively with industry. As you all know, we do strong industry outreach to orient industry to our use cases, our challenges, and our problems, to enable you to develop better solutions. Our goal is also to provide better feedback to you so that you can build better technologies that address the various operational, legal, privacy, civil rights, and civil liberties concerns. And largely we are here as a catalyst. Our goal is to encourage innovation

across the Homeland Security enterprise. Next slide. So the Biometric Technology Rallies, what are they? These are relatively unique opportunities. Unlike a lot of other biometric technology tests that happen in various parts of the world, this is scenario testing, not technology testing: an evaluation of how a specific type of operation or use case could be supported by biometric technologies. We're focusing primarily on use cases that address needs both in government and in the private sector. Our goal is to help industry become aware of these use cases, build technologies that will effectively address them, and receive feedback to make the technologies more effective over time. We want to identify and mitigate risks associated with the use of these new technologies and provide objective feedback not only to the potential buyers and users of the technology but also to the people who are building these technologies and looking to sell and make them available to government or various organizations with similar needs.

All of our rallies for the last few years have focused on this idea of high-throughput inspection or screening, where we are looking at screening large numbers of people in a matter of minutes. This is really thinking along the lines of border control, physical access control, aviation security, potentially surface transportation, any number of different areas. And we do our evaluations not only on biometric matching performance, things like true match rates or false non-match rates. We are looking at a variety of things. We are looking at efficiency: how long does it take people to perform the task, and how much staffing might be required? What kind of error rates are we seeing with these uses of the technology? User satisfaction: do we expect pushback? Do we expect people that are using the technologies will have difficulties using them, resulting in either low adoption or, honestly, unhappy users?

Neither one is a good thing. Privacy: we have an interest in promoting technologies that work in certain ways so that we can help address and minimize privacy risks to the people using the technologies. And equitability: does it work well for all people? In our rallies we test with a naive population of volunteers. In some cases we'll bring back people who have some familiarity with different biometric technologies. We have evaluated dozens of combinations of commercial technologies over the years: fingerprint, face, and iris. In this particular rally a lot of the providers were focused on face recognition for this high-throughput use case; however, we had a few technologies focused on iris recognition.

For more information, please visit the MdTF website, mdtf.org. Next slide. So 2020 was an unusual year. We had planned for a biometric technology rally that was

focused again on high-throughput use cases, where we had limited or inadequate staffing to help screen hundreds or thousands of travelers or individuals very, very quickly. Due to the COVID-19 national emergency, we realized we needed to make a change. We had originally planned to process small groups of users going through these various technologies and solutions quickly; however, due to the emergency, we decided to defer screening small groups and instead focus on a new issue: how do we handle people who need to verify their identity while they are wearing face masks? That could pose a challenge to traditional operations, where we rely upon things like photo IDs. To get into the details I'll go ahead and turn things over to my colleague, who will talk about the biometric technology rally timeline. So please go ahead, and thank you. [Jake] Thanks Arun. This is Jake Hasselgren. Yeah, so before we get into some of the results that we

observed, I just wanted to give a couple of logistics and an overview of the test itself. So, as Arun stated, going over the timeline: we had our initial call for participation in March of 2020. Any interested parties had a little over a month to submit an application. Following the application, we did a review with a number of different organizations, and from there we were able to send conditional acceptances to those that we expected would perform well during the rally. Following the conditional acceptance, we gave those selected technology providers four months to build and develop their systems to be installed at the facility. So at the end of September we had those

providers install their systems, and then immediately following, at the beginning of October, we performed the rally data collection. I will note that between the time of conditional acceptance and the installation at the facility, we hosted a cloud API which mirrored the API that would be used physically at the installation at the facility. This allowed technology providers and participants to test their systems while they were building them. Next slide.

Okay, so we did receive a number of applications. In total, we received 24 applications from companies headquartered in six different countries, so there was very broad interest. Like I said, these applications were reviewed by a panel of experts from a number of different organizations, including DHS, DoD, NIST, and other industry parties. So we had a good broad spectrum of different reviewers to help select the systems that would perform well. We chose six acquisition systems and

13 matching systems to participate in the 2020 rally. Next slide. Okay, so as Arun said, we had to switch focus here a bit because of the COVID-19 pandemic. As we are all aware, there was a large spike in the use of face masks, due to mandates instituted by state and local governments but also people protecting themselves. So that became one of the focuses of the 2020 rally. At the time, many biometric systems required masks to be removed, which increased the risk of infection for anyone taking off their mask. Obviously this became of interest to us, and because we had already selected our participants by the time we decided to switch focus, we asked those selected to participate whether they could provide a system that would work with masks, and the response was overwhelmingly positive. Every system that we had chosen to participate, including the

six acquisition systems and the 13 matching systems, stated that they could provide a rally system that worked with masks. So this was good to hear, and it was definitely a positive response for us, because this was a change made in response to an emergency. Next slide. Okay, so a couple more logistics about the acquisition system tests, and then I'll go over some of the results for the acquisition systems before we transfer over to Yevgeniy for the matching system results. Next slide. Okay, so for the acquisition systems we had set a number of criteria for them to meet, and this is what we used not only during the review but throughout the tests. We had a couple of minimum requirements for acquisition systems. First, they had to operate in an unmanned mode. Earlier I mentioned this is a high-throughput scenario, and to accomplish this we decided to use an unmanned mode like in our previous rallies, so no operator or instructor. Second, each system had to fit

in a given footprint. We chose six feet by eight feet; that was defined by us and carried over from previous rallies. We wanted to make sure that these systems were capable of operating within a confined area.

Each system had to collect at least face biometric imagery for identification operations, and each system had to provide one biometric probe image per test volunteer per modality. The system had to process and submit biometric data within time constraints defined by DHS S&T; essentially, the system had to send those probe images before the test volunteer had left the station. The acquisition systems could also provide other modalities, specifically iris and fingerprint. We didn't have any fingerprint systems in this rally, but we did have an iris system. As we stated, we wanted to make sure that these systems were able to handle masks, so another requirement was that these systems could acquire images from people wearing face masks.

Next slide. Okay, just to give you an idea of what the test station looked like and how we handled it: similar to our previous rallies, we had our group of volunteers line up at the stations. They would provide their ground-truth identity via a QR code on a wristband that they were wearing, which was scanned before they entered the station. When they entered the station, they crossed a beam break, interacted with the system however it instructed them to, and then left the station, crossing another beam break. Finally, they would rate their satisfaction with that system using a satisfaction kiosk. At the end

of the collection, we had a good number of volunteers: 582 volunteers ended up using each system. We made sure that those volunteers used the systems in a counterbalanced order so we could distribute learning effects across the systems. Like I said, we had systems that submitted face images, iris images, or a combination of both modalities, and each volunteer used each system twice.
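As a side note, the exact counterbalancing scheme wasn't spelled out in the webinar; purely as a sketch, one common approach is a cyclic rotation where each successive volunteer starts at a different station. The aliases below are the acquisition system names used later in this talk, and the function itself is an illustrative assumption, not the rally's actual procedure.

```python
# Illustrative sketch of counterbalanced ordering (cyclic rotation), assuming
# each successive volunteer starts one station later in the sequence; the
# actual rally scheme was not described in detail in the webinar.
SYSTEMS = ["Besek", "Dans", "Pine", "Stone", "Vly", "West"]  # acquisition aliases

def rotated_order(volunteer_index: int) -> list[str]:
    """Return the station order for a volunteer, rotating the starting
    station so learning effects are spread evenly across systems."""
    k = volunteer_index % len(SYSTEMS)
    return SYSTEMS[k:] + SYSTEMS[:k]

# First three volunteers each begin at a different station.
for i in range(3):
    print(i, rotated_order(i))
```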

The first time they used each system, they took off their mask (the without-mask condition), and the second time they used the system they kept their mask on. And finally, I think this is important to note: there were a lot of people coming into our facility, and a lot of our staff were required to complete this test, but we were able to keep social distancing maintained at all times during the tests. We did this using a number of different tools, such as indicators on the floor and signage, and we also required everyone to wear masks throughout the tests unless they were using a system, so that the risk was minimized. Next slide please. Okay, so I will go through some of the results for the acquisition metrics. I will note that the acquisition system metrics pertain to the without-mask condition; Yevgeniy will get into the mask conditions with the matching system results. Also, each one of these metrics has a

threshold and a goal value. The threshold value is the value that we thought each system should be able to accomplish, and the goal value is the value that would exceed our expectations. Next slide. Okay, so the first metric that I want to go over is efficiency. Efficiency is quantified as the average transaction time for a test volunteer to interact with a system at each rally station. If you remember the little graphic from a couple of slides ago, we calculated the transaction time as the time between the first beam break and the second beam break. Like I said, this is only

for the no-mask condition. For this metric we had four systems, Besek, Stone, Vly, and West, that met the threshold, with Stone, Vly, and West actually exceeding the goal. The threshold value was under eight seconds and the goal value was equal to or under four seconds. Next slide please. Oh, sorry: the most efficient system was West, at a 3.5 second average. Now next slide. Sorry about that.
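To make the efficiency metric concrete, here is a minimal sketch of how the average transaction time could be computed from the beam-break timestamps. The record layout and field names are assumptions for illustration; only the metric definition and the threshold and goal values come from the talk.

```python
# Hedged sketch: average transaction time from beam-break timestamps (seconds).
# Field names are illustrative, not the rally's actual data schema.
THRESHOLD_S = 8.0  # threshold: average must be under 8 seconds
GOAL_S = 4.0       # goal: average equal to or under 4 seconds

transactions = [
    {"first_beam_break": 0.0, "second_beam_break": 3.4},
    {"first_beam_break": 60.0, "second_beam_break": 63.7},
]

durations = [t["second_beam_break"] - t["first_beam_break"] for t in transactions]
avg = sum(durations) / len(durations)
print(f"average transaction time: {avg:.2f}s")
print(f"meets threshold: {avg < THRESHOLD_S}, meets goal: {avg <= GOAL_S}")
```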

So the next metric that I would like to go over is satisfaction. For this test we measured satisfaction by looking at the positive attitude that volunteers had towards these acquisition systems, and we considered a positive attitude anything that was happy or very happy. These satisfaction ratings were collected on a satisfaction kiosk that had four buttons.

It ranged from very happy to very unhappy. So, like I said, a positive response was anything rated happy or very happy. We had five systems that met the threshold: Vly, Pine, Besek, Dans, and West, and that threshold value was greater than 90%. And then three systems, Besek, Dans, and West,

exceeded the goal value of a satisfaction score of 95%. We did observe that the most satisfying system was West, with a 99% happy or very happy rating. And I will mention, regarding these names: we assigned an alias to each one of these systems, so if you have any questions feel free to email

DHS. Next slide. Okay, so the next metric I'd like to go over is the failure to acquire rate. This was quantified as the portion of test volunteers for whom no images of sufficient quality to template were obtained. This could happen in two ways: either (a) no image was received, or (b) we did receive an image but were not able to template it. We assessed this for

each modality separately, and we had three systems, Vly, West, and Dans, that met the threshold, the threshold value being a failure to acquire rate under 5%. We didn't have any systems that met the goal of an under 1% failure to acquire rate. The lowest failure to acquire rate that we observed was from the system Vly, at a 1.7% failure to acquire rate. Next slide please.
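Here is a minimal sketch of the failure-to-acquire computation as defined above: the portion of volunteers for whom no image of sufficient quality to template was obtained, assessed per modality. The data structure is an assumption for illustration.

```python
# Hedged sketch: failure-to-acquire (FTA) rate for one modality.
# A volunteer counts as a failure if no image was received (case a) or the
# image could not be templated (case b); records below are illustrative.
def fta_rate(volunteers: list[dict]) -> float:
    failures = sum(
        1 for v in volunteers
        if v["image"] is None or not v["templated"]
    )
    return failures / len(volunteers)

records = [
    {"image": "v001.png", "templated": True},
    {"image": None, "templated": False},        # (a) no image received
    {"image": "v003.png", "templated": False},  # (b) image could not template
]
rate = fta_rate(records)
print(f"FTA rate: {rate:.1%}, meets threshold (<5%): {rate < 0.05}")
```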

Okay, so the final metric that I'd like to go over for acquisition systems is the MdTF true identification rate. It is quantified as the portion of test volunteers that were correctly identified using our internal matching system at the MdTF. Like the failure to acquire rate, this is assessed for each modality. For the true identification rate, we had three systems that met the threshold, the threshold value being greater than 95%, and those were West, Dans, and Vly.

And again, we did not have any acquisition systems that met the goal value of a 99% MdTF true identification rate. The highest true identification rate that we did observe was from the face system Vly, which had a true identification rate of 97.8%. Next slide please. Okay, so just to give you a quick result summary of the acquisition systems; like I said, these are all without masks. For efficiency, we had four acquisition systems that met the threshold value of under 8 seconds, and three of those met the goal value of 4 seconds or less. For the satisfaction rating, we

had five acquisition systems that met the threshold value of 90% or greater, and three of those exceeded the goal of 95% or greater. For effectiveness, we didn't have any systems that met the failure to acquire goal of under 1% or the true identification goal of greater than 99%. We did have three face acquisition systems that met the failure to acquire threshold of under 5%, and we did have three face acquisition systems that met the true identification rate threshold of greater than 95%. And we did observe that the acquisition of images remains a challenge for the iris modality. Okay, so next slide please. And at this

point I will hand off to Yevgeniy to give a full review of the matching systems and the results. Thank you. [Yevgeniy] Thanks Jake. Next slide please. So I'd like to go over the results for the matching systems. As we previously stated, we had a total of ten face matching systems and three iris matching systems participating in the rally in 2020. What this slide captures are the technical requirements for each matching system. I'll just cover

the main points, which are that the systems were all delivered as a self-contained Docker image implementing a simplified biometric rally matching API. Critically, each matching system provider also attested to the fact that their system could match images of people wearing masks. This was really a critical component of the assessment, as we looked at the way that these matching systems performed both with people wearing masks and people without masks. So next slide please.
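The webinar doesn't show the simplified matching API itself, so purely as a sketch of the general shape of such a service, here is a tiny self-contained HTTP matcher of the kind that could be packaged in a Docker image. The endpoint path, payload, port, and response fields are all made up for illustration and are not the rally API.

```python
# Purely illustrative stub of a containerizable matching service; the real
# rally API's endpoints and payloads were not shown in the webinar.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MatchHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/match":  # hypothetical endpoint name
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        probe = json.loads(self.rfile.read(length))  # e.g. {"image_b64": ...}
        # A real matcher would template the probe and search its gallery here;
        # this stub returns a canned candidate purely for illustration.
        body = json.dumps({"candidate_id": "subject_0042", "score": 0.97}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MatchHandler).serve_forever()
```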

So each matching system was evaluated in combination with each acquisition system. This is something that we do for the rally: we disaggregate the results by acquisition system, and this resulted in a total of 60 face system combinations and three iris system combinations. Performance was evaluated by matching against a small historic gallery of individuals.

The gallery contained 500 identities, and each system was evaluated based on how well it identified individuals in the two conditions during this test. As I mentioned before, we looked at how well the systems performed without masks, meaning volunteers fully removed their masks, by either dipping them or taking them off, prior to using the system, and with masks, where volunteers kept their masks on while using the system and didn't remove them ahead of time. All acquisition systems and matching systems were given distinct aliases in the with-mask and without-mask conditions. We definitely wanted to maintain the privacy of the companies in reporting the with-mask results, because this was a new type of assessment this year and we didn't want to link the performance; this is something that came up later. So next slide please.

This slide shows a video, and I don't know if we can play it, that captures the variety of different face masks observed during the full 2020 rally. It shows the personal masks worn by individuals; these are the face images acquired by the various acquisition systems. We had black masks, blue masks, white masks, masks with patterns and complicated designs on them, a lot of bandanas, and various other types of masks, so there was a lot of variety in what acquisition and matching systems had to handle in the with-mask condition. So next slide please. As I mentioned previously, we evaluated matching and acquisition system combinations, and we evaluated them on their ability to correctly identify each test volunteer. We looked at two metrics. For the full system tier metric, we looked at the performance of the overall combination of the matching and acquisition system, inclusive of all sources of error. This metric is the

true identification rate, and it's expressed as the percentage of people that used the system who were correctly identified by the system: the fraction of correct identifications divided by the total number of people in the test. In the matching tier we still looked at system combinations, but we discounted any failures to submit images, for example a failure to acquire an image by the acquisition system. This is what we call the matching focus tier, and it is expressed as the percentage of acquired images that were correctly identified. So in this case it is the fraction of

correct identifications divided by the number of images acquired, and this helps us zone in specifically on the matching system, though still disaggregated by the source of the acquisition images. In both cases we set goals and thresholds ahead of time. We were looking for a 99% or higher true identification rate, and the threshold value was 95%: if a system achieves better than 95%, then we consider it a viable high-throughput system, whereas if the performance is below 95%, then it is not a viable high-throughput system.
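A minimal sketch of the two tiers just defined, under an assumed record layout: the full system true identification rate divides correct identifications by all volunteers, so failures to acquire count against it, while the matching focus rate divides by the number of images actually acquired.

```python
# Hedged sketch of the two true identification rate (TIR) tiers; one record
# per volunteer for a given matching/acquisition system combination, with
# illustrative field names and data.
attempts = [
    {"acquired": True,  "correct": True},
    {"acquired": True,  "correct": False},
    {"acquired": False, "correct": False},  # failure to acquire an image
]

correct = sum(a["correct"] for a in attempts)
acquired = sum(a["acquired"] for a in attempts)

full_system_tir = correct / len(attempts)  # inclusive of all sources of error
matching_focus_tir = correct / acquired    # discounts failures to submit images

THRESHOLD, GOAL = 0.95, 0.99
print(f"full system TIR:    {full_system_tir:.1%}")
print(f"matching focus TIR: {matching_focus_tir:.1%}")
print(f"viable high-throughput system: {full_system_tir > THRESHOLD}")
```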

Next slide please. So here is a matrix of the results; this is for face system performance. There are two matrices, one on the left without masks and one on the right with masks, each showing the performance of the system combinations. Let's look at without masks first. Without masks, there were four matching systems, Maumee, James, Reese, and Pearl, that achieved above a 99% true identification rate with at least one acquisition system. In this case they all achieved it with acquisition system West, and the top true identification rate value was 99.7%, for the system combination Maumee West; that's highlighted by the black box that you see in the chart. These charts are available at mdtf.org as well. So about a third of the systems

achieved a true identification rate value above the 95% threshold, which is pretty good and matches the performance that we have seen previously at other rallies. Performance with masks, on the other hand, was a bit lower, not unexpectedly so; we are obscuring roughly half the face. But the top system combination, Alan Fray, did manage to obtain a true identification rate of 95.9%, which is inclusive of all sources of error. We think that is pretty good, because it meets the rally threshold for a high-throughput acquisition and matching system. In addition, 10% of the system combinations, even with masks, had true identification values above 90%, which is within striking distance of the threshold. Next slide please. So next we took a look at the matching focus true identification rate. Remember, this one discounts any failures to submit images by the acquisition system, and in this case things look significantly better.

A third of the system combinations were able to meet the goal for this metric without masks. The best system combination in this case was actually Stone Pearl; the values are similar to some of the others because of rounding, but it achieved a 99.8% matching focus true identification rate without masks. With masks, things look better as well: many of the systems were now able to move past the 95% threshold, nearly a quarter in fact, in this more challenging condition. The best system combination with masks was Glen Alan, which, again discounting failures to submit images, was able to match 98.7% of the images that were acquired. So next slide please.

We also had iris systems in this assessment: three iris matching systems and one iris acquisition system, so there are going to be three disaggregated values. However, it always appears to be more challenging to acquire iris images, and you can see that in both the without-mask and with-mask conditions there was no system combination that met the threshold or goal requirements for iris matching. Next slide please. However, discounting failures to submit images, the iris matching systems worked very well,

both without masks and with masks. Although without masks none of the system combinations met the goal, with masks one of the system combinations actually did meet the goal. That may seem a little paradoxical, but what happened was that when iris images were acquired from subjects wearing face masks, they tended to match very well; however, overall, images for more subjects were not acquired under this condition. So based on these results, we think there is some promise to iris biometrics, so long as we can work on the failure to submit rate a little bit. Next slide please. So this summarizes the matching system results. For the full system true identification rate, overall 23 face system combinations met the rally threshold, four face system combinations met the

rally goal, and one face system combination was able to meet the threshold with masks. No iris system combinations met these requirements in either test condition. For the matching focus true identification rate, again discounting failures to submit images, there were 47 face system combinations out of 60 that met the threshold and 22 that met the goal, so very strong performance. With masks, 14 face system combinations met the threshold and none met the goal. Three iris system

combinations met the threshold without masks, and none met the goal; three iris system combinations met the threshold with masks, and one did meet the goal. Next slide please. Okay, so the conclusions from the assessment are as follows. Overall, the tested acquisition systems had very fast transaction times and high user satisfaction, as Jake told us earlier. As in prior rallies, the largest source of error in biometric performance here is image acquisition, not matching performance, primarily having to do with missing images for some volunteers that used the systems. Face masks did challenge current systems in these high-throughput operations, which manifested as increases in failure to acquire as well as some reductions in matching accuracy. But face recognition systems can perform very well in

the presence of masks, as evidenced by the good performance observed for some system combinations: for instance, the top full system tier value with masks was 95.9%, by the combination Alan Fray, and the top matching focus true identification rate with masks was 98.7%, by the combination Glen Alan. We think that these are values that could be obtained by

other system combinations in future testing. Iris matching systems, although they tended to underperform, have potential. Acquisition system errors were greater for iris, but iris matching focus true identification rates were comparable to face matching focus true identification rates. We believe that if image acquisition is improved, then iris system performance may be comparable to the performance of face systems in the future.

Next slide please, and at this point I am going to hand things off to my colleague John Howard, who is going to describe a new matching system evaluation that we have planned. [John] So thanks Arun, Jake, and Yevgeniy for walking through the 2020 results. Hopefully that makes sense to everybody on the line, but it can be a lot of numbers, and we'll take questions for as long as we can. But before we get to that Q&A session, we want to take a slide or two to let everyone know about a new addition from the Biometric and Identity Technology Center; we are calling this an ongoing matching system evaluation. The genesis of this effort is that, as most of the acquisition vendors can attest, really since the very first rally we have prided ourselves on the ability to get results from our tests out quickly to the acquisition vendors.

For example, on the last day of the rally we leave you with a report card that explains how well you did. The matching system evaluations have always been a little bit more of a challenge, and that is driven by two things. One, there are just a lot more numbers to deal with for matching systems than for acquisition systems; that was the case for this year and last year. And two, the technical support for the acquisition systems is given on site by the acquisition vendors' own technical teams, who keep their systems up and running, whereas largely

the matching systems' technical support is remote, so it's usually an effort between our engineering team and the matching system vendors, working together to troubleshoot things like Docker and our API. There can still be challenges, but we think after two years of doing this, the matching system vendors themselves and folks on the line have matured in this process to the point where we are ready to start doing more short-turnaround matching system evaluations.

In order to achieve that, a couple of things need to change, and I'm going to run through them very quickly. First, the picture you see here on the right should be familiar to all the matching system vendors: it is the info endpoint of the API implemented as part of your rally submission. As of last week it was upgraded to version 1.1, and we added two fields

to this info object. I can't point because I'm on the phone, but the last two fields you see in the image here are the test and thresholds fields. I'll explain what they are. The idea for the test field is that it would be one of a few preset

values that tells us what kind of test you'd like to run on your matching system. I actually have a whole slide on this right after this one, so I'll talk about it a little bit more in a second. The second new field is the thresholds field, and that's where you would place your 1-in-1,000, 1-in-10,000, 1-in-100,000, etc., match thresholds. You will probably recall that these

were previously emailed to the MdTF around the time we started executing the rally matching system evaluation. This mechanism would replace that manual email step, which is obviously going to facilitate us running this whole process in a little bit more of an automated fashion. The second high-level thing that would change to enable this ongoing evaluation is the way you send us the matching system. The matching system vendors have typically done this with whatever internal process they like.

Some people emailed them to us; some people put them on an S3 or FTP link. Now we are going to move to simply hosting an upload form at mdtf.org, where, with a username and password, you can upload the entire file, and it would be securely pulled down into one of our enclave environments, subject to some controls on the upload in terms of size and memory requirements. Then we will run the evaluation that you specify in that test field. We think this capability will be coming online over the summer of 2021, but the idea here is that once you go through that upload process, our systems are going to use the information contained in this info object, namely the technical contact email that you see as well as the thresholds and the test field, and we'll automatically execute an evaluation of your matching system and email you the results. Next slide.
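As an illustration of the upgraded info endpoint just described, here is a sketch of what a version 1.1 info object might look like with the two new fields. The field names, value formats, and threshold numbers are assumptions based on the description above, not the actual schema; the MDTF_2020_RALLY preset is covered on the next slide.

```python
# Hedged sketch of a v1.1 info object; names and formats are assumptions.
info = {
    "api_version": "1.1",
    "technical_contact": "engineer@example.com",  # results get emailed here
    "test": "MDTF_2020_RALLY",  # preset naming the evaluation to run
    "thresholds": {             # match thresholds at the stated false match rates
        "1:1000": 0.62,
        "1:10000": 0.71,
        "1:100000": 0.80,
    },
}
```

Under this sketch, the automated harness would read the contact address, thresholds, and test preset from this object after upload and mail the report back, replacing the manual email step.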

So this should be slide 30, and I think it's the last slide before we get into the Q&A. I mentioned I would talk more about what the test field is. When this capability rolls out it may be limited to a few values that we will slowly add to over time, but basically the idea is that we have a couple of predefined tests and you tell us which ones you would like to run against your matching system. One obvious one is populated in the field now.

That is the MDTF_2020_RALLY test type. For that one, we simply run the exact analysis that Yevgeniy presented earlier for single matching systems, and the results would look something like the picture shown below: you would see your matching system's with-mask and without-mask performance across the different probe images gathered by the various rally acquisition systems, and then we would automatically attach this in an email and send it back to whoever is the technical contact in that field. This would hopefully happen fairly quickly, depending on the test and other factors; maybe we are talking about minutes to hours from the time that you submit. So this could potentially be a quick-turn, achievable objective. The goal here is to allow you, if you want, to

run evaluations of your product on some of the MdTF data, like the data behind that really nice movie slide earlier that looked at all the mask data we collected. Exactly which MdTF data we would design these tests around is something we are currently soliciting feedback on. We do have an opportunity here: every time we run a test with new people, we get to refine the questions and measurements we take. Some portions of our population we've been working with for years, others are new. We collect things like self-identified race and gender

and take skin tone measurements with dermatology equipment. One of the next additional things, beyond the regular rally 2020 analysis, that we are thinking about rolling out is to disaggregate the rally 2020 results by those demographic categories of race and gender. We currently don't do that as part of the rally public results on the website, so everything Yevgeniy and Jake talked about today, everything that is on the website, isn't broken out by demographic category. But we think, especially in terms of the mask challenge that was part of the 2020 rally, this could be of real interest to matching system vendors, namely whether there is an unequal impact of masks on performance by race category. Is any of this really of interest to the rally matching system community? I hope so, but I really don't know, so to help us understand what kinds of testing fit your product development goals, we are currently open to feedback in this area. I'll note that this is something that is only open to groups

that have this ongoing agreement with DHS. So it's not like a NIST FRVT style evaluation that anyone in the world can submit into. This is supposed to be something that provides value on more of an ongoing basis to the people that participated in the rally.

I hope that you will reach out and work with us to understand what those evaluations might be in the future. Next slide. And I think this is my last slide; there's the email address to provide that feedback, and with that, Arun, I'll turn it back over to you. [Arun] Thank you very much. We appreciate your time, and again, reach out if you have any issues, questions, or feedback. Thank you very much. Thank you everyone.

2021-06-02
