LRBAA Today: Data Analytics

Dusty Lang: Okay, well thank you for joining us today for our "LRBAA Today" episode on data analytics. We're in our second season, and this is our second episode, and I'm very excited about this one. This is a topic about which I get questions a lot when I'm talking to industry.

But, first, let's talk generally about the LRBAA announcement and what it is. So the LRBAA stands for Long-Range Broad Agency Announcement, and like any other Broad Agency Announcement or BAA, the LRBAA is seeking solutions for the challenges that it presents in the announcement. So you're not gonna find details like, "This is what we want you to do. Here's the metric. This is how it's gonna be able to perform," et cetera. It's for the long range, which is a five-year BAA with very high-level, broad topics. They're not gonna have those details. They're gonna provide a broader challenge. So, because of that, we have a special process set up that prevents us from having to go straight to a written proposal, which is a very high lift, only to find out this isn't something that we have interest in from the beginning. So our three-step process starts with industry engagement, which is a three-page white paper and a quad chart. We will be reviewing that internally in the government once you submit it, and then we'll make a recommendation of whether or not you're going to go forward to the virtual pitch.

So at the virtual pitch, if you're recommended to go forward, then we will have an opportunity for you to develop 12 slides and have 20 minutes to present to a government evaluation team. They will have a couple of minutes, post your presentation, to develop some questions. They'll come back, ask those questions, and then subsequent to that, we'll make a determination as to whether you're recommended to go forward to the written proposal.

So a full, written proposal is then considered the third step and is the one where we're determining if we want to pursue an award. So the great news is that there's an opportunity for us to be able to get an idea of what you're presenting, tell you if we're interested, get a little bit more details, if needed, before we go all the way through to what generally costs at least about $100,000 of labor and time to write a full proposal. So we have roughly 20 topics out there right now on the LRBAA.

So we're looking for solutions to those topics from industry, academia, FFRDCs, a wide range of folks that are eligible to submit to the LRBAA. The topics are up there generally for at least about a year. They are refreshed annually, so something that you see out there right now may go up or down, but they are out there for an extended period of time, and there's no due date for the initial industry engagement, so if you see a topic out there, you get to decide when you're gonna submit.

Again, I do caution that we can take them up or take them down during the annual updates or at other times if we need to, and once you submit your initial industry engagement, then from that point in the process on, there are due dates for the next steps if you're invited. But as I said, one of the topics that we have out there now is our data analytics topic, and so I have two folks joining me today, Alex Fungzva and Syed Mohammad. So I welcome them to our episode. So, first, I wanted to ask Alex--thank you for joining us. Can you tell us your title, what you do at DHS S&T, and what role data analytics plays in the broad DHS mission? Alex Fungzva: Great.

Dusty, thank you for inviting me to speak about data analytics. It's certainly a topic that I'm very passionate about, and I know there's a lot of excitement around it, as there should be. So data analytics within S&T is part of advanced computing, and you're gonna be introduced to Syed here in a minute. He does modeling and sim, but other groups under advanced computing are the quantum group, the cyber research areas, and artificial intelligence/machine learning.

But data analytics specifically, we're also known as the big data group, so, you know, just about any mission that wants to be data-driven across DHS, we're relevant to, and I think that is one of the most exciting things about being a part of this group: I get to work on all sorts of missions, you know, from FEMA missions for disaster preparation and response, to CBP trade enforcement. I've done a lot of it, and at the end of the day, right, it's about enabling data-driven decision-making at DHS, and one thing that people may not know about data analytics is we actually have our own lab in-house. We have our on-premise facilities, which we have had for many years now, plus our cloud resources, and that means that we're a very hands-on data, hands-on tools kind of environment.

You know, we like to bring different capabilities and then test them out against different mission use cases, but I think one of the most important things--and it's a hallmark of our lab--is that we actually bring together people and machines, and what does that really mean? So, on my staff, I have, you know, reach-back capabilities to cartologists, computer scientists, mathematicians, just depending on what we're working on. But I think, more importantly, you know, when I talk about people, I'm talking about the operators. When we are building or testing solutions within the lab, we actually bring in operators.

In the past, we brought in U.S. Coast Guard folks from as far away as Hawaii to come and spend a week in our lab to really participate in the technology development and make sure that whatever we test and, you know, develop is relevant and we can transition it to them. So, yeah, and, of course, security and privacy, you know, we're thinking about that throughout. We actually have three dedicated security staff on our team, and all they do is make sure that we're compliant with all the security policies for this environment. Dusty: Great.

I mean, it's very easy to see how data analytics would go across the board and have a big piece of the picture for what we do, so thank you. And thanks for helping to introduce Syed. Syed, can you help us understand modeling and simulation and where that plays into everything? Syed Mohammad: Sure, Dusty. Thank you. And thank you, Alex, as well.

And, again, my name is Syed Mohammad. I'm the director for the Modeling and Simulation Technology Center here at the DHS Science and Technology Directorate. And as Alex mentioned, both the Data Analytics Technology Center and the Modeling and Simulation Technology Center fall within the Advanced Computing branch of the Technology Centers Division within S&T. So modeling and simulation is a really, really broad field, right? So you think about things like augmented reality, virtual reality.

You think about the traditional discrete-event simulation, continuous-event simulation, and then everything in between now, everything from visualization of simulations, to the actual math that goes into simulations, right? So it's the full gamut, the full spectrum of capabilities involved in the simulation of various activities. So within the AR-VR domain, again, we're looking at all sorts of visualization technologies. We're looking at, if I may borrow from sci-fi for a little bit, the holodeck-type technologies where you're virtually present in what could be very realistic scenarios. And how do we use those types of technologies for the Department of Homeland Security and the Homeland Security Enterprise at large? So think about training applications, think about practice applications, think about situational awareness, think about looking at various scenarios, looking at "what if?"-type scenarios, and so on, and so forth.

How can we use these advanced technologies for that? And then, more on the math side, if you start looking more at the discrete-event and continuous-event simulations, really being able to do various types of analyses on DHS and Homeland Security Enterprise problems and issues, and being able to understand how these scenarios unfold and how the math behind them works. Now, the Modeling and Simulation Technology Center, we do dabble a bit in the artificial intelligence and machine-learning world as well. So, again, it's more on the applied side, looking at applied AI/ML for simulation applications and purposes. So, again, it's a really, really broad field, and the other thing that we try to do within the Modeling and Simulation Technology Center is to bring together the M&S community of interest within DHS. So reaching out to all the different practitioners at the different components, you think about the Coast Guard, you think about CBP, TSA, and so on, and so forth, and then you start looking at the training elements such as, let's say, the Federal Law Enforcement Training Center.
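To make the discrete-event simulation idea concrete, here is a minimal sketch in Python of the kind of model being described: a single screening lane driven by an event queue. The lane setup, arrival rate, and service rate are all invented for illustration; this is not S&T's actual model.

```python
import heapq
import random

# Minimal discrete-event simulation of one screening lane.
# All rates below are invented for the example.
random.seed(7)

events = []  # min-heap of (time, kind, traveler_id)
t = 0.0
for i in range(1000):                        # schedule Poisson-style arrivals
    t += random.expovariate(1 / 30)          # roughly one arrival per 30 seconds
    heapq.heappush(events, (t, "arrive", i))

queue, waits = [], []
server_free_at = 0.0
while events:
    now, kind, who = heapq.heappop(events)
    if kind == "arrive":
        queue.append((now, who))
    if queue and now >= server_free_at:      # lane is free: start next screening
        arrived, nxt = queue.pop(0)
        service = random.expovariate(1 / 25) # roughly 25 seconds per screening
        server_free_at = now + service
        waits.append(now - arrived)          # time spent waiting in line
        heapq.heappush(events, (server_free_at, "done", nxt))

print(f"mean wait: {sum(waits) / len(waits):.1f} s over {len(waits)} travelers")
```

Changing the rates or adding lanes turns this toy into the kind of "what if?" analysis the discussion is pointing at.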

How do we bring these advanced simulation technologies to support the department and DHS, the mission space? Dusty: Thank you. That was a lot, as expected, but--any time we can borrow from sci-fi, we should. I think it's always a good-- So, you know, that's high-level, very broad, but I'd like to get a little deeper into what we're doing at S&T generally. So, Alex, can you speak about your portfolio for data analytics within S&T? Alex: Absolutely.

So my current portfolio has three major areas, and I see the slide just came up here. Future computing architecture: this is really about the environment for compute that enables, you know, the tools that you put inside of it and for people to really work in, but let's focus, just for this first category, on this ecosystem. So I mentioned our on-prem and cloud infrastructure. What that means is, yes, we have these capabilities, but some of the research questions that we're asking are, when you have a multi-cloud environment, how do you manage cloud compute? Because we're kind of looking ahead, getting ready for a world where I might be in a Google cloud, but you're in Azure, but we still need to be able to collaborate, and, of course, there are hybrid infrastructures that are relevant to some missions. Secure multiparty computation, and related to that, homomorphic encryption: these are privacy-enhancing technologies that allow people to collaborate and do information-sharing at scale without revealing certain elements or certain fields of data, right? So that is of interest to us right now, and homomorphic encryption actually adds another layer of security, because what you're doing is you're computing across multiple datasets that may be owned by different people, but the data remains encrypted. So it's kind of cool. And blockchain: we have had efforts in blockchain where we looked at blockchain analytics, but right now, given that there's a lot of interest in blockchain applications, what we're really focused on is partnering with NIST and a third party to develop FISMA guidance on, you know, how to accredit a blockchain.
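To make the homomorphic-encryption point concrete, here is a toy, demo-only Paillier sketch in Python. The primes are far too small for real security, and this is not the S&T work itself; it only demonstrates the property described above, that you can combine ciphertexts to compute on hidden data without ever decrypting the inputs.

```python
import math
import random

def keygen(p=999_983, q=1_000_003):        # demo-only primes; real keys are ~2048-bit
    n = p * q
    lam = math.lcm(p - 1, q - 1)           # Carmichael's function for n = p * q
    mu = pow(lam, -1, n)                   # valid because we pick generator g = n + 1
    return n, (n, lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(2, n)             # blinding factor; gcd(r, n) == 1 here
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n           # L(x) = (x - 1) / n, then unblind with mu

pub, priv = keygen()
a = encrypt(pub, 1200)                     # one party's encrypted count
b = encrypt(pub, 34)                       # another party's encrypted count
total = a * b % (pub * pub)                # multiplying ciphertexts adds plaintexts
print(decrypt(priv, total))                # -> 1234, computed without decrypting a or b
```

The additive property is the point: the holder of the private key sees only the aggregate, never the individual parties' values.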

And, of course, real-time analytics and HPC and edge, that all kind of goes together, but the initial thought for high-performance computing is that some of the cloud is enabling, I'll say, commoditized high-performance computing to solve problems that we really couldn't do just a few years ago. So the initial uses of it might be training an AI algorithm, but going into the future, looking at different data analytic models when IoT and 5G, 6G become more relevant, we're trying to think about what analytics will look like then, right? So with real-time analytics, you can think of, like, a smart-city scenario, but perhaps it's a degraded environment that you're working in, and we're looking at models that tell us, "Hey, I have all this data coming in from different partners to respond to this crisis." Are there models that can tell me, "Hey, you have some network latency here. If you wait, like, 30 seconds, you're gonna get a better answer"? So those are just kind of the concepts that we're thinking about there.

With media and augmented analytics, so media is text, video, image, right? This area, augmented analytics, is talking about the tools that go into the environment. So the first area of the portfolio is about infrastructure environments; this part is about the tools that go in them. And so, AutoML pipeline: we're looking at auto-tagging tools, data preparation tools, data cataloging tools, what we're calling self-service or AutoML tools. These are tools that, as Syed mentioned, you know, we're kind of on the application side of AI/ML, so the AutoML tools make short work of model-building, whereas, before, data scientists might have spent weeks and months creating models. Some of these tools can spit out a whole bunch of models in a very short time.
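For readers unfamiliar with what AutoML-style tools automate, a minimal sketch of the core loop: sweep several candidate models with cross-validation and keep the best. The dataset and model list here are arbitrary stand-ins, not the tools under evaluation; real AutoML products also search hyperparameters, feature pipelines, and ensembles.

```python
# Minimal AutoML-style sweep: try several models, keep the best by CV score.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gbm": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with 5-fold cross-validation and pick the winner.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print(f"selected: {best}")
```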

And, of course, very important to that is looking at ways to manage models when they're in operations, right? Like making sure that your model is performing as intended over time. Next-gen augmented tools, I mean, there's a lot that can go into that category, like graph analytics. I mean, there are always new capabilities coming out in that area. And I mentioned the media work that we're doing.
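A minimal sketch of that model-management idea, checking that a model keeps performing as intended over time: the Population Stability Index, one common drift check, compares the distribution a model was built on against what it sees in production. The data and the 0.2 threshold here are conventional illustrations, not S&T's tooling.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a training-time sample and a
    production sample; > 0.2 is a common 'investigate drift' heuristic."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range production values
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution the model was built on
prod_scores = rng.normal(0.4, 1.2, 10_000)   # what production traffic looks like now
print(f"PSI = {psi(train_scores, prod_scores):.3f}")  # flags the shift
```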

There's a special focus right now on NLP tools, and, you know, in the past, we've done some image recognition and such, so computer vision and deep-learning applications are of interest. So moving into the last category, these are examples, as you can see there. They're actually not well-formed for us.

So human-machine teaming, you know, there are lots of groups and lots of people working on human-machine teaming, but what I'm looking at here is smaller investments that can actually make a difference. So I'm not looking for someone to create and validate, you know, brain models or anything like that, but what about, for example, how you measure the performance of an analyst with certain models, right? Because we know that in operations, different analysts perform differently, and it'd be good to understand, when you team up certain analysts with certain models, what does that look like, and if one group is successful, can you actually replicate that across your enterprise? Bias, I think, you know, bias is just such a big issue these days, especially for the AI community. We don't have all the answers for that, but I know that it's something that DHS is gonna have to deal with. We already do, but I think, from a technology piece, it's gonna become a bigger deal. And just other topics, right? So this kind of last category will evolve as things come up. Like, for example, a couple years ago, it was countering disinformation, right? So, something like that, you know, the shorter-term, smaller investments would fit in this category.
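A minimal sketch of the teaming measurement described above, using an entirely hypothetical adjudication log: tabulate outcome quality per analyst-model pairing to see which teamings might be worth replicating. Column names and values are invented for illustration.

```python
import pandas as pd

# Hypothetical adjudication log: which analyst worked with which model,
# and whether the final human-plus-machine decision was correct.
log = pd.DataFrame({
    "analyst": ["a1", "a1", "a2", "a2", "a1", "a2", "a1", "a2"],
    "model":   ["m1", "m2", "m1", "m2", "m1", "m1", "m2", "m2"],
    "correct": [1, 0, 1, 1, 1, 0, 1, 1],
})

# Team performance per (analyst, model) pairing -- the kind of measurement
# that could show whether a successful pairing replicates across an enterprise.
team = log.groupby(["analyst", "model"])["correct"].agg(["mean", "count"])
print(team)
```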

Dusty: Gotcha, okay. Thank you. All right, Mohammad? I'm sorry--Syed Mohammad? Syed: Yeah, sure. Dusty: I know who you are. Syed: So just a few things to add.

Yeah, just a few things to add to what Alex just said. So, again, more from the Modeling and Simulation Technology Center side: the applied aspects of all of the various categories that Alex just mentioned and how they would apply to, say, a domain like training, for example, right? So how can we apply those capabilities to training in a virtual environment, and so on, and so forth? So I think there's a lot of opportunity for that as well. Dusty: Okay, and do you wanna talk about the modeling and simulation portfolio? Syed: Sure, so within our portfolio, again, I mentioned some of the broad areas: the AR-VR, or more broadly, the immersive visualization category, so all types of visualization capabilities, whether they are head-mounted displays or short-throw displays that can really turn a room into a holodeck-type environment, again, to borrow from sci-fi; fully immersive capabilities such as, obviously, the sound and the visuals, but also temperature, haptic feedback, being able to, you know, stimulate all the senses.

So, ultimately, humans are experiential learners, right? So the more stimulation bias that you can remove from an environment, the easier it makes it for a human to be in that environment and to really feel as if they are virtually there or virtually present. So all technologies associated with that. Again, we do partner quite a bit with some of our other federal agencies and organizations. Think about, you know, DOD, for example; think about the Uniformed Services University of the Health Sciences, which has really advanced the state of the use of immersive simulation technologies for training. Then, more on the math end of it, looking at the application of various discrete-event or continuous-event simulation tools or novel applications of techniques for simulation, again, there are always opportunities there. Think about things like various algorithms, various attack-defense tree-type applications, and so on, and so forth, and then, finally, on the AI/ML side, again, looking to model the patterns of various types of scenarios, and then how can machines learn from that and better inform the operators in the jobs that they do? So, again, a very broad set of opportunities for the various capabilities for modeling and simulation. Dusty: Thank you.

All right, so we've gone through the portfolios. Now I wanna start talking about the LRBAA topic. So, with the topic we have out there now, what we wanna do is we want folks from industry and academia and others, you know, foreign and domestic partners, to be able to submit some offers or some ideas on something that we may be interested in.

Can you talk about that from the perspective of what we're doing? Like, you know, we have these portfolios, and as you've discussed, there are parts where we have perhaps a better handle on how we're gonna approach this and how we're gonna pursue some things, and then maybe some others where we think there's a good opportunity for folks to be able to submit some ideas. So, Alex, do you wanna-- Alex: Sure, so I'll start by just saying that the data analytics topic is written broadly on purpose. I mean, it covers just so much, and, you know, we don't wanna limit the good ideas that could come through there. So I think one of the best ways to talk about the topic is to share some actual examples of past work that we've done. So with airport screening, for example, a few years ago, one of the projects that we had was trying to actually apply game theory to airport screening to identify flight risks. And so, you know, that's an example of a project that would kind of fall into that third category, right? Like analytic concepts that have yet to mature. Because it turns out that, yes, people use game theory in finance, and they use it for patrolling, and they use it in many different scenarios, but when we actually tried to apply it to airport screening, which is so complicated, you know, you have layered security with people, with dogs, and all sorts of other measures, it got very complicated very quickly.
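For a flavor of the game-theory idea, here is a drastically simplified security-game calculation, not the actual project: split a fixed amount of screening coverage across targets so that a best-responding adversary gains no advantage by switching targets. Target values are invented, and we assume each target is valuable enough to warrant nonzero coverage.

```python
import numpy as np

# Toy security game: the attacker's expected payoff on target i is
# (1 - c_i) * v_i (nothing if caught), so the defender equalizes that
# quantity across targets subject to sum(c) = 1.
values = np.array([10.0, 8.0, 6.0])        # invented target values

# Equalize (1 - c_i) * v_i = k with sum(c) = 1  =>  k = (n - 1) / sum(1 / v).
k = (len(values) - 1) / np.sum(1.0 / values)
coverage = 1.0 - k / values
print(coverage.round(3), coverage.sum())   # ~[0.489 0.362 0.149], sums to 1
```

The layered reality described above, with dogs, multiple screening stages, and scheduling constraints, is exactly what makes this clean model stop scaling, hence the conclusion that it needed to bake longer.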

But, you know, we learned a lot from it, and I think the conclusion was, "Hey, this thing needs to bake a little longer." And then we have, you know, other projects. Actually, the very first project that we did was trade enforcement, and that was a project where we actually built a big data environment, tested the tools on it, and delivered it to a data center. And then there's everything kind of in between, right? Sometimes a component will approach us and say, "Hey, I'm sitting on this data, and I need to, you know, defend my budget to Congress to request more money." And in that case, we might help them do a data analysis to see what kind of requests they might formulate based on, you know, the data that they're collecting, what kind of systems they're using, et cetera.

But I can say that, right now, one of the big focus areas for my group is actually cyber, and it's a very exciting project. It spans all three areas of the portfolio, and not every project does that. For the cyber effort, we're actually building a multi-cloud environment where we are testing tools and putting tools in there. We're also looking for innovative tools; if people have ideas for cyber threat hunting or, you know, whatever it is that can enhance a cyber mission.

And then, of course, there's that last category, where we're actually looking for new concepts to apply to cybersecurity that perhaps people didn't think about before. So that kind of captures, you know, some examples of data analytics work, and that's why I can't say, "Hey, you know, make sure it's X-Y-Z." The topic is purposely broad. Dusty: Right, and it's a great use of a BAA then, so fantastic.

Dusty: And, Syed-- Syed: Alex, if I can jump in real quick? So I really like the last point that you just made, right? So the topic is broad on purpose, and that's deliberate on the part of our organization as researchers looking at, you know, where is this space headed three years, five years out from now? So being able to look at not just the current technologies but the emerging technologies in these various fields. So from my perspective in the Modeling and Simulation Technology Center, again, we're not just looking at what the current set of visualization technologies out there is, but how will this change? So, just for example, looking at the low-profile haptic devices, where these are no longer these large, clunky devices you have to put on your hand to get the type of feedback that you're looking for, or being able to do gesture recognition without having to wear all types of sensors on you, right? You start looking at various head-mounted displays or visualization technologies. Who wants to use these large, bulky headsets that we've seen over the last couple of years when there's emerging technology that is very, very low profile and is more or less seamless to use? So, again, making this technology frictionless, if you will, making accessibility to this technology frictionless. From a Homeland Security Enterprise perspective, it really allows an operator to focus on the mission and the job at hand, versus saying, "Well, now I have to wear this bulky technology and all of these various devices on my hand, and sensors, and so on, and so forth." It's really saying, "Okay, this is really an enabling capability. It's not meant to get in the way but rather assist the operator on the mission or for training or for whatever." Dusty: Right, so it's helping, not "helping." Syed: Exactly, exactly.

Dusty: So we, of course, cannot have a discussion about, you know, announcements or something like that without talking about funding. So we'll just jump right into funding now. So there's, of course, a common question asked related to the LRBAA because we don't put out funding thresholds or limits or anything like that, so I'd like to talk about that, and I know from previous conversations, you both had thoughts on the life cycle costs that folks should be thinking about and taking into consideration. So thoughts on funding thresholds and life cycle costs? Alex: Hey, you're absolutely right, Dusty.

You know, cost is just dependent on the specifics of the proposal, but life cycle cost is critical, because when you look at an organization like DHS, a lot of these are law enforcement organizations, or they're like FEMA. There are pockets of people with data scientists and lots and lots of technical skill sets, but there are also big swaths of DHS without those skill sets, and so thinking about the O&M is very important. That's actually one of the things that we try to determine, like, "Hey, can an organization actually take this on, and what does that look like in terms of their O&M? Like, what kind of skill sets do they need to be able to maintain this?" And it's a big deal, and, you know, we also work with--as you know, Science and Technology, in general, works with first responder organizations. Well, they have even less money than, you know, federal organizations, and so I think that one of the things that helps in looking at a proposal is to understand not just the actual costs for developing, but the cost for O&M, and what is the cost model? You know, is it licensing? Is it, "Hey, I'm gonna supply you with an army of people to maintain this," or something else? Dusty: And where is the source of that? And so the cost models, like, who's the one willing or able to pay for that if it's a larger project? Yeah. Syed? Syed: Yeah, cost can vary, right? I mean, depending on what the technology is, right? I mean, that's basic. Now, looking at the different communities of potential application of technologies, there are thresholds within reason, right? So you start looking at "Is this a technology that's being proposed that would be used by, say, all or potentially all first responders across the country?" Now you're talking about hundreds of thousands, if not millions, of potential users of this capability. However, you know, budgets vary from department to department. Budgets vary from jurisdiction to jurisdiction. Budgets vary depending on "Is it a state-level agency? Is it a federal-level agency? Is it a local or a tribal agency? Is it EMS? Is it police? Is it fire?" I mean, what is the target audience for this technology? And, again, you know, proposing a pencil, for example, that's relatively cheap to buy up front, but the lead costs a hundred dollars every time you need to refill it.

That's potentially a non-starter. However, if there are capabilities being proposed that are reasonable in cost, not just the initial acquisition cost but the entire life cycle cost, then we can have a much more robust dialogue in terms of applications of the technology. Now, all that being said, you know, I did mention we are looking at technologies three to five years out, and further beyond, as the technology landscape is changing. Don't shy away from proposals just based on cost, right? Yes, cost is a big part of it, but it's not just about cost. So we are looking for novel solutions.

If there is a novel solution that may be a little bit more expensive up front but significantly reduces the life cycle costs, then that is also a discussion point. So cost is a very important factor, but it is not the only factor, and I would say keep an open mind in terms of proposals and looking at how to calculate the various life cycle costs of various proposals. Dusty: I think that's all really great information. I think it's very important for folks to understand the costs, you know, going in.
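To put numbers on the pencil example, here is a back-of-the-envelope life cycle cost comparison with invented figures: the option that is cheapest to acquire can be the most expensive to own once operations and maintenance are counted over the whole life cycle.

```python
# Back-of-the-envelope life cycle cost, with invented numbers, in the spirit
# of the pencil example: cheap acquisition can lose badly to O&M over time.
def life_cycle_cost(acquisition, annual_om, years):
    return acquisition + annual_om * years

cheap_pencil = life_cycle_cost(acquisition=2, annual_om=100 * 12, years=5)   # $100 refill/month
pricier_pen = life_cycle_cost(acquisition=500, annual_om=20 * 12, years=5)   # $20 refill/month
print(cheap_pencil, pricier_pen)   # 6002 vs 1700: O&M dominates the decision
```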

With the industry engagement, you're gonna give a rough estimate of what it's going to cost, and that's something that gets planned around in terms of what value this is bringing and whether that's a feasible cost that we'd be able to pay, and then, like you both have mentioned, you also have to look at the return on investment from the operational perspective as well. So those are really great points, and I appreciate that, and I think it also feeds into this next question in terms of the submissions. So I went through the three-step process for the submission, and we get the white paper, and, again, we're gonna ask for a rough cost estimate with that white paper, and I think it is important to not under- or overvalue that. But what other information helps you understand, one, is this something that we are interested in taking forward, and, two, does it bring a value that is commensurate with the costs being proposed? Meaning, okay, this might be something highly expensive, but it makes a potentially huge leap in capabilities if we're able to fund it.

I mean, it's three pages, and we're doin' this on purpose so it's not a heavy lift, but it's still important to get that critical information in there for decision-making. So thoughts on what those pieces of critical information may be? Syed: So, again, you know, focus not just on the broad generalities of the technology in the field but rather on the novelty of the technology itself. How will this impact the mission at the end of the day? How will this impact training? How will this significantly reduce X, Y, Z, and how are you measuring the success of this project? So focus on those areas; focus on how this particular technology is a game changer for the Homeland Security Enterprise, versus, you know, going to the generalities of "Well, this is an advancement in simulation that increases X, Y, Z," or whatever the case might be. Alex, over to you? Alex: Yeah, I agree with you completely. You know, I always wanna know how your product is different from everyone else's, and I also like to see that it's a well-thought-out proposal, right? Meaning, if you've developed a product, did you actually test it? If it's, you know, an NLP project, did you actually test it against the datasets that people test things against? You know, these are widely available.

You know, if you're doing image recognition, there are tons of image databases out there that you can actually go out and test against, because that tells me that you did your homework. You benchmarked it. By the time you get to this three-pager, you should've already done your literature search. I mean, that should've been, you know, squared away.
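As one illustration of what "test against the datasets people test against" can look like in practice, here is a minimal benchmark on a widely used public text corpus. The model and corpus are stand-ins chosen for the sketch, not a claim about any particular submission or evaluation.

```python
# A minimal benchmark against a widely used public text dataset, in the
# spirit of testing NLP claims on the corpora everyone uses.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

vec = TfidfVectorizer(max_features=50_000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(train.data), train.target)

pred = clf.predict(vec.transform(test.data))
print(f"20 Newsgroups accuracy: {accuracy_score(test.target, pred):.3f}")
```

A number like this, on a corpus reviewers already know, is the kind of homework that makes a three-page white paper credible.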

And the proposal needs to be crisp, right? And it should have a specific customer in mind. It's not a "Hey, give me your data," and, you know, "I have smart people, and I can do this for you." I see that a lot. And I will say this: For us, my group specifically, we're more interested in products than we are in services. That's not to say that we won't fund services. Like, an example of funding services is, "Hey, I'm gonna develop this. I have this great idea for a risk methodology, and this is what I'm bringing to the table, and it truly is unique." There, you know, we're funding someone to create something that is really novel and cool and applicable, right? But in general, we like to see products. We're not into, you know, consulting services; we don't tend to fund things like that. Dusty: Yes, we can't under the LRBAA, so, perfect, yes. Good to know. Good to know, yes. All of the efforts under the LRBAA have to be--you know, you're coming forward with your idea, and this is the product that you're going to be producing at the end.

And thank you so much for the comment on doing your research, 'cause I actually skipped one of the questions I wanted to ask. So the LRBAA is, like I said, broadly open to a very wide swath of the public, and we are always very, very interested in making sure we're reaching all the places where there is thoughtful innovation. If there's a place that we aren't currently reaching, you know, we want to. At the same time, sometimes those folks in those areas may not have as much exposure or knowledge about DHS, what our missions are, what things may fit. So, for instance, you know, they have this great idea for a new technology, and it kind of goes back to something Syed mentioned.

Is it big and bulky? Or if it's something that has to go onto an ambulance or something where there's already a lot of equipment, how does this integrate into it? So can you help folks to better understand where to get some good resources? It's easy to do research, but you don't necessarily know, "Okay, is this the research that's key to helping me understand what the needs are?" Syed: Absolutely, Dusty, and that's a great, great segue into looking at, you know, what are some of the national forums that really highlight some of these various technology areas? I can speak for modeling and simulation, for example: looking at the National Training and Simulation Association, or I/ITSEC, the Interservice/Industry Training, Simulation and Education Conference. I think I got the alphabet soup right. So looking at that, looking at ModSim World, looking at the simulation interoperability working groups, looking at the various other M&S conferences and forums around the country, just to name a few, and seeing how the industry is shaping up for this particular domain, seeing what others are doing in this area.

And, again, a lot of these forums are widely attended by various government agencies, so how are other organizations and agencies within the government at the federal, state, local, and tribal level using these various types of technologies, and what are their plans for future application of these types of technologies? I think these are all great forums in which to get that kind of information: looking at technical journals, looking at professional publications, and so on, and so forth, and really seeing where the field is headed. Alex, can you mention a couple in the data analytics domain? Alex: Yeah, so just to get a sense of our thinking, you know, when it comes to big data and AI, we actually participate in a lot of interagency working groups, and so the NITRD, N-I-T-R-D, groups on AI and big data are a couple of places to look, just to think about, big picture, how we kind of see things. But I mentioned benchmarking earlier, and NIST. You know, NIST puts out tons of data, since they do evaluations all the time, and when you want to test your product and say, "Hey, look, I actually tested my product, and it does what I say it does," that's a great place to look. But, you know, the other thing I see sometimes is that people will claim their product is unique, and, you know, we, as the government, are always speaking to academia.

We're always speaking to industry or foreign partners, et cetera, and we actually go to conferences like Strata and Hadoop World. So if you really think your product is unique, look in those forums, you know? They have many, many events every year; see if your product really is unique. I think events like that are just gonna be helpful to validate and make your proposal that much stronger. Dusty: I like that, and it helps to go into my next question. So we talked a little bit about the work, you know, some of the things that have already been done under the LRBAA or under other efforts within your portfolios, and, you know, the whole point of making sure the white paper you're providing is unique. And I always wanna caution people: listen when Alex was talking about, "Hey, there's this area of work that we did that needs to bake a little bit longer." That means that that's not something that's solved.

So if you see a success story out there, understand, "Okay, we're saying we have taken care of this," and kind of goin' back to what both of you said is that means we're moving then onto another challenge, and often I get companies comin' up to me and saying, "Hey, I saw you guys were doin' this. I can do that too." It's like that's fabulous.

We've got that handled. So come with something new. Come with something innovative. Come with something unique. Understand what that is and the value that it brings.

So, given that, are there any thoughts on ideal outcomes from the solutions and work and effort that can be done under this LRBAA topic? Alex: Yeah, so in terms of ideal outcomes, I think, you know, our mission is to deliver solutions. Unlike a lot of agencies who perhaps do research that's very far out, like, you can think about cancer research, for example, we don't have that luxury. I think ideally, whenever we engage in research, we wanna actually deliver a solution to a partner, because it doesn't help for us to have something sitting on the shelf. You know, we're always being asked, like, "What've you done for, you know, DHS missions lately?" And so, yeah, so, Syed, do you wanna add to that? Syed: No, I think you covered the bulk of it, and, again, you know, we'll keep looking at our mission alignment, for example, looking at where we, as an organization, fit, and so on, and so forth. There's a lot of information that can be drawn from a lot of different sources. Dusty: Thanks.

And so I, of course, threw everybody off by missing the question earlier, so I meant to tell folks that for the resources that we were alluding to, we would be throwing up some of the links, and we also will be putting them in the chat--if we haven't already, somebody's gonna be putting links in there--so that you guys don't have to spend a lot of time. But we're gonna be going to questions next, and so, if you have questions you'd like to put out for consideration for folks to answer, we'd like to do that. So one of the ones that I know was submitted for this webinar is related to "What is transition?" And there were different variations of that question, so I'm just gonna kind of combine them into "What does transition look like, a lot of times, for data analytics products and modeling and simulation?" Alex: So my favorite answer: "It depends," right? So you heard me describe before, sometimes we build up the environment, we test the tools, and then we actually deliver it. Other times, we may work with a vendor; perhaps the product isn't quite ready for DHS yet, but we further develop that, and that might be like a licensing model, you know, where, "Hey, we get it to a certain point where it's relevant to DHS, and now the DHS components can procure it via licensing." Of course, there are others where, if it's unique enough--that's a big "if," right?--we'll build something that will be unique to DHS, and then it will just go into their operational environment. But I think it's very important for me to say this: Given what I described as different levels of analytic maturity within DHS, again, some people, data scientists, they're, you know, off to the races, and others, not so--open source, but commercially-- Dusty: Alex, you're goin' in and out. All right, Syed, do you wanna-- Syed: So did Alex drop off? Dusty: Yeah, I think we've lost her.

Maybe she'll come back. But in the meantime, do you wanna offer some thoughts on the transition? Syed: Yeah, so, again, you know, just like Alex was mentioning, right? I mean, it's the full range of TRLs that we're looking at. Apart from, as Alex had mentioned, we're not focused a whole lot on the very basic, so the very, very low TRL; let's put that aside for now. But if you start looking at mid- to high-TRL items, depending on where they are in the technology readiness of that capability, we may further develop the capability, or further research it, or we may start working with some of our component partners to transition some of that capability, working with the DHS S&T Technology Transition Office, either as a transitioned product or capability or as a commercial product back to industry that could then be further developed. So, again, I would say the gamut is fairly wide in terms of whether it's purely for transition or whether it's for further research within our own organization. Again, there are plenty of capabilities that we'd like to further develop if there are things being developed in industry that we think may be applicable to the Homeland Security Enterprise domain.

Three years, five years, ten years out from now, we may start playing with it in our labs. We may start experimenting with it, trying to understand how this will impact the mission. Or, if it's a fairly mature capability that a component is interested in looking at or utilizing, then we may experiment with it for a short period of time, but then really start looking at how this can be applied to the mission space right now and start the transition process at that point, once we have a working prototype or capability. Dusty: Which, that's perfect. You answered both a TRL question that came in and started down the path towards answering another question that came in. So, folks, this is somebody that's either within the DHS community or has worked with them before, asking about how this topic--or the LRBAA in general, but I know it's different, so we'll talk about it for this topic--plays with MCS PMs. And I think, going to what you were just saying, when we're talking about some of them being closer to fundamental research and some of them being closer to transition, for those ones that would be close to transition, we certainly would have the MCS PMs and the PfMs from component engagement involved with the-- Syed: Right, and I think that that's a great segue into the inner workings of how we operate within the building and working with various portfolio managers, working within Mission Capability Support and the various program and project managers. Again, a lot of that is internal sausage-making, right? But ultimately, it's how do we, as an enterprise, work to deliver the capability? So depending on where a proposal falls on the TRL, what the maturity of that capability is, it could be just for internal research purposes, in which case we may look at it organically as a technology center. However, if there is significant component interest in that capability, then that's where the portfolio managers and the MCS PMs come in.

We're saying, "Hey, look, we're actually maturing this technology deliberately for transition," in which case a portfolio manager has a validated and vetted requirement from a component partner. Mission capability support is ready to take it on as a life cycle project within one to three years. We have to look at the resourcing within the building for the pilot deployment and capability development, and then working across the board. Again, it's really about collaborating with the portfolio managers out of--within the organization as well as the MCS, Mission Capability Support PMs, and then really being able to develop and mature that capability for transition out, and, again, there are other partners that have to come into play as well, and you can talk about the transition office, TST. You start looking at the transition partner on the component side.

You look at, you know, if it's an IT system, for example, going through all the accreditation, the authority to test, the authority to operate. Again, transition of a capability is a lot more than just handing it over the fence. It's about really working across the enterprise to get that capability accepted by a component partner or, quote, unquote, "customer," if you will, so that they can then take it and run with it, and also providing the future modernization and future research on that capability to ensure that we don't run into obsolescence issues. Dusty: Oh, so Alex didn't just leave us.

She's back. Thanks for rejoining, Alex. We've been puttering along without you, but I appreciate having you back. So, I think that's great information, Syed. There is a question here that I think is really interesting and could use advice from you guys on how to approach. I'll do a short intro for it: there's a question about interest in topics that cross over potential areas of solution. Within the LRBAA, if we get two submissions that are exactly alike to two different topics, we call the folks or e-mail 'em and say, "Hey, you can't submit two identical proposals to two different topics." We wanna make sure that we're looking at this--like we said, we do engagement across the board when appropriate--so, you know, select the one for which this is most appropriate.

That doesn't necessarily mean we're not interested in understanding the value that may be brought to another area, which may, in fact, make that bang for the buck better. So can you talk about the best way for somebody to maybe address that in the white paper submission so that it's understood there's this other potential area? Alex: Just to make sure I understand the question: is it about a topic that may be relevant to different missions, and so people are asking about submitting it to different topics? Dusty: It says, "Interest in projects that are cross-topics." So theoretically, it could go two different ways.

Either way, we can answer for either, which is, you know, the totality of the solution crosses them, or it could apply to two different program areas. Alex: Yeah, so, I think we're certainly open. So, here, the philosophy from my group is, you know, when we look at a specific product, we are looking at it from a cross-mission perspective.

So I think that submissions like that are just fine. However, what Syed said earlier about the value of what you're proposing just needs to really come out, and part of showing the value is showing us that you actually understand how it fits into these different mission areas. It can't just be "Hey, it fits into, you know, border security, including maritime." It can't just be a list.

We'd like to really see that the submitter has an idea of how it fits and the difference it's gonna make for those areas. Dusty: Yeah, so, I think, from my perspective, we have to wrap up here. I think the key is there. You can mention it.

You don't wanna spend too much time on multiple different things. You wanna be able to show a distinct benefit to the folks that you're targeting this to. If you have a solution that is that flexible, you can submit two entirely different submissions, but they must be different submissions. They must address those different areas and give your full and thoughtful attention to each of them. So I'm gonna say thank you to Alex and Syed before we do a couple of things in terms of showing where folks can get to the topic we've been talking about. I will have to provide the caveat that we've had some technical difficulties with our portal, and so--I think we have a slide that shows, for the future, how to get to it--yes, here we go. So the portal, the website, is there.

You're gonna go there, and you can click on "Funding Opportunities." That's gonna take you to a page, and you can see the red rectangle there that says "Select." Under "Research Area," select "Protecting from Terrorist Attacks."

Again, there's "Topic Areas" and "Research Areas." For LRBAA topics, you wanna make sure you're under "Research Areas." So you're gonna look for topic number "PROT 02 01," and then that will provide you links on how to create a submission.

So, very, very much thanks to Alex and Syed for today's episode of "LRBAA Today." I've very much been looking forward to it. Alex knows that I kept telling her, "I'm so excited about this topic." We do have another "LRBAA Today" planned. We are going to be doing that next month, and I think we have a slide that has the details for that--yep.

On August 18, at 2 p.m., we're gonna be doing "LRBAA Today" on "Screening at Speed." So this "LRBAA Today" was recorded, and once we are able to get everything 508-compliant, we'll be posting it onto YouTube. We'll be letting the folks that participated today know that it's out there and put it up for other folks, and we look forward to seeing you on August 18 if you're interested in the "Screening at Speed" topic. Thank you very much. Have a great day.

Syed: Thank you so much, Dusty. It's been a pleasure. Alex: Thank you, Dusty.
