NIH Proof of Concept Network Common Metrics and Outcomes Tracking

[Show video]

With that, I will introduce the next speaker for the NIH Proof of Concept Network common metrics and outcomes tracking, which will be given by Alan O'Connor. Alan?

Thank you very much for that, Krishan, and I hope everyone's doing well. This is usually one of my favorite things to do during our annual get-togethers: the opportunity to connect with everyone and report on the success of our network. One of the exciting things about this particular year is that we have a full and open public meeting, which makes it a great opportunity to report on what we're doing across the network: our outcomes, the processes we use, and some of the ideas and insights we have accumulated over the past seven years of running the NCAI and REACH programs.

For those of you who are not familiar with me, my name is Alan O'Connor. I am an economist and program evaluator based at RTI International with a particular focus on evaluating science and innovation programs; these have been a hallmark of my career, something I've really focused on, and I joined this network in 2015 to help develop strategies for tracking its overall outcomes and performance.

What I thought I would do today is share some thoughts on why we do monitoring and evaluation, offer a few highlights about our evaluation program overall, touch on some of the metrics we use for tracking technology progression to market, share some insights about the infrastructure and overall approach we use, and then summarize the current status of the portfolio with respect to progress to market, including follow-on funding, startups, small business support via STTRs and SBIRs, and of course market approvals.

Market approvals are probably the most important thing for NIH, because the overall mission of the NIH is to turn discovery into health, and these programs are great mechanisms for working with academic innovators on taking their discoveries and ideas and moving them toward the bedside to assist patients.

I will note that today I'm primarily going to focus on funded projects and on technologies as outcomes. There is a portfolio of around 386 projects that have been collectively supported by the 11 NCAI and REACH sites, and we're going to talk about the progression to market and the outcomes for those funded projects. I want to emphasize that a big portion of the overall goal of these programs is culture change: stimulating and engaging academics in translational science, building capacity, and connecting institutions more strongly with their regional innovation ecosystems. There really is a broader focus to these programs that is very important to keep in mind; today I'm only covering the small portion of our outcomes tracking that is associated with the funded projects.

Those projects primarily stem from two programs, NCAI and REACH, both the 2015 and 2019 cohorts. The NIGMS Tech Transfer Hubs are also an important part of this network, but they have only a very small number of projects that are still being incorporated into our metrics, so I'm not going to present on those today; we will at a later date.

Just as a quick refresher for those of you who may not have been able to attend yesterday: the NCAI and REACH sites are distributed across the country, from Maine to California. The states in which there is a REACH or NCAI institution are highlighted in blue.
There are 11 sites. Some of the sites are consortia, namely MBArC; the University of Kentucky and University of Louisville KYNETIC program; the Long Island Bioscience Hub; UC-CAI in California; NCAI-CC in Michigan, Illinois, and Ohio; and B-BIC, which is largely concentrated in the Boston area but also includes institutions in Maine and Rhode Island. Other programs are single-site: WE-REACH, based at the University of Washington; SPARK-REACH at the University of Colorado; and Rutgers HealthAdvance, based at Rutgers University. Collectively, more than 70 institutions are included in the network. The largest numbers come from MBArC and KYNETIC, largely because MBArC has a broad collection of institutions across Missouri and Nebraska, and the KYNETIC program also includes the technical and community college system for Kentucky.

So that's a quick overview of the network, just to remind everyone of the details that Matt reviewed yesterday morning. Here I want to touch on why it is that we do this. I'm on the faculty of the International School for Research Impact Assessment, and we always tell our funders and research-performing entities that it's really about the four A's.

First, there's analysis, which is really about learning so that we can make evidence-informed strategy, program design, and policy decisions going forward.

There's also an accountability function, which is particularly important in this program. Although the NIH and Congress have provided a substantial amount of resources for NCAI and REACH, there's also matching support coming from the institutions that are part of these networks. Those institutions committed their own resources, either from their state governments or from institutional accelerator programs, to support and encourage progress within their own communities. So it's important that we're accountable not only to the federal stakeholders and the NIH, but also to the institutions that have allocated very scarce resources to support these projects and further these programs, because they're that invested in the mission.

Accountability is really something that's very important to us. There are also allocation decisions: how do we allocate a marginal dollar? If I have a certain pot of money that I can put into a translational science program, what's my programmatic mix? What are the types of levers that I want to play with? Where do I put that marginal dollar? Sorry, I'm an economist, so I'm going to use terms like that, but that's really important for helping to shape strategy and make effective decisions.

And lastly, there's advocacy, which tends to be more about providing data, information, lessons, and insights so we can talk about the importance of these programs, share best practices and lessons learned, and also say: here is what we're learning from this experiment, this is why it was valuable, this is what you should take away from it, and here is how we can implement that in our formal culture, our organizational structures, or the other aspects that characterize our research ecosystems.

So what's RTI's approach? We designed this in close collaboration, probably in 2015-2016, working with NHLBI in particular as well as representatives from other parts of the NIH, to come up with a holistic evaluation plan that would be meaningful, useful, and that would provide value for all the reasons I just described. In a nutshell, it involves several components.
First, engagement with the site teams, who are developing and implementing new strategies and approaches for managing translational science and encouraging academic entrepreneurship. That also means they're developing strategies for cultivating an application pipeline.

Likewise, we're really interested in the profiles of the applicants who are attracted to these programs. One of the things that's always been amazing to me is that both REACH and NCAI have been able to reach innovators who don't have a whole lot of experience in commercialization, and yet they have an idea, they're attracted, and they want to try it, especially earlier-career investigators and their postdocs and associated teams who get excited about the program. It's great to learn about their experience, and it's interesting how you get this diversity of perspectives: some are the usual cast of characters, tenured professors who really know how to move technology and have a lot of experience in commercialization, while the other half have very little experience. What's interesting is that the outcomes are largely the same. It's been great over the years to see these innovators grow and mature as entrepreneurs; some of them are now CEOs.

We also spend a lot of time working with the funded innovators to understand their program experience and to get suggestions and insights: how things can improve, what works well, what doesn't, and whether a particular site has a secret sauce from their perspective, because they are the target customer in many ways.

Then there is what I'm going to focus most of my talk on today: tracking projects over time. This really relates to the technology's progression and the commercialization milestones and outcomes. I don't want to move on without saying that a big part of what we're doing is also thinking about skills development and engagement, and of course throughout this whole process there has been sustained collaboration between RTI, NIH, and all the sites. Papers are being published, and data is shared freely across the network. That's really one of the things that has characterized this network: it's very much a place where people can collaborate, share their insights and best practices, troubleshoot challenges, and have a sustained dialogue focused on particular issues in academic innovation, particularly as it starts to connect to entrepreneurship.

So what are the summary metrics categories we care about? That, after all, is the title of my talk: common metrics. Here's a quick overview. As Matt was saying yesterday, at the beginning there were about 400 ideas for different metrics to capture, and I remember getting a call one day from one of the Deloitte team: "Alan, we have this list, it's about 400 items long, and we're trying to sort out what's meaningful. What's the best approach here? Can you help?" And I said, yes, I want to do this; this is exciting stuff.

So what we did is start streamlining everything down to what would be meaningful. First, we think about the innovator profile: what is their commercialization experience? Have they licensed something before? Have they been part of a startup? Have they engaged with industry before? That's important to know, because you have your usual cast of characters, as I mentioned before, but you also have the newbies, and in many ways those are the targets when it comes to culture change.
Then there are demographics. You need to think about what your representation is and how that compares to what you know about the broader innovator community overall. So we think about race, ethnicity, academic rank, gender, and other characteristics. We also track the technology profile: the technology type, such as diagnostic, assay, or therapeutic, and the disease areas, such as cancers and cardiovascular.

Those are some of the high-level things that are really important to us. Now, when it comes to the funded projects and tracking progression, our ultimate goal is market availability. The mission here is: how can we take these discoveries that we've invested so many basic science dollars in and push them out as solutions that will help patients? So market availability is the number one thing we're paying attention to.

There are other metrics we can use as signals along that path, because we know it can take 10 years in the private sector, and roughly 17 years for most translational science programs, to really move a lot of these technologies along to the market. Some of the things we track on an ongoing basis are startup formation and growth, intellectual property status, licensing status, SBIR or STTR applications or awards, and how much follow-on funding there is and who is providing it: is it coming from NIH, a strategic partner, or a venture round? Those are things we pay close attention to.

Likewise, we pay attention to technology maturity. DOD and BARDA have done everyone a favor: they've come up with a basic rubric for biomedical technology around technology readiness levels, or TRLs, and we have adapted them to the clinical and translational science setting so we can use them to track the maturity of a technology over time.
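To make these categories concrete, here is a minimal sketch of what a single project record in a common-metrics scheme like this could look like in code. All field names, enumerations, and values here are illustrative assumptions for the sake of the example, not the actual schema used by the network.

```python
from dataclasses import dataclass, field
from enum import Enum

class TechType(Enum):
    THERAPEUTIC = "therapeutic"
    DIAGNOSTIC = "diagnostic"
    DEVICE = "device"
    RESEARCH_TOOL = "research tool"

@dataclass
class FundingEvent:
    year: int
    amount_usd: float
    source: str  # e.g. "NIH", "venture", "strategic partner" (illustrative labels)

@dataclass
class ProjectRecord:
    project_id: str
    site: str
    tech_type: TechType
    disease_area: str
    trl: int                        # current technology readiness level (1-9)
    prior_commercialization: bool   # has the innovator licensed / started up before?
    startup_formed: bool = False
    licenses: int = 0
    options: int = 0
    follow_on: list[FundingEvent] = field(default_factory=list)

    def total_follow_on(self) -> float:
        """Sum all follow-on funding recorded for this project."""
        return sum(e.amount_usd for e in self.follow_on)

# A hypothetical record, not a real project
p = ProjectRecord("R-001", "WE-REACH", TechType.DIAGNOSTIC, "cardiovascular",
                  trl=4, prior_commercialization=False)
p.follow_on.append(FundingEvent(2021, 750_000.0, "venture"))
print(p.total_follow_on())
```

Keeping the record flat like this, with funding as a list of dated events, is what makes it possible to answer both "what's the latest status?" and "how much came in during a given year?" from the same data.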
One of the other things I'm a big fan of is AUTM, so I'm stoked that they're presenting later today. AUTM has worked really hard for the past couple of decades on common definitions, so why reinvent the wheel? They have great definitions for everyone to leverage; they're well understood and well characterized, so we use those. And of course we also track the regulatory pipeline, because that's what allows us to say how close we're getting to potentially having a product on the market.

So how do we collect the data? We have a tool that we've created in collaboration with the sites over the past few years called the UpdateTracker, which is really just short for technology update tracker; we were not feeling particularly creative, and the name stuck. It's a web-based infrastructure, and we jokingly say it's like TurboTax for technology development projects. There are wizards and recording features, and there's business logic, all programmed in by a stellar team that essentially sees the world as an endless series of databases. They know how to characterize and pose questions in such a way that it's easy for the sites, our collaborators in this, to provide information. We can also ingest data from Salesforce or other CRM tools, and there are a lot of role-based access control features for security.

The way we go about this is that the project managers at each of the sites, who are overseeing the technology development projects themselves, meet with their staff during update meetings, ask a few questions using an iPad or a shared screen, and populate the data as they move through their milestone meetings.
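On the security side, role-based access control at its core is just a mapping from roles to permitted actions. This is a generic sketch with hypothetical role and action names, not the UpdateTracker's actual implementation:

```python
# Minimal role-based access control sketch. The role and action names
# below are invented for illustration.
PERMISSIONS = {
    "site_pm":   {"view_project", "update_milestones", "edit_funding"},
    "evaluator": {"view_project", "run_reports"},
    "nih_staff": {"view_project", "run_reports"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("site_pm", "update_milestones"))  # True
print(is_allowed("evaluator", "edit_funding"))     # False
```

In a real deployment this check would sit behind the web layer and the role would come from the authenticated session, but the core idea is exactly this lookup.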
Sometimes it takes a minute; sometimes it leads to a broader conversation about commercialization strategy, so it's also an educational tool. Then we go through a process of validating the data, largely using a whole host of third-party data sources. Generally speaking, this approach has been well received by the teams, and we're really happy that over the past several years it has made it relatively easy for us to collect technology outcomes data. Now, if you were to talk to an individual person at one of the sites, they would say: we still have to collect information from innovators, and we still need to plan this into our workflow. So it's not zero effort, but it's not painful, and we're not asking for a whole bunch of stuff that no one will ever look at or use, which is part of the strategy.

Just to give people a little show and tell: we have a website called the UpdateTracker where everyone can create an account or log in. Once you're there, you can enter details about a project: a project abstract, a lot of metadata, and a running timeline of what's going on with the project, so that the technology managers, the Technology Transfer Offices, or whoever can go into the site later, review the progress, and have an informed conversation. The other nice thing is that once the data is in the database, there are ways we can visualize it so that people can run queries, download graphics, and prepare for presentations. What's great about it is that it saves the NIH and the sites from a constant stream of one-off emails and conversations asking for updates, which really starts to drive people a little crazy.

So here's the project update wizard that I mentioned, and here's an example of the application pipeline visualization, which shows the total demand, so to speak, for the programs. We also have interactive follow-on funding reports. There's a variety of tools we've made available using this infrastructure, and that has made it easy for us to track so many projects over time, with the sites seeing all the data that we have; any data we find from third-party sources we populate in here as well, so everybody stays on the same page about how the projects are doing.

I want to offer a few additional editorial remarks about metrics tracking. As I mentioned earlier, it's really important that everyone always be mindful about whether each metric contributes to meaningful situational awareness. If a metric is being proposed but you can't say how you would make a decision with it or what it would really tell you, then it's probably not something you need to collect. I mentioned earlier the critical importance of harmonizing common, clear definitions. And something that Kathleen Rousche, the program officer for NCAI, shared with me: make sure you tell them it's really important to start early, plan, and prepare, because the last thing you want is to have to go back and collect information when you're five years into a program.

One of the other things we've done is provide resources and tools; we always strive to create and sustain value for the user community.
We have a monthly development cycle to make sure we're meeting everyone's needs, because we want to keep our customer base, the sites, happy. We'll make tweaks and adjustments to suit their purposes and help sustain engagement. Honestly, we've grown and adapted over time. This is not a one-time engineered solution; we've kept growing and morphing to make sure our infrastructure and our evaluation program are as dynamic as the technologies and programs we're tasked with monitoring and evaluating.

The last thing before I launch into an overview of our high-level metrics: I want to remind everyone that these metrics are just a snapshot in time, and they do not reflect important intangibles. These include things such as culture change; learning and demonstration effects; professional development; and whether there are insights from these programs that inform one's tech transfer and innovation management strategies. The data also tend not to give a good picture of the underlying institutional research strengths, mission, or portfolio composition. If you're only looking at summary metrics, then you're looking only at the outputs and outcomes and forgetting to ask what you're working with and why it looks that way. There can also be some regional context, but I will tell you that regional context is not as important as people think. The upper Midwest in particular punches way above its weight class when it comes to innovation performance, which is not what you might expect when people think about the concentrated ecosystems of Boston, the San Francisco Bay Area, or San Diego.

Before I go on, there are a couple of things I want to remind everybody of from Matt's talk the other day.
The first is that the NCAI program is focused on heart, lung, blood, and sleep, while REACH 2015 and 2019 have a pan-NIH mission. Something else that I didn't put on the slide is that there's also a resource difference between NCAI and REACH. The REACH sites generally have about two million dollars of capital that they're working with per year, and the first cohort was funded for about three years with some extensions (it ended up going a little bit longer); for the current cohort of sites I believe it's four or five years. The NCAI program, by contrast, was funded for a full seven years. So there are some programmatic differences, and those will of course show up in the results when we start talking about the numbers.

I don't want everyone to focus on this slide too much. The main takeaway is that the overall composition of the funded projects between NCAI and REACH, when it comes to technology type, is more or less comparable. The big difference I've noticed is that NCAI, with its larger resource base, was more likely to have additional therapeutic devices.

Across therapeutic areas, again, NCAI has a real focus on heart, lung, blood, and sleep, while REACH has a strong weighting toward cancer, though there are cardiovascular technologies in that portfolio as well. There's also a large number of research tools, apps, and health IT that don't have a specific disease or organ indication. That's something that's been really interesting about the REACH portfolio in particular, because some of our innovators have put out apps, new APIs, and other health technologies that, for example, first responders can use. So there's a real difference in the technology composition, and that relates to the underlying research strengths of the institutions from which these projects emerged.
In terms of follow-on funding: right now, across the whole network, we are approaching 1.9 billion dollars in follow-on funding, the vast majority of it from the private sector. In 2021 alone there was 980 million dollars of follow-on funding invested in this technology portfolio, and it's been increasing over time. There were a couple of early investments that were quite substantial: some of the earliest projects picked off low-hanging fruit, really accelerated technologies forward, and secured large amounts of follow-on funding. But there are also a lot of technologies attracting additional investment. Across the 386 projects, in 2021 alone there were 43 investments of less than one million dollars, 19 investments of more than one million dollars, and new strategic partner funding totaling more than 700 million. So the amount of interest this portfolio has attracted is quite substantial; keep in mind that many of our innovators had very little commercialization experience prior to participating in these programs.

As would be expected, some of the earliest projects, funded back in 2014, have had more time to germinate, so to speak, and a couple of those have led to quite significant investments from partners. But as you look across the years, there's actually a nice distribution of follow-on funding events. For example, out of the 2019 funded-project year, 15 of those 31 projects are already getting follow-on investment, and for 2020 projects, 17 of them. So we're really seeing a lot of interest from investors in the projects that have been selected for this portfolio.

Here's a high-level summary of our funding by source.
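Counts like "43 investments under one million dollars and 19 over" fall out of a simple aggregation over dated funding events. A minimal sketch, using invented event data rather than the real portfolio:

```python
from collections import Counter

# (cohort_year_funded, event_year, amount_usd, source) — invented example data
events = [
    (2019, 2021, 500_000, "venture"),
    (2019, 2021, 2_400_000, "strategic partner"),
    (2020, 2021, 150_000, "NIH"),
    (2014, 2021, 75_000_000, "strategic partner"),
]

year = 2021
small = sum(1 for _, y, amt, _ in events if y == year and amt < 1_000_000)
large = sum(1 for _, y, amt, _ in events if y == year and amt >= 1_000_000)

by_source = Counter()                 # total dollars per funding source
for _, y, amt, src in events:
    if y == year:
        by_source[src] += amt

print(small, large)                    # 2 2
print(by_source["strategic partner"])  # 77400000
```

The same event list also supports the by-cohort view ("15 of 31 projects from 2019 already have follow-on investment") by grouping on the first field instead of the event year.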
Most notably, Amgen and some of the other large biotech companies have really invested in technologies that were rooted in this particular portfolio. If I censor out that big money for a couple of projects, here is what the balance looks like: something like 300 million dollars in venture capital has been attracted; there is a lot of other follow-on NIH funding for clinical work, further translational work, contracts, and so on; and other parts of the federal government are investing heavily, with about 36 million coming from the SBIR program and about 4 million from STTR.

In terms of success rates for SBIR and STTR applications, we're seeing a pretty healthy SBIR success rate of somewhere between 40 and 50 percent, which is really high; the typical rate would be about half that, if that. The SBIR success rate is really great, and that's important because we're trying to connect these technologies and the startups that are formed to the SBIR portfolio. There's some other data here that people can look at later, but I'm going to keep moving for time.

In terms of cumulative startups and jobs created, we're now at 101 startups overall, and 48 projects have both a startup and an SBIR or STTR application, so there are a lot of connections between these different programs, which is great to see. Something else we observed in the latest data is that the median time from project start to startup formation is about seven months. Most teams are learning pretty early on what the best commercialization mechanism is for their particular technology: should they try licensing, or should they pursue startup formation?
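A statistic like the median seven months from project start to startup formation is straightforward to reproduce once both milestones carry dates. A sketch with invented dates, not real portfolio records:

```python
from datetime import date
from statistics import median

# (project_start, startup_formed) — invented dates for illustration
pairs = [
    (date(2019, 1, 15), date(2019, 6, 30)),
    (date(2019, 3, 1),  date(2020, 1, 10)),
    (date(2020, 2, 1),  date(2020, 8, 15)),
]

# Convert each gap to months using the average month length (~30.44 days)
months = [(formed - started).days / 30.44 for started, formed in pairs]
print(round(median(months), 1))
```

Using the median rather than the mean keeps the figure from being dragged upward by the handful of projects that take years to spin out.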
For licenses and options: some small businesses are actually coming in and licensing technologies, which is really great to see, but we're noticing it takes a couple of years between when a project starts and when someone options or licenses the technology. I think right now we're at 16 licenses, and there are 11 options.

I mentioned earlier how we track technology progression. At a high level, BARDA has a structure that goes all the way from a review of the scientific base to FDA approval. Along this trajectory, in collaboration with the sites and using some tools, we have developed a plan whereby many of the sites are now trained to identify, track, and demarcate the status of a particular technology, and there are resources and tools available to help them with that process.

What I find particularly interesting, if you focus on the right-hand side, is that the typical time progression to market for a translational science program is roughly 17 years. That's based on a meta-analysis done several years ago, and it's a well-known reference point for how long it takes to get projects from an academic lab onto the marketplace. What we've seen with this program is that in fewer than eight years there are products on the market. What this graph is showing is broad progression depending on how long ago your project was funded (the project age along the x-axis): blue means at least one TRL advancement for a project from that particular cohort or year of support.
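The cohort view behind that chart reduces to counting, per funding year, how many projects currently sit above their entry TRL. A sketch with invented records:

```python
from collections import defaultdict

# (funding_year, trl_at_entry, trl_now) — invented example records
projects = [
    (2014, 3, 7), (2014, 2, 2),
    (2019, 3, 4), (2019, 4, 4),
    (2020, 2, 4),
]

advanced = defaultdict(lambda: [0, 0])  # year -> [advanced_count, total_count]
for year, start, now in projects:
    advanced[year][1] += 1
    if now > start:
        advanced[year][0] += 1

for year in sorted(advanced):
    a, t = advanced[year]
    print(f"{year}: {a}/{t} projects advanced at least one TRL")
```

Because the comparison is entry TRL versus current TRL, the same tally works whether a project moved one level or, as in the one case mentioned, from TRL 2 all the way to market approval.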
You see most of the technologies advancing at least once, and the interesting thing is that for technologies between four and seven years old, the outcomes in terms of progression are similar, implying that once you get going, that momentum can be sustained, and that depending on the broader characteristics or market dynamics, there can be a strong pathway to the next stages of technology maturity. I think we had one technology that went from TRL 2, very early proof of concept, all the way to market approval.

Speaking of market approval, here are some updates on where we are. I mentioned at the beginning of my talk that we really care about market approvals. On the market today there are four technologies that have regulatory clearance for clinical use; there are three additional technologies on the market that do not require regulatory approval; and there are an additional 18 technologies that are in clinical trials or research settings right now. That's a phenomenal success after only eight years; I have not seen such a strong rate of progress in other programs. A couple of highlights: some pulmonary stents, and a great first-responder toolkit that received a lot of attention because of its use by healthcare workers struggling with the trauma and challenge of the pandemic. Some of these tools have really had quite an impact on the community.

Now a couple of parting remarks, and then I'm going to open things up for questions. There are some things that these teams have done very, very well.
one is site team composition. They've chosen leaders who have a strong track record of academic entrepreneurship, they have good visibility, and they're supported by really strong teams around them. They also have empathetic project managers with industry product development expertise, and empathetic is really important because, remember, a lot of the innovators in this particular community have gone kindergarten to postdoc to a faculty position. They're very good at the science, but many of them have never moved a technology or a concept into the translational pipeline before. So being paired with an industry-trained product manager who has really good communication skills and can coach that innovator can add a lot of value and ensure that the technology development project doesn't drift into hypothesis-driven research but stays on track toward what we need to do to add value and get your great idea into a product or service to help somebody.

Something else that characterized the sites was close integration with Technology Transfer Offices. In some cases the Tech Transfer Offices are part of the leadership teams, or in a couple of cases are the leadership teams themselves. They also leverage resources from their CTSAs and local innovation ecosystems.

Something else that was particularly effective was leveraging external advisory boards and proposal selection boards. We had one instance where a board for choosing applications for funding was composed basically of university administrators, senior faculty, and deans, and these are the wrong people to be evaluating technologies for commercial merit, scientific promise, and relevance for the market. So what we saw is that the
selection boards were retooled with life science executives, entrepreneurs, and people from the broader community.

Something else that characterized the sites was active outreach to the research community. The more the site teams engaged with faculty, the more likely faculty were to ask questions about the program, which increased the propensity to apply. So active engagement, not just passive engagement or newsletters, but actually going to meetings, being there, championing the program, and sustaining that, proved to be particularly significant.

Something else that's just a signal, because it's an n of 1, but I do like to share it, and I believe Paula Bates from the University of Louisville shared this in a talk a couple of years ago: essentially the full innovator-facing team from the University of Louisville ExCITE program were women, and it just so happened that in the end sixty-three percent of funded PIs or co-PIs were also women. In our interviews with those innovators, they said that representation, seeing someone like themselves in a leadership position as an entrepreneur, inspired them to take the next step forward with their idea. So I think those are probably themes that Monique and team will pick up on when they talk about diversity and inclusion here shortly.

Something else that was a great practice was treating application processes as an opportunity for learning and feedback. Watching innovators struggle to fill out a market plan or a business development plan was also an opportunity to coach and provide feedback. Something related to this is that for educational programming, many of the sites were starting to time their seminars and boot camps with the application cycles, and that proved to be particularly useful to help
those innovators as they muddled through their applications for technology development funding.

Of course, once the projects were funded, a lot of the innovators said the most important training was their one-on-one coaching with the site team and their project manager, because, as they put it, "they basically counseled me, they were my sounding board, and they helped me solve some of these challenges about how to add value to a technology when this is the first time I've ever done it." That was a really common anecdote we heard across the board. The sites were also effective at implementing milestone-based project management tools: tranched funding, fast-fail strategies, the use of target product profiles, and other tools, things that tend to be fairly new to an academic investigator but proved particularly helpful in building out their knowledge about how to move these projects forward.

So, high-level takeaways for me. I think this is the sixth time I've given a talk like this, and I really love it, but the thing that always strikes me is that there's a lot of evidence about how these types of programs can transition basic science discoveries to help patients. As a science policy nerd, I really think about the return on basic science investment. NIH is a 40-billion-dollar-a-year enterprise, and a large proportion of that goes to basic science research to help patients and people and to discover new knowledge and technologies. It's great to have programs that de-risk the probability that a great idea just stays a great idea and doesn't make it to a patient.

It's also really exciting to see how this fuels our small business innovation programs. It provides great technology both to existing small startups and also forms new ones, such as what we've seen
with the 101 startups that have formed in these two programs in particular. I'll mention it again: products to market in less than eight years, when the typical timeline from a translational setting is 17.

And then some of you likely run a small fund, either at a university or maybe a state-based innovation fund. If you're the Pittsburgh Life Sciences Greenhouse, or one of the Ben Franklins, for example, you're probably saying, well, I really want at least a 10x return. Well, if you apply the typical rate-of-return measures used by the broader venture community, either public or private, to this program right now, you're basically looking at 35x for NCAI and REACH combined. That's not bad.

And then the other parting note I want to share is that there's a strong base of program management insights emerging from these teams. Within the past year alone, I think three or four new publications have emerged, from Colorado-SPARK, from B-BIC in Boston, and there's one from the NCAI-CC team; we put together an overview. So there are resources available, particularly in the Journal of Clinical and Translational Science. If you're really interested in how you can improve program delivery, what types of metrics you should use, and what some of the key insights and perspectives are from the leaders of these really great site teams, those journals are the place to go to check out the takeaways. And with that, I will take any questions.

Thank you, Alan, for a great presentation. It's always good to hear you give an overview of the network, but also talk about why these outcomes and metrics, and their collection and standardization, are so important. So
while we have been answering a bunch of questions, there's one that it might be helpful for you to weigh in on. Someone is asking whether we have looked at innovator demographics in terms of that follow-on funding: gender, race, and all of those factors. Do we have any breakdown of that? If you could comment on that.

Yeah, actually, that's a really great question. We have started to investigate that. In general, I can tell you that, using the SBIR portfolio as a reference point, once you have your Phase I, SBIR outcomes are the same for Phase II transition and thereafter; there's no difference by gender or race. That means that what we really need to be focusing on, when it comes to broader inclusion in these types of commercialization outcomes, is getting the innovator to that first milestone, and in this program we're largely seeing strong results. The hard part is that we have noticed differences in the technology profiles by gender: a lot of the therapeutics are largely from men, while we have a strong base of research tools, diagnostics, and assays from women. That's the only thing I've observed, but we are starting to look across the portfolio. It's just hard to draw inferences because it's only 386 projects, so take all this with a grain of salt. It's just a signal, nothing you can take to the bank.

Thanks, Alan. We are trying to look at some of the data on the non-NIH side as well, but that's a little tricky, and it's always challenging to figure out the exact attribution, but we are interested in figuring that out and looking into it. Someone was asking about, I think, that
milestone-driven work and fail-fast philosophy versus hypothesis-driven research. There was a question about what that means, and whether you can clarify why hypothesis-driven research should be avoided. Before I pass it to you, I want to make a clarification: we are not saying that hypothesis-driven research should be avoided. That's most of what NIH does, our pure basic research. But for the kind of product development work we are supporting with the NIH Proof of Concept Network, the milestone-driven approach is what we are advocating for. I'll let you comment on that.

I think you're touching on it really well, Ashim. There's a time and a place for everything, and this particular program is focused on the additional validation and maturation of a known technology. It's filling a gap, because a lot of basic science funding is really looking at hypothesis-driven research, the generation of knowledge, not at validating and further developing something that you've already discovered. The focus of this particular program is to do that validation and the additional technology development work so the idea moves forward into something that's a practical, usable tool, therapeutic, device, etc. So that's the picture here: hypothesis-driven research is bread and butter for NIH; this program is distinct from that because it has a very particular lens on essentially moving that technology forward. That's the difference.

Thank you. I wanted to acknowledge Paula and Claire; they have put up the link for some of the articles, so please take a look, those are definitely a great resource. Someone was asking about some of the metrics that we are using and collecting: is it possible for people to access those? I think some of those articles might include them, but we can also think of ways to put them out, because nothing is proprietary; we want everyone to use the same data elements.

That's right. So, in response to this: for all the papers that I mentioned, you can go to the Journal of Clinical and Translational Science, or Paula Bates has noted that if you're interested in that news article she can get it to you. There's also a paper in Nature Reviews Drug Discovery. When it comes to this evaluation program, I'm in the process right now of crafting a full paper that describes our entire evaluation program as a reference tool, so if you drop me an email, I'm more than happy to share the evaluation plan that I've already produced and other materials; I can email them to you. But there's also a paper that's currently in development.

Thanks, Alan. One question about that 35x return on investment: how do we calculate that? Could you talk about that a little bit and how we are measuring it?

Yeah, absolutely. So it's a ballpark, a moving picture at any point in time, but it's essentially the non-federal follow-on funding as a numerator, with the federal support for these programs as the
denominator. The denominator will be the full cost. I forget the exact number off the top of my head, but I think for the current REACH cohort it's about 4 million per site, or 1 million per site per year for four years, and for NCAI it's a bit larger. So we lump all of that together and that becomes the denominator, and the numerator is the non-federal follow-on funding. That is about 35x. It changes, but that's the ballpark.

And in terms of how we know whether the follow-on funding was relevant to the REACH project, there is a lot of validation we do, right? We check with the project managers and program managers at each of the hubs and sites, and we validate with the innovator who was funded as well. We want to make sure that we are getting the right attribution, so we're very careful that whatever numbers we're quoting have attribution to the original REACH or NCAI project.

That's right, and we also have a data science team on our side that is essentially running crawls and gathering all the validation data. Really, the sites, in collaboration with RTI and NIH, have done a phenomenal job over the years getting all this down. Every now and then there will be something that we'll miss that they'll remind us of, or vice versa, but largely we feel pretty good about our broader situational awareness.

Great, this has been great. I think we are a few minutes over, but we're supposed to have a break, so that's totally fine. Thank you again, Alan. We will be sharing this recording at some point in the future; we're figuring out where and when, but you'll all have access to it, and we'll share the relevant links for all of
the papers that were mentioned earlier, so you all have access to those as well. As Alan said, reach out to us if you have any questions about some of the processes we are using and the metrics we're collecting. And Lena just put out a link for one of RTI's evaluation articles, so thank you all.

We have a break for the next 10 minutes, and then we'll have a very exciting discussion on equity, diversity, and inclusion with members of the NIH Proof of Concept Network EDI Action Committee: Eric Padmore, who is co-leading the NIH working group on biomedical entrepreneurship workforce development, Almesha Campbell from Jackson State, and Jonathan Fay from the University of Michigan. It's going to be a great panel, so we'll see you back here in 10 minutes. Go stretch your legs and we'll see you back here in a little bit.
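The return-multiple calculation Alan describes, non-federal follow-on funding divided by total federal program cost, can be sketched in a few lines. Note that the dollar figures below are illustrative assumptions chosen so the arithmetic works out to 35x; apart from the $1M-per-site-per-year REACH figure he mentions, they are not actual program totals.

```python
# Sketch of the return-on-investment multiple described in the talk:
#   multiple = non-federal follow-on funding / total federal support.
# All dollar amounts here are hypothetical, for illustration only.

def roi_multiple(follow_on_funding: float, federal_support: float) -> float:
    """Return the follow-on-funding multiple (35.0 means '35x')."""
    if federal_support <= 0:
        raise ValueError("federal support must be positive")
    return follow_on_funding / federal_support

# Hypothetical program: 4 sites funded at $1M per site per year for 4 years
# (the per-site REACH figure from the talk), i.e. $16M of federal support,
# whose portfolio later attracts $560M of non-federal follow-on funding.
federal = 4 * 1_000_000 * 4        # 4 sites x $1M/year x 4 years = $16M
follow_on = 560_000_000            # assumed non-federal follow-on total
print(f"{roi_multiple(follow_on, federal):.0f}x")  # -> 35x
```

As Alan notes, the real figure is a moving picture: both the numerator (validated follow-on funding) and the denominator (cumulative program cost) change over time, so the multiple is a ballpark at any given point.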

2023-02-12
