RIC 2022 T1: Am I a Robot? How Artificial Intelligence and Machine Learning Are Impacting Nuclear

Show video

Welcome. My name is Terry Lalane. I'm the deputy director of the Division of Systems Analysis in the Office of Nuclear Regulatory Research here at the NRC, and your session chair for "Am I a Robot? How Artificial Intelligence and Machine Learning Are Impacting the NRC and Nuclear Industry." Next slide, please.

AI is one of the fastest-growing technologies globally and is the next frontier of technological adoption in many industries, including the nuclear industry. As a modern, risk-informed regulator, we must keep pace with technological innovations while ensuring the safe and secure use of AI in nuclear facilities. Our expert panel today brings a range of AI perspectives and experience, from domestic to international: the nuclear industry, our federal partners and their approaches to similar questions regarding AI, and the NRC's current activities. Following the briefings we'll have an open discussion and an audience question-and-answer period, so please be sure to submit your questions throughout the session. Next slide, please.

It's my pleasure to introduce our panel today. Welcome to Mr. Gene Kelly, senior manager at Constellation Generation. Mr. Kelly has over 40 years of experience in the nuclear industry, including design, analysis, and licensing. He's a senior manager in risk management for Constellation Generation, responsible for risk-informed initiatives across the Constellation fleet. He was also the technical lead responsible for re-licensing of the Limerick nuclear station, managed engineering programs and designs at Limerick, and worked previously with the NRC as a branch chief and a senior resident inspector. Mr. Kelly holds a bachelor's degree in physics from Villanova and a master's degree in mechanical engineering from the University of Pennsylvania.

Welcome to Ms. Aline des Cloizeaux. She has recently been appointed as the division director of Nuclear Power in the Department of Nuclear Energy of the IAEA. Ms. des Cloizeaux has extensive experience as a program director of several new-build
projects. She managed large investment projects for conversion and enrichment and for facilities such as the Flamanville 3 EPR, and a portfolio of nuclear civil works and equipment activities, including SMR development. She is also engaged in gender balance and diversity actions, notably as president of WiN (Women in Nuclear) for France, and is an active member of WiN Global. Ms. des Cloizeaux holds a master's degree in science and engineering technology from the École Polytechnique, a master's degree in civil engineering technology from the École des Ponts et Chaussées, and an MBA from the Collège des Ingénieurs.

Welcome, Mr. Ben Schumeg. Mr. Schumeg is the software quality lead in the Quality Engineering and System Assurance Directorate of the U.S. Army Futures Command DEVCOM Armaments Center in the U.S. Department of the Army. He leads research in test and evaluation and verification and validation capabilities for artificial intelligence, machine learning, automation, and other technologies, and assists the Quality Engineering and System Assurance Directorate in developing policies and procedures to be used by the Armaments Center. He currently leads the Army AI software safety subgroup, focused on the test and evaluation and verification and validation of AI systems and data. Mr. Schumeg also spent a year with the Safety and Mission Assurance Office at NASA's Johnson Space Center, assisting in software quality assurance for commercial visiting vehicles to the International Space Station. He holds a bachelor's degree in computer engineering from the Pennsylvania State University and a master's degree in computer engineering from the Stevens Institute of Technology.

And welcome to Mr. Luis Betancourt, the chief of the Accident Analysis Branch in the U.S. Nuclear Regulatory Commission's Office of Nuclear Regulatory Research. Mr. Betancourt leads highly skilled data scientists in developing the NRC's artificial intelligence (AI) strategic plan to enable the safe and secure use of AI in nuclear facilities and accelerate
AI utilization across the NRC. Mr. Betancourt joined the NRC in 2008 as a digital instrumentation and controls engineer in research. Since that time he's held several positions: technical assistant for NRR, acting chief of the Instrumentation, Controls and Electronics Engineering Branch, instrumentation and controls engineer, and new reactor project manager. Throughout his career he's been a key proponent of science, technology, engineering, and mathematics education, and he continues to volunteer and represent the agency in multiple annual youth outreach events in the Washington, D.C. area. Before joining the NRC he worked as a controls engineer for GE Aviation and a new products engineer at Stryker Endoscopy. Mr. Betancourt has a B.S. in electrical engineering from the University of Puerto Rico and a professional certificate in public sector leadership from Cornell University. He's a senior member of the Institute of Electrical and Electronics Engineers and a registered professional engineer in the state of Maryland. With that, I welcome all of our presenters, and I will start our briefings with Mr. Gene Kelly's presentation, "Stay in Your Lane, Dude."

Thank you, Terry, and good afternoon, everyone. I'm very honored to be on this panel with an excellent group of panelists and experts in this area. What I'm hoping to share with you today, as we put the slides up, are some of the lessons learned that we've garnered here at Exelon, at Constellation Energy now, as we've deployed some of these new technologies in artificial intelligence, and we're going to share those lessons learned with you here. Next slide, please.

Now, you're probably wondering why I've chosen this picture. It turns out I was watching one of my favorite movies, The Big Lebowski, with Jeff Bridges, John Goodman, and Steve Buscemi, and I happened to be talking to one of our project leads, who had been driving home in his new car,
and it was a very difficult trip up I-95. It was raining very heavily and he couldn't see well, and he said that the technology in the car enabled him to stay in the lane even though he could hardly see the road. It occurred to me, in the theme of this conference, that there is sometimes a concern that we will go to full autonomy with artificial intelligence and machine learning. But the reality is that when you look at automotive applications, there are various levels of autonomy, and we're far from a totally autonomous vehicle. The applications we've developed thus far at Constellation are really intended to keep the users fully engaged, and in essence keep them in their lanes, so they can focus on what's important. We're going to walk you through some of the examples in the subsequent slides. So that's really the reason for the humor and The Big Lebowski. Next slide, please.

Now, this slide is pretty interesting, and it sequences, so I'm going to ask you to bump it a little bit. We started out this way, with the initial ideas of "here's what we're going to do": we're going to go in and automate certain aspects of our corrective action process and our work control process. Then we sat down and engaged the end users, and that's really our first, and maybe most important, lesson: you really find out what problem you're going to solve when you sit down and engage the end users. There's just no substitute for doing this due diligence. It takes some time and some effort, but it's worth its weight in gold, because it really tells you the problem you need to solve. So if you hit the next button, what you'll see is that once we sat down with them (just click on that slide), we found out there were other things they wanted to add, and that's when we started to understand what we could
really do for them to reduce the effort and help them do their jobs every day. If you hit the button again, you'll see this slide fill in as we started to learn more, on the left-hand side, about what we were going to do with our corrective action screening and prioritization. If you hit the button again, on the right, we sat down with work week managers and what we call cycle managers (you can hit it again there), and you can see that we eventually filled in the blanks of all the things we want to do. We ended up designing 11 different algorithms and models. This exercise is worth its weight in gold, because this is where we really honed in on where the savings are going to be. Next slide, please.

Many times people will ask: why CAP data, corrective action process data? First of all, it's a big data source. In the nuclear industry, we all generate a number of condition reports every year, on the order of five or six thousand per site. It's also an important cornerstone of the NRC's reactor oversight process, and the way I would term it is that just about everything important that happens at a plant is reflected in that CAP data. You can see from the statistics that we have a scheme for both significance and severity, and for type. Thankfully, very few highly significant things happen that require extensive investigations; the vast majority of the data, almost 99% of it, is of low-level significance. The message on this slide is that our algorithms, and what we're doing to automate aspects of the process, are going to allow us to focus on the really important conditions, which is where we think our focus should be. Next slide, please.

I bring this up because it's an application we've already had in place, and it has been
very successful. We've had it in place for two years now at Constellation. It's used for our maintenance rule process, and we've been able to automatically identify potential maintenance rule functional failures. The users have provided excellent feedback, and I think it's worth pointing out, in that second bullet, that the software really isn't making the failure determination. All it's doing is flagging those condition reports that are worthy of human review. So the message here is that the end user is still fully engaged, and even more so, fully backstopped, because our system engineers and strategic engineers still monitor the day-to-day traffic in that system for their systems and the components in those systems. You're not just totally relying on software, and we've gained confidence with this over two years through continuous feedback from the users. Lastly, I would point out that we bias the software in a way that's more focused on high-safety-significant component failures, so that we have very few misses, if any; in fact, our miss rate has been zero for two years. So we think this has been very successful, and the key is that we've now built subsequent applications based on this first successful one. Next slide, please.

This slide probably bears some close looking, and if I were to pick the one most important slide in the whole presentation, this is it, because this is the graphical user interface. This is what the end user sees as a result of the algorithm that we built, and it's really awesome. I don't have the time here to explain all the details, but it's showing you the confidence values and why certain condition reports are flagged. It has textual comments to provide the context on how the decision is reached. It shows you what are called the word grams, which is
how the artificial neural networks are built. And finally, you have to revisit this; you can't just walk away from it after you build it, because you may have procedure or rule changes in your process, or your performance data may change at the plant. So it's really important that humans continue to validate the model's predictions, and again, the time with the end users is very well spent to develop that graphical user interface. Next slide, please.

Just a few words here about the business case; anybody involved in any of these innovations knows you have to make the business case. I would point out that our industry has many processes, so there are lots of opportunities to apply these technologies. We see that we can improve data quality, improve our organizational decision-making, and also improve employee bandwidth; I think one of the commissioners talked about that this morning. Particularly for us, as a new company that has just split and is getting into new areas, you want to be able to deploy your resources and your people where the new priorities and work are, and this is really going to give us the opportunity to do that. Probably one of the most important bullets here is that this is an opportunity for us to eliminate low-value work. We talk about that a lot in our workplaces; it's easy to say, it's hard to do, and it's hard to let go, but this has really given us a golden opportunity to evaluate and eliminate low-value work. Next slide, please.

And I should say, as we go to the next one, that the key message from that last slide is that this is really helping us focus on what's important. If there's any one theme throughout this whole presentation, that's the one I would continue to re-emphasize: this technology is helping us to focus on what's really important. We have worked and
collaborated with the Department of Energy and Idaho National Laboratory, and what we're finding, and it was a surprise to me, as I'm not a data scientist, is that there are a variety of methods and all sorts of approaches, including hybrid approaches, supervised and unsupervised. What we're finding is literally what the slide says: one size doesn't fit all. I love this quote from the article; I've read a lot in the journey over the last year or so. The algorithms and techniques you pick are going to depend on the kind of data you're working with, the problem you want to solve, and what you want to get to. So the bottom line, another lesson we've learned, is that as you get into these projects you'll find there are many ways to do this; it's not just one or two approaches. An interesting lesson we've learned thus far. Next slide, please.

So finally, where are we headed? I would point out that with each successive application, we've learned a little more and built upon it. That first one, for maintenance rule functional failures, has been pretty successful, and we're going to build on it with the next two: we're going to start the pilots for the corrective action and new work screening later this month, and then we're going to set our sights on some other processes. Like I say, there are a lot of processes that you can aim this at, but one of the biggest challenges, when you read the literature, is integrating this into your systems and your processes. We're going to continue to look at additional areas; we have a lot of good ideas on where we can apply it, but we start first with small things and then work up from there. Next slide, please.

So I guess I'd end today by
sharing with you a feeling that's been with me the whole time I've been involved with this, for the better part of a year or so: when I think about artificial intelligence and machine learning, it's really not a matter of if, it's only a matter of when. I think we're all going to be there, and the picture here, of course, is to say that it's probably also only a matter of when we're going to be driving autonomous vehicles. I really do think that this technology allows us to focus on what's important, and boy, that's just so valuable in our business for safety. The second bullet is very fascinating to me: a lot of us and our companies struggle with, or have the challenge of, knowledge retention, retaining tribal knowledge as people leave and retire and new people come in. The use of this gives you a solution in that regard, in that you can continue to make the algorithm smart, and it retains the wisdom. So perhaps there's a solution there for all of us on how to solve knowledge retention issues in various processes as well. And again, there's probably the opportunity here for a very powerful industry outcome. As one of the DOE directors, Dr. Curtis Smith, has said to me, and I think he aptly described AI and ML: it's the new math. So with that, I think I'll stop. Thank you, Terry; that concludes my presentation.

Great, thank you, Gene. Our next panelist is Ms. Aline des Cloizeaux, with the presentation "AI for Nuclear Energy."

Okay, thank you, Terry. Do you see my presentation? Yes? So, I am very honored to be part of this panel today. I am director of the Division of Nuclear Power in the IAEA Department of Nuclear Energy, and it is really in our mission at the agency to share knowledge among all our member states about new technologies, to enable the
development of these technologies and to define the necessary conditions, and artificial intelligence is really part of our task. So I will, next slide, please, yes.

I will tell you where we are today, because it's quite a long journey. This slide shows you, in a broad view, what artificial intelligence is in common language: it leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind. So where can we apply this in the nuclear industry? In several fields, as you can see on this slide.

Regarding machine learning and deep learning, on the top left part of the slide, we can support predictive analysis, for example on nuclear power plants; we can use it to improve modeling and simulation capabilities, as well as to enhance the performance of digital twins by adding simulation tools to those twins.

Another part is natural language processing, a branch that enables machines to understand human language. We can use it in support of classification, translation, and data extraction, for example in the analysis of nuclear-power-specific requirements. It's a field where quality assurance can benefit, for example by ensuring a product or service is meeting the specified requirements through natural language processing techniques.

Another field is expert systems, which emulate the decision-making ability of a human expert. They can be used for knowledge representation and for the generation and processing of models, particularly for diagnosis, and this can have wide application to nuclear safety.

If we go to technologies like computer vision, these are also quite interesting technologies for taking meaningful information from digital images. We all have in mind the images coming from regular inspection and non-destructive inspection, for example, and it can provide insights that would be missed by human manual analysis alone. As for automation and robotics, it's not really a new
technology; however, these techniques can be really enhanced by artificial intelligence, for example by using computer vision technologies. And last but not least, all these basic algorithms could potentially also be used for the design and optimization of nuclear reactor cores. So this is quite a broad view. Next slide.

Now I will go a little bit deeper into what we do at the IAEA. Next slide. We have had several working groups and technical meetings, and this slide shows you where we are, what the state of the art is in AI and where it is applied, and this is already taken from the return of experience of our experts participating in these technical meetings.

As I've said, one of the first, quite obvious, fields is automation, because automated processes can relieve the human factor in nuclear work activities; they increase reliability and also reduce operation time. Optimization is another part, where we can optimize complex processes, for example plant strategies: strategies for inventory management, outage scheduling, and fuel cycle parameters. It can help to process a lot of data. It's also in use in building information modeling, and also for verification and validation.

Another field where we also see many applications is analytics, for model validation and advanced computer simulation, and, as I said at the beginning, it's of use in digital twin applications. Another part is prediction, probability, and prognosis: by looking at events, we can reduce failures, or at least detect a failure in advance, assess current asset conditions, and estimate, for example, the remaining useful life of components. All these insights will help to extract, fuse, and use data from multiple knowledge sources, collected from thousands of years of operating experience and massive libraries of scientific and validation experiments. So all these techniques are
used, and are now more and more commonly deployed. However, next slide, please, we all know that there are deployment challenges, and this is, I think, today's topic. First of all, the results of AI can be difficult to interpret, and there is a question of trust and of robustness of the performance of the AI.

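Robustness, the other challenge just named, can likewise be probed empirically: perturb the inputs slightly and measure how often the model's prediction flips. This is a common diagnostic, not an IAEA-endorsed method, and the sketch below uses synthetic "sensor" data and assumed noise levels purely for illustration.

```python
# Illustrative robustness probe: perturb inputs with Gaussian noise
# and measure how often the predicted class changes. The data,
# model, and noise scales are assumptions for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical two-feature "sensor" data, two well-separated classes.
X = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
               rng.normal(3.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def flip_rate(model, X, noise_scale, n_trials=20):
    """Average fraction of predictions that change under input noise.
    A robust model should stay near 0 for small perturbations."""
    base = model.predict(X)
    flips = 0.0
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, noise_scale, X.shape)
        flips += np.mean(model.predict(noisy) != base)
    return flips / n_trials

small = flip_rate(model, X, noise_scale=0.05)  # mild perturbation
large = flip_rate(model, X, noise_scale=2.0)   # severe perturbation
```

A traditional V&V campaign tests fixed requirements against fixed behavior; a statistical model has no such fixed behavior to enumerate, so stability measures of this kind are one of the complements being discussed for AI assurance.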
We cannot use the traditional verification and validation approaches for AI, because of its quite limited transparency, and high-level regulatory safety assessment principles and guidance may need to be developed; they are not yet really recognized worldwide. And of course, all the cyber security issues, with data and with adversarial attacks, are already there, but we also have an increased cyber security risk from using artificial intelligence, due as well to the limited transparency of what is inside the machine learning tools.

So, what's next? Can you change the slide? Yes. We work on different aspects. First, on less mature technology, what we call technology development: we need further development of the technology before applying it on nuclear power plants; that's our view, at least. We have also categorized some technologies that are in a deployment stage, for example the automated analysis of non-destructive examination, which I think is more and more commonly used, or everything related to predictive maintenance procedures. And then there is a field where we also work, which is technology enabling: developing legal regulation for these applications; developing common requirement databases and common requirements that are understandable by AI, for use in optimization, simplification, and specification, because today requirements as written depend mainly on the user and the operator; and developing algorithms that are accessible, to give more transparency to the algorithms and make them understandable. Next slide. Next slide, yes.

So, what do we do, for example, as activities? Last year we had a big technical meeting on artificial intelligence for nuclear, and you can see that there are many fields; it's not only nuclear power, but it also relates to ethics, food and agriculture, and nuclear
physics. Next slide. We are also part of the International Telecommunication Union of the United Nations, and we participate in webinars like this one, "AI for Good," or "AI for Atoms." Next slide, please. And every year there is a publication from the ITU, not specific only to nuclear, but there we have quite a few examples, and we share the development of AI for nuclear technology and applications, so it's also accessible on the internet.

Before finishing, I would like to mention one point that to me is very important, especially on this day, International Women's Day, and it relates also to ethics: the developers are mainly men, I can say, and in computer and IT science we are lacking women. So if we could do everything to attract women, that would be very good, because I think that diversity in developing algorithms, and in looking at the requirements, is very important to have something which is very close to the human brain and has all the diversity. And I would like, Terry, to offer you this, because the question is "Am I a robot?" So, definitely, I don't know if I am a robot, but if I were one, I would choose this image, with a nice picture, and I think we should share it to attract more young girls into our domain. Thank you very much.

Thank you, Aline. A reminder that you can submit your questions for our Q&A session, so if you have any questions for our speakers, please make sure to submit those. Our next panelist is Mr. Benjamin Schumeg, with the presentation "U.S. Army Combat Capabilities Development Command Armaments Center."

Thank you. Good morning and good afternoon, everyone. As mentioned, my name is Ben Schumeg. I'm representing the AFC DEVCOM Armaments Center, specifically our Quality Engineering and System Assurance group. Also, thank you for having me today. I know I'm maybe, I'll say, the slight oddball in the group here, as more from
the DoD, but hopefully in going through this presentation I can give you an idea of why we feel it's important that we talk together and work together on some of these challenges with artificial intelligence, especially when it comes to the safety of those systems. Next slide, please.

This first slide talks a little bit about some of the reasons why the DoD specifically has been very aware of, and tracking, what's going on with artificial intelligence, and especially some of those challenges. Probably the biggest thing that came out was the NSCAI, the National Security Commission on Artificial Intelligence, which was, I believe, congressionally led and congressionally funded research into what artificial intelligence means, not only for the DoD but of course for the federal government. That report pointed out many key areas that need to be followed, and I've highlighted a couple here that really impact me as part of our quality engineering group: data science, verification and validation, reliability, safety, and of course human-system integration. A lot of the other reports that you can see on the screen also speak to these very same aspects, especially safety, which is one of the reasons I'm here today. And I wanted to point out that last one on the bottom right, a little hard to see, but that is the Responsible AI memo released by the Honorable Deputy Secretary Hicks, concerning how we are going to ensure that the systems developed by the DoD maintain those five ethical principles. Next slide, please.

So, a little bit about why I'm here and who I am. The Armaments Center is the primary, I'll say, development organization, development command, looking at conventional weapon systems and ammunition for the Army. As with any kind of system and new, novel technology, there are ways that AI and ML could revolutionize the way that
these technologies are being developed by the Armaments Center, but of course that brings challenges and things we want to be sure we're looking at. Some of these challenges: what does continuous learning mean? What do these very complex statistical algorithms mean for how we ensure configuration management? What kind of new methods, procedures, or processes are we going to have to implement to make sure that we can assure, and I'll talk about that in a second, that what we are developing meets the intent and the needs of what we are developing it for? We have to look at different sensors and different inputs, and, as you'll see, data is very critical from a machine learning perspective: how can we assure that it is unbiased, that it's correct, that it's accurate, that it fits the context of the environment in which it's being used, while still maintaining the reliable, ethical, safe, and robust capabilities of the system? So what the Armaments Center did is look at what's called the Army Materiel Release process, which is the final gate that a system must go through before it can be deployed and utilized out in the field. Next slide.

I'll briefly talk about that for a second. We want to ensure that anything released by the DoD meets what we call the three S's: safe, suitable, and supportable. I won't go through each question here, but, as you can imagine, safety is one of our top priorities, so we have a lot of stakeholders and a lot of different milestones, documentation, and deliverables that have to be met, and those are listed on the left side. Suitable: is it the right system, was it developed correctly, does it meet verification requirements, does it meet validation requirements? We have a lot of independent testing that takes place, and a lot of
safety assessments that will take place, to make sure the system meets that suitability requirement. And lastly, supportability: can the system be supported in the field? Do we have the right logistics in place, the right fielding plans, and the right training for any operators of our systems? This applies to any system being released by the Army that goes through our office; it must meet all of these requirements before it can be, as we say, put in the hands of a soldier. Next slide, please.

I wanted to touch briefly on one of those aspects. We're working on a lot of different things, and I'll show you that in a second, but I wanted to touch on safety, because I feel that's probably where we'll have a lot of cross-collaboration and a lot of good technical discussions with the NRC and its partners. It goes without saying that the safety challenges are significant when you're thinking about an ML system: there's a lot of complexity in the design; there can be changing, differing, and off-nominal environments; we're looking at the cognitive interaction of the human in the loop of the system, and what perceptions they are going to have about different, possibly unexpected, behavior of the system; and we're looking at how our levels of rigor for different software-intensive systems need to change. So some of the things we're looking at: different safety methodologies and safety precepts; ways to adjust, or recommendations for new ways to do, a functional hazard analysis; general safety requirements; what artifacts might be needed; and identifying AI safety-critical functions and any of the data that leads to those functions, be it as part of design or as part of what we call inference, when the actual model is active. Of course, we also need to understand the concepts of operations, what
are environments understanding those enabling technologies and what kind of autonomy may or may not be involved in that system taking all that in and thinking about what kind of levels of rigor must take place what kind of metrics and measures must be developed and what artifacts can be delivered lastly looking at both the hazard mitigation guidance as well as any sort of adjustment to kind of our safety risk assessment approaches for ais the different levels of autonomy lor it's kind of summarizing it into what we believe would be good practices possible regulations or policy changes and that's why i have that little blurb there about mill standard 882 echo that is our safety standard that we follow within the army which is undergoing revision and we plan to submit a lot of uh suggested changes and working with that group to make sure that any of the needs that come from ai and ml technologies are appropriately included in that next slide and i believe this is my last slide so you know i just touched on that at one point about safety but we're looking at a lot of different things at arm ring center we're viewing a lot of the policies and identifying the gaps in those policies you know we have many many army regulations dod directives dod instructions so kind of doing our analysis of that to see where we are and where we think there could be better you know things made better and better improvements looking at data science you know with as i kind of said already ai and ml is or ml specifically is very critical of data science and making sure you have the right data and making sure you analyze that data and understand that data as it will be developing that system for you verification validation of course goes without saying very important very critical part of any system development so we want to ensure that whatever methods that might need to be adjusted or created or developed or collaborated with developing organizations is done as well safety you know spoke to 
that already a little bit, but again, we're trying to ensure that the systems being developed are safe and remain appropriate for their use. materiel release, as i mentioned, is our final gate, where we culminate a lot of the data points you just saw into a materiel release that can be reviewed by stakeholders and by panels, very similar to today's, to ensure that the system is good to go for deployment. and lastly, that brings us to trust. we want to have what we're calling assured trust in the system: not over-trusting and not under-trusting, but finding the right level of trust through things like human-systems integration and what we call soldier touch points, to make sure the system is going to be used the way it was intended to be used, and that the soldier or operator trusts it and will do what they need to do to utilize it. and i believe that's my last slide, so thank you for your time.

thank you, ben. our next panelist is mr. luis betancourt, with the presentation "increasing nrc readiness in artificial intelligence decision-making." over to you, luis.

thank you, terry. good morning and good afternoon, everyone. as terry said, my name is luis betancourt, and i am the champion for artificial intelligence. i am pleased to be here today to discuss what we are doing as an agency to increase our readiness in evaluating ai technologies. next slide, please.

as terry mentioned in her opening remarks, ai is one of the fastest-growing technologies globally and is the next frontier of technological adoption for the nuclear industry. it has the potential to transform the industry by providing new insights into the vast amounts of data generated during the design and operation of a nuclear facility, and it offers new opportunities to potentially enhance safety and security, improve operational performance, and potentially implement autonomous control and operation. as a result, we have been seeing the industry research and use applications to meet future energy demands, and it is critical for us as an agency to focus on how these external factors are driving an evolving landscape and growing interest in deploying ai technologies. over the last year we have seen that landscape steadily evolving, and ai is currently being used in a wide range of nuclear power operations, including what you heard today from gene: from mining nuclear data for predictive maintenance to understanding core dynamics for more accurate reload planning. we as an agency recognize the potential for using data science and ai in regulatory decision-making, but at the end of the day, what we are interested in is understanding the possible regulatory implications of using ai within a nuclear power plant. what we want to do is ensure these technologies are developed safely and securely, so we see today as an opportunity to start shaping the norms and the values that enable the responsible and ethical use of ai. we as an agency must be prepared to evaluate these technologies. next slide, please.

we are anticipating that the industry will be deploying ai technologies that may require regulatory review and approval in the next five years and beyond. as such, we are proactively developing an artificial intelligence strategic plan to better position the agency for ai decision-making. the plan houses goals for ai partnerships, like what you see here today; cultivating an ai-proficient workforce; and utilizing ai tools to enhance our agency processes; but at the end of the day, it is about assuring our readiness for ai decision-making. we want to use this plan as a tool to increase our regulatory stability and certainty, and the plan will also facilitate communication, enabling the staff to provide timely regulatory information to our internal and external stakeholders. while developing the plan, we formed an interdisciplinary team of ai subject matter experts from across the agency, and to increase awareness of ai's technological adoption in the industry, we hosted three public workshops in 2021 that brought the nuclear community together to discuss the current and future state of ai. we also initiated dialogues within the nuclear community and with our international counterparts, gaining valuable insights and identifying potential areas of collaboration. one thing to note, as you heard from ben: the nrc is not alone when it comes to overseeing the safe and secure deployment of ai. the topics of explainability, trustworthiness, bias, robustness, ethics, security, and risk are common to any entity that wants to deploy ai technologies in designing and operating a nuclear facility. that's one of the reasons we're meeting with other government agencies, including the department of defense, to identify new partnerships and leverage their expertise and experience with ai. lastly, we are committed to providing opportunities for the public to participate in a meaningful way in our decision-making process, so as we continue developing this plan, we plan to solicit comments from the public and feedback from the advisory committee on reactor safeguards in the summer of 2022. slide, please.

as i mentioned earlier, we recognize the public interest in the potential regulatory implications of ai, and we want to provide opportunities for the public to be heard. that's one of the reasons we're trying to meet the principles of good regulation, to be open and transparent in everything we do, and to ensure stakeholder engagement we have developed the timeline shown on the slide of our current activities for the remainder of the year. i do encourage everybody here to participate and provide comments on our plan. our team is planning to host an ai workshop in the summer of 2022 to remain aware of the fast pace of technological adoption of ai in the industry, as well as to communicate with our stakeholders about the nrc's progress and ai activities. lastly, our plan is to issue the strategic plan by the fall of 2022, but i want to mention that early communication, dialogue, and pre-planning are key to increasing our regulatory readiness and stability so the industry can deploy these technologies. as you heard today from one of the commissioners, we don't want to become a barrier; we want to become an enabler for this technology, if the industry decides to move forward with it. so early engagement and information exchange are important to support a common understanding and the timely deployment and execution of the ai strategy. next slide, please. in closing, here's our contact information, so if you want to reach out to us after the ric, please do. that basically concludes our presentation, and i would now like to turn it over to terry so we can commence the q&a session. terry, back to you.

all right, thank you, luis. we're now going to move into the question-and-answer portion. you can continue to submit questions, so please do so as we chat this afternoon. the first one, luis, i'm going to hand over to you: are you finding any unique skills necessary in the area of ai and data analytics, and how are you addressing skill needs?

that's a really good question. i think data science work actually requires a unique skill set that the agency really needs to have, and that field has several subdomains as well: computer science, mathematics, and statistics. for data science skills, i think it's important for a person to know a lot about python or java, which are very commonly sought after. one of the things we're doing as an agency is developing this ai strategic plan, and one of its goals is cultivating an ai-proficient workforce. as part of that, we're trying to identify the pipeline of data science staff needed to evaluate an ai technology coming down the road, and also to develop ai tools internally to better improve our processes. as part of that, we developed a data science training and qualification plan, which basically provides on-the-job training as well as some of the skill sets we believe our staff needs to evaluate these technologies.

great, thank you, luis. gene, a question for you that came in: what happens to the reports that are not worth human review?

yeah, so the analytic will look at what the probable failures or probable outcomes are that we're looking for, and it'll assign a confidence level and then allow the end user to make the call, if you will. the ones that aren't shown are usually very low confidence, so they're not shown. however, as i mentioned, there are backstop processes that still provide feedback, for example if we would have misses. what we've learned is that it's important to have those backstop processes, so that if you do have a miss and it's not shown to the end user in that process, you still get the opportunity to understand why you missed and then go correct the algorithm. that's indeed what we've done on our first application, with maintenance rule functional failures, and so far we've had zero misses since we've done that. but again, you rely on backstop processes to see those misses, as they're called.

great, thank you. okay, a question for ben: on your slide for the path to assured ai, i'm interested in understanding a bit more about the v&v frameworks for ai/ml. any suggestions?

sure. i will say, of course, v&v of ai systems, i think,
is always going to be fraught with challenges, especially when you're talking about, let's say, a machine learning deep neural network: understanding what each of those nodes achieves, what is being activated, and how they impact your final result is going to be challenging. but some of the things we're looking at, and i jotted a few down: modeling and simulation. i think that's always going to be a factor in the v&v of an ai system, putting it into a simulated environment and seeing how it reacts. concurrently with that, thinking about design of experiments and monte carlo simulations, again putting systems through that simulated environment to see how they react. and i should clarify this is not necessarily just for imagery; you could do images, classification, linear regression, decision-making, and treat a lot of these different things through simulations of data inputs, mapping them to their outputs. something else we're looking at is explainability of ai, not necessarily as a way to prove how something is working, but as a way to help validate what an ai system might be trying to achieve; the answer it's trying to arrive at can give us some guidance as to how it's getting there. the last thing i'll mention is instrumentation of the ai. we may not know exactly why, let's say, a node has activated in a deep neural network, but maybe we can compare it to other nodes, or compare the system to similar systems that may not use ai, to try to understand how those lower-level functions are impacting the decision and give us confidence during a v&v assessment.

thank you, ben. all right, next question, for aline: how is your organization identifying areas where ai or data analytic approaches are applicable and have the potential for the greatest positive impact?

well, that's what i explained in my presentation. we have a methodology based on organizing technical meetings, where we define a mandate with our member states and then deploy these methods. the technical meeting we organized last year really served that purpose, which was to provide an international, cross-cutting forum to discuss and foster cooperation on artificial intelligence applications, methodologies, tools, and enabling infrastructure that have the potential to advance nuclear technology and applications. it's quite a long title. through this meeting we are able to understand the state of the art, we identify our role in the acceleration of ai in the nuclear field, and we get a cross-cutting view, from r&d to technologies that are already deployed. it includes nuclear data, nuclear fusion, and nuclear physics, as shown in the picture, as well as nuclear power, security, radiation protection, and nuclear safeguards. i was mostly speaking about nuclear power, but ai applies to all these domains, and these ai methodologies can have a very positive impact in improving modeling and simulation capabilities. so that's how we organize ourselves, yes. and of course, everything is available as public information.

wonderful, thank you. all right, luis, the next question is for you: how will the strategic plan fit in with the nrc's hierarchy of documents, and what's next after the strategic plan is released?

that's a good question. we are looking at that right now. the strategy will be a nureg report, similar to the rest of the agency's strategy documents. the strategy itself is not long, about 15 pages; however, there's a companion document we're developing called an ai roadmap, and the roadmap has basically the what and the how of how we're going to do this. one of the things we want to do is to start doing some research on an ai methodology to have a basis, because during the last workshop the industry mentioned they are interested in the nrc providing some type of regulatory guide or guidance document. but in order to develop that guidance document, we need some type of white paper, a technical basis, that we can put into that regulatory guidance. so after the strategic plan we'll do some research, but at the same time we want to keep engaging the industry on their deployment plans, because in order for us to develop guidance we need a better understanding of where industry is planning to use this. is industry interested in autonomous control? is industry interested in using ai for safety systems? depending on what we hear in those discussions we will do more research, and the idea is for us to be agile; we want to have the framework available in the next five years.

all right, next one for gene. i'm going to combine a couple of questions here: this is around the cap tool, whether that's off the shelf, and then how your data science team is set up and built around your capabilities.

yeah, so the tool we're using right now was developed by jensen hughes. jensen hughes is a company we've worked with for many years at constellation; they've done a lot of our probabilistic risk assessment models, and they have great capabilities in the area of ai and ml. and again, they started with this first application two years ago, so they had already developed an algorithm, they understood our interfaces with the it systems and databases and servers, and they had relationships with our it people, so they were in essence the perfect fit. they've developed this algorithm they call data advisor, and we're now starting to look at other applications for that particular technology. we think this has real benefits because we already have
contractual arrangements set up with them, they're very familiar with our programs and processes and procedures, and many of their engineers hold constellation technical qualifications, so we find that working with them is very seamless and smooth. on the second question, i think this goes a long way toward answering it: it would become expensive if you went outside to various vendors, but by utilizing them and working with our own it people, we're finding it's been very efficient thus far. but these are small applications we've started with; we haven't really tried big yet. if you read some of the literature, they advise against big moon shots: take small steps, small bites of the elephant, and look to achieve adoption and confidence as you move into the bigger applications. so for example, as ben said, we don't have deep learning algorithms yet; those would present bigger challenges for v&v and things like that. right now we're trying to stay small, get some wins, and build on that as we move forward. so i'll stop there, terry; i think that answered the question.

all right, thanks. so ben, can you talk a little bit more about repeatability, especially in the context of ai and ml, and what might be achievable in that framework?

sure. from my perspective, repeatability is going to be paramount. i think for anyone, for the dod or for the nrc as one of its customers, you don't want a system that you don't feel is going to be repeatable in the way it operates. so we're taking the position that whatever system is presented has to be repeatable, and we have to be able to prove that to the best of our abilities. one of the things i feel we can achieve with ai and ml systems is that if we are able to identify all of the inputs a system receives when it makes a decision, that gives us a good step toward meeting that repeatability. we're not going to be, at least i don't believe we're going to be, looking at systems that come up with new methods of completing tasks or change the way they work on their own; we call that online learning, though i don't know if that's an official term. that's where you do start to run into repeatability issues, when something has been retrained or relearned. but if you have what i'll call a static ai/ml system, with the ability to lock down that system, lock down that training, and truly understand, and that's the key point, truly understand the inputs to that system, i believe you can obtain that repeatability. and i think we are going to have to get to that achievable state of repeatability. if not, then we have to start thinking about risk mitigations, risk assessments, and possibly bounding of system capabilities, to make sure that if it's not going to repeat exactly the way it should, we have hard stops and the ability to bound the system so that, even if it isn't perfectly repeatable, it still stays within that bound. so i would say the goal, the objective, is a fully repeatable system, but the threshold is repeatable with some guidance and some bounding, for the off chance that we've encountered something that makes it no longer repeatable.

thank you. so aline, a question for you: does the international environment have unique challenges for ai development and use?

yes. i'd say unique in the sense that, yes indeed, there is a lot of ai now in the industrial world and we have to apply it in the nuclear industry, and we know there is a lot of conservatism in our world, especially linked with the nuclear safety culture that shines through. so yes, it's a unique challenge, but i would say it's multiple, because ai covers a lot of techniques and a lot of applications, and i guess some are easier to use than others. what is important for me is to have a kind of framework where, even if step-by-step v&v is not possible, we define the outer conditions that are necessary for the safe development of ai, meaning running the model in certain ways such that we are sure it does not exceed certain limits in the result. one part of the challenge is also to have uniform requirements for feeding the system, because for ai, or at least for deep learning and machine learning, all these systems that build themselves while being fed data, you cannot feed them all with the same requirements. what was said before is true: if we feed repeated data, we get the same results, provided you don't change the system inside. but we also have to work to develop a kind of internationally recognized standard on how to set the requirements for the input data to the system, so that it is repeatable not only because we have the same data and the same system, but because we have the same data and we want more or less the same results. i don't know if i can be understood, but it's not only a question of the internal system; it's also standardization of what the requirements on the input data, and the format itself, should be.

great, thank you. all right, we've got a question for all the panelists, so we'll go around on this one, and it's your thoughts on cyber: as we work in the area of ai, how do we know the ai hasn't been cyber-compromised? how do you build trust in ai given the known cyber landscape? gene, i'm going to start with you.

yeah, when i saw the question, my first thought was that
where it's embedded and used is within internal systems that are already cyber-protected. it's not like it's external and separate from any of the databases and software that constellation already uses, so i would say we rely on the existing cyber protections.

ben, your thoughts?

sure. i do agree with gene: a lot of cyber hardening is going to be dependent on the system. but something to keep in mind, which we're looking at as well, is the cybersecurity of your supply chain. for an ai/ml system, the supply chain could be your data, so it's not only about the security and cyber resiliency of your development environment
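the confidence-gating and backstop pattern gene describes in the q&a (score each condition report, surface only the high-confidence flags to the end user, and audit the screened-out items against later-confirmed failures so the algorithm can be corrected) can be sketched roughly as below. this is an illustrative sketch only: the function names, the 0.80 threshold, and the report layout are assumptions, not details of constellation's actual data advisor tool.

```python
# Illustrative sketch of confidence gating with a backstop audit.
# Assumed layout: each model-scored report is a dict with an "id" and
# a "confidence" in [0, 1]; the threshold is an arbitrary example value.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff for surfacing a flag

def screen_reports(scored_reports, threshold=CONFIDENCE_THRESHOLD):
    """Split model-scored reports into those shown to the reviewer
    and those suppressed as low confidence."""
    shown = [r for r in scored_reports if r["confidence"] >= threshold]
    suppressed = [r for r in scored_reports if r["confidence"] < threshold]
    return shown, suppressed

def backstop_misses(suppressed, confirmed_failure_ids):
    """Backstop process: suppressed reports later confirmed as real
    functional failures are misses, fed back to correct the algorithm."""
    return [r for r in suppressed if r["id"] in confirmed_failure_ids]

reports = [
    {"id": "CR-101", "confidence": 0.95},
    {"id": "CR-102", "confidence": 0.30},
    {"id": "CR-103", "confidence": 0.10},
]
shown, suppressed = screen_reports(reports)
misses = backstop_misses(suppressed, confirmed_failure_ids={"CR-103"})
```

in this hypothetical run, CR-101 would be surfaced to the reviewer, CR-102 and CR-103 would be suppressed, and the backstop would flag CR-103 as a miss to feed back into retraining.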

2022-04-09
