May 25, 2021
Feature

Graduate Fellow Explores Artificial Intelligence, Nuclear Nonproliferation

Building the next generation of national security experts

Through opportunities like the NNSA Graduate Fellowship Program, early-career researchers are gaining first-hand experience in the complex reasoning behind artificial intelligence decisions.

(Composite image by Timothy Holland | Pacific Northwest National Laboratory)

(Fourth in a series about explainable AI and national security at PNNL; see the first, second, and third articles.)

In our recent explainable artificial intelligence (AI) highlights, we have heard about cats and dogs, wolves and huskies, and thin data. These examples delved into the complexities of understanding and explaining the reasoning behind AI decisions. But what’s it like to be one of the minds behind the work?

Meet National Nuclear Security Administration (NNSA) Graduate Fellow Marc Wonders, who has spent the past year working with NNSA researchers in Defense Nuclear Nonproliferation (DNN). Through the NNSA Graduate Fellowship Program (NGFP), administered by Pacific Northwest National Laboratory (PNNL), Wonders has helped coordinate leading events that convene experts across the AI national security mission space.

Wonders, NNSA graduate fellow in the DNN Office of Research and Development. (Photo by Marc Wonders | Pacific Northwest National Laboratory)

You pursued NGFP after your doctoral studies at Penn State University. What attracted you to the program? What did you hope to get out of the experience?

  • I viewed NGFP as the ideal opportunity to figure out what I wanted to do next while learning a lot along the way. In graduate school, I became increasingly interested in the work that goes on in Washington, D.C. NGFP was a chance to learn more about nuclear security firsthand, and the opportunity to do the fellowship in the DNN Office of Research and Development made the decision a no-brainer. I was looking forward to gaining a broader view of the research that DNN funds, learning about missions that support nonproliferation, seeing how research is carried out from the sponsor side, and meeting a lot of people.

Your background is in nuclear engineering, radiation detection, and imaging for nuclear nonproliferation. How did that lead you to work in AI?

  • AI has become a powerful and widespread tool that nearly every field is looking to incorporate. Staying on the cutting edge requires some awareness of AI no matter the field. Toward the end of graduate school, I began exploring AI to help with some of the problems I was researching. When I arrived at my assigned position in DNN, AI was something I wanted to learn more about. As the NNSA AI and data science portfolio evolves, it is defining the path forward for AI research in nonproliferation and beyond. Being part of that evolution has been an exciting opportunity, and I’ve been learning a ton ever since I got involved.

How do you explain explainable AI to others (like a parent or a friend)?

  • Many of the state-of-the-art AI capabilities we see in the news come from AI’s ability to discover and leverage highly complicated relationships between large numbers of variables or features that are not obvious to a human. Because of this, these models are frequently considered “black boxes.” The very characteristic that makes them so powerful also leads to difficulties, especially in high-stakes applications such as national security. Explainable AI describes techniques that we apply to look under the hood (although in practice, sometimes the hood stays closed) and learn why AI is reaching particular conclusions. Explainable AI helps build trust in AI systems by helping us determine which AI model is best suited for a certain task, understand how it might react to a scenario it has not seen before, and even generate new knowledge or insights by uncovering new relationships between the input features.
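
For a concrete, if simplified, picture of what “looking under the hood” can mean, the sketch below applies permutation feature importance, one common model-agnostic explainability technique: shuffle one input feature at a time and measure how much the model’s accuracy drops. The model and data are illustrative stand-ins built with scikit-learn, not anything drawn from an NNSA system.

    # Permutation feature importance on a synthetic problem. Large
    # accuracy drops flag the features the model actually relies on.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data: 1,000 samples, 8 features, only 3 of them informative.
    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=3, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train an otherwise opaque model.
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn on held-out data and record the
    # average accuracy cost over 10 repeats.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i, (mean, std) in enumerate(zip(result.importances_mean,
                                        result.importances_std)):
        print(f"feature {i}: {mean:.3f} +/- {std:.3f}")

The informative features should surface with large importance scores while the rest sit near zero, which is exactly the kind of sanity check that builds, or undermines, trust in a model.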

Your fellowship has been with DNN Office of Research and Development, whose mission is to advance our nation’s capabilities to detect and monitor nuclear material production and movement, weapons development, and detonations across the globe. AI is not something we easily associate with such tasks. How do you see the role or impact of explainable AI in that mission?

  • It’s super important! In many cases, explainable AI is necessary for usable AI in nuclear proliferation detection. Potential AI applications continue to expand, including in our field. AI can be used both to augment current approaches and to create entirely new methods that otherwise would be impossible, even leveraging new proliferation indicators to provide novel insights into the activities and intentions of various actors. However, while AI is improving capabilities across an array of missions, it is not a hammer for which every task is a nail: AI is not the best approach for every case, and sometimes its advantages are uncertain. Effective use of AI requires careful consideration of its implementation and limitations. This is where explainability can help, in determining whether the problem under consideration is really better addressed using AI or by alternative methods and, most importantly, in identifying the potential drawbacks and failure modes of using AI.

You jumped right into your fellowship, immediately tasked with coordinating logistical planning for a series of AI workshops. What have been some of the key topics of discussion from those events?

  • Key topics have included the importance of understanding the context for techniques under development and the desire not to blindly trust data as the sole input. While AI’s data-driven nature makes it a valuable tool, AI systems should be enhanced where possible with explainability or by incorporating additional information beyond the training data. Good training data sets in proliferation detection are scarce, and the difficulties in developing them are omnipresent. Even when we can generate usable training data, the applicability of an off-the-shelf algorithm trained on a certain data set to a new environment is not guaranteed. Ensuring the transferability and generalizability of the systems we develop is hugely important and is another opportunity for explainable AI.
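
That transferability concern can be made concrete with a toy example. In the sketch below (purely illustrative; the “new environment” is modeled as a simple calibration offset on the measured features, an assumption of this example rather than anything from the workshops), a classifier that is accurate in its training environment loses most of its skill in a shifted one, even though the underlying phenomenon is unchanged.

    # A model trained in one environment, evaluated in another where the
    # same underlying signal is observed with a constant offset (think
    # detector calibration drift). All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, offset=0.0):
        """Labels depend on the true signal; observations carry an offset."""
        signal = rng.normal(size=(n, 2))
        y = (signal[:, 0] + 0.5 * signal[:, 1] > 0).astype(int)
        return signal + offset, y

    X_train, y_train = make_data(2000)           # training environment
    X_same, y_same = make_data(500)              # same environment
    X_new, y_new = make_data(500, offset=2.0)    # shifted environment

    model = LogisticRegression().fit(X_train, y_train)
    print(f"same-environment accuracy:    {model.score(X_same, y_same):.2f}")
    print(f"shifted-environment accuracy: {model.score(X_new, y_new):.2f}")

The second score falls sharply toward chance: the learned decision boundary is still valid for the signal, but not for what the instrument now reports. Surfacing exactly this kind of mismatch before deployment is one job explainability tools can do.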

What’s next for you and your work with DNN in the AI mission?

  • In collaboration with others, mostly from PNNL, I am finishing the report for the AI workshop on domain-aware methods, which will be a helpful resource for the proliferation detection community. After that, an upcoming workshop on next-generation AI will complete the “Next-Gen AI for Proliferation Detection” series that began with the explainability workshop. It has been enlightening to better understand the nuances of AI applications, and I feel much better prepared to apply AI in my future research and to avoid the pitfalls of naïve use.

NGFP aims to provide current and recent graduate students like yourself with leadership, training, and networking experiences. While much of this year has been remote, what types of opportunities have you had?

  • While I haven’t been able to do much in person, I have been able to access and learn about most of the research funded by our office and NNSA. This includes participating in calls with researchers throughout the national laboratories and learning about our various projects. Being involved in the funding processes has been eye-opening. I have also been able to support larger efforts, such as these AI workshops, which let me help create deliverables with a tangible impact on the research going on throughout the complex. Finally, the virtual environment enabled me to connect with more people than I likely could have otherwise. These connections have been some of the most useful and enjoyable things to come from the fellowship.

Lastly, what’s next after the NNSA fellowship?

  • I just accepted an applied scientist position at Lawrence Livermore National Laboratory, so my next step is to take everything I have learned this year and apply it to my future research! I am excited to return to research with a greater understanding of NNSA and U.S. Department of Energy funding processes and operations. This year reaffirmed my desire to do research and equipped me with the knowledge to be a far more effective researcher and scientist in the national security mission space and beyond.

If you are a current or recent graduate student interested in pursuing a nuclear security career, NGFP is currently accepting applications for fellowships beginning in June 2022. To learn more and apply online, visit http://www.pnnl.gov/projects/ngfp.

###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://www.energy.gov/science/. For more information on PNNL, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.
