Decoding the ‘Black Box’ of AI to Tackle National Security Concerns

(Second in a series of four articles about explainable AI at PNNL; see the first article)

For all the promise artificial intelligence holds for addressing serious issues, discussion of the topic often starts with talk about animals. Cats and dogs are the most popular. Maybe that’s because pretty much everyone knows about cats and dogs; they offer an easy entry point into heady discussions about neural networks, natural language processing and the nature of intelligence.

At Pacific Northwest National Laboratory, Tom Grimes begins the conversation about explainable AI by talking about wolves and huskies. 

A few years back, scientists built a program to sort pictures of huskies from pictures of wolves, and the system seemed to learn to tell the two apart. But when the scientists tested it on a fresh batch of photos, the program failed miserably. Why?

It turned out that most of the photos of huskies had been taken indoors and most of the photos of wolves had been taken outdoors. The program had not learned to tell huskies from wolves; it had learned to sort photos by their backgrounds, separating indoors from outdoors.
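
Explainability probes are what make this kind of shortcut visible. The sketch below is a rough illustration rather than the study’s actual method: it shows one common probe, occlusion sensitivity, which blanks out one patch of an image at a time and watches how the prediction moves. The toy image and the “shortcut” classifier are invented for the example.

```python
# Occlusion-sensitivity sketch: if the score only moves when the background is
# covered, the model is keying on the background, not the animal.
# Everything here is illustrative, not the study's model or data.
import numpy as np

def occlusion_map(predict, image, patch=16):
    """Score drop caused by blanking each patch; big drops mark the
    regions the classifier actually relies on."""
    h, w, _ = image.shape
    base = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch, :] = 0.0
            heat[i // patch, j // patch] = base - predict(masked)
    return heat

# Toy photo: bright "snow" across the top half, a darker "animal" lower down.
img = np.zeros((64, 64, 3))
img[:32] = 0.9
img[40:60, 24:40] = 0.6

# Stand-in classifier that has learned the shortcut: its "wolf score" is just
# the brightness of the upper (background) half of the picture.
def shortcut_predict(image):
    return float(image[:32].mean())

print(occlusion_map(shortcut_predict, img).round(3))
# Only the top rows of the map are nonzero: the decision rests on the
# background, exactly the husky/wolf trap described above.
```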

That’s the type of mistake that cannot be allowed to happen when it comes to national security.

Exploring the basis of AI insights

When the discussion involves the detection of nuclear explosions or the movement of materials that endanger the nation’s security, scientists, policy makers and others demand to know the basis of AI-based insights. Explainable AI—understanding and explaining the reasoning behind AI decisions—is a growing priority for national security specialists. The U.S. Department of Energy’s National Nuclear Security Administration and its Office of Defense Nuclear Nonproliferation Research and Development are supporting a team of PNNL researchers that is developing next-generation AI expertise in this critical space.

Whether in national security, finance or health, a decision handed down without explanation by a cold silicon box is no more palatable than one handed down without explanation by a closed group of executives. The deeper understanding that explainable AI provides is important for moving projects forward; it’s what led researchers, for instance, to pinpoint the problem in the husky-and-wolf example above.

“Oftentimes, we can’t say exactly why a system makes a certain decision, though we do know that it’s been correct 99 times out of 100. We want to know exactly why it has made all those correct decisions: what were the factors, and how were they weighted? That understanding makes the decisions much more trustworthy,” said Mark Greaves, a PNNL scientist involved in the laboratory’s explainable AI efforts.

Think of PNNL scientist Emily Mace, who spends her days combing through thousands of signals, searching for the critical few that could indicate potential nuclear activity. Hard-wired into her neurons, in a thought process that is hard to replicate artificially, are the features she uses to prioritize which signals to inspect more closely. Her knowledge of traits like pulse shape, timing, duration and place of origin equips her to decide whether signals come from cosmic rays, stray electrical noise, radon or an unknown radioactive source. (Mace has just undertaken a three-year project, also funded by NNSA’s Office of Defense Nuclear Nonproliferation Research and Development, to enhance this work.)
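
To make the idea concrete, here is a deliberately simplified sketch of feature-based triage over the kinds of traits described above. The feature names, thresholds and labels are invented for illustration; they are not Mace’s actual criteria, which are far subtler.

```python
# Toy triage rule: a coarse first pass so a human can focus on the critical
# few signals. All features, thresholds and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class Pulse:
    rise_time_ns: float         # crude stand-in for pulse shape
    duration_us: float
    coincident_with_veto: bool  # arrived together with a cosmic-ray veto hit
    matches_radon_line: bool    # energy consistent with a known radon daughter

def triage(p: Pulse) -> str:
    """Return a coarse label; anything unexplained gets flagged for a person."""
    if p.coincident_with_veto:
        return "likely cosmic ray"
    if p.duration_us < 0.01 or p.rise_time_ns < 1:
        return "likely electrical noise"
    if p.matches_radon_line:
        return "likely radon background"
    return "flag for closer inspection"

print(triage(Pulse(rise_time_ns=50, duration_us=2.0,
                   coincident_with_veto=False, matches_radon_line=False)))
# -> "flag for closer inspection"
```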

Such deep analysis is readily available as part of the national security mission at PNNL. The events of interest might not be common, but the understanding of them is deep.

“In the national security space, we often find ourselves in situations where we don’t have the data we’d like to have to solve the problem,” said Grimes, who is working with colleagues Greaves, Luke Erickson and Kate Gibb. “Instead of relying on techniques developed for situations where data are abundant and the training environment is a good match for the test environment, we need to adjust our network designs to entice the network to ignore the background and latch onto the signal. Similar to the wolf and husky example, we have to make sure the network is using the right aspects of the data to make its decisions. Just as important, we then need to verify that it has done so. This is where explainability tools are invaluable.”

“You want to trust this network; you need to trust this network,” said Grimes. “It would be ideal if you could train networks in such a manner that they always predicted correctly and always used the correct criteria to make sound decisions. Unfortunately, we can’t assume that. We need to understand exactly how they arrive at their decisions.”
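
One common way to do that verification is an attribution check such as gradient saliency: ask which parts of the input most influence the network’s score. The sketch below, written with PyTorch purely for illustration, trains a small stand-in network on synthetic “pulse on noisy background” data and then inspects its saliency. The architecture, the data and the numbers are invented and say nothing about the actual detectors or models involved.

```python
# Gradient-saliency sketch: verify that a classifier reads the signal region
# rather than the background. Everything here is a synthetic stand-in.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: class 1 carries a small pulse in positions 40-59.
def make_batch(n=256):
    x = torch.randn(n, 100) * 0.5          # background noise everywhere
    y = torch.randint(0, 2, (n,))
    x[y == 1, 40:60] += 1.0                # the real signal
    return x, y

net = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                       # quick training loop
    xb, yb = make_batch()
    opt.zero_grad()
    loss_fn(net(xb), yb).backward()
    opt.step()

# Saliency check on a known "signal present" example: the gradient of the
# class-1 score with respect to the input shows which positions drive it.
x = torch.randn(1, 100) * 0.5
x[0, 40:60] += 1.0
x.requires_grad_(True)
net(x)[0, 1].backward()
top = x.grad.abs().squeeze().topk(10).indices.sort().values.tolist()
print("most influential input positions:", top)
# If these cluster inside 40-59, the network is reading the pulse rather than
# the background noise; that is the verification step Grimes describes.
```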

Grimes’ work on developing capabilities to explain AI has led to new solutions to a challenge in national security: how to squeeze reliable conclusions out of slim and incomplete data.

# # #

(Coming next: Sidestepping the thin-data problem)
