Scientists are developing new techniques to make the most of limited data in the national security space, using explainable artificial intelligence to extract more meaning from the information in hand.
Decoding the ‘Black Box’ of AI to Tackle National Security Concerns
Cats and dogs. Huskies and wolves. While AI research sometimes seems dominated by talk about animals, these discussions are essential for understanding how AI systems reach their decisions. This "explainable AI" research is critical for many domains, including the detection of nuclear explosions and the movement of materials that endanger the nation's security.
Explainable AI: A Must for Nuclear Nonproliferation, National Security
Understanding the choices and recommendations of artificial intelligence systems is crucial, especially when the stakes are high, as they are with national security issues such as nuclear nonproliferation. A PNNL team is using explainable AI to improve the effectiveness of AI systems.