An analysis of nearly 10,000 pregnancies has uncovered previously unidentified combinations of risk factors linked to serious adverse pregnancy outcomes, revealing up to a tenfold difference in risk among infants who are treated identically under current clinical guidelines.
Sidestepping the Thin Data Problem in National Security
Scientists are developing new techniques to make the most of limited data in the national security domain, using explainable artificial intelligence to extract more meaning from the information at hand.
Decoding the ‘Black Box’ of AI to Tackle National Security Concerns
Cats and dogs. Huskies and wolves. While AI research sometimes seems dominated by talk of animals, these discussions are essential to understanding how AI systems make decisions. That makes this “explainable AI” research critical for many domains, including detecting nuclear explosions and tracking the movement of materials that endanger national security.
Explainable AI: A Must for Nuclear Nonproliferation, National Security
Understanding the choices and recommendations of artificial intelligence systems is crucial, especially when the stakes are high, as they are with national security issues like nuclear nonproliferation. A PNNL team is using explainable AI to improve the effectiveness of these systems.