AI-based Pregnancy Analysis Discovers Previously Unknown Warning Signs for Stillbirth and Newborn Complications

A new AI-based analysis of almost 10,000 pregnancies has discovered previously unidentified combinations of risk factors linked to serious negative pregnancy outcomes, including stillbirth.

The study also found that there may be up to a tenfold difference in risk for infants who are currently treated identically under clinical guidelines.

Nathan Blue, MD, the senior author on the study, says that the AI model the researchers generated helped identify a “really unexpected” combination of factors associated with higher risk, and is an important step toward more personalized risk assessment and pregnancy care.

The new results are published in BMC Pregnancy and Childbirth.

Unexpected risks

The researchers started with an existing dataset of 9,558 pregnancies nationwide, which included information on social and physical characteristics ranging from pregnant people’s level of social support to their blood pressure, medical history, and fetal weight, as well as the outcome of each pregnancy. By using AI to look for patterns in the data, they identified new combinations of maternal and fetal characteristics that were linked to unhealthy pregnancy outcomes such as stillbirth.
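In broad strokes, this kind of pattern search can be framed as fitting a supervised model to tabular cohort data and then probing what it learned. The sketch below is a minimal illustration under assumed, hypothetical file and column names (pregnancy_cohort.csv, adverse_outcome, and so on); it is not the study’s actual pipeline or model.

```python
# Minimal sketch: fit a model on tabular pregnancy data to predict a
# binary adverse-outcome label. All file and column names here are
# hypothetical placeholders, not the study's actual data or pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("pregnancy_cohort.csv")  # hypothetical cohort file

features = ["maternal_age", "blood_pressure", "preexisting_diabetes",
            "fetal_sex", "fetal_weight_percentile", "social_support_score"]
X = pd.get_dummies(df[features], columns=["fetal_sex"])  # encode categorical
y = df["adverse_outcome"]  # 1 = stillbirth or other serious complication

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```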

Usually, female fetuses are at slightly lower risk for complications than male fetuses—a small but well-established effect. But the research team found that if a pregnant person has pre-existing diabetes, female fetuses are at higher risk than males.

This previously undetected pattern shows that the AI model can help researchers learn new things about pregnancy health, says Blue, an assistant professor of obstetrics and gynecology in the Spencer Fox Eccles School of Medicine at the University of Utah. “It detected something that could be used to inform risk that not even the really flexible, experienced clinician brain was recognizing,” Blue says.

The researchers were especially interested in developing better risk estimates for fetuses in the bottom 10% for weight but not the bottom 3%: babies small enough to be concerning, yet large enough that they are usually perfectly healthy. Figuring out the best course of action in these cases is challenging: will the pregnancy need intensive monitoring and potentially early delivery, or can it proceed largely as normal? Current clinical guidelines advise intensive medical monitoring for all such pregnancies, which can represent a significant emotional and financial burden.

But the researchers found that within this fetal weight class, the risk of an unhealthy pregnancy outcome varied widely, from no riskier than an average pregnancy to nearly ten times the average risk, based on a combination of factors such as fetal sex, presence or absence of pre-existing diabetes, and presence or absence of a fetal anomaly such as a heart defect.
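For readers who want the arithmetic behind a claim like “nearly ten times the average risk”: a risk ratio is simply a subgroup’s rate of adverse outcomes divided by the cohort-wide rate. The sketch below, reusing the hypothetical columns from the earlier example, shows how such compound risks can be tabulated; any numbers it produced would be illustrative, not the study’s results.

```python
# Illustrative only: express each subgroup's "compound risk" within the
# 3rd-10th percentile weight range as a ratio to the cohort baseline.
# Data frame and column names are hypothetical, not study data.
import pandas as pd

df = pd.read_csv("pregnancy_cohort.csv")  # hypothetical cohort file
baseline_risk = df["adverse_outcome"].mean()  # cohort-wide event rate

small = df[df["fetal_weight_percentile"].between(3, 10)]
subgroups = small.groupby(["fetal_sex", "preexisting_diabetes", "fetal_anomaly"])
risk_ratio = subgroups["adverse_outcome"].mean() / baseline_risk
print(risk_ratio.sort_values())  # could span ~1x to ~10x baseline
```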

Blue emphasizes that the study only detected correlations between variables and doesn’t provide information on what actually causes negative outcomes.

The wide range of risk is backed up by physician intuition, Blue says; experienced doctors are aware that many low-weight fetuses are healthy, and will use many additional factors to make individualized judgment calls about risk and treatment. But an AI risk-assessment tool could provide important advantages over such “gut checks,” helping doctors make recommendations that are informed, reproducible, and fair.


Why AI

For humans or AI models, estimating pregnancy risks involves taking a very large number of variables into account, from maternal health to ultrasound data. Experienced clinicians can weigh all these variables to make individualized care decisions, but even the best doctors probably wouldn’t be able to quantify exactly how they arrived at their final decision. Human factors like bias, mood, or sleep deprivation almost inevitably creep into the mix and can subtly skew judgment calls away from ideal care.

To help address this problem, the researchers used a type of model called “explainable AI,” which gives the user not only the estimated risk for a given set of pregnancy factors but also information on which variables contributed to that estimate, and by how much. Unlike the more familiar “closed box” AI, which is largely impenetrable even to experts, the explainable model “shows its work,” revealing sources of bias so they can be addressed.
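As a concrete illustration of what “showing its work” can look like, the sketch below uses SHAP values, a widely used attribution technique, to list how much each feature pushed a single prediction up or down. It reuses the hypothetical model and test data from the first sketch; SHAP is one common way to build explainable output, not necessarily the method the study used.

```python
# Sketch: per-prediction feature attributions via SHAP. `model` and
# `X_test` come from the earlier hypothetical sketch; this illustrates
# explainable-AI output in general, not the study's actual tooling.
import shap  # pip install shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # one row of contributions per case

# For one pregnancy, rank each feature's contribution to the model's
# output (in log-odds), largest effect first.
contributions = sorted(zip(X_test.columns, shap_values[0]),
                       key=lambda t: abs(t[1]), reverse=True)
for name, value in contributions:
    print(f"{name:>26s}: {value:+.3f}")
```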

Essentially, explainable AI approximates the flexibility of expert clinical judgment while avoiding its pitfalls. The researchers’ model is also especially well suited to judging risk for rare pregnancy scenarios, accurately estimating outcomes for people with unique combinations of risk factors. This means that this kind of tool could ultimately help personalize care by guiding informed decisions for people whose situations are one-of-a-kind.

The researchers still need to test and validate their model in new populations to make sure it can predict risk in real-world situations. But Blue is hopeful that an explainable AI-based model could ultimately help personalize risk assessment and treatment during pregnancy. “AI models can essentially estimate a risk that is specific to a given person’s context,” he says, “and they can do it transparently and reproducibly, which is what our brains can’t do.”

“This kind of ability would be transformational across our field,” he says.


###

Other University of Utah Health researchers on the study include first author Raquel Zimmerman; Edgar Hernandez, PhD; Mark Yandell, PhD; Martin Tristani-Firouzi, MD; and Robert Silver, MD.

These results were published in BMC Pregnancy and Childbirth as “AI-based analysis of fetal growth restriction in a prospective obstetric cohort quantifies compound risks for perinatal morbidity and mortality and identifies previously unrecognized high risk clinical scenarios.”

Research was funded by the One U Data Science Hub Seed Grant Program, R Baby Foundation, and the NICHD (award numbers U10 HD063020, U10 HD063037, U10 HD063041, U10 HD063046, U10 HD063047, U10 HD063048, U10 HD063053, U10 HD063072, 2K12 HD085816-07). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
