
AI system accurately detects key findings in chest X-rays of pneumonia patients within 10 seconds


From 20 minutes or more to 10 seconds.

Researchers from Intermountain Healthcare and Stanford University say 10 seconds is about how long it takes a new artificial intelligence system they studied to accurately identify key findings in chest X-rays of emergency department patients suspected of having pneumonia.

The study found that those ultra-quick findings may allow physicians reading X-rays to confirm a pneumonia diagnosis significantly faster than current clinical practice, so treatment can start sooner, which is vital for severely ill pneumonia patients.

Findings from the collaborative study will be presented at the European Respiratory Society’s International Congress 2019, held in Madrid, Spain, on Monday, Sept. 30, 2019.

Researchers from Intermountain and Stanford studied the CheXpert system, an automated chest X-ray interpretation model developed at Stanford University that uses artificial intelligence, by having it review X-ray images taken in emergency departments at Intermountain hospitals throughout Utah.

Upon review, researchers found the CheXpert system identified key findings in X-rays very accurately, with high agreement with a consensus of three radiologists, in about 10 seconds, significantly outperforming current clinical practice.

“CheXpert is going to be faster and as accurate as radiologists viewing the studies. It’s an exciting new way of thinking about diagnosing and treating patients to provide the very best care possible,” said Nathan C. Dean, MD, principal investigator of the study, and section chief of pulmonary and critical care medicine at Intermountain Medical Center in Salt Lake City.

The CheXpert model was developed by the Stanford Machine Learning Group, which used 188,000 chest imaging studies to create a model that can determine what is and is not pneumonia on an X-ray. These images were taken from the Stanford Medical Center in Palo Alto, Calif.

Since patient populations differ across geographic locations, CheXpert was then fine-tuned for Utah using an additional 6,973 images from Intermountain emergency departments.
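
For illustration only, the fine-tuning step described here might look roughly like the following sketch. It assumes a DenseNet-121 backbone of the kind commonly used for CheXpert-style chest X-ray models, a hypothetical folder of labeled images, and invented label names and hyperparameters; it is not the study's actual code.

    # Hypothetical sketch of fine-tuning a pretrained image model on local chest X-rays.
    # The backbone, folder path, label set, and hyperparameters are assumptions for illustration.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Pretrained backbone standing in for the CheXpert model weights (assumption).
    model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # e.g. pneumonia vs. no pneumonia (hypothetical)

    preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel; replicate to 3 channels
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Hypothetical local dataset of emergency department chest X-rays organized by label folder.
    train_set = datasets.ImageFolder("intermountain_cxr/train", transform=preprocess)
    loader = DataLoader(train_set, batch_size=16, shuffle=True)

    # Small learning rate so the pretrained features are only gently adjusted.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()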

“We’ve been developing a deep learning algorithm that can automatically detect pneumonia and related findings in chest X-rays,” said Jeremy Irvin, a PhD student at Stanford and a member of the research team. “In this initial study, we’ve demonstrated the algorithm’s potential by validating it on patients in the emergency departments at Intermountain Healthcare. Our hope is that the algorithm can improve the quality of pneumonia care at Intermountain, from improving diagnostic accuracy to reducing time to diagnosis.”

In a typical emergency department, Dr. Dean explained, patients suspected of having pneumonia get a chest X-ray. While creating those images is a quick process, having them read can be time consuming, since those X-rays go into a queue with other images waiting to be interpreted by radiologists. That process can take 20 minutes or more, which means potential delays in starting antibiotics for very sick pneumonia patients.

At Intermountain emergency departments, radiology reports are run through the Cerner Natural Language Processing (NLP) tool, which is currently used to extract needed information from the radiologist's report. NLP then feeds that information into ePNa, an electronic clinical decision support tool that is part of usual pneumonia care at Intermountain.

In most emergency departments, where ePNa is not available, the CheXpert model could provide the information from chest X-rays directly to clinicians, said Dr. Dean.

“Using the CheXpert system, we found the interpretation time was very swift and the accuracy of the report to be very high,” he added.

For the study, Intermountain radiologists categorized chest images from 461 Intermountain patients as being “likely,” “likely-uncertain,” “unlikely-uncertain,” or “unlikely” to have pneumonia. They also identified images they believed showed pneumonia in multiple parts of the lungs, and whether these patients had parapneumonic effusion, which is fluid build-up between the lungs and chest cavity.

The radiologists differed from each other in their categorizations in more than half of the patients, as has been commonly shown in prior studies. The CheXpert model's performance on the same images was comparable to that of the radiologists.
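
As a purely illustrative aside, the kind of agreement comparison described above can be summarized along the following lines; the category codes and sample reads are invented for demonstration and are not the study's data.

    # Toy illustration of summarizing reader disagreement and model-versus-consensus agreement.
    # The reads below are fabricated examples, not data from the study.
    from collections import Counter

    # Hypothetical categorizations for a handful of studies: three radiologists each, plus the model.
    radiologist_reads = [
        ("likely", "likely", "likely-uncertain"),
        ("unlikely", "unlikely-uncertain", "unlikely"),
        ("likely", "likely", "likely"),
    ]
    model_reads = ["likely", "unlikely", "likely"]

    def consensus(reads):
        """Majority label among the radiologists for one study."""
        label, _ = Counter(reads).most_common(1)[0]
        return label

    disagreements = sum(len(set(reads)) > 1 for reads in radiologist_reads)
    matches = sum(model == consensus(reads)
                  for model, reads in zip(model_reads, radiologist_reads))

    print(f"Studies where the radiologists differed: {disagreements} of {len(radiologist_reads)}")
    print(f"Model agreed with the radiologist consensus on {matches} of {len(model_reads)} studies")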

Researchers found that the CheXpert model outperformed the current system, in which a radiologist creates a report that is then processed by NLP, for all key pneumonia findings. It also did so in less than 10 seconds, compared with the 20 minutes to several hours the current process can take. NLP processing of radiology reports was the most frequent cause of errors within ePNa.

“A 2013 study published in JAMA Internal Medicine found that 59 percent of errors made by ePNa were due to NLP processing of radiologist reports, so we’re eager to replace it with a better, faster system,” Dr. Dean said.

Beyond ePNa concerns, emergency department physicians looking at radiology reports are often challenged to understand the unstructured language radiologists use in interpreting shadows on chest X-rays, Dr. Dean added.

The next step, he said, is for the CheXpert model to be used live in emergency departments, which he expects to happen in select Intermountain Healthcare hospitals this fall.

###

The Intermountain Research and Medical Foundation funded this research.

This information is sourced from https://www.eurekalert.org/pub_releases/2019-09/imc-asa093019.php

Jess C. Gomez
801-718-8495
jess.gomez@imail.org
http://www.ihc.com