UNIVERSITY PARK, Pa. — A new tool created by researchers at Penn State and Houston Methodist Hospital could diagnose a stroke based on abnormalities in a patient’s speech and facial muscle movements, with the accuracy of an emergency room physician, all within minutes of an interaction with a smartphone.
“When a patient experiences symptoms of a stroke, every minute counts,” said James Wang, professor of information sciences and technology at Penn State. “But when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist — a specialist who may not be immediately available — to perform clinical diagnostic tests.”
Wang and his colleagues have developed a machine learning model to aid in, and potentially speed up, the diagnostic process by physicians in a clinical setting.
“Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan,” said Wang. “We are trying to simulate or emulate this process by using our machine learning approach.”
The team’s novel approach is the first to analyze the presence of stroke in actual emergency room patients with suspected stroke, using computational facial motion analysis and natural language processing to identify abnormalities in a patient’s face or voice, such as a drooping cheek or slurred speech.
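The paper (cited below) frames this as multimodal deep learning. As a rough illustration only, and not the authors’ actual architecture, the following sketch shows one common way such a two-branch model can be wired up in PyTorch: one branch summarizes a sequence of facial-motion features, another encodes speech-derived features, and a fusion head produces a stroke-probability score. All layer sizes, feature dimensions, and names are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation) of a two-branch
# multimodal classifier of the kind described above. All dimensions and
# feature choices are illustrative assumptions.
import torch
import torch.nn as nn


class MultimodalStrokeClassifier(nn.Module):
    def __init__(self, face_feat_dim=68 * 2, speech_feat_dim=128, hidden_dim=64):
        super().__init__()
        # Facial branch: a GRU summarizes a sequence of per-frame
        # facial-landmark/motion features into a single vector.
        self.face_encoder = nn.GRU(face_feat_dim, hidden_dim, batch_first=True)
        # Speech/language branch: a small MLP over a fixed-length
        # acoustic or text-derived feature vector.
        self.speech_encoder = nn.Sequential(
            nn.Linear(speech_feat_dim, hidden_dim),
            nn.ReLU(),
        )
        # Fusion head: concatenate the two modality embeddings and
        # predict a single stroke-probability logit.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * 2, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, face_seq, speech_feats):
        # face_seq: (batch, time, face_feat_dim); speech_feats: (batch, speech_feat_dim)
        _, face_h = self.face_encoder(face_seq)              # (1, batch, hidden_dim)
        fused = torch.cat([face_h.squeeze(0),
                           self.speech_encoder(speech_feats)], dim=-1)
        return self.classifier(fused)                        # raw logit per patient


if __name__ == "__main__":
    model = MultimodalStrokeClassifier()
    face_seq = torch.randn(4, 120, 68 * 2)    # 4 patients, 120 video frames each
    speech_feats = torch.randn(4, 128)        # 4 pooled speech feature vectors
    prob = torch.sigmoid(model(face_seq, speech_feats))
    print(prob.shape)                         # torch.Size([4, 1])
```

In a design like this, a probability near 1 would flag a likely stroke for the physician to act on; the actual model, training procedure, and features used by the Penn State and Houston Methodist team are described in their paper rather than here.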
The results could help emergency room physicians more quickly determine critical next steps for the patient. Ultimately, the application could be used by caregivers or patients to make self-assessments before reaching the hospital.
“This is one of the first works that is enabling AI to help with stroke diagnosis in emergency settings,” added Sharon Huang, associate professor of information sciences and technology at Penn State.
To train the computer model, the researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to perform a speech test, recorded on an Apple iPhone, that allowed the researchers to analyze their speech and cognitive communication.
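The release does not specify how the recordings were processed. As a hedged illustration of how a recorded speech test might be turned into a fixed-length input for a model like the sketch above, the snippet below pools MFCC features over time using librosa; the file name and feature choices are assumptions, not the team’s method.

```python
# An assumed, illustrative preprocessing step: convert a patient's recorded
# speech test into a fixed-length feature vector.
import numpy as np
import librosa


def speech_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    # Load the recording and resample to 16 kHz mono.
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    # Mel-frequency cepstral coefficients, one column per audio frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pool over time (mean and standard deviation) to get a fixed-length vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Example with a hypothetical file name:
# feats = speech_features("patient_speech_test.wav")  # shape: (26,)
```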
“The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment,” said Huang.
Testing the model on the Houston Methodist dataset, the researchers found that it achieved 79% accuracy, comparable to clinical diagnoses made by emergency room doctors, who use additional tests such as CT scans. However, the model could help save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.
“There are millions of neurons dying every minute during a stroke,” said John Volpi, a vascular neurologist and co-director of the Eddy Scurlock Stroke Center at Houston Methodist Hospital. “In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, a diagnosis can be delayed by hours, and by then a patient may not be eligible for the best possible treatments.”
“The earlier you can identify a stroke, the better options (we have) for the patients,” added Stephen T.C. Wong, John S. Dunn, Sr. Presidential Distinguished Chair in Biomedical Engineering at the Ting Tsung and Wei Fong Chao Center for BRAIN and Houston Methodist Cancer Center. “That’s what makes an early diagnosis essential.”
Volpi said that physicians currently use a binary approach toward diagnosing strokes: They either suspect a stroke, sending the patient for a series of scans that could involve radiation; or they do not suspect a stroke, potentially overlooking patients who may need further assessment.
“What we think in that triage moment is being either biased toward overutilization (of scans, which have risks and benefits) or underdiagnosis,” said Volpi, a co-author on the paper. “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit.”
He added, “We have great therapeutics, medicines and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”
Other collaborators on the project include Tongan Cai and Mingli Yu, graduate students working with Wang and Huang at Penn State; and Kelvin Wong, associate research professor of electronic engineering in oncology at Houston Methodist Hospital.
###
The team presented their paper, “Toward Rapid Stroke Diagnosis with Multimodal Deep Learning,” last week at the virtual 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI).
Penn State has also filed a provisional patent application jointly with Houston Methodist on the computer model.
This information is sourced from https://www.eurekalert.org/pub_releases/2020-10/ps-ntc102220.php