When working with people who face daily challenges related to autism, ADHD, learning difficulties or mental health issues, practitioners often rely on their professional judgment to determine whether behavior is improving following an intervention. But that judgment alone is not enough, according to the study.
“Unfortunately, experts often disagree when drawing conclusions based on behavioral data, which may lead to the premature interruption of an effective intervention or to the continuation of an ineffective treatment,” said lead author Marc Lanovaz, a researcher at the Institut universitaire en santé mentale de Montréal.
To find a better way, Lanovaz and colleagues at UdeM-affiliated Polytechnique Montréal and Manhattanville College in Purchase, N.Y., independently labeled more than 1,000 graphs and trained new decision models using machine learning.
The conclusions drawn by these models were then compared with those produced by the visual-aid tool most widely studied by researchers today.
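As a rough illustration of this kind of workflow, the sketch below simulates simple behavior-change graphs, trains a classifier on hand-crafted features, and compares its agreement with a rule-based visual aid. The simulated data, the feature set, the choice of a random-forest model and the dual-criteria-style rule are all illustrative assumptions, not the study's actual models, tool or dataset.

```python
# Hypothetical sketch: label simulated single-case graphs, train a classifier,
# and compare it to a simple rule-based visual aid. Not the study's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def simulate_graph(effective):
    """Simulate a simple AB graph: 10 baseline + 10 intervention points."""
    baseline = rng.normal(10, 2, 10)
    shift = -4 if effective else 0   # an effective intervention lowers the behavior
    intervention = rng.normal(10 + shift, 2, 10)
    return baseline, intervention

def features(baseline, intervention):
    """Hand-crafted features a model might use: level change, trend, overlap."""
    level_change = intervention.mean() - baseline.mean()
    trend = np.polyfit(range(len(intervention)), intervention, 1)[0]
    overlap = np.mean(intervention >= baseline.min())
    return [level_change, trend, overlap]

def visual_aid_rule(baseline, intervention):
    """Toy stand-in for a visual-aid rule: count intervention points falling
    below both the baseline mean and the projected baseline trend line."""
    mean_line = baseline.mean()
    slope, intercept = np.polyfit(range(len(baseline)), baseline, 1)
    projected = slope * np.arange(len(baseline), len(baseline) + len(intervention)) + intercept
    below_both = np.sum((intervention < mean_line) & (intervention < projected))
    return below_both >= 8   # conclude "effective" if most points fall below both lines

# Build a labeled dataset of 1,000 simulated graphs (labels alternate for simplicity).
graphs = [simulate_graph(effective=bool(i % 2)) for i in range(1000)]
X = np.array([features(b, t) for b, t in graphs])
y = np.array([i % 2 for i in range(1000)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("model accuracy on novel graphs:", model.score(X_test, y_test))
rule_preds = [visual_aid_rule(b, t) for b, t in graphs]
print("rule accuracy:", np.mean(np.array(rule_preds) == y))
```

In practice, the accuracy of each approach would be judged against expert labels rather than simulated ground truth, which is the comparison the study reports.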
“Although we always assumed that our models would perform well, we did not expect them to be so accurate,” said Lanovaz, an associate professor who heads the Applied Behavioral Research Lab at UdeM’s School of Psychoeducation.
“Not only did our models match the interpretation of experts more frequently than the most popular tool did, they also produced more accurate conclusions on novel data,” he said.
According to the authors, these models could eventually support practitioners in making better decisions about the effectiveness of their interventions.
“By improving decision-making, practitioners should more rapidly and accurately identify effective and ineffective behavioral interventions,” said Lanovaz. “Ultimately, we hope this change would translate to better tailored interventions for people with developmental disabilities, mental health issues or learning difficulties.”