A RUDN scientist and colleagues have identified neural networks that will help doctors interpret the results of an electroencephalogram (EEG) and other tests of brain activity. The best of them reaches an accuracy of almost 93%, and it not only gives a result but also explains why it turned out the way it did. The results are published in Mathematics.
One of the key stages in diagnosing brain pathologies is neuroimaging: the visualization of brain activity and brain tissue using CT, X-ray, electroencephalography (EEG), and other methods. The results of such analyses are interpreted by specially trained professionals, but even an experienced eye cannot always draw the right conclusion. Artificial intelligence can help with interpretation. Since we are talking about a doctor-computer tandem, not about replacing a person with artificial intelligence, models are needed that not only produce a result but can also “explain” why it turned out that way. This property is called interpretability. A RUDN researcher, together with colleagues from the Baltic Federal University, selected the best models suited for this purpose.
“Artificial intelligence in the analysis of biological and medical data is an important and actively researched area. This also applies to the analysis of medical images. One of the central issues here is interpretability. This is important for decision-making systems, where a medical worker must understand and interpret the result produced by artificial intelligence. Therefore, it is important to develop neuroimaging approaches that are interpretable. Our goal was to find a good mathematical model for classifying brain states with an emphasis on the interpretability of the results,” said Alexander Hramov, Doctor of Science in Physics and Mathematics, Leading Researcher at the RUDN University Department of Transport, Chief Researcher at the Baltic Center for Neurotechnologies and Artificial Intelligence, Immanuel Kant Baltic Federal University.
To find the best models, the researchers used EEG data recorded from participants as they looked at different images. The first was the famous Mona Lisa painting; the second was the Necker cube, an optical illusion depicting a simple wireframe drawing of a cube. The drawing gives no indication of which faces are in front and which are behind. A person usually resolves this ambiguity without noticing it and interprets the picture unambiguously, but for a computer the task is not so simple. The Necker cube is therefore used to test computer models of the human perceptual system. Five people took part in the experiment. From the EEG recordings, the neural network had to determine the brightness of the image the person was seeing. In addition, using a special algorithm, the neural network identified the specific features that influenced the model's final decision.
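The article does not spell out which attribution algorithm was used. As an illustration only, one common way to expose the features behind a network's decision is gradient-based saliency, which scores each input value by how strongly it affects the output. A minimal PyTorch sketch follows; the model and the (channels, time) EEG input layout are assumptions, not details from the paper:

    import torch

    def saliency_map(model, eeg_sample):
        # eeg_sample: a (channels, time) tensor; this layout is an assumption.
        x = eeg_sample.clone().requires_grad_(True)
        score = model(x.unsqueeze(0)).max()  # score of the top class
        score.backward()
        # The gradient magnitude estimates how much each EEG channel and
        # time point influenced the network's decision.
        return x.grad.abs()

Features with large saliency values are the ones a clinician would inspect to understand why the model classified a recording the way it did.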
The researchers compared several artificial neural network models. The best turned out to be the model trained with the so-called adaptive gradient algorithm (Adagrad). This optimization method tunes the network by giving each parameter its own learning rate, based on how frequently the corresponding feature occurs: rarely seen features receive larger updates. A neural network trained with the adaptive gradient reached an accuracy of 92.9%.
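For illustration, here is a minimal NumPy sketch of the standard Adagrad update rule described above (the function and variable names are ours, not from the paper):

    import numpy as np

    def adagrad_step(params, grads, cache, lr=0.01, eps=1e-8):
        # Accumulate the squared gradient for every parameter.
        cache += grads ** 2
        # Parameters tied to frequent features build up a large cache and
        # take smaller steps; rare features keep a larger effective rate.
        params -= lr * grads / (np.sqrt(cache) + eps)
        return params, cache

Dividing each step by the accumulated gradient history is what makes the learning rate adaptive per feature.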
“Adagrad turned out to be the best optimization method. Our results will help in selecting suitable explainable machine learning methods for the correct training of brain-computer interfaces,” said Alexander Hramov, Doctor of Science in Physics and Mathematics, Leading Researcher at the Department of Transport of the RUDN University, Chief Researcher at the Baltic Center for Neurotechnologies and Artificial Intelligence of Immanuel Kant Baltic Federal University.