Past research has shown that computer modeling can be used to decode and reconstruct speech, but a predictive model for music, one that covers elements such as pitch, melody, harmony, and rhythm, as well as the different regions of the brain's sound-processing network, was lacking. The team at UC Berkeley built such a model by applying nonlinear decoding to brain activity recorded from 2,668 electrodes placed directly on the brains of 29 patients as they listened to classic rock. Activity at 347 of those electrodes was specifically related to the music; these electrodes were mostly located in three regions of the brain: the superior temporal gyrus (STG), the sensory-motor cortex (SMC), and the inferior frontal gyrus (IFG).
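To give a sense of what "nonlinear decoding" means in practice, the following is a minimal sketch in Python: a small neural network learns to map electrode activity at each time bin to the corresponding bins of an audio spectrogram, which can then be compared with the real song. The data shapes, the features, and the specific model (a multilayer perceptron from scikit-learn) are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative nonlinear decoding: predict audio spectrogram bins from
# electrode activity. All data here is synthetic; shapes and the model
# architecture are assumptions for the sake of the example.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_timepoints = 2000   # time bins across the song (assumed)
n_electrodes = 347    # music-responsive electrodes reported in the study
n_freq_bins = 32      # spectrogram frequency bins of the target audio (assumed)

# Stand-ins for per-electrode activity and the song's spectrogram.
X = rng.standard_normal((n_timepoints, n_electrodes))
y = rng.standard_normal((n_timepoints, n_freq_bins))

# Hold out the last 20% of time bins for evaluation (no shuffling,
# since the data is a time series).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

# A multilayer perceptron is one simple nonlinear decoder; the paper's
# exact model may differ.
decoder = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
decoder.fit(X_train, y_train)

reconstruction = decoder.predict(X_test)

# Reconstruction accuracy as the correlation between predicted and actual
# spectrograms, averaged over frequency bins.
corrs = [
    np.corrcoef(reconstruction[:, f], y_test[:, f])[0, 1]
    for f in range(n_freq_bins)
]
print(f"mean reconstruction correlation: {np.mean(corrs):.3f}")
```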
Analysis of song elements revealed a unique region in the STG that represents rhythm, in this case the guitar rhythm in the rock music. To find out which regions and which song elements were most important, the team reran the reconstruction analysis after removing different sets of electrodes and then compared the resulting reconstructions with the real song. Anatomically, they found that reconstructions were most affected when electrodes from the right STG were removed. Functionally, removing the electrodes related to sound onset or rhythm also degraded reconstruction accuracy, indicating their importance in music perception. These findings could have implications for brain-machine interfaces, such as prosthetic devices that help improve the perception of prosody, the rhythm and melody of speech.
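The logic of this "remove and compare" step can be sketched in a few lines: refit the decoder with one group of electrodes left out (for example, those over the right STG or those tuned to rhythm) and measure how much reconstruction accuracy drops relative to using all electrodes. The electrode groupings below are hypothetical placeholders, and the data is synthetic; the point is only to show the shape of the ablation analysis, not the study's real implementation.

```python
# Sketch of an electrode-ablation analysis: remove a group of electrodes,
# refit the decoder, and compare reconstruction accuracy to the baseline.
# Groupings and data are hypothetical stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_timepoints, n_electrodes, n_freq_bins = 2000, 347, 32
X = rng.standard_normal((n_timepoints, n_electrodes))
y = rng.standard_normal((n_timepoints, n_freq_bins))

# Hypothetical index sets for anatomical / functional electrode groups.
groups = {
    "right_STG": np.arange(0, 60),
    "rhythm_tuned": np.arange(60, 90),
    "random_control": rng.choice(n_electrodes, 60, replace=False),
}

def reconstruction_score(X, y):
    """Fit a decoder on the first 80% of time bins, score on the rest."""
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=200, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    corrs = [
        np.corrcoef(pred[:, f], y[split:, f])[0, 1]
        for f in range(y.shape[1])
    ]
    return float(np.mean(corrs))

baseline = reconstruction_score(X, y)
for name, idx in groups.items():
    kept = np.setdiff1d(np.arange(n_electrodes), idx)
    score = reconstruction_score(X[:, kept], y)
    print(f"removing {name}: accuracy drop = {baseline - score:.3f}")
```

In the study, the groups whose removal caused the largest drops (right-STG electrodes anatomically; onset- and rhythm-related electrodes functionally) were interpreted as the most important for music perception.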
Bellier adds, “We reconstructed the classic Pink Floyd song Another Brick in the Wall from direct human cortical recordings, providing insights into the neural bases of music perception and into future brain decoding applications.”