CHICAGO, Sept. 28 (Xinhua) -- Researchers at Northwestern University (NU) Medicine and the Weinberg College of Arts and Sciences have moved closer to creating speech brain-machine interfaces by unlocking new information about how the brain encodes speech.
To do this, the NU researchers recorded brain signals from the cortical surface using electrodes placed on the brains of patients undergoing surgery to remove brain tumors. Because the patients had to be awake during the surgery, the researchers asked them to read words from a screen.
After the surgery, the researchers marked the times at which the patients produced phonemes and gestures. They then used the brain signals recorded from each cortical area to decode which phonemes and gestures had been produced, and measured the decoding accuracy. Signals from the precentral cortex decoded gestures more accurately than phonemes, while signals from the inferior frontal cortex decoded both equally well.
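The decoding-accuracy analysis described above can be sketched in miniature. The code below is a hypothetical illustration, not the study's actual pipeline: it generates synthetic "neural features" for two classes of speech events (standing in for, say, two gestures), trains a simple nearest-centroid decoder, and reports the fraction of held-out trials classified correctly, which is the decoding accuracy the researchers compared across cortical areas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumption, for illustration only): 40 trials per class,
# 8 neural features per trial, each class drawn around its own mean pattern.
n_trials, n_features = 40, 8
class_means = rng.normal(0.0, 1.0, size=(2, n_features))
X = np.vstack([rng.normal(class_means[c], 0.5, size=(n_trials, n_features))
               for c in (0, 1)])
y = np.repeat([0, 1], n_trials)

# Shuffle and split trials into a training set and a held-out test set.
idx = rng.permutation(len(y))
train, test = idx[:60], idx[60:]

# Nearest-centroid decoder: assign each test trial to the class whose
# mean training pattern (centroid) is closest in feature space.
centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

# Decoding accuracy: fraction of held-out trials decoded correctly.
accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Comparing this accuracy between signals recorded from different cortical areas, and between phoneme labels and gesture labels, is what lets one say an area carries more information about one representation than the other.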
"This can help us build better speech decoders for brain-machine interfaces (BMIs), which will move us closer to our goal of helping people who are locked in speak again," said lead author Marc Slutzky, associate professor of neurology and of physiology at NU Feinberg School of Medicine and an NU neurologist.
As a next step, the researchers plan to develop an algorithm for brain-machine interfaces that not only decodes gestures but also combines the decoded gestures into words.
The study was published Sept. 26 in the Journal of Neuroscience.