WASHINGTON, Jan. 29 (Xinhua) -- American engineers developed a system that can translate brain signals into intelligible speech, a breakthrough that may help those who cannot speak to communicate with the outside world.
The study, published on Tuesday in the journal Scientific Reports, showed that by monitoring one's brain activity, an AI-enabled technology can reconstruct words a person hears with unprecedented clarity.
Neuroscientists from Columbia University trained a vocoder, a computer algorithm that synthesizes speech, on the brain activity of epilepsy patients who were already undergoing brain surgery while those patients listened to sentences spoken by different people.
The patients then listened to speakers reading digits from zero to nine while their brain signals were recorded and fed into the vocoder.
A neural network, a type of artificial intelligence, then analyzed and cleaned up the sounds produced by the vocoder, yielding a robotic-sounding voice reciting the digits, according to the study.
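In broad strokes, the approach maps recorded neural features to the spectral parameters that drive a speech synthesizer. The study itself used deep neural networks on intracranial recordings; the sketch below is only a hypothetical illustration of that signal-to-speech regression idea, using synthetic data and a simple closed-form linear fit in place of a trained network.

```python
import numpy as np

# Hypothetical sketch: learn a linear decoding map from simulated
# "brain signal" features to vocoder-style spectral frames. All sizes
# and data here are made up for illustration; the actual study used
# deep neural networks on real intracranial recordings.

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_spectral = 500, 16, 8

# Simulated neural features and a hidden ground-truth mapping.
X = rng.normal(size=(n_frames, n_electrodes))
true_W = rng.normal(size=(n_electrodes, n_spectral))
Y = X @ true_W + 0.05 * rng.normal(size=(n_frames, n_spectral))

# Ridge regression: closed-form fit of the decoding matrix.
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# Reconstructed spectral frames -- in the real system, frames like
# these would drive the vocoder to produce audible speech.
Y_hat = X @ W
err = np.mean((Y - Y_hat) ** 2)
print(f"mean squared reconstruction error: {err:.4f}")
```

The reconstruction error here only reflects the toy data; in practice, decoding quality is judged by how intelligible the synthesized speech is to human listeners, as in the 75 percent figure reported by the authors.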
"We found that people could understand and repeat the sounds about 75 percent of the time," said the paper's senior author Nima Mesgarani from Columbia, "which is well above and beyond any previous attempts."
Previous research has shown that when people speak, or even imagine speaking, distinct patterns of activity appear in their brains, and that similar patterns of signals also emerge when they listen to someone speak or imagine listening.
Mesgarani and his team plan to test more complicated words next, and to run the same tests on brain signals emitted when a person speaks or imagines speaking.
Mesgarani called the technology a "game changer" that could give anyone who has lost the ability to speak a new chance to connect with the outside world.