Breakthrough Device Reads Brainwaves and Translates Thoughts into Speech
- Victor Nwoko
- Apr 3
- 3 min read
![The brain-computer interface (BCI) breakthrough enables near-real-time speech streaming with minimal delay, using implanted electrodes to capture brain activity from the motor cortex. It was tested on a woman named Ann [pictured] who is paralyzed and cannot speak](https://static.wixstatic.com/media/170126_5bbb640dded0458cb401e88eef069fc9~mv2.png/v1/fill/w_634,h_422,al_c,q_85,enc_avif,quality_auto/170126_5bbb640dded0458cb401e88eef069fc9~mv2.png)
Researchers at the University of California, Berkeley, and UC San Francisco have developed a revolutionary brain-computer interface (BCI) that converts a person's intended speech into audible words. The system, which uses electrodes implanted over the brain's speech motor cortex to measure neural activity, could restore the ability to communicate for individuals with paralysis.
The technology works by analyzing brainwaves and converting them into spoken words through artificial intelligence. The motor cortex, which controls speech, continues generating signals even when a person has lost the ability to speak. Advanced AI models decode these captured signals and convert them into speech in about one second, allowing near-real-time communication.
The BCI was tested on Ann, a woman with severe paralysis who participated in a previous study with an older system that had an eight-second delay. The latest iteration significantly improved response time, enabling fluid and continuous speech. Kaylo Littlejohn, a Ph.D. student at UC Berkeley and co-leader of the study, noted that the AI successfully recognized and decoded Ann’s unique speech patterns, demonstrating its ability to generalize to unseen words.

Ann, who suffered a stroke in 2005 that left her unable to speak, reported feeling more in control of her communication with the device, which helped her reconnect with her body. Technology that deciphers brainwaves into spoken sentences is still in its early stages; previous studies were largely limited to decoding a handful of words rather than full phrases or sentences.
This proof-of-concept study, published in Nature Neuroscience, represents a major advancement. The researchers focused on specific brainwave signatures produced in the motor cortex when a person attempts to speak. These signals, which dictate movements of the lips, tongue, and vocal cords, were collected through electrodes implanted over Ann's motor cortex as she silently attempted to speak.

The AI system was trained using recordings of Ann’s voice from before her paralysis, allowing it to generate speech in her natural tone. As she mentally rehearsed speaking prompted phrases like “Hello, how are you?” her brain signaled speech commands, which were detected and decoded in real time. Over time, the AI adapted to her unique speech patterns, recognizing new words she had not explicitly visualized, such as “Alpha” and “Bravo.”
The system also demonstrated an ability to fill in gaps, completing sentences even when Ann did not fully imagine each word. Dr. Gopala Anumanchipalli, an electrical engineer at UC Berkeley and co-leader of the study, emphasized the breakthrough: “Within one second of intent, we are getting the first sound out. And the device can continuously decode speech, allowing uninterrupted communication.”
Accuracy was another key success of the system. Littlejohn highlighted the significance of the achievement: “Previously, it was not known if intelligible speech could be streamed from the brain in real time.”
Interest in BCI technology has surged, with scientists and tech developers exploring its potential. In 2023, researchers at Brown University’s BrainGate consortium implanted sensors in the cerebral cortex of Pat Bennett, an ALS patient. Over 25 training sessions, an AI algorithm learned to decode her brain signals, recognizing phonemes—speech sounds like “sh” and “th”—and assembling them into words displayed on a screen.
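The phoneme-to-word assembly described above can be illustrated with a toy sketch. The tiny lexicon and greedy matching here are simplifications invented for illustration only; the BrainGate decoder actually uses trained statistical models and a language model, not a lookup table.

```python
# Toy illustration of assembling decoded phonemes into words.
# PHONEME_LEXICON and the greedy matcher are hypothetical simplifications,
# not the BrainGate system's actual decoding pipeline.
PHONEME_LEXICON = {
    ("SH", "IY"): "she",
    ("TH", "IH", "NG", "K", "S"): "thinks",
}

def assemble(decoded_phonemes):
    """Greedily match the longest run of phonemes against a small lexicon."""
    words, i = [], 0
    while i < len(decoded_phonemes):
        for j in range(len(decoded_phonemes), i, -1):
            chunk = tuple(decoded_phonemes[i:j])
            if chunk in PHONEME_LEXICON:
                words.append(PHONEME_LEXICON[chunk])
                i = j
                break
        else:
            i += 1  # skip a phoneme with no match
    return " ".join(words)

print(assemble(["SH", "IY", "TH", "IH", "NG", "K", "S"]))  # she thinks
```

In the real system, ambiguity between similar-sounding phoneme runs is resolved statistically, which is one reason accuracy drops as the vocabulary grows.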
While the study showed promise, accuracy varied. With a limited vocabulary of 50 words, the error rate was around 9 percent, but it rose to about 23 percent when the vocabulary expanded to 125,000 words, approaching the scale of natural language.
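The error rates quoted here are word error rates, the standard metric for speech decoding: the number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the intended one, divided by the length of the intended sentence. A minimal sketch of that computation (a standard edit-distance calculation, not code from either study):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance via dynamic programming over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of four intended words -> 25 percent error
print(word_error_rate("hello how are you", "hello who are you"))  # 0.25
```

By this measure, a 23 percent error rate means roughly one word in four or five comes out wrong, which is why researchers describe the large-vocabulary results as promising but not yet practical.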
Though these systems are not yet flawless, researchers believe they represent an essential step toward perfecting brainwave-to-speech technology. As machine learning tools continue to advance, the ability to translate thoughts directly into speech may soon become a reality, transforming communication for those with severe disabilities.