
Brain implants can help people with paralysis ‘speak’ through screens faster, more accurately than before, new studies show

A participant in a study of speech neuroprostheses uses a digital link wired to her cortex to interface with an avatar. (Noah Berger/UCSF)

By Jacqueline Howard, CNN

(CNN) — Dr. Jaimie Henderson had a single wish throughout childhood: for his father to be able to speak with him. Now a scientist and neurosurgeon at Stanford Medicine, Henderson and his colleagues are developing brain implants that might be able to make similar wishes come true for other people with paralysis or speech impairments.

Two studies published Wednesday in the journal Nature show how the brain implants, described as neuroprostheses, can record a person’s neural activity as they attempt to speak naturally. That brain activity can then be decoded into words displayed on a computer screen, converted into audible speech or even communicated through an animated avatar.

“When I was 5 years old, my dad was involved in a devastating car accident that left him barely able to move or speak. I remember laughing at the jokes he tried to tell, but his speech ability was so impaired that we couldn’t understand the punchline,” Henderson, an author of one of the studies and professor at Stanford University, said in a news briefing about his research.

“So I grew up wishing that I could know him and communicate with him,” he said. “And I think that early experience sparked my personal interest in understanding how the brain produces movement and speech.”

Henderson and his colleagues at Stanford and other US institutions examined the use of implanted brain sensors in 68-year-old Pat Bennett. She had been diagnosed with amyotrophic lateral sclerosis in 2012, and it affected her speech.

The researchers wrote in their study that Bennett can make some limited facial movements and vocalize sounds but is unable to produce clear speech due to ALS, a rare neurological disease that affects nerve cells in the brain and spinal cord.

In March 2022, Henderson conducted the surgery to implant arrays of electrodes in two areas of Bennett’s brain. The implants recorded neural activity as Bennett attempted to make facial movements, make sounds or speak single words.

The arrays were attached to wires that came out of the skull and were connected to a computer. Software decoded the neural activity, converting the activity into words that were displayed on the computer screen in real time. When Bennett finished speaking, she pressed a button to finalize the decoding.

The researchers evaluated this brain-computer interface as Bennett attempted both to speak with vocalizations and to only “mouth” words with no vocalization.

With a 50-word vocabulary, the rate of errors in the decoding was 9.1% on the days Bennett vocalized and 11.2% on the silent days, the researchers found. When using a 125,000-word vocabulary, the word error rate was 23.8% across all vocalizing days and 24.7% for silent days.

“In our work, we show that we can decipher attempted speech with a word error rate of 23% when using a large set of 125,000 possible words. This means that about three in every four words are deciphered correctly,” Frank Willett, an author of the study and Howard Hughes Medical Institute staff scientist affiliated with the Neural Prosthetics Translational Lab, said in the news briefing.

“With these new studies, it is now possible to imagine a future where we can restore fluent conversation to someone with paralysis, enabling them to freely say whatever they want to say with an accuracy high enough to be understood reliably,” he said.

The researchers say the decoding happened at high speeds. Bennett spoke at an average pace of 62 words per minute, which “more than triples” the speed of previous brain-computer interfaces, which reached about 18 words per minute using handwriting-based decoding models.

“These initial results have proven the concept, and eventually technology will catch up to make it easily accessible to people who cannot speak,” Bennett wrote in a news release. “For those who are nonverbal, this means they can stay connected to the bigger world, perhaps continue to work, maintain friends and family relationships.”

Yet for now, the researchers write that their findings are a “proof of concept” that decoding speaking movements with a large vocabulary is possible; the approach needs to be tested in more people before it can be considered for clinical use.

“These are very early studies,” Willett said. “And we don’t have a big database of data from other people.”

‘There is hope’

The other study published Wednesday involved Ann Johnson, who could not speak clearly due to paralysis after she had a stroke in 2005, when she was 30 years old. In September 2022, an electrode device was implanted in her brain at UCSF Medical Center in San Francisco, with no surgical complications.

The implant recorded neural activity, which was decoded into text on a screen. The researchers wrote in the study that they found “accurate and rapid large-vocabulary decoding” with a median rate of 78 words per minute and a median word error rate of 25%.

Separately, as Johnson attempted to speak silently, her neural activity was synthesized into speech sounds. The researchers also developed an animated facial avatar to accompany the synthesized speech, based on the patient’s attempted facial movements.

“Faster, more accurate, and more natural communication are among the most desired needs of people who have lost the ability to speak after severe paralysis,” the researchers, from the University of California, San Francisco and other institutions in the United States and United Kingdom, wrote in their study. “Here we have demonstrated that all of these needs can be addressed with a speech-neuroprosthetic system that decodes articulatory cortical activity into multiple output modalities in real time.”

The two new brain implant studies are “overlapping” in their findings and their “long-term goal” to restore communication for people with paralysis, Dr. Edward Chang, a neurosurgeon and author of the study from UC San Francisco, said in the news briefing.

“The results from both studies — between 60 to 70 words per minute in both of them — is a real milestone for our field, in general, and we’re really excited about it because it’s coming from two different patients, two different centers, two different approaches,” Chang said. “And the most important message is that there is hope that this is going to continue to improve and provide a solution in the coming years.”

Although the devices described in both new papers are being studied as proofs of concept — and are not currently commercially available — they could pave the way for future science and, possibly, future commercial devices.

“I’m actually very excited about the commercial activity in the brain-computer interface area,” Henderson said. “I’ve come full circle from wishing I could communicate with my dad as a kid to seeing this actually work. It’s indescribable.”

The-CNN-Wire
™ & © 2023 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
