www.smithsonianmag.com
Enhanced Brain Implant Translates Stroke Survivor's Thoughts Into Nearly Instant Speech Using Artificial Intelligence

The system harnesses technology similar to that of devices like Alexa and Siri, according to the researchers, and improves on a previous model

Researchers connect stroke survivor Ann Johnson's brain implant to the experimental computer, which will allow her to speak by thinking words. Noah Berger

A brain implant that converts neuron activity into audible words has given a stroke survivor with severe paralysis almost instantaneous speech.

Ann Johnson became paralyzed and lost the ability to speak after suffering a stroke in 2005, when she was 30 years old. Eighteen years later, she consented to being surgically fitted with an experimental, thin, brain-reading implant that connects to a computer, officially called a brain-computer interface (BCI). Researchers placed the implant on her motor cortex, the part of the brain that controls physical movement, and it tracked her brain waves as she thought the words she wanted to say.

As detailed in a study published Monday in the journal Nature Neuroscience, researchers used advances in artificial intelligence (A.I.) to improve the device's ability to quickly translate that brain activity into synthetic speech. Now, it's almost instantaneous.

"The technology brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," study co-author Gopala Anumanchipalli, a computer scientist at the University of California, Berkeley, says in a statement. Neuroprostheses are devices that can aid or replace lost bodily functions by connecting to the nervous system.

"Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming," he adds. "The result is more naturalistic, fluent speech synthesis."

Chang et al., Supplementary Video 2

Previously, the research team had worked with Johnson to generate speech using an automated voice and digital avatar. That system, which had a delay of about eight seconds to decode her brain patterns, would speak full sentences at once.

Older BCIs like that, which generate speech only after processing an entire sentence, are similar to a conversation via text, says Christian Herff, a computational neuroscientist at Maastricht University in the Netherlands who wasn't involved in the study, to Nature News' Miryam Naddaf. "I write a sentence, you write a sentence, and you need some time to write a sentence again," he says. "It just doesn't flow like a normal conversation."

Now, the enhanced experimental device can continuously identify words from brain activity and translate them into speech within about three seconds, per Nature News.

"It's not waiting for a sentence to finish," Anumanchipalli says to the Associated Press' Laura Ungar. "It's processing it on the fly."

Chang et al., Supplementary Video 1

To train the artificial intelligence, researchers asked Johnson to mouth phrases that appeared on a screen from a list of 1,024 words, such as, "Hey, how are you?" The system learned to interpret the resulting brain activity in continuous, 80-millisecond increments, which Anumanchipalli calls a streaming approach, per the AP. "It converts her intent to speak into fluent sentences," he adds. The A.I. was also trained on recordings of Johnson's voice before her stroke to make its speech sound more like her.

The system performed well when the team tested it with words outside of the training data, demonstrating that it is "indeed learning the building blocks of sound or voice," study co-author Kaylo Littlejohn, a researcher at UC Berkeley's Department of Electrical Engineering and Computer Sciences, says in the statement.

Despite clear improvements from previous trials, and a huge jump in efficiency over Johnson's current communication system, the enhanced BCI was still not quite as natural as regular human speech. It produced between 47 and 90 words per minute, while humans usually speak around 160 words per minute, according to Nature News.

"This is where we are right now," Edward Chang, a study co-author and neurosurgeon at UC San Francisco, says to the publication. "But you can imagine, with more sensors, with more precision and with enhanced signal processing, those things are only going to change and get better."
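For readers who want a feel for why the streaming approach matters, here is a minimal toy sketch. It is not the study's code; the 80-millisecond decoding window and the roughly eight-second delay of the earlier system come from the article, while the function names and everything else are hypothetical illustration.

```python
# Toy comparison of when the listener hears the first sound.
# Figures from the article: the old system took ~8 seconds to decode a
# full sentence before speaking; the new one decodes 80 ms at a time.

WINDOW_MS = 80            # streaming decoding increment (from the article)
SENTENCE_DELAY_MS = 8000  # approximate whole-sentence delay of the old system

def first_sound_sentence_at_once() -> int:
    # The older system decoded an entire sentence before producing audio,
    # so nothing is heard until the full delay has elapsed.
    return SENTENCE_DELAY_MS

def first_sound_streaming() -> int:
    # The streaming system can emit speech as soon as the first
    # 80-millisecond window of brain activity has been decoded.
    return WINDOW_MS

print(first_sound_sentence_at_once())  # 8000 ms before any speech
print(first_sound_streaming())         # 80 ms before the first sound
```

The point of the sketch is simply that time-to-first-sound no longer scales with sentence length: decoding fixed windows "on the fly" makes the conversation feel continuous, even though translating a whole utterance still takes a few seconds end to end.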