Jonathan Viventi holds an ultrathin film implant that brings more electrodes into the brain to better gather neural signals. Photo by Chris Hildreth

Building better tools to decode speech

The goal is to develop prosthetics that can understand signals from the brain

Jonathan Viventi, assistant professor in the Department of Biomedical Engineering at the Pratt School of Engineering, works on improving devices implanted in the brain to address problems such as epilepsy. “My group develops new electrode arrays for interfacing with the brain at higher resolution,” he says, “while having broad coverage and high-density sampling,” an approach that gives scientists enormous amounts of information to read from brain signals.

He eventually hopes to have “a pacemaker for the brain, something that can predict and prevent seizures,” though that’s a long-term project. His lab is now working on higher-density ultrathin films that bring more, and more responsive, electrodes into the brain to better gather neural signals. Many current films have electrodes spaced every 10 millimeters or so; “that’s been the standard for the past 50 or 60 years,” Viventi says. His lab is working on spacing electrodes less than a millimeter apart. Because the number of electrodes that fit in a given area grows with the square of how closely they are packed, that tenfold reduction in spacing enhances resolution by a factor of 100.

“We’re going from standard-definition TV, or something even before that, to high definition.” They are also working on “a new kind of material for electrode arrays,” he says, “a liquid crystal polymer” that will better stand up to the corrosive saline environment inside the body. “And the application we’re working with Greg’s lab on is speech prosthetics, and being able to decode signals from the brain that correspond to speech.”

Greg Cogan

That’s the speech and language processing lab of Greg Cogan, assistant professor of neurology and of neurosurgery. “The project I work with John on is trying to develop a speech decoder,” Cogan says. People with Lou Gehrig’s disease (ALS), multiple sclerosis, or other motor disorders often lose the capacity to speak. Using the higher-resolution electrode arrays Viventi’s lab is developing, Cogan’s lab can help decode what those people are trying to say.

Cogan is quick to note that this work, which uses implanted electrode films, is not mind reading. “When you think of something to say, there’s this cascade of processes that have to happen,” he says. You start with an idea, then pick a word, then create the sounds that will make the word and string them together. “And really we’re piggybacking off of the part of the brain that does the movement part,” he says. That is, it’s a lot easier to read motor intention than thoughts. He likens the process to controlling a prosthetic limb: the technology doesn’t read your mind; it learns which muscles you want to move.

There is still a long way to go. “We’re at the point where it’s still much slower than natural speech,” Viventi says, “but you can see the trajectory where you might be able to get there.”