- Researchers have successfully developed a 'speech neuroprosthesis'.
- The technology translates signals from the brain to the vocal tract directly into words that appear on screen.
- It has enabled a man with severe paralysis to communicate in sentences.
Researchers at UC San Francisco have successfully developed a “speech neuroprosthesis” that has enabled a man with severe paralysis to communicate in sentences, translating signals from his brain to the vocal tract directly into words that appear as text on a screen.
The achievement, which was developed in collaboration with the first participant of a clinical research trial, builds on more than a decade of effort by UCSF neurosurgeon Edward Chang, MD, to develop a technology that allows people with paralysis to communicate even if they are unable to speak on their own. The study appears July 15 in the New England Journal of Medicine.
“To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak,” said Chang, the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, Jeanne Robertson Distinguished Professor, and senior author on the study. “It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Each year, thousands of people lose the ability to speak due to stroke, accident, or disease. With further development, the approach described in this study could one day enable these people to fully communicate.
Translating brain signals into speech
Previous work in the field of communication neuroprosthetics has focused on restoring communication through spelling-based approaches, typing out text one letter at a time. Chang’s study differs from these efforts in a critical way: his team is translating signals intended to control muscles of the vocal system for speaking words, rather than signals to move the arm or hand to enable typing. Chang said this approach taps into the natural and fluid aspects of speech and promises more rapid and organic communication.
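The direct-decoding idea described above, mapping a window of brain activity straight to a whole word rather than to individual letters, can be sketched with a toy template-matching decoder. Everything here (the five-word vocabulary, the simulated activity patterns, the matching rule) is a hypothetical illustration for intuition only, not the study's actual model:

```python
import numpy as np

# Toy sketch of direct word decoding (hypothetical; not the study's
# actual model). Each word in a small vocabulary has an associated
# template activity pattern; decoding picks the word whose template
# best matches an incoming window of neural activity.

VOCAB = ["hello", "water", "family", "good", "thirsty"]

# Simulated templates: one distinctive pattern per word. In a real
# system these would be learned from recordings of attempted speech.
templates = {word: pattern for word, pattern in zip(VOCAB, np.eye(len(VOCAB)))}

def decode_word(activity: np.ndarray) -> str:
    """Return the vocabulary word whose template best matches `activity`."""
    return max(VOCAB, key=lambda word: float(activity @ templates[word]))

# A noisy "attempt" at the word "water" still decodes to the whole word:
rng = np.random.default_rng(0)
attempt = templates["water"] + 0.2 * rng.normal(size=len(VOCAB))
print(decode_word(attempt))  # → water
```

The design point this sketch makes is the one in the passage above: the decoder's output unit is a word, not a letter, so a single decoding step produces a complete unit of speech instead of one keystroke of a spelled-out message.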
“With speech, we normally communicate information at a very high rate, up to 150 or 200 words per minute,” he said, noting that spelling-based approaches using typing, writing, and controlling a cursor are considerably slower and more laborious. “Going straight to words, as we’re doing here, has great advantages because it’s closer to how we normally speak.”
Read the rest of the story on the UCSF website.