Facebook Reality Labs Wants to Restore Speech Through a Brain-Computer Interface
Facebook Reality Labs expands its research with UCSF, demonstrating the potential of brain-computer interfaces for restoring speech communication.
The Facebook Reality Labs (FRL) Brain-Computer Interface (BCI) program was established in 2017 with the goal of creating a non-invasive speech interface that lets people type words simply by thinking about what they want to say.
The team has made steady progress over the past several years, investing in head-mounted optical BCI as a next-generation computing platform. The aim is a way of communicating with AR/VR devices that does not require typing words on a keyboard or touchscreen.
Facebook supported a team of scientists from UCSF developing a communication prosthesis for patients who are unable to speak. The goal of the sponsorship was to determine whether a silent speech BCI could help patients type 100 words per minute, and what kinds of neural signals would be required.
The UCSF team has now reached a new milestone, with results published in The New England Journal of Medicine.
This is the first time that someone with a severe speech impairment has been able to express what they want to say instantly, simply by thinking about it. The team restored the person's ability to communicate by decoding the brain signals sent from the motor cortex to the muscles of the vocal tract. This is a huge milestone for UCSF, Facebook, and the field of neuroscience.
The technology is similar to auto-correct on your smartphone, which automatically completes or corrects the words you type in a text message. The same technique can be applied to a BCI: a language model helps the algorithm more accurately predict what a person wishes to say.
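To make the auto-correct analogy concrete, here is a minimal sketch of the general idea of rescoring a decoder's word guesses with a language-model prior. This is not the UCSF team's actual decoder; the vocabulary, probabilities, and function names are invented for illustration.

```python
# Toy sketch: combining a neural decoder's word probabilities with a
# language-model prior, the same idea behind smartphone auto-correct.
# All numbers and the tiny vocabulary are made up for this example.

# Hypothetical per-word scores from a neural decoder, P(word | brain activity):
decoder_probs = {"thirsty": 0.40, "thirty": 0.35, "dirty": 0.25}

# Hypothetical language-model prior, P(word | previous word):
lm_probs = {
    ("am", "thirsty"): 0.30,
    ("am", "thirty"): 0.02,
    ("am", "dirty"): 0.05,
}

def rescore(prev_word, decoder_probs, lm_probs, lm_weight=1.0):
    """Combine decoder evidence with a language-model prior.

    Score(w) = P(w | brain activity) * P(w | prev_word) ** lm_weight
    """
    scores = {
        word: p * lm_probs.get((prev_word, word), 1e-6) ** lm_weight
        for word, p in decoder_probs.items()
    }
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

# After the user thinks "I am ...", the language model pushes the
# nearly undecided decoder output toward the more plausible word.
print(rescore("am", decoder_probs, lm_probs))  # "thirsty" now dominates
```

The design point is simply that context supplies information the raw neural signal may lack, which is why a predictive text model can raise decoding accuracy.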
This has crucial implications for the future of assistive technology, as it has the potential to open up conversations with people who simply cannot communicate otherwise.
Other areas FRL has explored include a wearable device that uses near-infrared light to non-invasively measure blood oxygenation in the brain.
FRL has recently shifted its focus from head-mounted optical BCI technologies to wrist-based devices powered by electromyography (EMG).
How does this work?
When a person decides to move their hands and fingers, the brain sends signals down the arm via motor neurons, instructing the muscles to move in specific ways to perform actions such as swiping or tapping. An EMG sensor at the wrist can pick up these signals and decode which movement the person intended to make; the intended movement is then translated into digital commands for the device. This kind of EMG-based neural interface could drastically expand how we communicate with future devices, for example enabling high-speed typing by intention alone. A toy sketch of such a decoding pipeline follows below.
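The following is a minimal sketch of the signal-to-command idea described above, not FRL's actual system: it assumes a hypothetical 8-channel wrist sensor, a made-up gesture set, and a deliberately simple nearest-centroid classifier over per-channel signal amplitude.

```python
# Toy EMG decoding pipeline: raw signal window -> features -> gesture label.
# Channel count, window size, and gesture names are assumptions for the demo.
import numpy as np

WINDOW = 200                          # samples per decoding window
GESTURES = ["rest", "tap", "swipe"]   # hypothetical gesture set

def features(window: np.ndarray) -> np.ndarray:
    """Per-channel root-mean-square amplitude, a common EMG feature."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def train_centroids(examples: dict) -> dict:
    """Average feature vector per gesture (nearest-centroid classifier)."""
    return {g: np.mean([features(w) for w in ws], axis=0)
            for g, ws in examples.items()}

def decode(window: np.ndarray, centroids: dict) -> str:
    """Map a raw EMG window to the closest known gesture."""
    f = features(window)
    return min(centroids, key=lambda g: np.linalg.norm(f - centroids[g]))

# Usage with synthetic signals: stronger noise imitates muscle activity.
rng = np.random.default_rng(0)
make = lambda scale: scale * rng.standard_normal((WINDOW, 8))  # 8 channels
examples = {"rest": [make(0.1) for _ in range(5)],
            "tap": [make(1.0) for _ in range(5)],
            "swipe": [make(3.0) for _ in range(5)]}
centroids = train_centroids(examples)
print(decode(make(1.0), centroids))   # -> "tap"
```

A real system would replace the amplitude features and centroid classifier with learned models, but the pipeline shape, windowed signal in, discrete intent out, is the core of the approach.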
FRL is currently developing natural ways to interact with AR glasses, so that interacting with our devices and with the physical world can overlap rather than remain separate. EMG technology is still in its infancy, but FRL believes it will become the core input technology for future AR glasses.