Now the answer is in, and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical technology to read thoughts, Facebook is shelving the effort, saying consumer brain-reading is still very far off.
In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.
Facebook’s brain-typing project had led it into uncharted territory—including funding brain surgeries at a California hospital and building prototype helmets that could shoot light through the skull—and into tough debates around whether tech companies should access private brain information. Ultimately, though, the company appears to have decided the research simply won’t lead to a product soon enough.
“We got lots of hands-on experience with these technologies,” says Mark Chevillet, the physicist and neuroscientist who until last year headed the silent-speech project but recently switched roles to study how Facebook handles elections. “That is why we can confidently say, as a consumer interface, a head-mounted optical silent speech device is still a very long way out. Possibly longer than we would have foreseen.”
The reason for the craze around brain-computer interfaces is that companies see mind-controlled software as a huge breakthrough—as important as the computer mouse, graphical user interface, or swipe screen. What’s more, researchers have already demonstrated that if they place electrodes directly in the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such “implants” can deftly move robotic arms and play video games or type via mind control.
Facebook’s goal was to turn such findings into a consumer technology anyone could use, which meant a helmet or headset you could put on and take off. “We never had an intention to make a brain surgery product,” says Chevillet. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is crack open skulls. “I don’t want to see the congressional hearings on that one,” he had joked.
In fact, as brain-computer interfaces advance, there are serious new concerns. What would happen if large tech companies could know people’s thoughts? In Chile, legislators are even considering a human rights bill to protect brain data, free will, and mental privacy from tech companies. Given Facebook’s poor record on privacy, the decision to halt this research may have the side benefit of putting some distance between the company and rising worries about “neurorights.”
Facebook’s project aimed specifically at a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, says Chevillet. First, it needed to determine whether a thought-to-speech interface was even possible. For that, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrode pads on the surface of people’s brains.
Whereas implanted electrodes read data from single neurons, this technique, called electrocorticography, or ECoG, measures from fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.
The UCSF team made some surprising progress and today is reporting in the New England Journal of Medicine that it used those electrode pads to decode speech in real time. The subject was a 36-year-old man the researchers refer to as “Bravo-1,” who after a serious stroke lost his ability to form intelligible words and can only grunt or moan. In their report, Chang’s group says that, using the electrodes on the surface of his brain, Bravo-1 has been able to form sentences on a computer at a rate of about 15 words per minute. The technology involves measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.
To reach that result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals to a deep-learning model. After training the model to match words with neural signals, the team was able to correctly determine the word Bravo-1 was thinking of saying 40% of the time (chance results would have been about 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out “Hungry how am you.”
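To make the decoding step concrete, here is a minimal Python sketch of the kind of classifier described above: a model that maps a window of recorded neural activity to one of 50 candidate words. The channel counts, data, and linear model are assumptions made for illustration, not the UCSF team’s actual recordings or deep-learning architecture; the sketch mainly makes the chance baseline explicit, since guessing among 50 words at random succeeds about 1/50 = 2% of the time, which is what makes 40% accuracy meaningful.

```python
import numpy as np

# Illustrative sketch only: the data, shapes, and model below are hypothetical
# stand-ins, not the UCSF team's actual recordings or architecture.
rng = np.random.default_rng(0)

VOCAB_SIZE = 50      # the study's 50-word vocabulary
N_CHANNELS = 16      # hypothetical number of electrode channels
N_TIMESTEPS = 50     # hypothetical samples per attempted word
N_TRIALS = 10_000    # roughly the number of repetitions reported

# Fake "neural activity": one flattened (channels x time) window per attempt.
X = rng.normal(size=(N_TRIALS, N_CHANNELS * N_TIMESTEPS)).astype(np.float32)
y = rng.integers(0, VOCAB_SIZE, size=N_TRIALS)

# A linear softmax classifier stands in for the deep-learning model.
W = np.zeros((N_CHANNELS * N_TIMESTEPS, VOCAB_SIZE), dtype=np.float32)

def predict_proba(x):
    logits = x @ W
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=-1, keepdims=True)

# One pass of mini-batch gradient descent on cross-entropy loss.
LR = 0.01
for i in range(0, N_TRIALS, 32):
    xb, yb = X[i:i + 32], y[i:i + 32]
    grad_logits = predict_proba(xb)
    grad_logits[np.arange(len(yb)), yb] -= 1.0     # d(loss)/d(logits)
    W -= LR * xb.T @ grad_logits / len(yb)

# On random noise the classifier can only reach chance level: 1/50 = 2%.
acc = (predict_proba(X).argmax(axis=-1) == y).mean()
print(f"accuracy on synthetic data: {acc:.1%} (chance: {1 / VOCAB_SIZE:.0%})")
```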
But the scientists improved the performance by adding a language model, a program that judges which word sequences are most likely in English. That increased the accuracy to 75%. With this combined approach, the system could predict that Bravo-1’s sentence “I right my nurse” actually meant “I like my nurse.”
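The language-model step can be sketched in the same spirit. In the toy example below, a decoder’s per-position word probabilities (hypothetical numbers) slightly favor “right” over “like,” but a made-up bigram model, which scores how plausible each word is after the previous one, tips the choice back toward the sentence the speaker intended. The actual system is more sophisticated, so this greedy rescoring only shows the principle.

```python
# Toy sketch of language-model rescoring; all probabilities are invented.

# Hypothetical per-position word probabilities from the neural decoder.
decoder_probs = [
    {"I": 0.9, "high": 0.1},
    {"right": 0.5, "like": 0.4, "night": 0.1},   # decoder slightly prefers "right"
    {"my": 0.8, "I": 0.2},
    {"nurse": 0.7, "nice": 0.3},
]

# Toy bigram language model: P(word | previous word).
bigram = {
    ("<s>", "I"): 0.2, ("<s>", "high"): 0.001,
    ("I", "like"): 0.3, ("I", "right"): 0.01, ("I", "night"): 0.001,
    ("like", "my"): 0.4, ("right", "my"): 0.05,
    ("my", "nurse"): 0.2, ("my", "nice"): 0.02,
}

def rescore(decoder_probs, bigram, floor=1e-4):
    """Greedily pick, at each position, the word that maximizes
    decoder probability times bigram plausibility."""
    sentence, prev = [], "<s>"
    for dist in decoder_probs:
        best = max(dist, key=lambda w: dist[w] * bigram.get((prev, w), floor))
        sentence.append(best)
        prev = best
    return " ".join(sentence)

print(rescore(decoder_probs, bigram))   # -> "I like my nurse"
```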
As remarkable as the result is, there are more than 170,000 words in English, and so performance would plummet outside of Bravo-1’s restricted vocabulary. That means the technique, while it might be useful as a medical aid, isn’t close to what Facebook had in mind. “We see applications in the foreseeable future in clinical assistive technology, but that is not where our business is,” says Chevillet. “We are focused on consumer applications, and there is a very long way to go for that.”
Facebook’s decision to drop out of brain reading is no shock to researchers who study these techniques. “I can’t say I am surprised, because they had hinted they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire by Facebook. “Just speaking from experience, the goal of decoding speech is a large challenge. We’re still a long way off from a practical, all-encompassing kind of solution.”
Still, Slutzky says the UCSF project is an “impressive next step” that demonstrates both remarkable possibilities and some limits of the brain-reading science. “It remains to be seen if you can decode free-form speaking,” he says. “A patient who says ‘I want a drink of water’ versus ‘I want my medicine’—well those are different.” He says that if artificial-intelligence models could be trained for longer, and on more than just one person’s brain, they could improve rapidly.
While the UCSF research was going on, Facebook was also paying other centers, like the Applied Physics Lab at Johns Hopkins, to figure out how to pump light through the skull to read neurons noninvasively. Much as functional MRI does, those techniques gauge brain activity by measuring blood flow to brain regions, but they do so by sensing reflected light rather than magnetic fields.
It’s these optical techniques that remain the bigger stumbling block. Even with recent improvements, including some made by Facebook, they cannot yet pick up neural signals with enough resolution. Another issue, says Chevillet, is that the blood flow these methods detect occurs five seconds after a group of neurons fires, making it too slow to control a computer.