The Washington Post

Silicon Valley wants to develop mind-reading tech. We need to regulate it first.

The potential applications are revolutionary, but guardrails are a must

A scan is performed at a hospital. (JohnnyGreig/iStock)

A few weeks ago, scientists from the University of California at San Francisco and the University of California at Berkeley reported that they had produced a device that could read a person’s mind. Their findings, published in the New England Journal of Medicine and funded in part by Facebook, describe efforts to connect the brain of a stroke victim directly to a computer with software to interpret his thoughts.

The work is preliminary: It involved just one patient, required dozens of sessions to train the system, used a vocabulary of only 50 words, and was about 50 percent accurate. But this technology, already a breakthrough, will only improve. Neuroprosthetics like it will increasingly replace broken connections in the brain, spinal cord and nerves, allowing people to walk, hold objects and even speak again.

Although Facebook has announced that it will stop funding brain-machine interfaces for the time being because it anticipates that the technology is a long way from commercial use, company representatives have expressed optimism that such devices will eventually be feasible. Facebook is also continuing to develop neuroprosthetics that connect computers to other parts of the nervous system, for example by detecting electrical signals from muscle twitches. Meanwhile, Elon Musk’s Neuralink, which is funded in part by Google Ventures, is developing a brain implant that will transmit electrical signals from the surface of the brain via USB-C port and Bluetooth to smartphones and other devices. Musk has called it “a Fitbit for your skull.” Peter Thiel has backed a rival neuroprosthetics firm, Blackrock Neurotech.

While the potential applications of these technologies are promising — and potentially revolutionary — the identity of the people and corporations pursuing them should give us pause. Silicon Valley’s dominant business model is to figure out what users are thinking and use that knowledge to persuade them to buy things. Few tools would let these companies do that more efficiently than ones that literally tell them what’s on our minds.

As a neurologist, I make use of numerous invasive and noninvasive tools to monitor brain activity. We routinely stick electrodes on the scalp to monitor for seizures and get CT scans and MRIs to look at the structure of the brain and spinal cord for evidence of things like strokes, tumors and bleeding. We occasionally even work with our neurosurgical colleagues to implant temporary probes deep into the brain of patients with severe brain injury from trauma or bleeding to assess pressure, oxygen levels, electrical activity and metabolism.

Neuroprosthetics may be an extension of such tools, but as they advance and migrate from medicine into daily life, they demand constraints on how we use technology in our bodies. Silicon Valley’s interest in neuroprosthetics, in particular, should lead us to demand strong regulation — even at this early stage when more powerful applications are relatively far off. Before we can set out those rules, we need a framework to understand the risks neuroprosthetics pose.


Three questions should drive how we evaluate any new neuroprosthetic. First, what part of the nervous system does it access? Second, is it a closed connection within the body, or does it transmit information out of the body? Third, does it only read information, or does it also write information, sending new signals into the nervous system?

Take the study from UCSF as an example. The team set out to develop a device to bypass the motor circuits controlling speech, which the subject’s stroke had damaged, and decipher the man’s thoughts directly from his brain activity. To do so, the team implanted an array of electrodes on the surface of his brain, known as the cortex, in an area that governs language. They then recorded electrical activity from that area during language tasks involving individual words and sentences. They collected signals from the highly sensitive sensorimotor cortex and transmitted them to a computer, but they did not write new information to the brain. Each of these features of what a neuroprosthetic does is important in determining its risk.

The part of the brain that a technology accesses matters because it determines the kind of information that is available and the effects of modifying it. The UCSF researchers accessed cortical areas involving language where thoughts become words and sentences. These regions of the brain send, for example, the idea of “hello” to the motor cortex and to deeper structures involved in coordination to generate a motor program delivered into the brain stem and out into the nerves that control the muscles of the face and vocal cords, which then contract to produce the sounds that make up the word “hello.”

When we feel things, electrical signals travel in the opposite direction. For example, if you touch a hot stove, pain receptors in your skin send an electrical signal through nerve cells that extend from your hand up into the spinal cord and finally the brain, where they activate other nerve cells. This information is processed all the way up to the sensory cortex, which connects to a range of structures that help to generate emotions and the idea of pain. While you sense the hot stove in your hand, you understand it in your brain. A device that stimulates the nerve in your hand can generate a pain signal, but would have to conduct it through all the normal filters and controls of your brain. We often speak of the brain as if it has discrete parts that perform different functions, but these parts interact with each other in complicated ways to generate behaviors and sensations.

Sophisticated neuroprosthetics will require advances in our understanding of how these interactions work, but the complexity of these connections also creates the possibility of unintended consequences when we tamper physically with the brain. Accessing the brain directly could remap those signals in a way that gets much closer to our thoughts, personality and consciousness. For that reason, we must be much more stringent the closer a neuroprosthetic gets to such information in the nervous system.

Similarly, it matters whether a proposed technology transmits externally or works only within the body. The device Neuralink is developing and the one developed at UCSF gather information from the brain and send it to computers. Not only could someone hack these external transmissions; even their stated purposes — from the clinical to the commercial — pose a tremendous risk to privacy, potentially broadcasting information about a person’s thoughts or movements across a network. By contrast, scientists have developed artificial nerves that can sense pressure or chemical signals and transmit them to other nerves in the body. Such devices could one day replace nerves injured in accidents. This sort of closed system communicates only locally, carrying a much lower risk to privacy because the information does not leave the body. Externally connected systems must undergo a greater level of scrutiny than closed ones.


Finally, we have to consider devices that will not only read signals, but also write them onto the nervous system. Such devices already exist. For decades, neurosurgeons have implanted stimulators that write electrical impulses onto the deep structures of the basal ganglia to treat tremors in patients with severe Parkinson’s disease. New devices could write information to even more sensitive areas of the brain, such as the hippocampus, which stores memories. Nearly a decade ago, a team led by Nobel laureate Susumu Tonegawa conducted experiments implanting false memories in mice. The setup was complex and not reproducible in humans, requiring genetic engineering to allow the researchers to activate neurons with light. However, other approaches, perhaps more like those used in deep brain stimulators, could make it possible to edit human brain activity in the future.

The difference in risk between the basal ganglia and the hippocampus shows how one criterion (location) can affect the risk associated with another (writing to the brain). The interactions between location, transmission and writing will define the implications that any given neuroprosthetic technology has for society. As we think about how to regulate them, it will be critical to take those complexities into account.

We will need a comprehensive approach to determine whether individual neuroprosthetics are safe for both individuals and society. These technologies have enormous potential to improve people’s lives, but only if we safeguard against the grave risks they could pose to privacy, security and well-being. We cannot afford to catch up to the technology after it is released. As we grow increasingly uneasy that Facebook, Google and others have access to our private messages and photographs, we should worry even more about them having access to our inner monologues and precious memories.