Moises Velasquez-Manoff
Brains are talking to computers, and computers to brains. Are our daydreams safe?
Jack Gallant never set out to create a mind-reading machine. His focus was more prosaic. A computational neuroscientist at the University of California, Berkeley, Gallant worked for years to improve our understanding of how brains encode information — what regions become active, for example, when a person sees a plane or an apple or a dog — and how that activity represents the object being viewed.
By the late 2000s, scientists could determine what kind of thing a person might be looking at from the way the brain lit up — a human face, say, or a cat. But Gallant and his colleagues went further. They figured out how to use machine learning to decipher not just the class of thing, but which exact image a subject was viewing. (Which photo of a cat, out of three options, for instance.)
One day, Gallant and his postdocs got to talking. In the same way that you can turn a speaker into a microphone by hooking it up backward, they wondered if they could reverse engineer the algorithm they’d developed so they could visualize, solely from brain activity, what a person was seeing.
The first phase of the project was to train the A.I. For hours, Gallant and his colleagues showed volunteers in fMRI machines movie clips. By matching the patterns of brain activation to the moving images that prompted them, the A.I. built a model of how the volunteers’ visual cortex, which parses information from the eyes, worked. Then came the next phase: translation. As they showed the volunteers movie clips, they asked the model what, given everything it now knew about their brains, it thought they might be looking at.
The experiment focused just on a subsection of the visual cortex. It didn’t capture what was happening elsewhere in the brain — how a person might feel about what she was seeing, for example, or what she might be fantasizing about as she watched. The endeavor was, in Gallant’s words, a primitive proof-of-concept.
And yet the results, published in 2011, are remarkable.
The reconstructed images move with a dreamlike fluidity. In their imperfection, they evoke expressionist art. (And a few reconstructed images seem downright wrong.) But where they succeed, they represent an astonishing achievement: A machine translating patterns of brain activity into a moving image understandable by other people — a machine that can read the brain.
Gallant was thrilled. Imagine the possibilities once better brain-reading technology became available. Imagine the people suffering from locked-in syndrome or Lou Gehrig’s disease, the people incapacitated by strokes, who could benefit from a machine that could help them interact with the world.
He was also scared because the experiment showed, in a concrete way, that humanity was at the dawn of a new era, one in which our thoughts could theoretically be snatched from our heads. What was going to happen, Gallant wondered, when you could read thoughts the thinker might not even be consciously aware of, when you could see people’s memories?
“That’s a real sobering thought that now you have to take seriously,” he told me recently.
The ‘Google Cap’
For decades, we’ve communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens. These tools and the bony digits we prod them with provide a natural limit to the speed of communication between human brain and machine. We can convey information only as quickly (and accurately) as we can type or click.
Voice recognition, like that used by Apple’s Siri or Amazon’s Alexa, is a step toward more seamless integration of human and machine. The next step, one that scientists around the world are pursuing, is technology that allows people to control computers — and everything connected to them, including cars, robotic arms and drones — merely by thinking.
Gallant jokingly calls the imagined piece of hardware that would do this a “Google cap”: a hat that could sense silent commands and prompt computers to respond accordingly.
The problem is that, to work, that cap would need to be able to see, with some detail, what’s happening in the nearly 100 billion neurons that make up the brain.
Technology that can easily peer through the skull, like the MRI machine, is far too unwieldy to mount on your head. Less bulky technology, like electroencephalography, or E.E.G., which measures the brain’s electrical activity through electrodes attached to the scalp, doesn’t provide nearly the same clarity. One scientist compares it to looking for the surface ripples made by a fish swimming underwater while a storm roils the lake.
Other methods of “seeing” into the brain might include magnetoencephalography, or M.E.G., which measures magnetic waves emanating outside the skull from neurons firing beneath it; or using infrared light, which can penetrate living tissue, to infer brain activity from changes in blood flow. (Pulse oximeters work this way, by shining infrared light through your finger.)
What technologies will power the brain-computer interface of the future is still unclear. And if it’s unclear how we’ll “read” the brain, it’s even less clear how we’ll “write” to it.
This is the other Holy Grail of brain-machine research: technology that can transmit information to the brain directly. We’re probably nowhere near the moment when you can silently ask, “Alexa, what’s the capital of Peru?” and have “Lima” materialize in your mind.
Even so, solutions to these challenges are beginning to emerge. Much of the research has occurred in the medical realm where, for years, scientists have worked incrementally toward giving quadriplegics and others with immobilizing neurological conditions better ways of interacting with the world through computers. But in recent years, tech companies — including Facebook, Microsoft and Elon Musk’s Neuralink — have begun investing in the field.
Some scientists are elated by this infusion of energy and resources. Others worry that as this tech moves into the consumer realm, it could have a variety of unintended and potentially dangerous consequences, from the erosion of mental privacy to the exacerbation of inequality.
Rafael Yuste, a neurobiologist at Columbia University, counts two great advances in computing that have transformed society: the transition from room-size mainframe computers to personal computers that fit on a desk (and then in your lap), and the advent of mobile computing with smartphones in the 2000s. Noninvasive brain-reading tech would be a third great leap, he says.
“Forget about the Covid crisis,” Yuste told me. “What’s coming with this new tech can change humanity.”
Dear Brain
Not many people will volunteer to be the first to undergo a novel kind of brain surgery, even if it holds the promise of restoring mobility to those who’ve been paralyzed. So when Robert Kirsch, chairman of biomedical engineering at Case Western Reserve University, put out such a call nearly 10 years ago, and one person both met the criteria and was willing, he knew he had a pioneer on his hands.
The man’s name was Bill Kochevar. He’d been paralyzed from the neck down in a biking accident years earlier. His motto, as he later explained it, was “somebody has to do the research.”
At that point, scientists had already invented gizmos that helped paralyzed patients leverage what mobility remained — lips, an eyelid — to control computers or move robotic arms. But Kirsch was after something different. He wanted to help Kochevar move his own limbs.
The first step was implanting two arrays of sensors over the part of the brain that would normally control Kochevar’s right arm. Electrodes that could receive signals from those arrays via a computer were implanted into his arm muscles. The implants, and the computer connected to them, would function as a kind of electronic spinal cord, bypassing his injury.
Once his arm muscles had been strengthened — achieved with a regimen of mild electrical stimulation while he slept — Kochevar, who at that point had been paralyzed for over a decade, was able to feed himself and drink water. He could even scratch his nose.
About two dozen people around the world who have lost the use of their limbs to accidents or neurological disease have had sensors implanted on their brains. Many, Kochevar included, participated in a United States government-funded program called BrainGate. The sensor arrays used in this research, smaller than a button, allow patients to move robotic arms or cursors on a screen just by thinking. But as far as Kirsch knows, Kochevar, who died in 2017 for reasons unrelated to the research, was the first paralyzed person to regain use of his limbs by way of this technology.
This fall, Kirsch and his colleagues will begin version 2.0 of the experiment. This time, they’ll implant six smaller arrays — more sensors will improve the quality of the signal. And instead of implanting electrodes directly in the volunteers’ muscles, they’ll insert them upstream, circling the nerves that move the muscles. In theory, Kirsch says, that will enable movement of the entire arm and hand.
The next major goal is to restore sensation so that people can know if they’re holding a rock, say, or an orange — or if their hand has wandered too close to a flame. “Sensation has been the longest ignored part of paralysis,” Kirsch told me.
A few years ago, scientists at the University of Pittsburgh began groundbreaking experiments on that front with a man named Nathan Copeland who was paralyzed from the upper chest down. They routed sensory information from a robotic arm into the part of his cortex that dealt with his right hand’s sense of touch.
Every brain is a living, undulating organ that changes over time. That’s why, before each of Copeland’s sessions, the A.I. has to recalibrate — to construct a new brain decoder. “The signals in your brain shift,” Copeland told me. “They’re not exactly the same every day.”
And the results weren’t perfect. Copeland described them to me as “weird,” “electrical tingly” but also “amazing.” The sensory feedback was immensely important, though, in knowing that he’d actually grasped what he thought he’d grasped. And more generally, it demonstrated that a person could “feel” a robotic hand as their own, and that information coming from electronic sensors could be fed into the human brain.
Preliminary as these experiments are, they suggest that the pieces of a brain-machine interface that can both “read” and “write” already exist. People can not only move robotic arms just by thinking; machines can also, however imperfectly, convey information to the brain about what that arm encounters.
Who knows how soon versions of this technology will be available for kids who want to think-move avatars in video games or think-surf the web. People can already fly drones with their brain signals, so maybe crude consumer versions will appear in coming years. But it’s hard to overstate how life-changing such tech could be for people with spinal cord injuries or neurological diseases.
Edward Chang, a neurosurgeon at the University of California, San Francisco, who works on brain-based speech recognition, said that maintaining the ability to communicate can mean the difference between life or death. “For some people, if they have a means to continue to communicate, that may be the reason they decide to stay alive,” he told me. “That motivates us a lot in our work.”
In a recent study, Chang and his colleagues predicted with up to 97 percent accuracy — the best rate yet achieved, they claim — what words a volunteer had said (from about 250 words used in a predetermined set of 50 sentences) by using implanted sensors that monitored activity in the part of their brain that moves the muscles involved in speaking. (The volunteers in this study weren’t paralyzed; they were epilepsy patients undergoing brain surgery to address that condition, and the implants were not permanent.)
Chang used sensor arrays similar to those Kirsch used, but a noninvasive method may not be too far away.
Facebook, which funded Chang’s study, is working on a brain-reading helmet-like contraption that uses infrared light to peer into the brain. Mark Chevillet, director of brain-computer interface research at Facebook Reality Labs, told me in an email that while full speech recognition remains distant, his lab will be able to decode simple commands like “home,” “select” and “delete” in “coming years.”
This progress isn’t solely driven by advances in brain-sensing technology — by the physical meeting point of flesh and machine. The A.I. matters as much, if not more.
Trying to understand the brain from outside the skull is like trying to make sense of a conversation taking place two rooms away. The signal is often messy and hard to decipher. The same types of algorithms that now allow speech-recognition software to do a decent job of understanding spoken language — including individual idiosyncrasies of pronunciation and regional accents — may be what make brain-reading technology workable.
Zap That Urge
Not all the applications of brain-reading require something as complex as understanding speech, however. In some cases, scientists simply want to blunt urges.
When Casey Halpern, a neurosurgeon at Stanford, was in college, he had a friend who drank too much. Another was overweight but couldn’t stop eating. “Impulse control is such a pervasive problem,” he told me.
As a budding scientist, he learned about methods of deep brain stimulation used to treat Parkinson’s disease. A mild electric current applied to a part of the brain involved in movement could lessen tremors caused by the disease. Could he apply that technology to the problem of inadequate self-control?
Working with mice in the 2010s, he identified a part of the brain, called the nucleus accumbens, where activity spiked in a predictable pattern just before a mouse was about to gorge on high-fat food. He found he could reduce how much the mouse ate by disrupting that activity with a mild electrical current. He could zap the compulsion to gorge as it was taking hold in the rodents’ brains.
Earlier this year, he began testing the approach in people suffering from obesity who haven’t been helped by any other treatment, including gastric-bypass surgery. He implants an electrode in their nucleus accumbens. It’s connected to an apparatus that was originally developed to prevent seizures in people with epilepsy.
As with Chang’s or Gallant’s work, an algorithm first has to learn about the brain it’s attached to — to recognize the signs of oncoming loss of control. Halpern and his colleagues train the algorithm by giving patients a taste of a milkshake, or offering a buffet of the patient’s favorite foods, and then recording their brain activity just before the person indulges.
He’s so far completed two implantations. “The goal is to help restore control,” he told me. And if it works in obesity, which afflicts roughly 40 percent of adults in the United States, he plans to test the gizmo against addictions to alcohol, cocaine and other substances.
Halpern’s approach takes as fact something that he says many people have a hard time accepting: that the lack of impulse control that may underlie addictive behavior isn’t a choice, but results from a malfunction of the brain. “We have to accept that it’s a disease,” he says. “We often just judge people and assume it’s their own fault. That’s not what the current research is suggesting we should do.”
I must confess that of the numerous proposed applications of brain-machine interfacing I came across, Halpern’s was my favorite to extrapolate on. How many lives have been derailed by the inability to resist the temptation of that next pill or that next beer? What if Halpern’s solution was generalizable?
What if every time your mind wandered off while writing an article, you could, with the aid of your concentration implant, prod it back to the task at hand, finally completing those life-changing projects you’ve never gotten around to finishing?
These applications remain fantasies, of course. But the mere fact that such a thing may be possible is partly what prompts Yuste, the neurobiologist, to worry about how this technology could blur the boundaries of what we consider to be our personalities.
Such blurring is already an issue, he points out. Parkinson’s patients with implants sometimes report feeling more aggressive than usual when the machine is “on.” Depressed patients undergoing deep brain stimulation sometimes wonder if they’re really themselves anymore. “You kind of feel artificial,” one patient told researchers. The machine isn’t implanting ideas in their minds, like Leonardo DiCaprio’s character in the movie “Inception,” but it is seemingly changing their sense of self.
What happens if people are no longer sure if their emotions are theirs, or the effects of the machines they’re connected to?
Halpern dismisses these concerns as overblown. Such effects are part of many medical treatments, he points out, including commonly prescribed antidepressants and stimulants. And sometimes, as in the case of hopeless addiction, changing someone’s behavior is precisely the goal.
Still, the longer-term issue of what could happen when brain-writing technology jumps from the medical into the consumer realm is hard to forget. If my imagined focus-enhancer existed, for example, but was very expensive, it could exacerbate the already yawning chasm between those who can afford expensive tutors, cars and colleges — and now grit-boosting technology — and those who cannot.
“Certain groups will get this tech, and will enhance themselves,” Yuste told me. “This is a really serious threat to humanity.”
The Brain Business
“The idea that you have to drill holes in skulls to read the brains is nuts,” Mary Lou Jepsen, the chief executive and founder of Openwater, told me in an email. Her company is developing technology that, she says, uses infrared light and ultrasonic waves to peer into the body.
Other researchers are simply trying to make invasive approaches less invasive. A company called Synchron seeks to avoid opening the skull or touching brain tissue at all by inserting a sensor through the jugular vein in the neck. It’s currently undergoing a safety and feasibility trial.
Kirsch suspects that Elon Musk’s Neuralink is probably the best brain-sensing tech in development. It requires surgery, but unlike the BrainGate sensor arrays, it’s thin, flexible and can adjust to the mountainous topography of the brain. The hope is that this makes it less damaging to the surrounding tissue. It also has hairlike filaments that sink into brain tissue. Each filament contains multiple sensors, theoretically allowing the capture of more data than flatter arrays that sit at the brain’s surface. It can both read and write to the brain, and it’s accompanied by a robot that assists with the implantation.
A major challenge to implants is that, as Gallant says, “your brain doesn’t like having stuff stuck in your brain.” Over time, immune cells may swarm the implant, covering it with goop.
One way to try to avoid this is to drastically shrink the size of the sensors. Arto Nurmikko, a professor of engineering and physics at Brown University who’s part of the BrainGate effort, is developing what he calls “neurograins” — tiny, implantable silicon sensors no larger than a handful of neurons. They’re too small to have batteries, so they’re powered by microwaves beamed in from outside the skull.
He foresees maybe 1,000 mini sensors implanted throughout the brain. He’s so far tested them only in rodents. But maybe we shouldn’t be so sure that healthy people wouldn’t volunteer for “mental-enhancement” surgery. Every year, Nurmikko poses a hypothetical to his students: 1,000 neurograin implants that would allow students to learn and communicate faster; any volunteers?
“Typically about half the class says, ‘Sure,’” he told me. “That speaks to where we are today.”
Jose Carmena and Michel Maharbiz, scientists at Berkeley and founders of a start-up called Iota Biosciences, have their own version of this idea, which they call “neural dust”: tiny implants for the peripheral nervous system — arms, legs and organs besides the brain. “It’s like a Fitbit for your liver,” Carmena told me.
They imagine treating inflammatory diseases by stimulating nerves throughout the body with these tiny devices. And where Nurmikko uses microwaves to power the devices, Carmena and Maharbiz foresee the use of ultrasound to beam power to them.
Generally, they say, this kind of tech will be adopted first in the medical context and then move to the lay population. “We’re going to evolve to augmenting humans,” Carmena told me. “There’s no question.”
But hype permeates the field, he warns. Sure, Elon Musk has argued that closer brain-machine integration will help humans compete with ever-more-powerful A.I.s. But in reality, we’re nowhere near a device that could, for example, help you master Kung Fu instantaneously like Keanu Reeves in “The Matrix.”
What does the near future look like for the average consumer? Ramses Alcaide, the chief executive of a company called Neurable, imagines a world in which smartphones tucked in our pockets or backpacks act as processing hubs for data streaming in from smaller computers and sensors worn around the body. These devices — glasses that serve as displays, earbuds that whisper in our ears — are where the actual interfacing between human and computer will occur.
Microsoft sells a headset called HoloLens that superimposes images onto the world, an idea called “augmented reality.” A company called Mojo Vision is working toward a contact lens that projects monochrome images directly onto the retina, a private computer display superimposed over the world.
And Alcaide himself is working on what he sees as the linchpin to this vision, a device that, one day, may help you to silently communicate with all your digital paraphernalia. He was vague about the form the product will take — it isn’t market ready yet — except to note that it’s an earphone that can measure the brain’s electrical activity to sense “cognitive states,” like whether you’re hungry or concentrating.
We already compulsively check Instagram and Facebook and email, even though we’re supposedly impeded by our fleshy fingers. I asked Alcaide: What will happen when we can compulsively check social media just by thinking?
Ever the optimist, he told me that brain-sensing technology could actually help with the digital incursion. The smart earbud could sense that you’re working, for instance, and block advertisements or phone calls. “What if your computer knew you were focusing?” he told me. “What if it actually removes bombardment from your life?”
Maybe it’s no surprise that Alcaide has enjoyed the HBO sci-fi show “Westworld,” a universe where technologies that make communicating with computers more seamless are commonplace (though no one seems better off for it). Rafael Yuste, on the other hand, refuses to watch the show. He likens the idea to a scientist who studies Covid-19 watching a movie about pandemics. “It’s the last thing I want to do,” he says.
‘A Human Rights Issue’
To grasp why Yuste frets so much about brain-reading technology, it helps to understand his research. He helped pioneer a technology that can read and write to the brain with unprecedented precision, and it doesn’t require surgery. But it does require genetic engineering.
Yuste infects mice with a virus that inserts two genes into the animals’ neurons. One prompts the cells to produce a protein that makes them sensitive to infrared light; the other makes the neurons emit light when they activate. Thereafter, when the neurons fire, Yuste can see them light up. And he can activate neurons in turn with an infrared laser. Yuste can thus read what’s happening in the mouse brain and write to the mouse’s brain with an accuracy impossible with other techniques.
And he can, it appears, make the mice “see” things that aren’t there.
In one experiment, he trained mice to take a drink of sugar water after a series of bars appeared on a screen. He recorded which neurons in the visual cortex fired when the mice saw those bars. Then he activated those same neurons with the laser, but without showing them the actual bars. The mice had the same reaction: They took a drink.
He likens what he did to implanting a hallucination. “We were able to implant into these mice perceptions of things that they hadn’t seen,” he told me. “We manipulated the mouse like a puppet.”
This method, called optogenetics, is a long way from being used in people. To begin with, we have thicker skulls and bigger brains, making it harder for infrared light to penetrate. And from a political and regulatory standpoint, the bar is high for genetically engineering human beings. But scientists are exploring workarounds — drugs and nanoparticles that make neurons receptive to infrared light, allowing precise activation of neurons without genetic engineering.
The lesson in Yuste’s view is not that we’ll soon have lasers mounted on our heads that play us “like pianos,” but that brain-reading and possibly brain-writing technologies are fast approaching, and society isn’t prepared for them.
“We think this is a human rights issue,” he told me.
In a 2017 paper in the journal Nature, Yuste and 24 other signatories, including Gallant, called for the formulation of a human rights declaration that explicitly addressed “neurorights” and what they see as the threats posed by brain-reading technology before it becomes ubiquitous. Information taken from people’s brains should be protected like medical data, Yuste says, and not exploited for profit or worse. And just as people have the right not to self-incriminate with speech, we should have the right not to self-incriminate with information gleaned from our brains.
Yuste’s activism was prompted in part, he told me, by the large companies suddenly interested in brain-machine research.
Say you’re using your Google Cap. And like many products in the Google ecosystem, it collects information about you, which it uses to help advertisers target you with ads. Only now, it’s not harvesting your search results or your map location; it’s harvesting your thoughts, your daydreams, your desires.
Who owns those data?
Or imagine that writing to the brain is possible. And there are lower-tier versions of brain-writing gizmos that, in exchange for their free use, occasionally “make suggestions” directly to your brain. How will you know if your impulses are your own, or if an algorithm has stimulated that sudden craving for Ben & Jerry’s ice cream or Gucci handbags?
“People have been trying to manipulate each other since the beginning of time,” Yuste told me. “But there’s a line that you cross once the manipulation goes directly to the brain, because you will not be able to tell you are being manipulated.”
When I asked Facebook about concerns around the ethics of big tech entering the brain-computer interface space, Chevillet, of Facebook Reality Labs, highlighted the transparency of its brain-reading project. “This is why we’ve talked openly about our B.C.I. research — so it can be discussed throughout the neuroethics community as we collectively explore what responsible innovation looks like in this field,” he said in an email.
Ed Cutrell, a senior principal researcher at Microsoft, which also has a B.C.I. program, emphasized the importance of treating user data carefully. “There needs to be clear sense of where that information goes,” he told me. “As we are sensing more and more about people, to what extent is that information I’m collecting about you yours?”
Some find all this talk of ethics and rights, if not irrelevant, then at least premature.
Medical scientists working to help paralyzed patients, for example, are already governed by HIPAA laws, which protect patient privacy. Any new medical technology has to go through the Food and Drug Administration approval process, which includes ethical considerations.
(Ethical quandaries still arise, though, notes Kirsch. Let’s say you want to implant a sensor array in a patient suffering from locked-in syndrome. How do you get consent to conduct surgery that might change the person’s life for the better, from a person who can’t communicate?)
Leigh Hochberg, a professor of engineering at Brown University and part of the BrainGate initiative, sees the companies now piling into the brain-machine space as a boon. The field needs these companies’ dynamism — and their deep pockets, he told me. Discussions about ethics are important, “but those discussions should not at any point derail the imperative to provide restorative neurotechnologies to people who could benefit from them,” he added.
Ethicists, Jepsen told me, “must also see this: The alternative would be deciding we aren’t interested in a deeper understanding of how our minds work, curing mental disease, really understanding depression, peering inside people in comas or with Alzheimer’s, and enhancing our abilities in finding new ways to communicate.”
There’s even arguably a national-security imperative to plow forward. China has its own version of BrainGate. If American companies don’t pioneer this technology, some think, Chinese companies will. “People have described this as a brain arms race,” Yuste said.
Not even Gallant, who first succeeded in translating neural activity into a moving image of what another person was seeing — and who was both elated and horrified by the exercise — thinks the Luddite approach is an option. “The only way out of the technology-driven hole we’re in is more technology and science,” he told me. “That’s just a cool fact of life.”
- Moises Velasquez-Manoff, author of “An Epidemic of Absence: A New Way of Understanding Allergies and Autoimmune Diseases,” is a contributing opinion writer.
New York Times