Mind-reading technology has arrived

PhD student Jerry Tang prepares to collect brain activity data in the Biomedical Imaging Center at the University of Texas at Austin. | Nolan Zunk/The University of Texas at Austin

An AI-powered “brain decoder” can now read your thoughts with surprising accuracy.

For a few years now, I’ve been writing articles on neurotechnology with downright Orwellian headlines. Headlines that warn “Facebook is building tech to read your mind” and “Brain-reading tech is coming.”

Well, the technology is no longer just “coming.” It’s here.

With the help of AI, scientists from the University of Texas at Austin have developed a technique that can translate people’s brain activity — like the unspoken thoughts swirling through our minds — into a stream of words, according to a study published in Nature Neuroscience.

In the past, researchers have shown that they can decode unspoken language by implanting electrodes in the brain and then using an algorithm that reads the brain’s activity and translates it into text on a computer screen. But that approach is very invasive, requiring surgery. It appealed only to a subset of patients, like those with paralysis, for whom the benefits were worth the costs. So researchers also developed techniques that didn’t involve surgical implants. They were good enough to decode basic brain states, like fatigue, or very short phrases — but not much more.

Now we’ve got a non-invasive brain-computer interface (BCI) that can decode continuous language from the brain, so somebody else can read the general gist of what we’re thinking even if we haven’t uttered a single word.

How is that possible?

It comes down to the marriage of two technologies: fMRI scans, which measure blood flow to different areas of the brain, and large AI language models, similar to the now-infamous ChatGPT.

In the University of Texas study, three participants listened to 16 hours of storytelling podcasts like The Moth while scientists used an fMRI machine to track the change in blood flow in their brains. That data let the scientists train an AI model to associate each phrase a participant heard with the pattern of brain activity it evoked in that person.
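To make that training step concrete, here is a rough sketch in Python of the general idea: fit a per-person "encoding model" that predicts the fMRI blood-flow pattern a given phrase should evoke. The variable names, shapes, and random stand-in data below are all made up for illustration; the study's actual pipeline is more involved.

```python
# Rough sketch of the training step: fit a per-person "encoding model" that
# predicts the fMRI blood-flow pattern a given phrase should evoke.
# All names, shapes, and data here are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data:
#   phrase_features: semantic embeddings of each phrase heard in the podcasts
#   brain_responses: the blood-flow pattern recorded while hearing that phrase
n_phrases, n_features, n_voxels = 2000, 300, 1000
phrase_features = rng.normal(size=(n_phrases, n_features))
brain_responses = rng.normal(size=(n_phrases, n_voxels))

# One encoding model per participant: given what the person heard, predict
# what their brain activity should look like.
encoding_model = Ridge(alpha=1.0)
encoding_model.fit(phrase_features, brain_responses)

# At decode time the model runs "in reverse": propose candidate phrases,
# predict the activity each one would produce, and keep the candidates whose
# predictions best match what the scanner actually recorded.
candidate_features = rng.normal(size=(10, n_features))
predicted_activity = encoding_model.predict(candidate_features)
print(predicted_activity.shape)  # (10 candidates, 1000 voxels)
```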

Because the number of possible word sequences is so vast, and many of them would be gibberish, the scientists also used a language model — specifically, GPT-1 — to narrow down possible sequences to well-formed English and predict which words are likeliest to come next in a sequence.
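In code, that decoding loop looks roughly like a beam search: the language model proposes plausible next words, and the decoder keeps the candidate sentences whose predicted brain responses best match what the scanner saw. The sketch below is a toy version with stand-in scoring functions, not the study's implementation; in the real system, GPT-1 plays the role of the language model and the per-subject encoding model does the brain matching.

```python
# Toy beam-search decoder in the spirit described above. Both scoring
# functions are illustrative stand-ins, not the paper's code.
import heapq

VOCAB = ["i", "thought", "she", "would", "come", "back", "to", "me"]

def language_model_scores(prefix):
    # Stand-in for the language model: score each possible next word given
    # the words decoded so far. A real model returns sharp probabilities.
    return {word: 1.0 / len(VOCAB) for word in VOCAB}

def brain_match_score(candidate_words, observed_activity):
    # Stand-in for the encoding model: how well does the brain activity this
    # candidate sentence *should* evoke match what the scanner actually saw?
    return -abs(len(candidate_words) - observed_activity)

def decode(observed_activity, beam_width=3, length=6):
    beams = [([], 0.0)]  # each beam is (words so far, running score)
    for _ in range(length):
        candidates = []
        for words, score in beams:
            for word, lm_score in language_model_scores(words).items():
                extended = words + [word]
                new_score = score + lm_score + brain_match_score(extended, observed_activity)
                candidates.append((extended, new_score))
        # Keep only the most promising continuations at each step.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return " ".join(beams[0][0])

print(decode(observed_activity=6))
```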

The result is a decoder that gets the gist right, even though it doesn’t nail every single word. To test it, participants were asked to imagine telling a story while in the fMRI machine, then repeat it aloud so the scientists could see how well the decoded story matched up with the original.

When the participant thought, “Look for a message from my wife saying that she had changed her mind and that she was coming back,” the decoder translated: “To see her for some reason I thought she would come to me and say she misses me.”

Here’s another example. When the participant thought, “Coming down a hill at me on a skateboard and he was going really fast and he stopped just in time,” the decoder translated: “He couldn’t get to me fast enough he drove straight up into my lane and tried to ram me.”

It’s not a word-for-word translation, but much of the general meaning is preserved. This represents a breakthrough that goes well beyond what previous brain-reading tech could do — and one that raises serious ethical questions.
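One way to see how far this is from a word-for-word transcript: a crude word-overlap check on the first example above finds only a handful of shared words, even though the meaning is close. (This is just an illustration of the gap; the researchers evaluated their decoder with similarity measures designed to capture meaning, not raw word overlap.)

```python
# Crude illustration: count exactly matching words between what the participant
# thought and what the decoder produced (the example sentences quoted above).
thought = ("look for a message from my wife saying that she had changed "
           "her mind and that she was coming back")
decoded = ("to see her for some reason i thought she would come to me "
           "and say she misses me")

thought_words = set(thought.split())
decoded_words = set(decoded.split())
overlap = thought_words & decoded_words

print(f"Shared words: {sorted(overlap)}")
print(f"Overlap: {len(overlap)} of {len(thought_words)} unique words in the original")
```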

The staggering ethical implications of brain-computer interfaces

It might be hard to believe that this is real, not something out of a Neal Stephenson or William Gibson novel. But this kind of tech is already changing people’s lives. Over the past dozen years, a number of paralyzed patients have received brain implants that allow them to move a computer cursor or control robotic arms with their thoughts.

Elon Musk’s Neuralink and Mark Zuckerberg’s Meta are working on BCIs that could pick up thoughts directly from your neurons and translate them into words in real time, which could one day allow you to control your phone or computer with just your thoughts.

Non-invasive, even portable BCIs that can read thoughts are still years away from commercial availability — after all, you can’t lug around an fMRI machine, which can cost as much as $3 million. But the study’s decoding approach could eventually be adapted for portable systems like functional near-infrared spectroscopy (fNIRS), which measures the same activity as fMRI, although with a lower resolution.

Is that a good thing? As with many cutting-edge innovations, this one stands to raise serious ethical quandaries.

Let’s start with the obvious. Our brains are the final privacy frontier. They’re the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren’t ours to control, what is?

Imagine a scenario where companies have access to people’s brain data. They could use that data to market products to us in ways our brains find practically irresistible. Since our purchasing decisions are largely driven by unconscious impressions, advertisers can’t get very helpful intel from consumer surveys or focus groups. They can get much better intel by going directly to the source: the consumer’s brain. Already, advertisers in the nascent field of “neuromarketing” are attempting to do just that, by studying how people’s brains react as they watch commercials. If advertisers get brain data on a massive scale, you might find yourself with a powerful urge to buy certain products without being sure why.

Or imagine a scenario where governments use BCIs for surveillance, or police use them for interrogations. The principle against self-incrimination — enshrined in the Fifth Amendment to the US Constitution — could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent. It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.

Some neuroethicists argue that the potential for misuse of these technologies is so great that we need revamped human rights laws to protect us before they’re rolled out.

“This research shows how rapidly generative AI is enabling even our thoughts to be read,” Nita Farahany, author of The Battle for Your Brain, told me. “Before neurotechnology is used at scale in society, we need to protect humanity with a right to self-determination over our brains and mental experiences.”

As for the study’s authors, they’re optimistic — for now. “Our privacy analysis suggests that subject cooperation is currently required both to train and to apply the decoder,” they write.

Crucially, the process only worked with participants who had willingly helped train the decoder. And those participants could throw off the decoder later if they wanted to: when they put up resistance by naming animals or counting, the results became unusable. For people whose brain activity the decoder had not been trained on, the output was gibberish.

“However, future developments might enable decoders to bypass these requirements,” the authors warn. “Moreover, even if decoder predictions are inaccurate without subject cooperation, they could be intentionally misinterpreted for malicious purposes.”

This is exactly the sort of future that worries Farahany. “We are literally at the moment before, where we could make choices to preserve our cognitive liberty — our rights to self-determination over our brains and mental experiences — or allow this technology to develop without safeguards,” she told me. “This paper makes clear that the moment is a very short one. We have a last chance to get this right for humanity.”

