Should We All Be Putting Chips in Our Brains?

Neural implants like those that Elon Musk intends to commercialise through his company Neuralink promise advances on several fronts, but they also pose enormous ethical problems.

Anil Seth

Are we on the verge of a new era in which brain disorders become a thing of the past, and we can all merge seamlessly with artificial intelligence? This sci-fi future may seem one step closer after Elon Musk’s recent announcement that his biotech company, Neuralink, has implanted its technology into a human brain for the first time. But is mind-melding of this kind really on the way? And is it something we want?

Founded in 2016, Neuralink is a newcomer in the world of brain-machine interfaces, or BMIs. The core technology has been around for decades, and its principles are fairly straightforward. A BMI consists of probes — usually very thin wires — that are inserted into the brain at specific locations. These probes eavesdrop on the activity of nearby brain cells and transmit the information they gather to a computer. The computer then processes this information in order to do something useful — perhaps control a robot, or a voice synthesiser. BMIs can also work the other way round, driving neural activity through electrical stimulation carried out by the probes, potentially changing what we think, feel and do.

BMI technology is developing rapidly and for good reason. There’s the potential to restore movement in people with paralysis, blind people might be able to see again, and much more besides. But, beyond medical applications, there’s the chance that BMIs may endow us — or some of us, at least — with new cognitive capabilities. This territory is ethically treacherous, and the outsized media attention paid to Neuralink can be partly explained by Musk’s eulogising of such a cyborg future.


The medical appeal of BMIs is relatively uncomplicated, and many advances have already been made. Human clinical trials date back to the 1990s — Neuralink was by no means the first — when a researcher at Georgia Tech called Phil Kennedy implanted a basic system into a patient with severe paralysis. After extensive training, this patient was able to control a computer cursor through focused thinking. (In 2014, exemplifying a certain Muskian zeal, Kennedy had a BMI implanted into his own brain.)

More recently, other research teams have demonstrated impressive progress. Last year, researchers in Lausanne helped a paralysed man walk, while, at Stanford, scientists used a BMI to allow motor neurone disease patients who had lost the physical ability to talk to communicate using their thoughts. BMIs have been used to suppress epileptic seizures, and to alleviate the symptoms of Parkinson’s disease through targeted neural stimulation.

While Neuralink has some catching up to do, its engineering prowess may well accelerate these desirable clinical applications. The development of precision surgical robotics to perform implantations with superhuman delicacy, the increased bandwidth through scaling up the number and density of probes, and the application of huge amounts of computational power could all make a difference. The company’s first stated goal is to restore movement in paralysed people, and it’s plausible that they’ll make rapid progress.

On the other hand, BMI development is as much a scientific problem as it is an engineering challenge, and Musk’s typically hard-charging engineering approach may not transfer over smoothly. Unlike building electric cars and space rockets, understanding how the brain works is not a solved scientific problem, and it is unlikely to become one anytime soon. Medical research of any kind has to proceed slowly, to minimise the suffering of any animals involved and to ensure human safety. Nobody wants a “rapid unscheduled disassembly” inside their own head, as happened with one of Musk’s rockets not so long ago.

This brings us to the wider ethical issues raised by BMIs, and to the critical distinction between medical uses and cognitive enhancement. While most of us might agree that treating neurological disorders is a good thing, the ethics of the latter are far murkier.

First, there are questions of feasibility. Musk paints a picture of a future in which all of us may use implants to improve ourselves, going far beyond medical need. To “unlock human potential tomorrow”, as Neuralink’s website puts it. A quick trip to a high-street neurosurgeon, and bingo, you’re super-intelligent.

But how likely is this? The scientific challenges BMIs have to overcome mean that the first non-medical applications will probably be limited to things such as controlling apps on our phones or other devices. Will people really undergo elective brain surgery so that they can doom-scroll social media with their minds alone? I know I wouldn’t. I already have pretty effective brain-world interfaces, such as my hands and my mouth. A new hole in the head seems excessive.

Then there are deeper questions about desirability. One worry is that differential access to enhancement will create an overclass of cognitively superior elites. This is a valid concern, though one tempered by those feasibility issues. A more pressing worry is algorithmic bias — well recognised in AI circles, but still poorly addressed. If BMIs are trained on data from only a subset of society — and guess which subset that might be — then getting them to work properly may require us to think in ways characteristic of that subset. This would install social biases directly into our minds, potentially fostering a kind of mental monoculture.

Finally, there’s what might happen if we allow companies and organisations access to our neural data — probably the most intimate form of personal information imaginable. Most of us have already traded privacy for convenience in various ways, but the combination of BMI technology with AI raises the ethical stakes significantly. Remote mind reading, while scientifically distant, brings with it the Orwellian prospect of governments punishing people for having the ‘wrong’ thoughts. Even more concerning is the prospect of remote mind control through neural stimulation. Again, this scenario is probably very far away, if it is possible at all, but the consequences would be existential. When we lose autonomy over our own mental states, over our own conscious experiences, we have arrived at a place where what it means to be a human being hangs in the balance. Whatever the benefits, this is a high price to pay.

Maybe it’s a failure of imagination on my part, but while I am truly excited about the medical opportunities of neural implants, I would rather unlock human potential in ways that are far less invasive. And we should certainly think twice before hooking our brains directly to the servers of corporations — while we still can.  

Published in The Guardian on February 26, 2024. Reprinted with permission. 
