New tools will allow us to better understand the workings of the brain, helping the sick and opening up economic opportunities, but they will also put our mental privacy at risk.
The Covid-19 crisis has revealed, among other things, the fragile privacy of our personal data and the shortcomings of national and international systems for addressing the problem. But I am writing to highlight an even greater problem – our mental privacy. It is an issue that is fast approaching, but one we still have time to head off.
The story begins in February 2013 in the United States Congress. In a joint meeting of the House of Representatives and the Senate, then-US president Barack Obama delivers the State of the Union Address and announces the launch of a large-scale, long-term scientific project called BRAIN – Brain Research through Advancing Innovative Neurotechnologies.
Compared by Obama to the feat of putting man on the moon, this initiative will develop electronic, optical, molecular and computational tools that will be applied to the brains of laboratory animals and human patients. These tools will record brain activity or interfere with it, using either invasive devices (implanted inside the brain) or non-invasive devices (attached to the skull).
Why did Obama launch this project? Neurotechnology is needed for scientific, clinical and economic reasons. From a scientific perspective, it is essential to understand the brain, the last frontier of our comprehension of the human body. By doing so, we will understand ourselves from within for the first time.
From a medical perspective, the tools will facilitate diagnoses, as well as an understanding of and cure for psychiatric and neurological diseases, which are having an increasingly devastating impact on the population.
In terms of the economy, neurotechnology is likely to open up an enormous field of development for business and industry, similar to what happened with the Human Genome Project – the major international genetic mapping initiative launched in 1990 that saw a 124-fold increase in investment in this field. This economic factor was the one that did most to convince the US Congress.
With estimated funding of $6 billion (€5.08 billion) – the budget has not been reduced despite the change of administration at the White House – the BRAIN initiative is now in its fifth year of a prospective 12, and involves more than 500 laboratories developing all kinds of neurotechnologies. Just as the genome sequencing technique revolutionized biomedicine, BRAIN is a methodological revolution for neuroscience. And it is not only happening in the US: since 2013, projects focusing on the brain have been launched in China, Japan, Korea, Australia, Canada, Israel and the European Union, and are all part of an international BRAIN initiative.
Besides these public efforts, private companies, pharmaceutical corporations and, above all, technology companies are increasingly involved in the development of brain-computer interfaces that wire the brain to the internet. These interfaces may become the equivalent of the iPhones of the future.
This neurotechnological revolution is not only good and necessary; it is in fact urgent, as readers with family or friends suffering from neurological or psychiatric diseases will know – we need to develop more effective therapies. But science is neutral, and these techniques that can do so much good for humanity can also have negative consequences.
It would be possible, for example, to use neurotechnology to read a person’s brain activity, or to interfere with their brain and change their behavior. This is not science fiction; it is something we already do with lab animals and, sooner or later, it will be done with humans.
How far off are we? Since 2008, a lab in Berkeley, California, has been using magnetic scanners to guess with increasing accuracy what image a volunteer is thinking about. Facebook is developing a non-invasive brain-computer interface, like an electronic cap, that can decipher the word the user is thinking about and type it on the screen.
These types of devices can revolutionize industry, but they can also destroy our mental privacy. Brain activity generates not only conscious thoughts, but also subconscious ones. Recording brain activity will sooner or later allow access to the subconscious.
On account of these and other developments, a group of 25 experts – clinicians, engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.
To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it. To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.
This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.
The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private and only selected information can be shared. The aim is to ensure that the most personal data never leaves the machines that are wired to our brains. One option is to use systems that are already applied to financial data: open-source files and blockchain technology, so that we always know where the data came from, and smart contracts to prevent it from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
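The combination of blockchain-style provenance and consent gating described above can be sketched in a few lines of Python. This is a toy illustration only, not a real system: the `NeurodataLedger` class and its fields are hypothetical, and it shows just the core idea – each record stores the hash of the previous one, so any tampering with past entries breaks the chain, and no record is accepted without the subject’s consent.

```python
import hashlib
import json

class NeurodataLedger:
    """Toy append-only ledger for hypothetical "neurodata" records.

    Each record embeds the hash of the previous record (hash chaining,
    the basic mechanism behind blockchain provenance), so altering any
    past entry invalidates every later hash.
    """

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self.records = []

    def append(self, owner, purpose, consent_given):
        # Consent gating: refuse to record data the subject did not authorize.
        if not consent_given:
            raise PermissionError("subject consent required for neurodata use")
        prev = self.records[-1]["hash"] if self.records else self.GENESIS
        body = {"owner": owner, "purpose": purpose, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        body["hash"] = digest
        self.records.append(body)
        return digest

    def verify(self):
        # Recompute every hash from scratch; one altered field breaks the chain.
        prev = self.GENESIS
        for rec in self.records:
            body = {"owner": rec["owner"], "purpose": rec["purpose"], "prev": prev}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

ledger = NeurodataLedger()
ledger.append("subject-001", "medical diagnosis", consent_given=True)
ledger.append("subject-001", "research study", consent_given=True)
print(ledger.verify())            # True: chain is intact
ledger.records[0]["purpose"] = "advertising"  # tamper with an old record
print(ledger.verify())            # False: tampering detected
```

A real deployment would of course involve distributed verification and cryptographic signatures rather than a single in-memory list, but the sketch captures why provenance tracking makes silent misuse of such data detectable.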
This is only the beginning when it comes to dealing with the problem. We are working in Spain, Chile and other countries, as well as with international organizations, to raise awareness of the need for parliaments and governments to take action.
In Spain, the launch of the Charter of Digital Rights within the National Artificial Intelligence Strategy may be a good start. Although Spain is not a leader in the creation of neurotechnology or artificial intelligence, it could be a leader in their social and ethical aspects. Right now, it is urgent that we prepare to prevent the next epidemic – perhaps not a viral one, but one that affects our most basic human rights. This is the ideal time to lay the foundations for the future, and for the society that we want to be.