After gene editing and artificial intelligence, neurotechnology has become the next emerging technology to generate international concern about ethical risks. The Organisation for Economic Co-operation and Development (OECD) has issued the first formally agreed international recommendation on the topic, citing concerns about data privacy, liability, and the possibility of mind control, though the document has no legally binding force.
Over the past few years, a growing number of experts have warned that brain implants and monitors could one day be used to manipulate human behaviour or attitudes. While the technology can be used for good, to treat mental illness or better understand the mind, it can also be put to nefarious uses, such as invading privacy or controlling human behaviour.
In the first quasi-official recognition of that risk, on 11 December 2019 the 36 member-states of the OECD issued a formal recommendation that governments, companies and researchers worldwide pay greater attention to governing the possible misuse of neurotechnology. The document sets out general principles rather than specific proposals for action, but it matters nonetheless because it prompts governments to think now about the potential consequences of this technology.
Neurotechnology covers a broad sweep of research and products. Some applications are already in use clinically, such as wearable devices to monitor patients’ brain activity or implants to help people move disabled hands or legs. It is at the heart of several mega-research efforts – such as the European Commission’s Human Brain Project and the US BRAIN initiative to map, model and understand how the brain works. Some companies have developed AI tools to analyse patients’ brain waves, help diagnose mental disorders and personalise antidepressant treatment.
So far, the applications have been benign. But experts worry how these and as-yet undeveloped technologies could be used in future.
For instance, in theory neurotechnologies could one day be used to enhance human mental powers, change people’s personalities or alter how they perceive the world. Brain data could be used to categorise people by intelligence or temperament – so companies could target marketing individual-by-individual, or authoritarian governments could control citizens. Police could try to predict crimes and detain people in advance, or develop powerful lie detectors. In the process, innocent people could be charged, unfair biases amplified, or freedoms curtailed.
Then there are the unintended consequences that might arise. Last year, for instance, researchers at the University of Zurich studied nine Parkinson’s disease patients around the world who had received brain implants to control their tremors. The implants improved their symptoms but had an unexpected side-effect: although all had previously been good swimmers, they could no longer swim. At least one nearly drowned when he jumped into the water and suddenly discovered the problem.
For the OECD, the next steps will include gathering more data and sharing information internationally about the technologies and the quandaries they raise. But the countries leading neurotech research have not yet agreed on whether or when to translate any of this into hard regulation. Either way, further discussions are likely.