While recovering, DeGray, who was 53 at the time, learned from his doctors that he was permanently paralyzed from the collarbones down. With the exception of vestigial twitches, he cannot move his torso or limbs. “I’m about as hurt as you can get and not be on a ventilator,” he told me. For several years after his accident, he “simply laid there, watching the History Channel” as he struggled to accept the reality of his injury.
Some time later, while at a fund-raising event for stem-cell research, he met Jaimie Henderson, a professor of neurosurgery at Stanford University. The pair got to talking about robots, a subject that had long interested DeGray, who grew up around his family’s machine shop. As DeGray remembers it, Henderson captivated him with a single question: Do you want to fly a drone?
Henderson explained that he and his colleagues had been developing a brain-computer interface: an experimental connection between someone’s brain and an external device, like a computer, robotic limb or drone, which the person could control simply by thinking. DeGray was eager to participate, eventually moving to Menlo Park to be closer to Stanford as he waited for an opening in the study and the necessary permissions. In the summer of 2016, Henderson opened DeGray’s skull and exposed his cortex — the thin, wrinkled, outermost layer of the brain — into which he implanted two 4-millimeter-by-4-millimeter electrode arrays resembling miniature beds of nails. Each array had 100 tiny metal spikes that, collectively, recorded electric impulses surging along a couple of hundred neurons or so in the motor cortex, a brain region involved in voluntary movement.
After a recovery period, several of Henderson’s collaborators assembled at DeGray’s home and situated him in front of a computer screen displaying a ring of eight white dots the size of quarters, which took turns glowing orange. DeGray’s task was to move a cursor toward the glowing dot using his thoughts alone. The scientists attached cables to metal pedestals protruding from DeGray’s head; the cables transmitted the electrical signals recorded in his brain to a decoder: a nearby network of computers running machine-learning algorithms.
The algorithms were constructed by David Brandman, at the time a doctoral student in neuroscience collaborating with the Stanford team through a consortium known as BrainGate. He designed them to rapidly associate different patterns of neural activity with different intended hand movements, and to update themselves every two to three seconds, in theory becoming more accurate each time. If the neurons in DeGray’s skull were like notes on a piano, then his distinct intentions were analogous to unique musical compositions. An attempt to lift his hand would coincide with one neural melody, for example, while trying to move his hand to the right would correspond to another. As the decoder learned to identify the movements DeGray intended, it sent commands to move the cursor in the corresponding direction.
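In software terms, a decoder of this kind can be thought of as a regression model that is periodically refit on a buffer of recent data, pairing each burst of neural activity with the movement the user was cued to intend. The sketch below is a toy illustration under invented assumptions (simulated firing rates on 192 channels, ridge regression, a refit every 30 samples standing in for "every two to three seconds"); it is not BrainGate's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 192 recorded channels, 2-D cursor velocity (x, y).
N_CHANNELS, N_DIMS = 192, 2
true_tuning = rng.normal(size=(N_CHANNELS, N_DIMS))  # each channel's (unknown) direction tuning

def record_firing(intended_velocity):
    """Simulate noisy firing rates for an intended hand movement."""
    return true_tuning @ intended_velocity + rng.normal(scale=0.5, size=N_CHANNELS)

class OnlineDecoder:
    """Linear decoder refit from a growing buffer of (rates, intent) pairs,
    loosely mirroring the closed-loop recalibration described above."""

    def __init__(self):
        self.X, self.Y = [], []                  # firing rates, intended velocities
        self.W = np.zeros((N_CHANNELS, N_DIMS))  # decoding weights

    def observe(self, rates, intended):
        self.X.append(rates)
        self.Y.append(intended)
        if len(self.X) % 30 == 0:                # refit every 30 samples ("2-3 seconds")
            X, Y = np.array(self.X), np.array(self.Y)
            # ridge regression: W = (X'X + lambda*I)^-1 X'Y
            self.W = np.linalg.solve(X.T @ X + 1.0 * np.eye(N_CHANNELS), X.T @ Y)

    def decode(self, rates):
        return rates @ self.W                    # predicted cursor velocity

# Calibration: glowing targets cue known intended directions.
decoder = OnlineDecoder()
for _ in range(300):
    v = rng.normal(size=N_DIMS)
    decoder.observe(record_firing(v), v)

# After calibration, a fresh attempt to move rightward should decode
# to a velocity pointing roughly rightward.
v = np.array([1.0, 0.0])
v_hat = decoder.decode(record_firing(v))
```

In the real system the "intended" labels during calibration come from the cued target positions, and the decoder's output drives the cursor in real time, closing the loop between user and machine.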
Brandman asked DeGray to imagine a movement that would give him intuitive control of the cursor. Staring at the computer screen, searching his mind for a way to begin, DeGray remembered a scene from the movie “Ghost” in which the deceased Sam Wheat (played by Patrick Swayze) invisibly slides a penny along a door to prove to his girlfriend that he still exists in a spectral form. DeGray pictured himself pushing the cursor with his finger as if it were the penny, willing it toward the target. Although he was physically incapable of moving his hand, he tried to do so with all his might. Brandman was ecstatic to see the decoder work as quickly as he had hoped. In 37 seconds, DeGray gained control of the cursor and reached the first glowing dot. Within several minutes he hit dozens of targets in a row.
Only a few dozen people on the planet have had neural interfaces embedded in their cortical tissue as part of long-term clinical research. DeGray is now one of the most experienced and dedicated among them. Since that initial trial, he has spent more than 1,800 hours spanning nearly 400 training sessions controlling various forms of technology with his mind. He has played a video game, manipulated a robotic limb, sent text messages and emails, purchased products on Amazon and even flown a drone — just a simulator, for now — all without lifting a finger. Together, DeGray and similar volunteers are exploring the frontier of a technology with the potential to fundamentally alter how humans and machines interact.
Scientists and engineers have been creating and studying brain-computer interfaces since the 1950s. Given how much of the brain’s behavior remains a mystery — not least how consciousness emerges from three pounds of electric jelly — the aggregate achievements of such systems are remarkable. Paralyzed individuals with neural interfaces have learned to play simple tunes on a digital keyboard, control exoskeletons and maneuver robotic limbs with enough dexterity to drink from a bottle. In March, a team of international scientists published a study documenting for the first time that someone with complete, bodywide paralysis used a brain-computer interface to convey their wants and needs by forming sentences one letter at a time.
Neural interfaces can also create bidirectional pathways of communication between brain and machine. In 2016, Nathan Copeland, who was paralyzed from the chest down in a car accident, not only fist-bumped President Barack Obama with a robotic hand, he also experienced the tactile sensation of the bump in his own hand as the prosthesis sent signals back to electrodes in his brain, stimulating his sensory cortex. By combining brain-imaging technology and neural networks, scientists have also deciphered and partly reconstructed images from people’s minds, producing misty imitations that resemble weathered Polaroids or smeared oil paintings.
In the history of life on Earth, we have never encountered a mind without a body. Highly complex cognition has always been situated in an intricate physical framework, whether eight suction-cupped arms, four furry limbs or a bundle of feather and beak. Human technology often amplifies the body’s inherent abilities or extends the mind into the surrounding environment through the body. Art and writing, agriculture and engineering: All human innovations have depended on, and thus been constrained by, the body’s capacity to physically manipulate whatever tools the mind devises. If brain-computer interfaces fulfill their promise, perhaps the most profound consequence will be this: Our species could transcend those constraints, bypassing the body through a new melding of mind and machine.
On a spring morning in 1893, during a military training exercise in Würzburg, Germany, a 19-year-old named Hans Berger was thrown from his horse and nearly crushed to death by the wheel of an artillery gun. The same morning, his sister, 60 miles away in Coburg, was flooded with foreboding and persuaded her father to send a telegram inquiring about her brother’s well-being. That seemingly telepathic premonition obsessed Berger, compelling him to study the mysteries of the mind. His efforts culminated in the 1920s with the invention of electroencephalography (EEG): a method of recording electrical activity in the brain using electrodes attached to the scalp. The oscillating patterns his apparatus produced, reminiscent of a seismograph’s scribbling, were the first transcriptions of the human brain’s cellular chatter.
In the following decades, scientists learned new ways to record, manipulate and channel the brain’s electrical signals, constructing ever-more-elaborate bridges between mind and machine. In 1964, José Manuel Rodríguez Delgado, a Spanish neurophysiologist, brought a charging bull to a halt using radio-controlled electrodes embedded in the animal’s brain. In the 1970s, the University of California, Los Angeles, professor Jacques Vidal coined the term “brain-computer interface” and demonstrated that people could mentally guide a cursor through a simple virtual maze. By the early 2000s, the Duke University neuroscientist Miguel Nicolelis and his collaborators had published studies demonstrating that monkeys implanted with neural interfaces could control robotic prostheses with their minds. In 2004, Matt Nagle, who was paralyzed from the shoulders down, became the first human to do the same. He further learned how to use his thoughts alone to play Pong, change channels on a television, open emails and draw a circle on a computer screen.
Since then, the pace of achievements in the field of brain-computer interfaces has increased greatly, thanks in part to the rapid development of artificial intelligence. Machine-learning software has substantially improved the efficiency and accuracy of neural interfaces by automating some of the necessary computation and anticipating the intentions of human users, not unlike how your phone or email now has A.I.-assisted predictive text. Last year, the University of California San Francisco neurosurgeon Edward Chang and a dozen collaborators published a landmark study describing how a neural interface gave a paralyzed 36-year-old man a voice for the first time in more than 15 years. Following a car crash and severe stroke at age 20, the man, known as Pancho, lost the ability to produce intelligible speech. Over a period of about 20 months, 128 disk-shaped electrodes placed on top of Pancho’s sensorimotor cortex recorded electrical activity in brain regions involved in speech processing and vocal tract control as he attempted to speak words aloud. A decoder associated different patterns of neural activity with different words and, with the help of language-prediction algorithms, eventually learned to decipher 15 words per minute with 75 percent accuracy on average. Although this is only a fraction of the rate of typical speech in English (140 to 200 words a minute), it is considerably faster than many point-and-click methods of communication available to people with severe paralysis.
In another groundbreaking study published last year, Jaimie Henderson and several colleagues, including Francis Willett, a biomedical engineer, and Krishna Shenoy, an electrical engineer, reported an equally impressive yet entirely different approach to communication by neural interface. The scientists recorded neurons firing in Dennis DeGray’s brain as he visualized himself writing words with a pen on a notepad, trying to recreate the distinct hand movements required for each letter. He mentally wrote thousands of words in order for the system to reliably recognize the unique patterns of neural activity specific to each letter and output words on a screen. “You really learn to hate M’s after a while,” he told me with characteristic good humor. Ultimately, the method was extremely successful. DeGray was able to type up to 90 characters or 18 words a minute — more than twice the speed of his previous efforts with a cursor and virtual keyboard. He is the world’s fastest mental typist. “Sometimes I get going so fast it’s just one big blur,” he said. “My concentration gets to a point where it’s not unusual for them to remind me to breathe.”
Achievements in brain-computer interfaces to date have relied on a mix of invasive and noninvasive technologies. Many scientists in the field, including those who work with DeGray, rely on a surgically embedded array of spiky electrodes produced by a Utah-based company, Blackrock Neurotech. The Utah Array, as it’s known, can differentiate the signals of individual neurons, providing more refined control of connected devices, but the surgery it requires can result in infection, inflammation and scarring, which may contribute to eventual degradation of signal strength. Interfaces that reside outside the skull, like headsets that depend on EEG, are currently limited to eavesdropping on the collective firing of groups of neurons, sacrificing power and precision for safety. Further complicating the situation, most neural interfaces studied in labs require cumbersome hardware, cables and an entourage of computers, whereas most commercially available interfaces are essentially remote controls for rudimentary video games, toys and apps. These commercial headsets don’t solve any real-world problems, and the more powerful systems in clinical studies are too impractical for everyday use.
With this problem in mind, Elon Musk’s company Neuralink has developed an array of flexible polymer threads studded with more than 3,000 tiny electrodes connected to a bottlecap-size wireless radio and signal processor, as well as a robot that can surgically implant the threads in the brain, avoiding blood vessels to reduce inflammation. Neuralink has tested its system in animals and has said it would begin human trials this year.
Synchron, which is based in New York, has developed a device called a Stentrode that doesn’t require open-brain surgery. It is a four-centimeter, self-expanding tubular lattice of electrodes, which is inserted into one of the brain’s major blood vessels via the jugular vein. Once in place, a Stentrode detects local electric fields produced by nearby groups of neurons in the motor cortex and relays recorded signals to a wireless transmitter embedded in the chest, which passes them on to an external decoder. In 2021, Synchron became the first company to receive F.D.A. approval to conduct human clinical trials of a permanently implantable brain-computer interface. So far, four people with varied levels of paralysis have received Stentrodes and used them, some in combination with eye-tracking and other assistive technologies, to control personal computers while unsupervised at home.
Philip O’Keefe, 62, of Greendale, Australia, received a Stentrode in April 2020. Because of amyotrophic lateral sclerosis (A.L.S.), O’Keefe can walk only short distances, cannot move his left arm and is losing the ability to speak clearly. At first, he explained, he had to concentrate intensely on the imagined movements required to operate the system — in his case, thinking about moving his left ankle for different lengths of time. “But the more you use it, the more it’s like riding a bike,” he said. “You get to a stage where you don’t think so hard about the movement you need to make. You think about the function you need to execute, whether it’s opening an email, scrolling a web page or typing some letters.” In December, O’Keefe became the first person in the world to post to Twitter using a neural interface: “No need for keystrokes or voices,” he wrote by mind. “I created this tweet just by thinking it. #helloworldbci”
Thomas Oxley, a neurologist and the founding C.E.O. of Synchron, thinks future brain-computer interfaces will fall somewhere between LASIK and cardiac pacemakers in terms of their cost and safety, helping people with disabilities recover the capacity to engage with their physical surroundings and a rapidly evolving digital environment. “Beyond that,” he says, “if this technology allows anyone to engage with the digital world better than with an ordinary human body, that is where it gets really interesting. To express emotion, to express ideas — everything you do to communicate what is happening in your brain has to happen through the control of muscles. Brain-computer interfaces are ultimately going to enable a passage of information that goes beyond the limitations of the human body. And from that perspective, I think the capacity of the human brain is actually going to increase.”
There is no technology yet that can communicate human thoughts as fast as they occur. Fingers and thumbs will never move quickly enough. And there are many forms of information processing better suited to a computer than to a human brain. Oxley speculated about the possibility of using neural interfaces to enhance human memory, bolster innate navigational skills with a direct link to GPS, sharply increase the human brain’s computational abilities and create a new form of communication in which emotions are wordlessly “thrown” from one mind to another. “It’s just the beginning of the dawn of this space,” Oxley said. “It’s really going to change the way we interact with one another as a species.”
Frederic Gilbert, a philosopher at the University of Tasmania, has studied the ethical quandaries posed by neurotechnology for more than a decade. Through in-depth interviews, he and other ethicists have documented how some people have adverse reactions to neural implants, including self-estrangement, increased impulsivity, mania, self-harm and attempted suicide. In 2015, he traveled to Penola, South Australia, to meet Rita Leggett, a 54-year-old patient with a very different, though equally troubling, experience.
Several years earlier, Leggett participated in the first human clinical trial of a particular brain-computer interface that warned people with epilepsy of imminent seizures via a hand-held beeper, giving them enough time to take a stabilizing medication or get to a safe place. With the implant, she felt much more confident and capable and far less anxious. Over time, it became inextricable from her identity. “It was me, it became me,” she told Gilbert. “With this device I found myself.” Around 2013, NeuroVista, the company that manufactured the neural interface, folded because it could not secure new funding. Despite her resistance, Leggett underwent an explantation. She was devastated. “Her symbiosis was so profound,” Gilbert told me, that when the device was removed, “she suffered a trauma.”
In a striking parallel, a recent investigation by the engineering magazine IEEE Spectrum revealed that, because of insufficient revenues, the Los Angeles-based neuroprosthetics company Second Sight had stopped producing and largely stopped servicing the bionic eyes it sold to more than 350 visually impaired people around the world. At least one individual’s implant has already failed with no way to repair it — a situation that could befall many others. Some patients enrolled in clinical trials for Second Sight’s latest neural interface, which directly stimulates the visual cortex, have either had the device removed or are contemplating doing so.
If sophisticated brain-computer interfaces eventually transcend medical applications and become consumer goods available to the general public, the ethical considerations surrounding them multiply exponentially. In a 2017 commentary on neurotechnology, the Columbia University neurobiologist Rafael Yuste and 24 colleagues identified four main areas of concern: augmentation; bias; privacy and consent; and agency and identity. Neural implants sometimes cause disconcerting shifts in patients’ self-perception. Some have reported feeling like “an electronic doll” or developing a blurred sense of self. Were someone to commit a crime and blame an implant, how would the legal system determine fault? As neural interfaces and artificial intelligence evolve, these tensions will probably intensify.
All the scientists and engineers I spoke to acknowledged the ethical issues posed by neural interfaces, yet most were more preoccupied with consent and safety than what they regarded as far-off or unproven concerns about privacy and agency. In the world of academic scientific research, the appropriate future boundaries for the technology remain contentious.
In the private sector, ethics are often a footnote to enthusiasm, when they are mentioned at all. As pressure builds to secure funding and commercialize, spectacular and sometimes terrifying claims proliferate. Christian Angermayer, a German entrepreneur and investor, has said he is confident that everyone will be using brain-computer interfaces within 20 years. “It is fundamentally an input-output device for the brain, and it can benefit a large portion of society,” he posted on LinkedIn last year. “People will communicate with each other, get work done and even create beautiful artwork, directly with their minds.” Musk has described the ultimate goal of Neuralink as achieving “a sort of symbiosis with artificial intelligence” so that humanity is not obliterated, subjugated or “left behind” by superintelligent machines. “If you can’t beat em, join em,” he once said on Twitter, calling it a “Neuralink mission statement.” And Max Hodak, a former Neuralink president who was forced out of the company, then went on to found a new one called Science, dreams of using neural implants to make the human sensorium “directly programmable” and thereby create a “world of bits”: a parallel virtual environment, a lucid waking dream, that appears every time someone closes their eyes.
Today, DeGray, 68, still resides in the Menlo Park assisted-living facility he chose a decade ago for its proximity to Stanford. He still has the same two electrode arrays that Henderson embedded in his brain six years ago, as well as the protruding metal pedestals that provide connection points to external machines. Most of the time, he doesn’t feel their presence, though an accidental knock can reverberate through his skull as if it were a struck gong. In his everyday life, he relies on round-the-clock attention from caregivers and a suite of assistive technologies, including voice commands and head-motion tracking. He can get around in a breath-operated wheelchair, but long trips are taxing. He spends much of his time reading news articles, scientific studies and fiction on his computer. “I really miss books,” he told me. “They smell nice and feel good in your hands.”
DeGray’s personal involvement in research on brain-computer interfaces has become the focus of his life. Scientists from Stanford visit his home twice a week, on average, to continue their studies. “I refer to myself as a test pilot,” he said. “My responsibility is to take a nice new airplane out every morning and fly the wings off of it. Then the engineers drag it back into the hangar and fix it up, and we do the whole thing again the next day.”
Exactly what DeGray experiences when he activates his neural interface depends on his task. Controlling a cursor with attempted hand movements, for example, “boils the whole world down to an Etch A Sketch. All you have is left, right, up and down.” Over time, this kind of control becomes so immediate and intuitive that it feels like a seamless extension of his will. In contrast, maneuvering a robot arm in three dimensions is a much more reciprocal process: “I’m not making it do stuff,” he told me. “It’s working with me in the most versatile of ways. The two of us together are like a dance.”
No one knows exactly how long existing electrode arrays can remain in a human brain without breaking down or endangering someone’s health. Although DeGray can request explantation at any time, he wants to continue as a research participant indefinitely. “I feel very validated in what I’m doing here,” he said. “It would break my heart if I had to get out of this program for some reason.”
Regarding the long-term future of the technology in his skull, however, he is somewhat conflicted. “I actually spend quite a bit of time worrying about this,” he told me. “I’m sure it will be misused, as every technology is when it first comes out. Hopefully that will drive some understanding of where it should reside in our civilization. I think ultimately you have to trust in the basic goodness of man — otherwise, you would not pursue any new technologies ever. You have to just develop it and let it become monetized and see where it goes. It’s like having a baby: You only get to raise them for a while, and then you have to turn them loose on the world.”
The Man Who Controls Computers With His Mind. By Ferris Jabr. The New York Times, May 12, 2022.
If used responsibly, neurotechnology — in which machines interact directly with human neurons — can help us understand and cure stubborn illnesses like Alzheimer's and Parkinson's disease, and assist with the development of prosthetic limbs and speech therapy.
But if left unregulated, neurotechnology could also lead to the worst corporate and state excesses, including discriminatory policing and privacy violations, leaving our minds as vulnerable to surveillance as our communications.
Now a group of neuroscientists, philosophers, lawyers, human rights activists and policymakers is racing to protect that last frontier of privacy — the brain.
They are not seeking a ban. Instead, campaigners like Rafael Yuste, who runs the NeuroRights Initiative at Columbia University, call for a set of principles that guarantee citizens' rights to their thoughts, and protection from any intruders, while taking advantage of any potential health benefits.
But they see plenty of reason to be alarmed about certain applications of neurotechnology, especially as it attracts the attention of militaries, governments and technology companies.
China and the U.S. are both leading research into artificial intelligence and neuroscience. The U.S. Defense Department is developing technology that could be used to tweak memories.
It's not just scientists: firms, including major players like Facebook and Elon Musk's Neuralink, are making advances too.
Neurotech wearables are now entering the market. Kernel, an American company, has developed a headset for the consumer market that can record brain activity in real time. Facebook funded a project to create a brain-computer interface that would allow users to communicate without speaking. (Facebook pulled out this summer.) Neuralink is working on brain implants, and in April 2021 released a video of a monkey playing a game with its mind using the company’s implanted chip.
“The problem is what these tools can be used for,” Yuste said. There are some scary examples: Researchers have used brain scans to predict the likelihood of criminals reoffending, and Chinese employers have monitored employees' brainwaves to read their emotions. Scientists have also managed to subliminally probe for personal information using consumer devices.
“We have on the table the possibility of a hybrid human that will change who we are as a species, and that's something that's very serious. This is existential,” he continued. Whether this is a change for good or bad, Yuste argues now is the time to decide.
Inception
The neurotechnology of today cannot decode thoughts or emotions. But with artificial intelligence, that might not be necessary. Powerful machine-learning systems could draw correlations between brain activity and external circumstances.
“In order to raise privacy challenges it’s sufficient that you have an AI that is powerful enough to identify patterns and establish correlative associations between certain patterns of data, and certain mental states,” said Marcello Ienca, a bioethicist at ETH Zurich.
Researchers have already managed to use a machine learning system to infer credit card digits from a person’s brain activity.
Brain scans have also been used in the criminal justice system for diagnostics and for predicting which criminals are likely to offend again, both of which at this stage — like lie detector tests before them — offer limited and at times flawed information.
That could have dire consequences for people of color, who are already likely to suffer disproportionately from algorithmic discrimination.
“[W]hen, for example, lie detection or the detection of memory appears accurate enough according to the science, why would the public prosecutor say no to such kind of technology?” said Sjors Ligthart, who studies the legal implications of coercive brain reading at Tilburg University.
With brain implants in particular, experts say it's unclear whether thoughts would be induced, or originate from the brain, which poses questions over accountability. “You cannot discern which tasks are being conducted by yourself and which thoughts are being accomplished by the AI, simply because the AI is becoming the mediator of your own mind,” Ienca said.
Where is my mind?
People have never needed to assert the authority of the individual over the thoughts they carry, but neurotechnology is prompting policymakers to do just that.
Chile is working on the world’s first law that would guarantee its citizens so-called neurorights.
Senator Guido Girardi, who is behind Chile’s proposal, said the bill would create a registration system for neurotechnologies similar to the one for medicines, and that using these technologies would require the consent of both the patient and doctors.
The goal is to ensure that technologies such as “AI can be used for good, but never to control a human being,” Girardi said.
In July, Spain adopted a nonbinding Charter for Digital Rights, meant to guide future legislative projects.
“The Spanish approach is to ensure the confidentiality and the security of the data that are related to these brain processes, and to ensure the complete control of the person over their data,” said Paloma Llaneza González, a data protection lawyer who worked on the charter.
“We want to guarantee the dignity of the person, equality and nondiscrimination, because you can discriminate against people based on his or her thoughts,” she said.
The Organisation for Economic Co-operation and Development, a Paris-based club of mostly rich countries, has approved nonbinding guidelines on neurotechnology that list new rights meant to protect privacy and the freedom to think one's own thoughts, known as cognitive liberty.
The problem is that it's unclear whether existing legislation, which did not have neurotech in mind when drafted, is adequate.
“What we do need is to take another look at existing rights and specify them to neurotechnology,” said Ligthart. One target might be the European Convention on Human Rights, which ensures, for example, the right to respect for private life, a right that could be updated to also include mental privacy.
The GDPR, Europe’s strict data protection regime, offers protection for sensitive data, such as health status and religious beliefs. But a study by Ienca and Gianclaudio Malgieri at the EDHEC Business School in Lille found that the law might not cover emotions and thoughts.
Yuste argues that action is needed on an international level, and organizations such as the U.N. need to act before the technology is developed further.
“We want to do something a little bit more intelligent than wait until we have a problem and then try to fix it when it's too late, which is what happened with the internet and privacy and AI,” Yuste said.
Today’s privacy issues are going to be “peanuts compared to what's coming.”
Machines can read your brain. There’s little that can stop them. By Melissa Heikkilä. Politico, August 31, 2021.
Recording memories, reading thoughts, and manipulating what another person sees through a device in their brain may seem like science-fiction plots about a distant and troubled future. But a team of multidisciplinary researchers say the first steps toward inventing these technologies have already been taken. Through a concept called “neuro-rights,” they want to put in place safeguards for our most precious biological possession: the mind.
If a well-placed magnet causes you to “leave” your body, what else is possible with a little transcranial stimulation?
This question is of growing concern as wearable scanners become increasingly common. Last week, the Columbia University neuroscience professor Rafael Yuste advocated for the United Nations to adopt “neuro-rights” into its Universal Declaration of Human Rights, in response to a burgeoning industry promising to alter—some would say manipulate—consciousness.
As Yuste phrased it during an online conference,
“If you can record and change neurons, you can in principle read and write the minds of people. This is not science fiction. We are doing this in lab animals successfully.”
Neurotechnology is a growing field that includes a range of technologies that influence higher brain activities. Therapeutics designed to repair and improve brain function are included in this discipline—interventions for sleep problems, overstimulation, motor coordination, epilepsy, even depression.
So are more insidious intentions, however. You can imagine such devices in the hands of a cult leader, for example. Or perhaps a political leader steeling up their base. If the human imagination can create an idea, it can be transformed into reality, and not all humans are benevolent.
The ethical question is not new. For example, the debate over embryonic stem cells raged for years. Promises of trait enhancement concerned people who thought scientists would play the role of a god. While that debate has mostly died down, the use of neurotechnology by militaries and tech companies—particularly concerning privacy—will be contentious for decades.
Cognitive liberty is the principle that every individual must be allowed to maintain their own agency. An extension of the concept of freedom of thought, cognitive liberty is defined as “the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought,” as written by the neuroethicist Wrye Sententia and the legal theorist Richard Glen Boire.
The challenges to cognitive liberty include privacy, which they argue must encompass the domain of inner thought; autonomy, so that thought processes remain the province of the individual; and choice, provided that the individual is not harming others.
Yuste believes the U.N.’s declaration, created in the wake of World War II in 1948, needs immediate revision. Deep brain stimulation is already an FDA-approved procedure. Social media creates its own addictions and mental-health problems, but users retain a sense of agency; when tech can get “under the skull and get at our neurons,” as the Johns Hopkins professor of neurology and neuroscience John Krakauer says, the need for safeguards becomes urgent.
For Yuste it’s completely a matter of agency—and liberty.
“This is the first time in history that humans can have access to the contents of people’s minds. We have to think very carefully about how we are going to bring this into society.”
Scientists urge UN to add ‘neuro-rights’ to Universal Declaration of Human Rights. By Derek Beres. Big Think, December 9, 2020.