09/09/2022

Why We Need a Universal Declaration for Neurorights

 
BRAD EVANS: I would like to begin this conversation by asking about a recent claim you made concerning how we are within a decade of producing technologies that will be able to read a person’s mind. Can you talk more about the scientific basis for this assertion, which for many would seem to belong to the realm of science fiction?
 
RAFAEL YUSTE: With laboratory animals, neuroscientists have already been able to decode brain activity to different degrees and even manipulate it. We have shown it is possible to take control of the functioning of the brain, something that has been done for more than a decade now. And we see it becoming an increasingly powerful possibility, especially as the technologies to intervene become more advanced. One example drawn from the work we do in our laboratory uses optical neurotechnology to study vision in mice: by monitoring the activity of neurons in the visual cortex we can actually decode what the animal is looking at, and by activating those neurons we can take it a stage further and make the animal see things it is not actually seeing.
 
This is just an example of what you can do with animals, which can have a profound impact on what they perceive and how they respond to manufactured stimuli. Decoding brain activity in this way could lead to the control of behavior, by altering perception and recoding the images in the mind. What we can already do with mice today, we will no doubt be able to do with humans tomorrow. It’s unavoidable. So, what matters now is how we learn to regulate it and ensure it is not abused in any way.
 
My colleagues at Stanford recently showed how it is now possible to decode speech in paralyzed patients up to some 20 words per minute with 95 percent accuracy, just by recording neuroactivity with implanted devices. In terms of stimulating and controlling brain activity, there is also a longer history of interventions for medical conditions such as Parkinson’s disease and depression that reveals the ways the brain is receptive to these technologies. While these procedures haven’t set out to control behavior, studies have shown that in some cases they can cause alienation and an alteration in the identity of the patients.
 
BE : I appreciate here that you are not raising your concerns with the active manipulation of the mind as a critical thinker but as a practicing neuroscientist. When did the alarm bells first ring for you concerning this issue?
 
RY : In the past I wasn’t at all working in ethics, but my passion was really to try and understand how the brain works. This was driven by a commitment that I still have, to do the very best for my patients. Through this I helped launch the US BRAIN Initiative back in 2013 during the Obama administration. It was during the conversations we had at the early stages of this initiative that I started to become more and more concerned with the commercial applications of neurotechnologies, which I realized could easily veer away from medical applications and be adapted for wider social usage. This prospect increasingly worried me as a scientist who was committed to exploring fully their medical capabilities. There seemed to me to be a fundamental disconnect between using neurotechnologies to save a person’s life and rolling them out widely for profit and everyday use.
 
So, my first response was to join the ethical council of the US BRAIN Initiative, to really try and push forward a discussion on an issue that nobody was really talking about. Following this, I helped organize a grassroots meeting held at Columbia University in 2017, which featured 25 experts from all over the world, drawn from neuroscience, neurosurgery, medical ethics, AI, and other fields, and representing different international brain initiatives. The ambition was to map out the potential ethical and societal challenges caused by neurotechnology and, more importantly, consider what we could do as a collective response. Through our discussions we concluded that what we were dealing with wasn’t an issue simply for medical ethics; it was a human rights problem that demanded an urgent solution. The brain is not just another organ. It is the organ that generates all our mental and cognitive abilities. After all, is there a more defining right than the rights we have to our mind and intellectual faculties?
 
BE : What you are proposing would seem to be rather self-evident in terms of protecting the integrity of the mind, and yet it also proposes a remarkable shift in how we might also conceptualize violence. We still often think about human rights violations as representing an assault on the body, and while there has been a notable shift in recent times to recognize the lasting psychological effects of violence and trauma, our analysis still often remains wedded to bodily concerns. What you seem to be suggesting is a shift in the very order of importance for rights, which I understand you have been trying to codify with the NeuroRights Foundation?
 
RY : The mind is the essence of what makes us human. So, if this is not a human rights issue, then what is a human rights issue? The NeuroRights Foundation began precisely from this ethical standpoint. This led us to identify the five neurorights we felt needed legal protections: the rights to mental privacy, personal identity, free will, fair access to mental augmentation, and protection from algorithmic bias. What brings all of them together is the necessity to see the integrity of the mind as central to any human rights deliberations. This led to the publication of an article in the journal Nature on the ethical priorities we face as we come to terms with neurotechnology and AI, and instigated further advocacy work with international organizations and countries.
 
Working with Jared Genser, who is a leading human rights lawyer, and his team, we have been putting our energies into raising awareness about the issue of neurorights and pushing for legislation that can guard against potential abuses before they arise, after which it may be too late to act effectively. A notable success has been in Chile, which in 2021 became the first country to officially ratify neuroprotections, by approving a constitutional amendment, with unanimous support, that protects brain activity and the information extracted from it.
 
I should stress that these concerns have been a direct response to the development of the technologies, which have in turn inspired a number of conversations in which we have sought to bring together human rights law with the ethical ideas founded in the Hippocratic Oath. It required those of us working in the medical or biomedical professions to be alert to the potential dangers and reach out to those working in the legal fields. It became very obvious to us that we were not going to resolve this by relying upon medical ethics alone. It needed to be framed more as a policy issue, which required speaking with those who had vast experience of working to prevent some of the worst human rights abuses. Jared, for example, has worked with victims of torture and helps free people who are politically detained by authoritarian regimes. You don’t think of such expertise when you first encounter neurotechnologies, but you eventually realize the evident crossovers. That was a real breakthrough for me, learning to see how we could both gain in terms of our responses.
 
I should also add: this is not just about neurotechnologies. We have all these other disruptive technologies being introduced, which have the potential to dramatically alter the human condition. The Metaverse, robotics, gaming, surveillance technologies, the internet, genetic engineering, and artificial intelligence — these will all require us to think more rigorously about their impacts and what it means to be human. Surely if what it means to be human changes, then so must our conception of the rights humans should be entitled to? Such issues cannot be left to soft-law regulations as we see them in the culture of medical ethics alone. We need to develop and incorporate legal provisions that can protect humans of the future.
 
BE : I am very taken here by this idea of the broadening of the framework for human rights to account for what William Connolly once termed the “neuropolitical.” Mindful of this, I would like to press you more on the problem of violence. We often think of violence in very invasive ways; yet what also seems to concern us is the anticipated arrival of noninvasive technologies that might also demand a rethinking of what violation actually means?
 
RY : Our angle is exactly to look at the abusive use of noninvasive technologies as representing a new kind of psychological violence. From World War II onward, our conception of human rights, as you mentioned, has been tied to concerns with violence and mobilized to physically protect people from harm. We are entering a world, however, where technologies no longer simply threaten our bodies. They are directly affecting our minds. But maybe we shouldn’t look at this negatively. Just as the human rights framework has provisions that positively help improve the human condition, so we might also see the protection of neurorights as part of an attempt to expand the freedom of thought. So, it’s not only about violence, but also reveals something deeper about the essence of the human.
 
We really try to deal with this through our commitment to mental privacy, which should be guaranteed for all. If mental privacy were recognized as a human right, for example by extending existing legislation on psychological violence and torture (which covers those horrific cases where people suffer unimaginable abuse in attempts to extract information from them), we would already be prepared to face the challenge of unwarranted mental interventions. Consent would be needed before the integrity of the mind was brought into question.
 
BE : Despite what you are saying and my full agreement, it strikes me that a major challenge we are going to need to confront with the commercial application of such technologies (as seen in the ambitions of companies such as Neuralink) is the seduction and appeal they will have in creating so-called better augmented humans?
 
RY : This is a major issue. Let’s track back here. We can potentially use neurotechnologies to read, write, code, and manipulate the activity of the mind. This is a formidable power. But we should remember, at this stage, neurotechnologies for reading are far more advanced than those which might manipulate. But if we can set the frameworks in place now to protect mental privacy, then what will be possible in the coming decades will already be partially legislated against. The problem, however, is that the conversation and legislation regarding social deployments is left to the field of consumer electronics, which is not focused on human rights. This becomes a more pressing concern when we start to properly consider the potential for mental augmentation.

The moment this ability arrives, its adoption will be inevitable and its consequences unavoidable. We will all be required to augment, or else we face the prospect of creating some dual kind of citizenship based on new mental aptitudes. What kinds of discriminations might this produce? Will we end up producing a new hybrid species? Could this create new fractures and divisions that further exacerbate existing inequalities? And what will it mean for us personally and for the societies in which we live? Are we really prepared to wash away an intellectual system that’s upheld civilizations for a few millennia? Now is the time to have these conversations. We have to know in advance what we are giving ourselves over to and think very clearly about what to do before commercial augmentation happens. And this is a conversation that urgently needs to be taken out of the realm of consumer electronics.
 
We are already starting to learn painful lessons about what happens when we rush to introduce technologies without properly considering their full ethical implications. Social media is perhaps the most obvious example here. How many of its applications would pass considered and rigorous ethical scrutiny, given what we now know about their impacts on mental health, addiction, and wider social harms, which are affecting billions of people around the world? There are stark warnings here about the technology-first, ethics-later approach.
 
BE : As a political theorist, whenever somebody talks about the wonders of any new technology, my instinct is to immediately ask about their political and military applications. Do these also factor into your immediate concerns?
 
RY : Given the forces at work, the uses you mention are invariably a pressing and deeply troubling concern. It’s not been our primary focus as we have sought to begin by having a more civil public conversation, but it is something that is really worrisome in terms of the longer-term possibility for a new neurotechnological arms race. Let me just give you a couple of examples of the weaponization of neurotechnologies that have already been considered by military leaders and strategists. First, in the United States, is the N3 project launched by DARPA to develop wireless and noninvasive BCIs (Brain-Computer Interfaces) for soldiers to be better equipped on the battlefield. This is about improving performance and the maneuverability of intelligently designed weapons systems. Secondly, in China, there is an extensive state-sponsored brain initiative, which is supposedly three times larger than the one that was instigated in the United States, but, crucially, run by the military. Reportedly, its primary purpose is to merge neuroscience and AI to enhance the power of the security state.
 
Next to where I am speaking to you from right now at Columbia University is Pupin Hall, where some of the earliest atomic experiments in the United States were conducted, work that fed into the Manhattan Project. The atomic bomb was planned in close proximity to where I am sitting. I am reminded of this every single day as I go to work, mindful of the consequences. It is also worth remembering that the physicists who built the bomb were the first to advocate for guidance on the use of atomic energy, and because of their advocacy, the International Atomic Energy Agency was established in Vienna by the UN. I work in this shadow, realizing technologies are neutral and can be used for good or bad.
 
Though, to be honest, at this stage I am more concerned with the big corporations and what it might mean to have economies modeled around the ambition of drawing brain data. It’s not just like eavesdropping on emails, or monitoring and even manipulating social media feeds and the rest. Mental privacy is the sanctuary of our minds. What we are dealing with are fundamental questions concerning our sense of self and how we live together in this world. Right now, I think we are seeing the first steps towards living in what we might call “the neurotechnological age,” which will place monumental responsibilities upon our shoulders.
 
BE : To conclude, I would like to ask you to reflect further on the work you do in light of the horrors caused by the Manhattan Project. As a scientist, do you ever step back and think the risks of the work you do might outweigh the benefits?
 
RY : I am an optimist and bullish about neurotechnology and its future, but, unfortunately, the world is not perfect. Of course I wish that bad intentions did not exist, but we may well confront negative scenarios for humanity. Mindful of this, we shouldn’t stop trying to develop tools that will push forward and help humanity simply because they could be put to nefarious uses. Instead, we need to be vigilant about those uses and legislate against them to the best of our abilities. As a medic, I am duty-bound to try to help people to the best of my professional ability. It is a duty I take very seriously, and I know firsthand the real devastation caused by mental trauma and illness, so finding solutions is something I care about very passionately.
 
Almost every day I get an email from a patient, mother, or friend asking whether neurotechnology might help cure their disease or save the life of someone they love. And it’s heartbreaking to continually read them. These are people who urgently need our help. If anybody could make any kind of breakthrough in terms of how we might better understand the brain to assist those in need, I would encourage them to work 24/7 without hesitation. So, when I look at the Manhattan Project, I want to see what ethical lessons can be learned and how we can work in advance to preempt negative outcomes. We need to set up commissions before neurotechnology disrupts society, for example by undermining the mental privacy, identity, or agency of citizens. We need to protect our minds from technology, something that previous generations did not have to worry about. It is precisely the way the technology could diminish the essence of our humanity that compels me to insist upon a universal framework for the protection of neurorights.
 
 
Histories of Violence: Why We Need a Universal Declaration for Neurorights. By Brad Evans. Los Angeles Review of Books, September 5, 2022.
 

On the evening of Oct. 10, 2006, Dennis DeGray’s mind was nearly severed from his body. After a day of fishing, he returned to his home in Pacific Grove, Calif., and realized he had not yet taken out the trash or recycling. It was raining fairly hard, so he decided to sprint from his doorstep to the garbage cans outside with a bag in each hand. As he was running, he slipped on a patch of black mold beneath some oak trees, landed hard on his chin, and snapped his neck between his second and third vertebrae.
 
While recovering, DeGray, who was 53 at the time, learned from his doctors that he was permanently paralyzed from the collarbones down. With the exception of vestigial twitches, he cannot move his torso or limbs. “I’m about as hurt as you can get and not be on a ventilator,” he told me. For several years after his accident, he “simply laid there, watching the History Channel” as he struggled to accept the reality of his injury.
 
Some time later, while at a fund-raising event for stem-cell research, he met Jaimie Henderson, a professor of neurosurgery at Stanford University. The pair got to talking about robots, a subject that had long interested DeGray, who grew up around his family’s machine shop. As DeGray remembers it, Henderson captivated him with a single question: Do you want to fly a drone?
 
Henderson explained that he and his colleagues had been developing a brain-computer interface: an experimental connection between someone’s brain and an external device, like a computer, robotic limb or drone, which the person could control simply by thinking. DeGray was eager to participate, eventually moving to Menlo Park to be closer to Stanford as he waited for an opening in the study and the necessary permissions. In the summer of 2016, Henderson opened DeGray’s skull and exposed his cortex — the thin, wrinkled, outermost layer of the brain — into which he implanted two 4-millimeter-by-4-millimeter electrode arrays resembling miniature beds of nails. Each array had 100 tiny metal spikes that, collectively, recorded electric impulses surging along a couple of hundred neurons or so in the motor cortex, a brain region involved in voluntary movement.
 
After a recovery period, several of Henderson’s collaborators assembled at DeGray’s home and situated him in front of a computer screen displaying a ring of eight white dots the size of quarters, which took turns glowing orange. DeGray’s task was to move a cursor toward the glowing dot using his thoughts alone. The scientists attached cables onto metal pedestals protruding from DeGray’s head, which transmitted the electrical signals recorded in his brain to a decoder: a nearby network of computers running machine-learning algorithms.
 
The algorithms were constructed by David Brandman, at the time a doctoral student in neuroscience collaborating with the Stanford team through a consortium known as BrainGate. He designed them to rapidly associate different patterns of neural activity with different intended hand movements, and to update themselves every two to three seconds, in theory becoming more accurate each time. If the neurons in DeGray’s skull were like notes on a piano, then his distinct intentions were analogous to unique musical compositions. An attempt to lift his hand would coincide with one neural melody, for example, while trying to move his hand to the right would correspond to another. As the decoder learned to identify the movements DeGray intended, it sent commands to move the cursor in the corresponding direction.
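
The core idea of such a self-updating decoder, associating patterns of population activity with intended movements and refining that mapping as labeled examples stream in, can be sketched with a toy nearest-centroid classifier on simulated data. Everything below (the neuron count, noise level, and the classifier itself) is invented for illustration; this is not the actual BrainGate decoder, which was far more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each intended cursor direction produces a characteristic pattern
# of firing rates across the recorded population -- the "neural melody"
# described above. All numbers here are assumptions for illustration.
N_NEURONS = 192
DIRECTIONS = {"left": 0, "right": 1, "up": 2, "down": 3}

# Hidden "true" firing pattern for each intended direction.
true_patterns = rng.normal(size=(len(DIRECTIONS), N_NEURONS))

def record_activity(direction: int) -> np.ndarray:
    """Simulate one noisy window of neural activity for an intended movement."""
    return true_patterns[direction] + rng.normal(scale=0.8, size=N_NEURONS)

class OnlineDecoder:
    """Nearest-centroid classifier that refines itself with every labeled
    window, standing in for the self-updating decoder described above."""

    def __init__(self, n_classes: int, n_features: int):
        self.centroids = np.zeros((n_classes, n_features))
        self.counts = np.zeros(n_classes)

    def update(self, activity: np.ndarray, label: int) -> None:
        # Running mean: each new example nudges the class centroid a little,
        # so the mapping keeps improving as calibration data streams in.
        self.counts[label] += 1
        self.centroids[label] += (activity - self.centroids[label]) / self.counts[label]

    def predict(self, activity: np.ndarray) -> int:
        distances = np.linalg.norm(self.centroids - activity, axis=1)
        return int(np.argmin(distances))

decoder = OnlineDecoder(len(DIRECTIONS), N_NEURONS)

# Calibration phase: brief labeled windows, analogous to cued target reaches.
for _ in range(30):
    for label in DIRECTIONS.values():
        decoder.update(record_activity(label), label)

# After calibration, the decoder recovers intended directions from new activity.
trials = [(label, decoder.predict(record_activity(label)))
          for label in DIRECTIONS.values() for _ in range(50)]
accuracy = sum(intended == decoded for intended, decoded in trials) / len(trials)
print(f"decoding accuracy on simulated data: {accuracy:.2f}")
```

The incremental running-mean update mirrors, in miniature, the every-few-seconds recalibration described above: the decoder never stops learning, so its estimate of each "melody" sharpens as the session goes on.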
 
Brandman asked DeGray to imagine a movement that would give him intuitive control of the cursor. Staring at the computer screen, searching his mind for a way to begin, DeGray remembered a scene from the movie “Ghost” in which the deceased Sam Wheat (played by Patrick Swayze) invisibly slides a penny along a door to prove to his girlfriend that he still exists in a spectral form. DeGray pictured himself pushing the cursor with his finger as if it were the penny, willing it toward the target. Although he was physically incapable of moving his hand, he tried to do so with all his might. Brandman was ecstatic to see the decoder work as quickly as he had hoped. In 37 seconds, DeGray gained control of the cursor and reached the first glowing dot. Within several minutes he hit dozens of targets in a row.
 
Only a few dozen people on the planet have had neural interfaces embedded in their cortical tissue as part of long-term clinical research. DeGray is now one of the most experienced and dedicated among them. Since that initial trial, he has spent more than 1,800 hours spanning nearly 400 training sessions controlling various forms of technology with his mind. He has played a video game, manipulated a robotic limb, sent text messages and emails, purchased products on Amazon and even flown a drone — just a simulator, for now — all without lifting a finger. Together, DeGray and similar volunteers are exploring the frontier of a technology with the potential to fundamentally alter how humans and machines interact.

Scientists and engineers have been creating and studying brain-computer interfaces since the 1950s. Given how much of the brain’s behavior remains a mystery — not least how consciousness emerges from three pounds of electric jelly — the aggregate achievements of such systems are remarkable. Paralyzed individuals with neural interfaces have learned to play simple tunes on a digital keyboard, control exoskeletons and maneuver robotic limbs with enough dexterity to drink from a bottle. In March, a team of international scientists published a study documenting for the first time that someone with complete, bodywide paralysis used a brain-computer interface to convey their wants and needs by forming sentences one letter at a time.
 
Neural interfaces can also create bidirectional pathways of communication between brain and machine. In 2016, Nathan Copeland, who was paralyzed from the chest down in a car accident, not only fist-bumped President Barack Obama with a robotic hand, he also experienced the tactile sensation of the bump in his own hand as the prosthesis sent signals back to electrodes in his brain, stimulating his sensory cortex. By combining brain-imaging technology and neural networks, scientists have also deciphered and partly reconstructed images from people’s minds, producing misty imitations that resemble weathered Polaroids or smeared oil paintings.

Most researchers developing brain-computer interfaces say they are primarily interested in therapeutic applications, namely restoring movement and communication to people who are paralyzed or otherwise disabled. Yet the obvious potential of such technology and the increasing number of high-profile start-ups developing it suggest the possibility of much wider adoption: a future in which neural interfaces actually enhance people’s innate abilities and grant them new ones, in addition to restoring those that have been lost.
 
In the history of life on Earth, we have never encountered a mind without a body. Highly complex cognition has always been situated in an intricate physical framework, whether eight suction-cupped arms, four furry limbs or a bundle of feather and beak. Human technology often amplifies the body’s inherent abilities or extends the mind into the surrounding environment through the body. Art and writing, agriculture and engineering: All human innovations have depended on, and thus been constrained by, the body’s capacity to physically manipulate whatever tools the mind devises. If brain-computer interfaces fulfill their promise, perhaps the most profound consequence will be this: Our species could transcend those constraints, bypassing the body through a new melding of mind and machine.
 
On a spring morning in 1893, during a military training exercise in Würzburg, Germany, a 19-year-old named Hans Berger was thrown from his horse and nearly crushed to death by the wheel of an artillery gun. The same morning, his sister, 60 miles away in Coburg, was flooded with foreboding and persuaded her father to send a telegram inquiring about her brother’s well-being. That seemingly telepathic premonition obsessed Berger, compelling him to study the mysteries of the mind. His efforts culminated in the 1920s with the invention of electroencephalography (EEG): a method of recording electrical activity in the brain using electrodes attached to the scalp. The oscillating patterns his apparatus produced, reminiscent of a seismograph’s scribbling, were the first transcriptions of the human brain’s cellular chatter.

In the following decades, scientists learned new ways to record, manipulate and channel the brain’s electrical signals, constructing ever-more-elaborate bridges between mind and machine. In 1964, José Manuel Rodríguez Delgado, a Spanish neurophysiologist, brought a charging bull to a halt using radio-controlled electrodes embedded in the animal’s brain. In the 1970s, the University of California Los Angeles professor Jacques Vidal coined the term brain-computer interface and demonstrated that people could mentally guide a cursor through a simple virtual maze. By the early 2000s, the Duke University neuroscientist Miguel Nicolelis and his collaborators had published studies demonstrating that monkeys implanted with neural interfaces could control robotic prostheses with their minds. In 2004, Matt Nagle, who was paralyzed from the shoulders down, became the first human to do the same. He further learned how to use his thoughts alone to play Pong, change channels on a television, open emails and draw a circle on a computer screen.
 
Since then, the pace of achievements in the field of brain-computer interfaces has increased greatly, thanks in part to the rapid development of artificial intelligence. Machine-learning software has substantially improved the efficiency and accuracy of neural interfaces by automating some of the necessary computation and anticipating the intentions of human users, not unlike how your phone or email now has A.I.-assisted predictive text. Last year, the University of California San Francisco neurosurgeon Edward Chang and a dozen collaborators published a landmark study describing how a neural interface gave a paralyzed 36-year-old man a voice for the first time in more than 15 years. Following a car crash and severe stroke at age 20, the man, known as Pancho, lost the ability to produce intelligible speech. Over a period of about 20 months, 128 disk-shaped electrodes placed on top of Pancho’s sensorimotor cortex recorded electrical activity in brain regions involved in speech processing and vocal tract control as he attempted to speak words aloud. A decoder associated different patterns of neural activity with different words and, with the help of language-prediction algorithms, eventually learned to decipher 15 words per minute with 75 percent accuracy on average. Although this is only a fraction of the rate of typical speech in English (140 to 200 words a minute), it is considerably faster than many point-and-click methods of communication available to people with severe paralysis.
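
The role of the language-prediction step can be illustrated with a toy example: a per-word classifier alone can misread an ambiguous window of neural activity, but rescoring candidate sequences with even a crude bigram model recovers the intended sentence. The vocabulary, probabilities, and Viterbi search below are invented for illustration and are not the study's actual models.

```python
import numpy as np

# Toy illustration of why language prediction helps a neural speech decoder.
vocab = ["i", "am", "thirsty", "tired"]

# Per-step classifier scores, i.e., P(word | neural activity) for three
# attempted words. Step 2 is ambiguous and narrowly favors the wrong word.
acoustic = np.array([
    [0.70, 0.10, 0.10, 0.10],  # step 1: clearly "i"
    [0.50, 0.40, 0.05, 0.05],  # step 2: "i" narrowly beats "am"
    [0.05, 0.05, 0.50, 0.40],  # step 3: "thirsty" vs "tired"
])

# Bigram language model, P(next word | previous word): "i am" is likely,
# "i i" is not. All probabilities are invented.
bigram = np.array([
    [0.05, 0.80, 0.05, 0.10],  # after "i"
    [0.05, 0.05, 0.50, 0.40],  # after "am"
    [0.25, 0.25, 0.25, 0.25],  # after "thirsty"
    [0.25, 0.25, 0.25, 0.25],  # after "tired"
])

def viterbi(acoustic: np.ndarray, bigram: np.ndarray) -> list[str]:
    """Most likely word sequence given classifier scores and the bigram model."""
    T, V = acoustic.shape
    score = np.log(acoustic[0])
    back = np.zeros((T, V), dtype=int)
    for t in range(1, T):
        # total[i, j]: best log-score of choosing word j at step t having
        # come from word i at step t-1.
        total = score[:, None] + np.log(bigram) + np.log(acoustic[t])[None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [vocab[i] for i in reversed(path)]

greedy = [vocab[int(row.argmax())] for row in acoustic]   # classifier alone
print("classifier alone:  ", " ".join(greedy))
print("with language model:", " ".join(viterbi(acoustic, bigram)))
```

Taking the classifier's top word at each step yields "i i thirsty", while the Viterbi search over the same scores, weighted by the bigram model, corrects the ambiguous middle step to "i am thirsty". This is the same principle, at toy scale, by which language prediction lifted the decoder's usable accuracy.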
 
In another groundbreaking study published last year, Jaimie Henderson and several colleagues, including Francis Willett, a biomedical engineer, and Krishna Shenoy, an electrical engineer, reported an equally impressive yet entirely different approach to communication by neural interface. The scientists recorded neurons firing in Dennis DeGray’s brain as he visualized himself writing words with a pen on a notepad, trying to recreate the distinct hand movements required for each letter. He mentally wrote thousands of words in order for the system to reliably recognize the unique patterns of neural activity specific to each letter and output words on a screen. “You really learn to hate M’s after a while,” he told me with characteristic good humor. Ultimately, the method was extremely successful. DeGray was able to type up to 90 characters or 18 words a minute — more than twice the speed of his previous efforts with a cursor and virtual keyboard. He is the world’s fastest mental typist. “Sometimes I get going so fast it’s just one big blur,” he said. “My concentration gets to a point where it’s not unusual for them to remind me to breathe.”
 
Achievements in brain-computer interfaces to date have relied on a mix of invasive and noninvasive technologies. Many scientists in the field, including those who work with DeGray, rely on a surgically embedded array of spiky electrodes produced by a Utah-based company, Blackrock Neurotech. The Utah Array, as it’s known, can differentiate the signals of individual neurons, providing more refined control of connected devices, but the surgery it requires can result in infection, inflammation and scarring, which may contribute to eventual degradation of signal strength. Interfaces that reside outside the skull, like headsets that depend on EEG, are currently limited to eavesdropping on the collective firing of groups of neurons, sacrificing power and precision for safety. Further complicating the situation, most neural interfaces studied in labs require cumbersome hardware, cables and an entourage of computers, whereas most commercially available interfaces are essentially remote controls for rudimentary video games, toys and apps. These commercial headsets don’t solve any real-world problems, and the more powerful systems in clinical studies are too impractical for everyday use.
 
With this problem in mind, Elon Musk’s company Neuralink has developed an array of flexible polymer threads studded with more than 3,000 tiny electrodes connected to a bottlecap-size wireless radio and signal processor, as well as a robot that can surgically implant the threads in the brain, avoiding blood vessels to reduce inflammation. Neuralink has tested its system in animals and has said it would begin human trials this year.
 
Synchron, which is based in New York, has developed a device called a Stentrode that doesn’t require open-brain surgery. It is a four-centimeter, self-expanding tubular lattice of electrodes, which is inserted into one of the brain’s major blood vessels via the jugular vein. Once in place, a Stentrode detects local electric fields produced by nearby groups of neurons in the motor cortex and relays recorded signals to a wireless transmitter embedded in the chest, which passes them on to an external decoder. In 2021, Synchron became the first company to receive F.D.A. approval to conduct human clinical trials of a permanently implantable brain-computer interface. So far, four people with varied levels of paralysis have received Stentrodes and used them, some in combination with eye-tracking and other assistive technologies, to control personal computers while unsupervised at home.
 
Philip O’Keefe, 62, of Greendale, Australia, received a Stentrode in April 2020. Because of amyotrophic lateral sclerosis (A.L.S.), O’Keefe can walk only short distances, cannot move his left arm and is losing the ability to speak clearly. At first, he explained, he had to concentrate intensely on the imagined movements required to operate the system — in his case, thinking about moving his left ankle for different lengths of time. “But the more you use it, the more it’s like riding a bike,” he said. “You get to a stage where you don’t think so hard about the movement you need to make. You think about the function you need to execute, whether it’s opening an email, scrolling a web page or typing some letters.” In December, O’Keefe became the first person in the world to post to Twitter using a neural interface: “No need for keystrokes or voices,” he wrote by mind. “I created this tweet just by thinking it. #helloworldbci”
 




Thomas Oxley, a neurologist and the founding C.E.O. of Synchron, thinks future brain-computer interfaces will fall somewhere between LASIK and cardiac pacemakers in terms of their cost and safety, helping people with disabilities recover the capacity to engage with their physical surroundings and a rapidly evolving digital environment. “Beyond that,” he says, “if this technology allows anyone to engage with the digital world better than with an ordinary human body, that is where it gets really interesting. To express emotion, to express ideas — everything you do to communicate what is happening in your brain has to happen through the control of muscles. Brain-computer interfaces are ultimately going to enable a passage of information that goes beyond the limitations of the human body. And from that perspective, I think the capacity of the human brain is actually going to increase.”
 
There is no technology yet that can communicate human thoughts as fast as they occur. Fingers and thumbs will never move quickly enough. And there are many forms of information processing better suited to a computer than to a human brain. Oxley speculated about the possibility of using neural interfaces to enhance human memory, bolster innate navigational skills with a direct link to GPS, sharply increase the human brain’s computational abilities and create a new form of communication in which emotions are wordlessly “thrown” from one mind to another. “It’s just the beginning of the dawn of this space,” Oxley said. “It’s really going to change the way we interact with one another as a species.”
 
Frederic Gilbert, a philosopher at the University of Tasmania, has studied the ethical quandaries posed by neurotechnology for more than a decade. Through in-depth interviews, he and other ethicists have documented how some people have adverse reactions to neural implants, including self-estrangement, increased impulsivity, mania, self-harm and attempted suicide. In 2015, he traveled to Penola, South Australia, to meet Rita Leggett, a 54-year-old patient with a very different, though equally troubling, experience.
 
Several years earlier, Leggett participated in the first human clinical trial of a particular brain-computer interface that warned people with epilepsy of imminent seizures via a hand-held beeper, giving them enough time to take a stabilizing medication or get to a safe place. With the implant, she felt much more confident and capable and far less anxious. Over time, it became inextricable from her identity. “It was me, it became me,” she told Gilbert. “With this device I found myself.” Around 2013, NeuroVista, the company that manufactured the neural interface, folded because it could not secure new funding. Despite her resistance, Leggett underwent an explantation. She was devastated. “Her symbiosis was so profound,” Gilbert told me, that when the device was removed, “she suffered a trauma.”
 
In a striking parallel, a recent investigation by the engineering magazine IEEE Spectrum revealed that, because of insufficient revenues, the Los Angeles-based neuroprosthetics company Second Sight had stopped producing and largely stopped servicing the bionic eyes it had sold to more than 350 visually impaired people around the world. At least one individual’s implant has already failed with no way to repair it — a situation that could befall many others. Some patients enrolled in clinical trials for Second Sight’s latest neural interface, which directly stimulates the visual cortex, have either removed the device or are contemplating doing so.
 
If sophisticated brain-computer interfaces eventually transcend medical applications and become consumer goods available to the general public, the ethical considerations surrounding them multiply exponentially. In a 2017 commentary on neurotechnology, the Columbia University neurobiologist Rafael Yuste and 24 colleagues identified four main areas of concern: augmentation; bias; privacy and consent; and agency and identity. Neural implants sometimes cause disconcerting shifts in patients’ self-perception. Some have reported feeling like “an electronic doll” or developing a blurred sense of self. Were someone to commit a crime and blame an implant, how would the legal system determine fault? As neural interfaces and artificial intelligence evolve, these tensions will probably intensify.
 
All the scientists and engineers I spoke to acknowledged the ethical issues posed by neural interfaces, yet most were more preoccupied with consent and safety than with what they regarded as far-off or unproven concerns about privacy and agency. In the world of academic scientific research, the appropriate future boundaries for the technology remain contentious.




 
In the private sector, ethics are often a footnote to enthusiasm, when they are mentioned at all. As pressure builds to secure funding and commercialize, spectacular and sometimes terrifying claims proliferate. Christian Angermayer, a German entrepreneur and investor, has said he is confident that everyone will be using brain-computer interfaces within 20 years. “It is fundamentally an input-output device for the brain, and it can benefit a large portion of society,” he posted on LinkedIn last year. “People will communicate with each other, get work done and even create beautiful artwork, directly with their minds.” Musk has described the ultimate goal of Neuralink as achieving “a sort of symbiosis with artificial intelligence” so that humanity is not obliterated, subjugated or “left behind” by superintelligent machines. “If you can’t beat em, join em,” he once said on Twitter, calling it a “Neuralink mission statement.” And Max Hodak, a former Neuralink president who was forced out of the company, then went on to found a new one called Science, dreams of using neural implants to make the human sensorium “directly programmable” and thereby create a “world of bits”: a parallel virtual environment, a lucid waking dream, that appears every time someone closes their eyes.
 
Today, DeGray, 68, still resides in the Menlo Park assisted-living facility he chose a decade ago for its proximity to Stanford. He still has the same two electrode arrays that Henderson embedded in his brain six years ago, as well as the protruding metal pedestals that provide connection points to external machines. Most of the time, he doesn’t feel their presence, though an accidental knock can reverberate through his skull as if it were a struck gong. In his everyday life, he relies on round-the-clock attention from caregivers and a suite of assistive technologies, including voice commands and head-motion tracking. He can get around in a breath-operated wheelchair, but long trips are taxing. He spends much of his time reading news articles, scientific studies and fiction on his computer. “I really miss books,” he told me. “They smell nice and feel good in your hands.”
 
DeGray’s personal involvement in research on brain-computer interfaces has become the focus of his life. Scientists from Stanford visit his home twice a week, on average, to continue their studies. “I refer to myself as a test pilot,” he said. “My responsibility is to take a nice new airplane out every morning and fly the wings off of it. Then the engineers drag it back into the hangar and fix it up, and we do the whole thing again the next day.”
 
Exactly what DeGray experiences when he activates his neural interface depends on his task. Controlling a cursor with attempted hand movements, for example, “boils the whole world down to an Etch A Sketch. All you have is left, right, up and down.” Over time, this kind of control becomes so immediate and intuitive that it feels like a seamless extension of his will. In contrast, maneuvering a robot arm in three dimensions is a much more reciprocal process: “I’m not making it do stuff,” he told me. “It’s working with me in the most versatile of ways. The two of us together are like a dance.”
 
No one knows exactly how long existing electrode arrays can remain in a human brain without breaking down or endangering someone’s health. Although DeGray can request explantation at any time, he wants to continue as a research participant indefinitely. “I feel very validated in what I’m doing here,” he said. “It would break my heart if I had to get out of this program for some reason.”
 
Regarding the long-term future of the technology in his skull, however, he is somewhat conflicted. “I actually spend quite a bit of time worrying about this,” he told me. “I’m sure it will be misused, as every technology is when it first comes out. Hopefully that will drive some understanding of where it should reside in our civilization. I think ultimately you have to trust in the basic goodness of man — otherwise, you would not pursue any new technologies ever. You have to just develop it and let it become monetized and see where it goes. It’s like having a baby: You only get to raise them for a while, and then you have to turn them loose on the world.”
 
The Man Who Controls Computers With His Mind. By Ferris Jabr. The New York Times, May 12, 2022.




In 2019, Rafael Yuste successfully implanted images directly into the brains of mice and controlled their behavior. Now, the neuroscientist warns that there is little that can prevent humans from being next.
 
If used responsibly, neurotechnology — in which machines interact directly with human neurons — can be used to understand and cure stubborn illnesses like Alzheimer's and Parkinson's disease, and assist with the development of prosthetic limbs and speech therapy.
 
But if left unregulated, neurotechnology could also lead to the worst corporate and state excesses, including discriminatory policing and privacy violations, leaving our minds as vulnerable to surveillance as our communications.
 
Now a group of neuroscientists, philosophers, lawyers, human rights activists and policymakers are racing to protect that last frontier of privacy — the brain.
 
They are not seeking a ban. Instead, campaigners like Yuste, who runs the Neurorights initiative at Columbia University, call for a set of principles that guarantee citizens' rights to their thoughts, and protection from any intruders, while taking advantage of any potential health benefits.
 
But they see plenty of reason to be alarmed about certain applications of neurotechnology, especially as it attracts the attention of militaries, governments and technology companies.
 
China and the U.S. are both leading research into artificial intelligence and neuroscience. The U.S. Defense Department is developing technology that could be used to tweak memories.
 
It's not just scientists; firms, including major players like Facebook and Elon Musk's Neuralink, are making advances too.
 
Neurotech wearables are now entering the market. Kernel, an American company, has developed a headset for the consumer market that can record brain activity in real time. Facebook funded a project to create a brain-computer interface that would allow users to communicate without speaking. (Facebook pulled out of the project this summer.) Neuralink is working on brain implants, and in April 2021 released a video of a monkey playing a game with its mind using the company’s implanted chip.
 
“The problem is what these tools can be used for,” Yuste said. There are some scary examples: Researchers have used brain scans to predict the likelihood of criminals reoffending, and Chinese employers have monitored employees' brainwaves to read their emotions. Scientists have also managed to subliminally probe for personal information using consumer devices.
 
“We have on the table the possibility of a hybrid human that will change who we are as a species, and that's something that's very serious. This is existential,” he continued. Whether this is a change for good or bad, Yuste argues now is the time to decide.
 
Inception
 
The neurotechnology of today cannot decode thoughts or emotions. But with artificial intelligence, that might not be necessary. Powerful machine learning systems could make correlations between brain activity and external circumstances.
 
“In order to raise privacy challenges it’s sufficient that you have an AI that is powerful enough to identify patterns and establish correlative associations between certain patterns of data, and certain mental states,” said Marcello Ienca, a bioethicist at ETH Zurich.
 
Researchers have already managed to use a machine learning system to infer credit card digits from a person’s brain activity.
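Ienca's point, that pattern-matching rather than literal mind-reading is enough to raise privacy concerns, can be illustrated with a deliberately toy sketch. Everything below is invented for illustration: the feature vectors, the "mental state" labels, and the nearest-centroid model are stand-ins, not how real EEG or fMRI decoders work, though they rest on the same principle of correlating recorded patterns with known states.

```python
# Toy illustration of correlation-based "brain decoding": a nearest-centroid
# classifier learns which (invented) activity patterns co-occur with which
# labeled mental states, then predicts the state for a new recording.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_recordings):
    """Map each mental-state label to the centroid of its recordings."""
    by_label = {}
    for label, vec in labeled_recordings:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def decode(model, recording):
    """Return the label whose centroid is closest (Euclidean) to the recording."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda label: dist(model[label], recording))

# Invented "recordings": three channel activations per sample.
training_data = [
    ("relaxed", [0.2, 0.1, 0.3]),
    ("relaxed", [0.3, 0.2, 0.2]),
    ("stressed", [0.9, 0.8, 0.7]),
    ("stressed", [0.8, 0.9, 0.8]),
]
model = train(training_data)
print(decode(model, [0.85, 0.8, 0.75]))  # prints "stressed"
```

The classifier never "understands" anything; it only exploits statistical regularities, which is exactly why, as Ienca argues, a sufficiently powerful AI plus enough labeled data is all a privacy threat requires.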
 
Brain scans have also been used in the criminal justice system for diagnostics and for predicting which criminals are likely to offend again, both of which at this stage — like lie detector tests before them — offer limited and at times flawed information.
 
That could have dire consequences for people of color, who are already likely to suffer disproportionately from algorithmic discrimination.
 
“[W]hen, for example, lie detection or the detection of memory appears accurate enough according to the science, why would the public prosecutor say no to such kind of technology?” said Sjors Ligthart, who studies the legal implications of coercive brain reading at Tilburg University.
 
With brain implants in particular, experts say it's unclear whether thoughts would be induced, or originate from the brain, which poses questions over accountability. “You cannot discern which tasks are being conducted by yourself and which thoughts are being accomplished by the AI, simply because the AI is becoming the mediator of your own mind,” Ienca said.
 
Where is my mind?
 
People have never needed to assert the authority of the individual over the thoughts they carry, but neurotechnology is prompting policymakers to do just that.
 
Chile is working on the world’s first law that would guarantee its citizens such so-called neurorights.
 
Senator Guido Girardi, who is behind Chile’s proposal, said the bill would create a registration system for neurotechnologies similar to the one for medicines, and that using these technologies would require the consent of both the patient and doctors.
 
The goal is to ensure that technologies such as “AI can be used for good, but never to control a human being,” Girardi said.
 
In July, Spain adopted a nonbinding Charter for Digital Rights, meant to guide future legislative projects.
 
“The Spanish approach is to ensure the confidentiality and the security of the data that are related to these brain processes, and to ensure the complete control of the person over their data,” said Paloma Llaneza González, a data protection lawyer who worked on the charter.
 
“We want to guarantee the dignity of the person, equality and nondiscrimination, because you can discriminate people based on his or her thoughts,” she said.
 
The Organisation for Economic Co-operation and Development, a Paris-based club of mostly rich countries, has approved nonbinding guidelines on neurotechnology that list new rights meant to protect privacy and the freedom to think one's own thoughts, known as cognitive liberty.
 
The problem is that it's unclear whether existing legislation, which did not have neurotech in mind when drafted, is adequate.
 
“What we do need is to take another look at existing rights and specify them to neurotechnology,” said Ligthart. One target might be the European Convention on Human Rights, which ensures, for example, the right to respect for private life, and which could be updated to also include the right to mental privacy.




 
The GDPR, Europe’s strict data protection regime, offers protection for sensitive data, such as health status and religious beliefs. But a study by Ienca and Gianclaudio Malgieri at the EDHEC Business School in Lille found that the law might not cover emotions and thoughts.
 
Yuste argues that action is needed on an international level, and organizations such as the U.N. need to act before the technology is developed further.
 
“We want to do something a little bit more intelligent than wait until we have a problem and then try to fix it when it's too late, which is what happened with the internet and privacy and AI,” Yuste said.
 
Today’s privacy issues are going to be “peanuts compared to what's coming.”
 
Machines can read your brain. There’s little that can stop them. By Melissa Heikkilä. Politico, August 31, 2021.




Recording memories, reading thoughts, and manipulating what another person sees through a device in their brain may seem like science fiction plots about a distant and troubled future. But a team of multi-disciplinary researchers say the first steps to inventing these technologies have already arrived. Through a concept called “neuro rights,” they want to put in place safeguards for our most precious biological possession: the mind.

 
Headlining this growing effort today is the NeuroRights Initiative, formed by Columbia University neuroscientist Rafael Yuste. Their proposition is to stay ahead of the tech by convincing governments across the world to create “neuro rights” legal protections, in line with the Universal Declaration of Human Rights, a document announced by the United Nations in 1948 as the standard for rights that should be universally protected for all people. Neuro rights advocates propose five additions to this standard: the rights to personal identity, free will, mental privacy, equal access to mental augmentation, and protection from algorithmic bias.
 
It’s a long way from theoretical protections to actual policy, especially when it comes to technology that doesn’t (yet) exist, but the movement has promise. The National Congress of Chile recently approved an amendment to add such protections to the Chilean constitution, becoming the first country ever to specifically include neural rights as protected by law. Chile, though, already had a branch of government dealing with protections regarding health (which reached out to the NeuroRights Initiative and Yuste on its own, seeking their advice). The reasons and methods for protections in other countries, including the U.S. for example, may differ.
 
Since brain data can now be tracked remotely and Elon Musk is forging ahead with Neuralink brain implants, it’s not so much fiction as it is fledgling technology. Neuro rights advocates aim to convince disparate government policy makers, fellow researchers, and the public that it’s vital to stay ahead of the game, rather than wait for neurotechnology to become a problem.
 
The seeds of the movement began when Yuste became part of the Obama administration’s BRAIN Initiative, a program that linked a national network of neuroscience labs that would investigate brain-machine interfaces and related tech (which is now, in part thanks to Yuste, a global endeavor). But the ambitious nature of the initiative gave Yuste pause. While medical codes of ethics and neuroscience guidelines exist in various forms, there is no current unifying ethical code for neurotechnology.
 
“In the very first memo we sent to Obama, we highlighted the need for ethical regulation of all this technology,” Yuste said in a video interview. The BRAIN Initiative pooled research endeavors, from cell biology to brain mapping; some work included ways to decode or record thoughts or safely implant microchips into the brain. Recent NIH BRAIN initiative-funded projects include learning how the brain plans movements and how to do things like read minds using ultrasound.
 
Many neuro rights advocates are academics—including scientists whose own experiments have convinced them of the need for greater legal protections. In 2017, Yuste hosted informal talks and a workshop with independent volunteers from around the world called The Morningside Group. They huddled in a cramped classroom, almost shoulder to shoulder as they took turns writing on a chalkboard and sharing ideas (just one building away, Yuste likes to point out, from where Manhattan Project scientists realized, in 1939, that nuclear technology would change the world). Their fields spanned law, ethics, sciences, and philosophy, and by all accounts, it was exciting. “For three days, essentially, it was a closed meeting, and we came up with a series of ethical guidelines, also with the reflection that this is a human rights problem,” said Yuste.
 
They grappled with some big questions: How can we ensure that access to cognition-enhancing devices isn’t restricted to the very rich? Who owns the copyright to a recorded dream? What laws should exist to prevent one person from altering the memory of another through a neural implant? How do we maintain mental integrity separate from an implanted device? If someone can read our mind, how do we protect the read-out of our thoughts as our own?
 
These questions seem incredibly theoretical, but some arose from these researchers’ own experiments. One such project worked on by Yuste aimed to understand how groups of neurons work together in the visual cortex of the brain, but it incidentally let scientists alter the perception of mice, making them see things that were not actually there. After tracking which neurons were activated when the mice saw vertical bars on a screen, scientists could trigger just those neurons—and the mice, which had been trained to lick a water spout when they saw the bars, displayed the same behaviour when the neurons were triggered. The researchers could make the mice “see” the vertical bars even if none were there.
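The read/write loop described above can be caricatured in a few lines of code. Everything here is hypothetical: the `neuron_tuning` table is an invented stand-in for the imaging and stimulation machinery the lab actually uses, but it captures the two steps of the experiment, recording which neurons fire for a stimulus, then activating that same ensemble to evoke the percept without any stimulus.

```python
# Hypothetical tuning table: which stimulus each simulated neuron responds to.
neuron_tuning = {
    "n1": "vertical_bars", "n2": "vertical_bars",
    "n3": "horizontal_bars", "n4": "horizontal_bars",
}

def read_ensemble(stimulus):
    """Decode step: note which neurons fire when the stimulus is shown."""
    return {n for n, preferred in neuron_tuning.items() if preferred == stimulus}

def write_ensemble(active_neurons):
    """Encode step: directly activating an ensemble evokes the percept its
    members are tuned to, even with nothing on the screen."""
    percepts = {neuron_tuning[n] for n in active_neurons}
    return percepts.pop() if len(percepts) == 1 else None

ensemble = read_ensemble("vertical_bars")  # the neurons that fired: n1 and n2
print(write_ensemble(ensemble))            # prints "vertical_bars"
```

The real experiment is vastly more complicated, but the logic is the same: once the mapping from percept to ensemble is known, replaying the ensemble replays the percept.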
 
Once he realized the implications of being able to change another being’s perception, Yuste was both excited to have learned more about the brain and gravely concerned. He warns that even if this technique doesn’t work yet in human beings, now that the basic premise of manipulating perception is doable, all someone has to do is build on it.
 
Likewise, when scientist Jack Gallant and his team developed a side project to better understand the human visual system, they ended up creating some of the groundwork for “reading” or “decoding” some types of thoughts, such as mental images, using fMRI and an algorithm. In one of their many experiments, human participants watched short silent films while scientists monitored an area of their visual cortices. The information was handed over to an AI that was trained on YouTube videos but not the participants’ films. From the data retrieved from brain scans, the AI pieced together and reproduced the general scenes that participants saw. While the reproductions were far from perfect, they represented a first step in decoding information from a human mind.
 
Since then, multiple experiments working with similar technology have joined ranks with this work, and neuro rights advocates believe that it’s only a matter of time before this technology could be used in a consumer market—for example, for recording dreams, ideas, or memories. Elon Musk’s company Neuralink has been working on neural implants intended to one day help treat brain disorders, allow people to control external devices with their minds, and even boost intelligence and memory (so far, an early version of Neuralink has allowed a monkey to play a videogame with its mind).
 
Even though each brain operates a bit differently based on individual experiences and quirks, the general organization of the brain is the same across nearly all people. At a recent virtual workshop discussing neuro rights, scientists repeated in their presentations, again and again: The ability is there, and all that’s needed is enough brain data from an individual to create a custom model of their brain. And given how rarely the public fully reads or understands informed-consent forms and social media terms-of-service agreements, it is already easy to sidestep data protections in various ways.



 
“This is a new frontier of privacy rights, in that the things that are inside of our heads are ours. They’re intimate; we share them when we want to share them. And we don’t want that to be made into a data field for experience,” said Sara Goering, professor of philosophy and co-lead for the Neuroethics Group for the Center of Neurotechnology at University of Washington, in a phone interview.
 
Goering, who studies the effects of brain-machine interface technologies on patients as part of her ethics and philosophy work, also pointed out that while she believes future neurotech could ultimately be liberating for many people, even current brain-machine interface devices don’t always give users enough transparency on how they are working. Brain-machine interfaces that let people move computer cursors with their minds and deep brain stimulation (DBS) devices for Parkinson’s disease and depression are wonderful tools, but according to interviews conducted by Goering and her colleagues, users of this tech sometimes wonder about who is truly in control. One person used a DBS for Parkinson’s for mobility and occasionally placed his foot where he did not intend. He had no way of telling if the device had malfunctioned or if he simply misstepped—often, he would think that the DBS was now more in control of his body than he was.
 
“So this [device] followed my action and I intended to do something. But did I do that, or did the device help me do it, or did we do it together?” Goering posed. Neuro rights could begin conversations around developing useful tech that puts emphasis on giving the user more direct agency and sense of self, or providing feedback on when and how a device is working. 
 
And since advanced neurotechnology has the potential to help people who are currently disadvantaged or suffering, holding back these technologies is also ethically questionable. This could especially be an issue if devices are designed for people who can’t communicate their consent to using the tech, as with covert consciousness and cognitive motor disorder, terms used for a range of conditions in which patients appear to be unconscious but can still think and perceive. Currently, technologies like fMRI can help identify people who are conscious while in a vegetative state and, sometimes, their ability to respond to words, but actual communication of the patient’s thoughts is not yet a reality.
 
“It turns out that 14%, 15% of people who look unconscious are not. If you test them with the imaging or EEG, and those with families know that their loved ones are conscious, they make different kinds of decisions about care,” Joseph Fins, an ethicist, author, and physician at Weill Cornell in New York, said in a video interview. Physicians and neuropsychologists make multiple specialized bedside assessments to determine a patient’s consciousness status (though there have been experimental uses of fMRI or deep-brain stimulation). These patients would be unable to give consent to having their minds read, but future neurotechnological advances could help them and those with aphasias or other communication issues, opening their lives to interacting with the rest of the world. If the concept of neuro rights takes off, policy makers will have to consider the nuances of how rights would be applied in medical settings.
 
But neuro rights advocates are more concerned with brain-machine interfaces in non-medical, consumer settings — assuming scientists or companies can get the devices to market and vetted to work. This has already been a challenge with transcranial direct-current stimulation (tDCS) devices, which left the realm of scientific experimentation over the past decade and were brought to market by DIY-ers with little to no regulation; only now are there some guidelines on safety, along with possible military applications. “And that’s where your rights could come in. The minute that you talk about the brain, you cannot avoid going into human rights, because the brain is what makes us human,” said Yuste.
 
“We’re still at the very early stages of this,” warned Fins, recalling that unregulated tDCS in untrained hands can potentially cause seizures, emotional dysregulation, and brain tumours. “So the other thing is the risk of quackery. You know, the late 19th century was all about electromagnetism, and it did nothing.” In many ways, neuro rights would be the Frankenstein’s monster of protections: part FDA, part privacy act, part pioneer of legal definitions—like what it actually means to own your sense of self.
 
What Yuste doesn’t want to happen is for no one to pay attention to the issue until it’s too late to regulate—similar to what’s happened with social media, which has ballooned into a privacy, security, and ethical nightmare with very little oversight.
 
“Maybe we can be a little bit smarter with this neurotech,” Yuste said, “and from the outset, we can have ethical guidelines that agree with our humanity.”
 
The Movement to Protect Your Mind From Brain-Computer Technologies. By Natalie Zarrelli. Gizmodo, May 31, 2021.








Out-of-body experiences recur throughout spiritual literature. Thought to signify a spiritual “essence” co-existing alongside biology, OBEs began to be viewed in a different light when they were replicated in a laboratory in 2007. University College London researchers induced OBEs in volunteers through the use of head-mounted video displays. Other means for inducing OBEs include electrical and magnetic stimulation of the brain.
 
If a well-placed magnet causes you to “leave” your body, what else is possible with a little transcranial stimulation?
 
This question is of growing concern as wearable scanners become increasingly common. Last week, Columbia University neuroscience professor Rafael Yuste advocated for the United Nations to adopt “neuro-rights” into its Universal Declaration of Human Rights, in response to a burgeoning industry promising to alter — some would say manipulate — consciousness.
 
As Yuste phrased it during an online conference,
 
“If you can record and change neurons, you can in principle read and write the minds of people. This is not science fiction. We are doing this in lab animals successfully.”
 
Neurotechnology is a growing field that includes a range of technologies that influence higher brain activities. Therapeutics designed to repair and improve brain function are included in this discipline—interventions for sleep problems, overstimulation, motor coordination, epilepsy, even depression.
 
So are more insidious intentions, however. You can imagine such devices in the hands of a cult leader, for example. Or perhaps a political leader steeling up their base. If the human imagination can create an idea, it can be transformed into reality, and not all humans are benevolent.
 
The ethical question is not new. For example, the debate over embryonic stem cells raged for years. Promises of trait enhancement concerned people who thought scientists would play the role of a god. While that debate has mostly died down, the use of neurotechnology by militaries and tech companies—particularly concerning privacy—will be contentious for decades.
 
Cognitive liberty is the term embraced by those who believe every individual must be allowed to maintain their own agency. An extension of the concept of freedom of thought, cognitive liberty is defined as “the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought,” as written by neuroethicist Dr. Wrye Sententia and legal theorist Richard Glen Boire.
 
The challenges to cognitive liberty include privacy, which they argue must encompass the domain of inner thought; autonomy, so that thought processes remain the province of the individual; and choice, provided that the individual is not harming others.
 
Yuste believes the U.N.’s declaration, which was created in the wake of World War II in 1948, needs immediate revision. Deep brain stimulation is already an FDA-approved procedure. While social media creates its own addiction and mental health problems, users still retain a sense of agency. When tech has the capability to get “under the skull and get at our neurons,” as John Krakauer, a Johns Hopkins professor of neurology and neuroscience, puts it, the sense of urgency is far greater.
 
For Yuste it’s completely a matter of agency—and liberty.
 
“This is the first time in history that humans can have access to the contents of people’s minds. We have to think very carefully about how we are going to bring this into society.”
 
Scientists urge UN to add ‘neuro-rights’ to Universal Declaration of Human Rights. By Derek Beres. Big Think, December 9, 2020. 












