12/01/2019

Rise of the Machines: Has Technology Evolved Beyond Our Control?

The voice-activated gadget in the corner of your bedroom suddenly laughs maniacally, and sends a recording of your pillow talk to a colleague. The clip of Peppa Pig your toddler is watching on YouTube unexpectedly descends into bloodletting and death. The social network you use to keep in touch with old school friends turns out to be influencing elections and fomenting coups.

Something strange has happened to our way of thinking – and as a result, even stranger things are happening to the world. We have come to believe that everything is computable and can be resolved by the application of new technologies. But these technologies are not neutral facilitators: they embody our politics and biases, they extend beyond the boundaries of nations and legal jurisdictions and increasingly exceed the understanding of even their creators. As a result, we understand less and less about the world as these powerful technologies assume more control over our everyday lives.

Across the sciences and society, in politics and education, in warfare and commerce, new technologies are not merely augmenting our abilities, they are actively shaping and directing them, for better and for worse. If we do not understand how complex technologies function then their potential is more easily captured by selfish elites and corporations. The results of this can be seen all around us. There is a causal relationship between the complex opacity of the systems we encounter every day and global issues of inequality, violence, populism and fundamentalism.

Instead of a utopian future in which technological advancement casts a dazzling, emancipatory light on the world, we seem to be entering a new dark age characterised by ever more bizarre and unforeseen events. The Enlightenment ideal of distributing more information ever more widely has not led us to greater understanding and growing peace, but instead seems to be fostering social divisions, distrust, conspiracy theories and post-factual politics. To understand what is happening, it’s necessary to understand how our technologies have come to be, and how we have come to place so much faith in them.

In the 1950s, a new symbol began to creep into the diagrams drawn by electrical engineers to describe the systems they built: a fuzzy circle, or a puffball, or a thought bubble. Eventually, its form settled into the shape of a cloud. Whatever the engineer was working on, it could connect to this cloud, and that’s all you needed to know. The cloud could be a power system, or a data exchange, or another network of computers. Whatever. It didn’t matter. The cloud was a way of reducing complexity: it allowed you to focus on the issues at hand. Over time, as networks grew larger and more interconnected, the cloud became more important. It became a business buzzword and a selling point. It became more than engineering shorthand; it became a metaphor.

Today the cloud is the central metaphor of the internet: a global system of great power and energy that nevertheless retains the aura of something numinous, almost impossible to grasp. We work in it; we store and retrieve stuff from it; it is something we experience all the time without really understanding what it is. But there’s a problem with this metaphor: the cloud is not some magical faraway place, made of water vapour and radio waves, where everything just works. It is a physical infrastructure consisting of phone lines, fibre optics, satellites, cables on the ocean floor, and vast warehouses filled with computers, which consume huge amounts of water and energy. Absorbed into the cloud are many of the previously weighty edifices of the civic sphere: the places where we shop, bank, socialise, borrow books and vote. Thus obscured, they are rendered less visible and less amenable to critique, investigation, preservation and regulation.



Over the last few decades, trading floors around the world have fallen silent, as people are replaced by banks of computers that trade automatically. Digitisation meant that trades within, as well as between, stock exchanges could happen faster and faster. As trading passed into the hands of machines, it became possible to react almost instantaneously. High-frequency trading (HFT) algorithms, designed by former physics PhD students to exploit millisecond advantages, entered the market, and traders gave them names such as The Knife. These algorithms were capable of eking out fractions of a cent on every trade, and they could do it millions of times a day.

Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average, a difference of almost 10% of its total value, and the biggest single-day fall in the market’s history. By 3.07pm, in just 25 minutes, it had recovered almost all of those 600 points, in the largest and fastest swing ever.

In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities and Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later.

One report by regulators found that high-frequency traders exacerbated the price swings. Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect. As a result, prices fell faster than any human trader could react to. While experienced market players might have been able to stabilise the crash by playing a longer game, the machines, faced with uncertainty, got out as quickly as possible.
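
The mechanics of that cascade are simple enough to sketch in a few lines of code. The toy simulation below is purely illustrative – the prices, thresholds and price-impact figure are invented for the example, and no real trading system is this simple – but it shows how one small dip can trip an entire chain of hard-coded sell points:

```python
# A toy model of cascading hard-coded sell points. Each algorithm dumps its
# stock when the price touches its threshold, and the sale itself pushes the
# price down far enough to trip the next algorithm below it.

def simulate_cascade(price, sell_points, impact_per_sale=0.5):
    """Return the price trajectory as sell points trip in turn."""
    trajectory = [price]
    remaining = sorted(sell_points, reverse=True)  # highest thresholds trip first
    while remaining and price <= remaining[0]:
        remaining.pop(0)          # this algorithm sells everything...
        price -= impact_per_sale  # ...and its sale depresses the price,
        trajectory.append(price)  # possibly tripping the next threshold.
    return trajectory

# A dip to 99.9 is enough to set off the whole chain.
print(simulate_cascade(99.9, sell_points=[99.9, 99.5, 99.0, 98.6, 98.2]))
# -> [99.9, 99.4, 98.9, 98.4, 97.9, 97.4]
```

The point of the sketch is the feedback shape: each sale is individually rational, but the aggregate is a fall faster than any human can counteract.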

Other theories blame the algorithms for initiating the crisis. One technique that was identified in the data was HFT programs sending large numbers of “non-executable” orders to the exchanges – that is, orders to buy or sell stocks so far outside of their usual prices that they would be ignored. The purpose of such orders is not to communicate or make money, but to deliberately cloud the system, so that other, more valuable trades can be executed in the confusion. Many orders that were never intended to be executed were actually fulfilled, causing wild volatility.
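
What makes an order “non-executable” is simply its distance from the going price. A minimal sketch of the idea – the 10% band and the function name are assumptions made up for this example, not any exchange’s actual rule – might flag such orders like this:

```python
# Flag orders priced so far from the market that they will never be matched.
def is_non_executable(order_price, market_price, band=0.10):
    """True if the order is more than `band` (10%) away from the market price."""
    return abs(order_price - market_price) / market_price > band

orders = [39.90, 40.10, 0.01, 100000.0]   # two plausible, two absurd
print([is_non_executable(p, market_price=40.0) for p in orders])
# -> [False, False, True, True]
```

Orders like the last two exist only to flood the data feed that every other algorithm is watching.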

Flash crashes are now a recognised feature of augmented markets, but are still poorly understood. In October 2016, algorithms reacted to negative news headlines about Brexit negotiations by sending the pound down 6% against the dollar in under two minutes, before recovering almost immediately. Knowing which particular headline, or which particular algorithm, caused the crash is next to impossible. When one haywire algorithm started placing and cancelling orders that ate up 4% of all traffic in US stocks in October 2012, one commentator wryly remarked that “the motive of the algorithm is still unclear”.

At 1.07pm on 23 April 2013, Associated Press sent a tweet to its 2 million followers: “Breaking: Two Explosions in the White House and Barack Obama is injured.” The message was the result of a hack later claimed by the Syrian Electronic Army, a group affiliated to Syrian president Bashar al-Assad. AP and other journalists quickly flooded the site with alerts that the message was false. The algorithms following breaking news stories had no such discernment, however. At 1.08pm, the Dow Jones went into a nosedive. Before most human viewers had even seen the tweet, the index had fallen 150 points in under two minutes before bouncing back to its earlier value. In that time, it erased $136bn in equity market value.

Computation is increasingly layered across, and hidden within, every object in our lives, and with its expansion comes an increase in opacity and unpredictability. One of the touted benefits of Samsung’s line of “smart fridges” in 2015 was their integration with Google’s calendar services, allowing owners to schedule grocery deliveries from the kitchen. It also meant that hackers who gained access to the then inadequately secured machines could read their owner’s Gmail passwords. Researchers in Germany discovered a way to insert malicious code into Philips’s wifi-enabled Hue lightbulbs, which could spread from fixture to fixture throughout a building or even a city, turning the lights rapidly on and off and – in one possible scenario – triggering photosensitive epilepsy. This is the approach favoured by Byron the Bulb in Thomas Pynchon’s Gravity’s Rainbow, an act of grand revolt by the little machines against the tyranny of their makers. Once-fictional possibilities for technological violence are being realised by the Internet of Things.

In Kim Stanley Robinson’s novel Aurora, an intelligent spacecraft carries a human crew from Earth to a distant star. The journey will take multiple lifetimes, so one of the ship’s jobs is to ensure that the humans look after themselves. When their fragile society breaks down, threatening the mission, the ship deploys safety systems as a means of control: it is able to see everywhere through sensors, open or seal doors at will, speak so loudly through its communications equipment that it causes physical pain, and use fire suppression systems to draw down the level of oxygen in a particular space.

This is roughly the same suite of operations available now from Google Home and its partners: a network of internet-connected cameras for home security, smart locks on doors, a thermostat capable of raising and lowering the temperature in individual rooms, and a fire and intruder detection system that emits a piercing emergency alarm. Any successful hacker would have the same powers as the Aurora does over its crew, or Byron over his hated masters.

Before dismissing such scenarios as the fever dreams of science fiction writers, consider again the rogue algorithms in the stock exchanges. These are not isolated events, but everyday occurrences within complex systems. The question then becomes, what would a rogue algorithm or a flash crash look like in the wider reality?

Would it look, for example, like Mirai, a piece of software that brought down large portions of the internet for several hours on 21 October 2016? When researchers dug into Mirai, they discovered it targets poorly secured internet-connected devices – from security cameras to digital video recorders – and turns them into an army of bots. In just a few weeks, Mirai infected half a million devices, and it needed just 10% of that capacity to cripple major networks for hours.

Mirai, in fact, looks like nothing so much as Stuxnet, another virus discovered within the industrial control systems of hydroelectric plants and factory assembly lines in 2010. Stuxnet was a military-grade cyberweapon; when dissected, it was found to be aimed specifically at the Siemens control systems that drive uranium centrifuges, and designed to go off when it encountered a facility that possessed a particular number of such machines. That number corresponded with one particular facility: the Natanz nuclear facility in Iran. When activated, the program would quietly degrade crucial components of the centrifuges, causing them to break down and disrupt the Iranian enrichment programme.

The attack was apparently partially successful, but the effect on other infected facilities is unknown. To this day, despite obvious suspicions, nobody knows where Stuxnet came from, or who made it. Nobody knows for certain who developed Mirai, either, or where its next iteration might come from, but it might be there, right now, breeding in the CCTV camera in your office, or the wifi-enabled kettle in the corner of your kitchen.

Or perhaps the crash will look like a string of blockbuster movies pandering to rightwing conspiracies and survivalist fantasies, from quasi-fascist superheroes (Captain America and the Batman series) to justifications of torture and assassination (Zero Dark Thirty, American Sniper). In Hollywood, studios run their scripts through the neural networks of a company called Epagogix, a system trained on the unstated preferences of millions of moviegoers, developed over decades, in order to predict which lines will push the right – meaning the most lucrative – emotional buttons. Algorithmic engines enhanced with data from Netflix, Hulu, YouTube and others, with access to the minute-by-minute preferences of millions of video watchers, acquire a level of cognitive insight undreamed of by previous regimes. Feeding directly on the frazzled, binge-watching desires of news-saturated consumers, the network turns on itself, reflecting, reinforcing and heightening the paranoia inherent in the system.

Game developers enter endless cycles of updates and in-app purchases directed by A/B testing interfaces and real-time monitoring of players’ behaviours. They have such a fine-grained grasp of dopamine-producing neural pathways that teenagers die of exhaustion in front of their computers, unable to tear themselves away.

Or perhaps the flash crash will look like literal nightmares broadcast across the network for all to see? In the summer of 2015, the sleep disorders clinic of an Athens hospital was busier than it had ever been: the country’s debt crisis was in its most turbulent period. Among the patients were top politicians and civil servants, but the machines they spent the nights hooked up to, monitoring their breathing, their movements, even the things they said out loud in their sleep, were sending that information, together with their personal medical details, back to the manufacturers’ diagnostic data farms in northern Europe. What whispers might escape from such facilities?

We are able to record every aspect of our daily lives by attaching technology to the surface of our bodies, persuading us that we too can be optimised and upgraded like our devices. Smart bracelets and smartphone apps with integrated step counters and galvanic skin response monitors track not only our location, but every breath and heartbeat, even the patterns of our brainwaves. Users are encouraged to lay their phones beside them on their beds at night, so that their sleep patterns can be recorded. Where does all this data go, who owns it, and when might it come out? Data on our dreams, our night terrors and early morning sweating jags, the very substance of our unconscious selves, turn into more fuel for systems both pitiless and inscrutable.

Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.

In New York in 1997, world chess champion Garry Kasparov faced off for the second time against Deep Blue, a computer specially designed by IBM to beat him. When he lost, he claimed some of Deep Blue’s moves were so intelligent and creative that they must have been the result of human intervention. But we understand why Deep Blue made those moves: its process for selecting them was ultimately one of brute force, a massively parallel architecture of 480 custom-designed chess chips, capable of analysing 200m board positions per second. Kasparov was not outthought, merely outgunned.

By the time Google DeepMind’s AlphaGo software took on the Korean professional Go player Lee Sedol in 2016, something had changed. In the second of five games, AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator. “I thought it was a mistake,” said another. Fan Hui, a seasoned Go player who had been the first professional to lose to the machine six months earlier, said: “It’s not a human move. I’ve never seen a human play this move.”

AlphaGo went on to win the game, and the series. AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them.

The late Iain M Banks called the place where these moves occurred “Infinite Fun Space”. In Banks’s SF novels, his Culture civilisation is administered by benevolent, superintelligent AIs called simply Minds. While the Minds were originally created by humans, they have long since redesigned and rebuilt themselves and become all-powerful. Between controlling ships and planets, directing wars and caring for billions of humans, the Minds also take up their own pleasures. Capable of simulating entire universes within their imaginations, some Minds retreat for ever into Infinite Fun Space, a realm of meta-mathematical possibility, accessible only to superhuman artificial intelligences.

Many of us are familiar with Google Translate, which was launched in 2006 using a technique called statistical machine translation. Rather than trying to understand how languages actually worked, the system imbibed vast corpora of existing translations: parallel texts with the same content in different languages. By simply mapping words on to one another, it removed human understanding from the equation and replaced it with data-driven correlation.

Translate was known for its humorous errors, but in 2016, the system started using a neural network developed by Google Brain, and its abilities improved dramatically. Rather than simply cross-referencing heaps of texts, the network builds its own model of the world, and the result is not a set of two-dimensional connections between words, but a map of the entire territory. In this new architecture, words are encoded by their distance from one another in a mesh of meaning – a mesh only a computer could comprehend.

While a human can draw a line between the words “tank” and “water” easily enough, it quickly becomes impossible to draw on a single map the lines between “tank” and “revolution”, between “water” and “liquidity”, and all of the emotions and inferences that cascade from those connections. The map is thus multidimensional, extending in more directions than the human mind can hold. As one Google engineer commented, when pursued by a journalist for an image of such a system: “I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space.” This is the unseeable space in which machine learning makes its meaning. Beyond that which we are incapable of visualising is that which we are incapable of even understanding.
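
The idea of words as points in a mesh of meaning can be made concrete, though only in miniature. In the sketch below, the four-dimensional vectors are invented by hand purely for illustration – real embeddings have hundreds or thousands of dimensions and are learned from data, not written out – but the distance measure is the standard one:

```python
import math

# Hand-invented toy "embeddings": each word is a point in 4-dimensional space.
embeddings = {
    "tank":       [0.9, 0.1, 0.3, 0.0],
    "water":      [0.8, 0.0, 0.1, 0.1],
    "revolution": [0.1, 0.9, 0.2, 0.4],
}

def cosine_similarity(a, b):
    """How close two words sit in the mesh: 1.0 means the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "tank" sits near "water" along one set of axes (about 0.97)...
print(cosine_similarity(embeddings["tank"], embeddings["water"]))
# ...and far from "revolution" along others (about 0.25). With thousands of
# axes, no flat map can show every such relationship at once.
print(cosine_similarity(embeddings["tank"], embeddings["revolution"]))
```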

In the same year, other researchers at Google Brain set up three networks called Alice, Bob and Eve. Their task was to learn how to encrypt information. Alice and Bob both knew a number – a key, in cryptographic terms – that was unknown to Eve. Alice would perform some operation on a string of text, and then send it to Bob and Eve. If Bob could decode the message, Alice’s score increased; but if Eve could, Alice’s score decreased.

Over thousands of iterations, Alice and Bob learned to communicate without Eve breaking their code: they developed their own form of encryption, like that used to secure private emails today. But crucially, we don’t understand how this encryption works. Its operation is occluded by the deep layers of the network. What is hidden from Eve is also hidden from us. The machines are learning to keep their secrets.
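
The shape of that training objective can be sketched without any of the neural machinery. In the illustrative fragment below, the function names and the exact form of the penalty are assumptions made for this example – the real Google Brain experiment trained deep networks by gradient descent – but the logic is the one described above: Alice and Bob are rewarded when Bob recovers the message, and penalised whenever Eve does better than random guessing.

```python
def reconstruction_error(original, decoded):
    """Fraction of bits the decoder got wrong (0.0 = perfect recovery)."""
    return sum(o != d for o, d in zip(original, decoded)) / len(original)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Reward Alice and Bob when Bob decodes the message correctly...
    bob_term = reconstruction_error(plaintext, bob_guess)
    # ...and penalise them whenever Eve beats random guessing, since an
    # eavesdropper without the key should get about half the bits wrong.
    eve_term = (0.5 - reconstruction_error(plaintext, eve_guess)) ** 2
    return bob_term + eve_term

msg = [1, 0, 1, 1, 0, 0, 1, 0]
# Bob decodes perfectly; Eve is close to chance: the loss is near zero.
print(alice_bob_loss(msg, bob_guess=msg, eve_guess=[1, 1, 0, 0, 1, 0, 1, 1]))
```

Minimising a loss of this shape over many iterations is what pushes Alice and Bob toward an encoding that Eve – and, by extension, anyone watching the network – cannot decode.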


How we understand and think of our place in the world, and our relation to one another and to machines, will ultimately decide where our technologies will take us. We cannot unthink the network; we can only think through and within it. The technologies that inform and shape our present perceptions of reality are not going to go away, and in many cases we should not wish them to. Our current life support systems on a planet of 7.5 billion people and rising depend on them. Our understanding of those systems, and of the conscious choices we make in their design, remains entirely within our capabilities. We are not powerless, not without agency. We only have to think, and think again, and keep thinking. The network – us and our machines and the things we think and discover together – demands it.

Computational systems, as tools, emphasise one of the most powerful aspects of humanity: our ability to act effectively in the world and shape it to our desires. But uncovering and articulating those desires, and ensuring that they do not degrade, overrule, efface, or erase the desires of others, remains our prerogative.



When Kasparov was defeated back in 1997, he didn’t give up the game. A year later, he returned to competitive play with a new format: advanced, or centaur, chess. In advanced chess, humans partner, rather than compete, with machines. And it rapidly became clear that something very interesting resulted from this approach. While even a mid-level chess computer can today wipe the floor with most grandmasters, an average player paired with an average computer is capable of beating the most sophisticated supercomputer – and the play that results from this combination of ways of thinking has revolutionised the game. It remains to be seen whether cooperation is possible – or will be permitted – with the kinds of complex machines and systems of governance now being developed, but understanding and thinking together offer a more hopeful path forward than obfuscation and dominance.

Our technologies are extensions of ourselves, codified in machines and infrastructures, in frameworks of knowledge and action. Computers are not here to give us all the answers, but to allow us to put new questions, in new ways, to the universe.

Rise of the machines: has technology evolved beyond our control? By James Bridle. The Guardian, June 15, 2018.

‘The Age of Complexity’ is an extract from the book ‘New Dark Age: Technology and the End of the Future’. It was published by Verso in the UK in June 2018, and in the US in July 2018.

More extracts here:

Known Unknowns. Harper’s Magazine. July 2018.
Inside the infinite imagination of a computer. Dazed & Confused, September 17, 2018.




AR
  In your book, you talk about the notion that we’re in a state of knowing more about the world than ever, but we have less and less agency to change it, and we need to develop a kind of literacy around these computing systems. But it seems like we could develop literacy and still not gain any real power over the systems.

JB
Oh, absolutely. That’s certainly possible. I don’t think that there’s any kind of anything that will guarantee you some kind of magical power over things. In fact, the hope that you can do so is itself kind of dangerous. But it’s one of the routes that I explore to a possibility of gaining some kind of agency within these systems.
One of the ways that I approach these problems is through one particular form of systemic literacy that I’ve developed through my work and my studies, but I also think it’s generalizable. I think anyone can get there from a background in any number of disciplines. And understanding that literacy is transferable and that we all have the capabilities to apply it to think clearly about subjects that seem difficult and complex is one of the main thrusts of the book.

AR
You’ve given examples in the past of ways that people could resist “inevitable” technological progress, like taxi drivers making salt traps for self-driving cars. What else could they do?

JB
I did a whole bunch of projects around self-driving cars, which also included building my own — poorly, but in a way that helped me learn how it’s done — so that I gained an understanding of those systems, and possibly as a result would be able to produce a different kind of self-driving car, essentially. In the same way that anyone who tries to work on these systems, build them themselves, and understand them has the possibility of shaping them in a totally different way.

The autonomous trap is another approach to some of the more threatening aspects of the self-driving car. It’s quite a sort of aggressive action to literally stop it. And I think working with and attempting to stop and resist are both super useful approaches, but they both depend on having some level of understanding of these systems.

AR
These seem like individual solutions to some extent. How do you deal with situations like climate change, where you need really large-scale systemic change? 

JB
There’s a couple of things I talk about regarding climate in the book, and one of them is to be really, really super direct about the actual threat of it, which is horrific, and it’s kind of so horrific that it’s difficult for us to think about. Simply the act of articulating that — making it really, really clear, exploring some of the implications of it — that kind of realism is a super necessary act.

We’re still fighting this rear-guard action of, “Oh, it’s manageable,” “Oh, we can mitigate it,” or “It’s not really real.” We’re still, despite everything we know, everything people say, stuck in this ridiculous bind where we seem incapable of taking any kind of action. And, for me, that’s part and parcel of this continuous argument we have over numbers and facts and figures and the data and information that we’re gathering, as though this is some kind of argument that has to be won before we do anything. That excludes the possibility of doing anything concrete and powerful and present.

AR
How does it feel to be a critic of these technologies for years and suddenly see people start agreeing with you?

JB
I think there’s a lot of people right now who find themselves in the position of being “Well yes, this is exactly what we meant,” you know? I remember having conversations years ago with someone saying, “What’s the worst that can happen with someone having all this data centralized?” And my answer to that was, “Well, the worst thing that can happen is that fascists take over and have control of that data.” And a few years ago, that felt like the worst possible thing, completely unimaginable. And here we are today — when fascism is alive and well in Europe, and growing in certain ways in the US as well. So it’s suddenly not so remote.

But at the same time, people who have been thinking about this for a while have also been building things that are capable of mitigating that. So while I argue against everything being magically fixed, putting this all out in the open in certain ways does start to make some kind of difference. The really important thing, I think, is to constantly frame this as a struggle. Which, again, we kind of don’t often do, particularly in the context of technology — where we see this stuff as a kind of ongoing, always upward unstoppable march.

Technology always walks this kind of weird knife edge. It becomes hard for us to understand and change — everything disappears behind glass, inside little black boxes. But at the same time, if you do manage to crack them open just a little bit, if you get some kind of understanding, everything suddenly becomes really quite starkly clear in ways that it wasn’t before. I’m kind of insisting on that moment being the moment of possibility — not some kind of weird imaginary future point where it all becomes clear, but just these moments of doubt and uncertainty and retelling of different stories.

AR
Speaking of stories, you reference authors like H.P. Lovecraft and Iain M. Banks in New Dark Age. How is fiction shaping the way we deal with this future?

JB
A lot of the way that we think of technology, and the internet in particular, has been really shaped by the ideas of it that came along before the thing itself arrived, right? Just as our ideas of space exploration are completely shaped by fantasies of space exploration from long before we got to space practically. The really interesting science fiction to me now happens kind of in the next week or the next year at most because it’s so obvious to us how little we can predict about long-term futures, which really, for me, is more of a reflection of reality than reality is a reflection of science fiction.

I’m unsure about the value of stories to pull us in a particular direction. Most science fiction writers insist that all their fiction is really about the present, so they’re really just different ways of imagining that.

AR
Jeff VanderMeer has also said that futuristic dystopias are a way of shifting real problems “over there” out of reality.

JB
Yeah, exactly. There’s a whole genre of design fiction as well that posits these political things as design objects as a way to kind of pull those futures into being. But I always think there’s something very risky about that, because it also positions them as somewhere else, right? Not as tools that we have access to in the present. And VanderMeer’s fiction is pretty interesting, because while it’s obviously somewhat future-oriented, it’s also deeply about the weird and strange and difficult to understand.

I think that is better than what I said before, really. That is the most interesting current within science fiction right now: not imaginings of weird futures, utopian or dystopian, but ones that really home in on how little we understand about the world around us right now.

AR
How do we critique the idea of inevitable, upward progress without overly romanticizing the past? In the US, criticism of automation gets tied up with calls to protect jobs that fit a stereotypical 20th century white, male vision of work.

JB
There’s always that danger of romanticization, it’s true. It’s still being played out. That also comes about because of our really narrow view of history — that we have these quite small and very essentially made-up histories of things that we’re so acculturated to. So one of the things I try to do in the book is pull out these alternative histories of technology, and that’s another current that’s quite strong at the moment.

I just read Claire Evans’ book Broad Band, about the number of women involved in the creation of the internet as we know it today. Many of the characters, real people in her book, they’re not just engineers and programmers. They’re also community moderators and communicators, people who shaped the internet just as much as people who wrote the lines of code.

And so as soon as you dig up that history, you then can’t help but understand the internet as something that’s very different in the present. And therefore you can understand the future as something else as well. So if we talk about automation, then one of the works we can do is not just to hark back to some kind of golden age, but to trouble that legacy as well, to talk about who worked then and under what conditions, you know?

There’s always technological resistance. Like the Luddites, who are pretty well-known now, but the fact is that the Luddites weren’t smashers of technology; they were a social movement, performing a very violent and direct form of critique of the destruction of their livelihoods, of what those machines were doing. And so now we have many, many other tools of critique for that. But by retelling these stories, by understanding them in different ways, it’s possible to rethink what might be possible in the present.
  
James Bridle on why technology is creating a new dark age. By Adi Robertson. The Verge. July 16, 2018.


Halfway through James Bridle’s foreboding, at times terrifying, but ultimately motivating account of our technological present, he recounts a scene from a magazine article about developments in artificial intelligence. The journalist is asking a Google engineer to give an image of the AI system developed at Google. The engineer’s response was, ‘I do not generally like trying to visualise thousand-dimensional vectors in three-dimensional space.’ A few pages later, discussing the famous example of grandmaster Garry Kasparov losing a six-game match to IBM supercomputer Deep Blue, Bridle quotes Fan Hui, an experienced Go player, describing the Google-developed AlphaGo software’s defeat of professional Korean Go player Lee Sedol at the 2,500-year-old strategy game: ‘“It’s not a human move. I’ve never seen a human play this move.” And then he added, “So beautiful.”’

The first challenge for proving a system’s intelligence is image cognition: AIs are trained for facial recognition or to scan satellite imagery. Still, technology is not primarily considered a visual problem, even if new technologies’ effect on our lives is the subject of countless movies which are often, to echo Bridle’s title, quite dark. Bridle, a visual artist whose artworks consider the intersection of technology and representation, from the shadows cast by drones to the appearance of stock images in public space, does not focus his book on representations of technology, but rather on a different visual problem: invisibility. In his introduction, Bridle warns that society is powerless to understand and map the interconnections between the technological systems that it has built. What is needed, the artist claims, is an understanding that ‘cannot be limited to the practicalities of how things work: it must be extended to how things came to be, and how they continue to function in the world in ways that are often invisible and interwoven. What is required is not understanding, but literacy.’

Literacy, in Bridle’s use, is beyond understanding, and is the result of our struggle to conceive — to imagine, or describe — the scale of new technologies. A lot of the examples in the book are visual and descriptive, providing new imagery to help his readers picture some of the issues that should concern them but are hard to imagine since they happen far from the eye. In a chapter dedicated to complex systems, Bridle describes Amazon warehouses that employ a logistics technique called ‘chaotic storage’, which manages the goods on floors whose organisation is not based on any order a human can grasp — alphabetised books, homeware in a specific department — but on an algorithmic logic that makes the system incomprehensible to its employees. The workers carry handheld devices that direct them across the facility: they are incapable of intervening with the machine’s choice, incapable of seeing its reason. Even when things are made visible, it can be a reflection of darkness: when IBM developed the Selective Sequence Electronic Calculator, it was installed in a ground-floor shop on East 57th Street in Manhattan. The president of IBM at the time, Thomas J. Watson, wanted the public to see the SSEC, so that they would feel assured that the machine was not meant to replace them. The publicity photos of the IBM calculator, operated by a woman in a former shoe store, do not expose what was actually happening: the SSEC was being used to run simulations of hydrogen bomb explosions, carried out in full view in a storefront in New York City.

New Dark Age is neatly divided into ten chapters, each titled with a single word beginning with the letter C: ‘Chasm’ is the introduction, and one of the most valuable sections of the book, discussing how technological acceleration has changed society and charting the impossibility of seeing clearly how these changes affect every aspect of our day-to-day lives: ‘new technologies,’ writes Bridle, ‘do not merely augment our abilities, but actively shape and direct them, for better and worse. It is increasingly necessary to be able to think new technologies in different ways, and to be critical of them, in order to meaningfully participate in that shaping and directing.’ The next chapter, ‘Computation’, is a short history of computers, in which Bridle explores the interwoven history of computational development and warfare, especially atomic warfare during the Cold War. A chapter called ‘Cognition’ is dedicated to artificial intelligence, and one titled ‘Complicity’ discusses surveillance and systems of control via technology. ‘Concurrency’ takes up an example Bridle has written about before — and which was picked up by major newspapers and television news — and expands it. The initial essay was titled ‘Something is wrong on the internet’; Bridle published it on Medium because, he explained in a short paragraph preceding the piece, he didn’t want the materials he was writing on ‘anywhere near’ his own website. Looking at YouTube videos, Bridle was pointing to several disturbing, weird, dark clips purportedly served up to toddlers: things like the ‘wrong head’ trope, in which Disney characters’ heads are separated from their bodies and float onscreen to the sound of nursery rhymes until they are matched with the right bodies, or a bloody video of Peppa Pig going to the dentist. Bridle describes ‘a growing sense of something inhuman’ in the proliferation of these — which isn’t necessarily related to the origin of these videos but, rather, to the way they are distributed to children: via an algorithm that serves them disturbing content because it is set to autoplay. Bridle links this example with a discussion of Russian interference with foreign elections via the distribution of misinformation, and he also brings in the Ashley Madison hacks, which exposed that the dating site for married people had tens of thousands of fake, automated female accounts that interacted with men: paid subscribers who shelled out dollars to interact with a piece of software attached to a photograph of a woman. The content directed at us, whether created by state propaganda, corporations in search of advertising dollars and paid subscriptions, or simply spammers, creates the same results — confusion, deception, a relationship to power (state or corporate) that is constantly reasserted by the information we are served up. ‘This is how the world actually is,’ Bridle says, ‘and now we are going to have to live in it.’ (And raise our children in it.)

In ‘Climate’, a summary of technology’s effect on and impact by climate change, Bridle outlines the endless cycle in which abuse of resources affects a system that uses those resources both to study and monitor the climate. For example, cable landing sites, where the submarine cables connecting the internet reach the shore, are especially vulnerable to sea level rise — which is ironic, since the internet is also a major player in climate change. The power data centres require accounted for 2 per cent of global emissions in 2015, which is about the same carbon footprint as commercial aviation. Cryptocurrencies and blockchain software, so often discussed in emancipatory terms since they have the potential to decentralise financial systems, require as much energy to complete a single transaction as nine American homes use in a day; blockchain will use up the same amount of electricity as the entire United States by the end of 2019. In Japan, predictions are that by 2030 digital services will require more power than the nation can generate. The network’s voracious consumption of power isn’t just the responsibility of the NSA data centres, but also the end-users. ‘We need to be more responsible about what we use the internet for,’ Bridle quotes Ian Bitterlin, a UK expert on data centres: ‘Data centres aren’t the culprits – it’s driven by social media and mobile phones. It’s films, pornography, gambling, dating, shopping – anything that involves images.’

Which suggests the missing chapter — or approach — in the book: Culture. Bridle is an artist, and the visual examples he puts forth are some of the highlights of the book, especially when considering its subtitle: ‘the end of the future’. The end, that is, of something we’ve always imagined. There is a lovely short section where Bridle writes about the Concorde, the supersonic passenger plane that British Airways and Air France stopped flying in 2003. Bridle describes growing up in the London suburbs under the flight path to Heathrow Airport and hearing, every evening at 6.30 p.m., the rumble of the plane, its futuristic, sleek, triangular design an image of the future that died with the end of the Concorde flights. These stunning few paragraphs on design and its impact on the popular psyche follow a discussion of clear-air turbulence (another terrifying result of climate change, in which flights experience extreme turbulence in unforeseen areas) and precede a simple conclusion: that futuristic inventions and designs like the Concorde are the exception, and the rule is small in-flight adjustments, like a slightly better wingspan leading to slightly better fuel mileage. These two pages set up an idea about what we cannot see: Bridle cites philosopher Timothy Morton’s idea of the ‘hyperobject’, a thing that is too big to see in its entirety and thus to comprehend. Climate, for Bridle, is a hyperobject — which we only perceive through its effects: a melting ice sheet, for example — ‘but so are nuclear radiation, evolution, and the internet.’

The things we cannot see are not always imperceptible because they are too large to comprehend; sometimes they are intentionally obfuscated. The simple example is the language we use when discussing technology — the ‘cloud’ for a series of links between servers; ‘open’ is a decentralised resource, but open-source is also a method of building free software using business-friendly, hivemind-labour. The ‘democratising’ potential of the internet is hailed by multinational corporations, those same corporations that stand to benefit from the positive PR of the ‘freedom’ that platforms like Twitter promote. Without the use of scare quotes, these ethereal, abstract terms press an understanding of the internet as an ecosystem with its own rules, and one that is presented as intangible and ubiquitous. The far-from-simple example is Bridle’s discussion of high-frequency trading. In a chapter titled ‘Complexity’, Bridle describes a bicycle ride from Slough, just west of London, to Basildon, east of London in Essex. The 60-plus-mile journey cuts through the heart of the City, London’s financial hub. The City and its cluster of glass towers is the public image of the UK’s finance sector, but the transactions that fuel it are made out of sight, in warehouses like the Euronext Data Center (the European outpost of the New York Stock Exchange) in Basildon and the LD4 Data Centre (the London Stock Exchange) in Slough. The glass towers, the stock exchanges designed like Greek temples, are now symbolic, empty signs: they stand for something that is totally invisible, that happens in warehouses on the outskirts of the city. Conjure an image from 1980s films about Wall Street and its culture: on the trading floor, men shouting, fighting, running while carrying slips of paper in their hands. Replace it with the image of men sitting in offices, pressing the refresh button again and again on their desktop computers. Then replace that image, too. Financial transactions were always dependent on speed, but as computing power and network speeds have increased, the speed of these exchanges has accelerated to leave these men behind.



Now computers are trading with other computers in countryside locations where space and power are available, but there is no symbolic imagery. ‘Financial systems have been rendered obscure, and thus even more unequal,’ Bridle writes. The chapter on complexity is also the one that talks most about the effects of the meeting of capital — another C word — and technology on the societies we live in, especially in terms of labour. This is the chapter that includes a long discussion of Uber’s relationship to its drivers as contractors (whom it forces to listen to anti-union podcasts) and the charting of Amazon’s storage facilities. These networks are not invisible; they are made to look invisible. And the stakes of opacity are the impossibility of organising, both as employees and as citizens. Could there be an Occupy movement around obfuscated spaces like data centres on the peripheries of cities?

Bridle’s conclusion begins with an event — the 2013 Google Zeitgeist conference. Held annually in an exclusive hotel in Hertfordshire, England, it’s a private gathering — though some of the talks are posted on Google’s ‘zeitgeistminds’ page, TED-talk style — for executives and politicians. At the 2013 conference, Google executive chairman Eric Schmidt publicly discussed the emancipatory power of technology. Schmidt talked about how technology, and particularly cell phones and their built-in cameras, could prevent atrocities by exposing them — ‘somebody would have figured out and somebody would have reacted to prevent this terrible carnage.’ His example was the Rwandan genocide, which, he described, had to be planned, ‘people had to write it down’. An image of those plans would have leaked, Schmidt is certain, and ‘somebody would have reacted’. Bridle summarises easily: ‘Schmidt’s — and Google’s — worldview is one that is entirely predicated on the belief that making something visible makes it better, and that technology is a tool to make things visible.’ But of course, the UN, the USA, Belgium and France all had access to intelligence information, including radio broadcasts and satellite imagery from Rwanda, and ‘somebody’ didn’t react. Bridle cites a report on Rwanda, noting it could have been the conclusion of his book, too: ‘any failure to fully appreciate the genocide stemmed from political, moral, and imaginative weaknesses, not informational ones.’

The inability to understand the scale and impact of technology on the lives of human beings is not a visual problem; it is a problem of imagination. One of the significant achievements of Bridle’s book is that it challenges the idea that to participate in the conversation about technology requires prior technical knowledge. Rather, Bridle points out, the fight is against the intentional obfuscation of systems, and that is before we even consider machine vision: to counter Schmidt’s idea of technology as a tool to make things visible, we need to criticise the role of technology in the creation of that image. Considering these complex questions of representation, maybe we should look to visual artists in order to see a reflection of the world we live in, and see that to point to the darkness is a way of shining a light. For the informed reader of technology criticism, New Dark Age will not be a revelation. Bridle’s research is impressive, and the knowledge, examples and concerns he lays out are proposed in an organised, systemic fashion. As a summary of discussions spanning many disciplines, from finance to entertainment and climate change, Bridle’s book is not a primer, but a crucial illustration of just how intertwined these concerns are.

New Dark Age takes its title from H.P. Lovecraft’s ‘The Call of Cthulhu’ — ‘that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age’ — but then goes on to cite a line from Virginia Woolf’s diaries: ‘the future is dark, which is the best thing the future can be.’ This book is not a collection of prophecies; it is a commitment to the present. ‘Nothing here is an argument against technology: to do so would be to argue against ourselves,’ writes Bridle. He insists that what is needed is not understanding, but a new language, new metaphors — a new image — that would allow us to look at the darkness directly and — hopefully — begin to see.

James Bridle’s ‘New Dark Age’. By Orit Gat. The White Review. October 2018.
