Jamie Susskind talks about his book 'Future Politics: Living Together
in a World Transformed by Tech' with Matthew d'Ancona. Drugstore Culture,
vodcast #13. November 23, 2018.
Jamie Susskind is a practicing barrister and author, and former fellow
of Harvard's Berkman Klein Center for Internet and Society. He writes about the
effects of technology on politics, looking at how AI and machine learning, the
internet of things, robotics, blockchain, and virtual reality will change the
way we live together.
In this talk he discusses the subject of his new book, "Future
Politics: Living Together in a World Transformed by Tech": How far should
our lives be directed and controlled by powerful digital systems - and on what
terms?
Talks at Google, October 18, 2018.
When we consider the future that technological change will bring about,
it is tempting to envision a world taken over by robots, where the singularity
has given way to superintelligent agents and human extinction. This is the
image of our future we have grown accustomed to seeing in cinematic depictions,
but it is not the future that British barrister Jamie Susskind wants us to
worry about. Instead, in Future Politics: Living Together in a World
Transformed by Tech, Susskind focuses on how digital technologies control human
life rather than eliminate it.
All digital systems, after all, have their origin in code, and code,
Susskind contends, does not merely direct the actions of machines or
algorithmic platforms, it also directs our behavior and thought. For example,
code can force us to do things we would not otherwise do. A self-driving car
engineered to operate below the speed limit ensures its users obey the law.
Code can also scrutinize our choices and persuade us to change our behavior. A
smart fridge that monitors our eating habits, shaming our guilty pleasures,
might lead us to abandon our late-night snacking routine. And code, of course,
can shape our perception of the world. Search engines and algorithmic newsfeeds
control the flow of information, determining what we see and know.
Susskind’s remarkably comprehensive book explores the challenges new
digital technologies create, asking what the power and potential of digital
systems means for human liberty, democracy, justice, and politics. Most
importantly, he argues that the political ideas we have held for centuries are
ill-equipped to respond to the challenges posed by current and future
technological innovations. As such, we must upgrade the political theory canon
for the tech age.
And we ought not delay. From Russian hackers’ successful penetration of
major social media platforms ahead of the 2016 election to greater awareness of
addictive product design and newfound privacy concerns in the wake of the
Cambridge Analytica scandal, tech anxiety has become mainstream in the last
couple of years. There is a growing sense that today’s tech behemoths have created
technologies that even they cannot control—let alone the state or individual
users.
This has emboldened the “technology run amok” narrative. But Susskind’s
intervention is to note that recent events should force us to look in the
mirror instead. Susskind observes that while today’s technological innovations
have the capacity to learn on their own, they primarily learn from the
humans who design and use them. Facial recognition software accustomed to
viewing white faces invalidated an Asian man’s passport photo because it
believed his eyes were closed, for example. Similarly, Google at one point
autocompleted the search query “why do gay guys…” with “why do gay guys have
weird voices?” “Lurking behind all the technology,” Susskind writes, “most algorithmic
injustice can actually be traced back to the actions and decisions of people.”
In other words, many of the problems tech forces us to confront do not
concern the innovations themselves, but how those innovations reflect back to
us and perpetuate the injustices of our own human world. “At its heart,”
Susskind writes, “this is not a book about technology. . . . It is a book about
people. Many of the problems referred to in these pages, past and future, can
be attributed to the choices of individuals. . . . These aren’t problems with
technology. They’re problems with us.” Responding to Russian election meddling
through digital advertisements, former president Barack Obama made a similar
argument. He warned that “our vulnerability to Russia or any other foreign
power is directly related to how divided, partisan, dysfunctional our political
process is.”
In this sense, what sets apart today’s technological developments from those
of prior eras is not necessarily their pace of innovation or their integration
into our daily lives, but their unique capacity to amplify the bad behavior and
choices of humans. Through this important observation, Susskind seizes the
terrain that still belongs—and can only belong—to humans. “The digital
lifeworld will demand more of us all,” Susskind argues. “From the shiniest CEO
to the most junior programmer, those who work in tech will assume a role of
particular influence.”
Some individuals who work in tech are already sensitive to this growing
responsibility. Google employees recently wrote their CEO to condemn
“Dragonfly,” a project that would create a search engine to censor whatever
content the Chinese government demands. Some Silicon Valley employees have
reportedly asked themselves, “What have we done?” But other revelations are
less comforting. Following Trump’s travel ban, for example, Google employees
considered ways to modify search functions, leading users to donation pages for
pro-immigration organizations and setting up a forum for users to contact
lawmakers and government agencies. While such actions may comport with a
worldview and political impulse shared by many (including my own), the fact
that a single company—and its employees—have such concentrated influence
undeniably presents a new challenge for our democracy. The power private tech
companies wield over our public and private lives represents an increasingly
precarious arrangement.
Susskind’s book concerns how we should think about this new reality, not
what we should do about it. “This is a book about principles and ideas. It
isn’t intended to offer specific regulatory proposals,” he concedes. But in
order to formulate the policy actions needed to confront the challenges posed
by today’s technological innovations, Susskind argues that we must begin to
understand that “the digital is political.” The major debate of our time is no
longer “about how much of our collective life should be determined by the state
and what should be left to market forces and civil society.” Instead, the
question we must grapple with is “how much of our collective life should be
directed and controlled by powerful digital systems—and on what terms.”
Because tech companies affect our political systems, sense of justice, and
economic lives, Susskind argues that our current regulatory and antitrust regimes
are ill-equipped to sufficiently monitor tech’s power. We need new and bold
regulatory proposals because, as Susskind says, “some of the power accruing to
tech firms . . . is so extraordinary that it rivals or exceeds the power of any
corporate entity of the past.” We need an alternative to antitrust, for
example, because “its regulatory domain is structured by reference to markets,
not forms of power.” With tech’s reach extending beyond the economic realm, the
relationship between technology companies and those who use their products no
longer constitutes a traditional relationship between consumer and company.
Susskind pleads, “We have to stop seeing [technology] just as consumers.” This
will be a tall task, he warns, since, “big tech companies will characterize
themselves first and foremost as corporate entities pursuing private profit.”
But this is far from the image that tech companies have worked hard to
create for themselves. Indeed, Susskind underappreciates how, in many ways,
today’s tech behemoths enjoy portraying themselves as playing an indispensable
civic role. Mark Zuckerberg has described Facebook as “more like a government
than a traditional company” and has said, “Facebook stands for bringing us
closer together and building a global community.” Airbnb’s head of policy
argues that the platform is “democratizing capitalism” and has said it provides
a new resource for the middle class. Responding to New York City’s recent cap
on ride-hailing services, Lyft’s vice president of policy stated, “These
sweeping cuts to transportation will bring New Yorkers back to an era of
struggling to get a ride, particularly for communities of color and in the
outer boroughs.”
Each of these statements enjoys a kernel of truth, but the civic role tech
companies have carved out seems to have obscured which political and economic
outcomes originate from their flawed business strategies and which are inherent
components of their technological innovations. Indeed, I would argue that it is
this perception of tech-as-civic-actor that has largely insulated tech
companies from greater scrutiny thus far. The gig economy, for example, depicts
itself as a flexible and empowering way to “get your side hustle on.” But
underpaying and under-protecting its users-cum-workers is not inherently part
of its technology; it is part of its business model—a model that has evaded
greater responsibility towards its independent contractor workforce by posing
as a social safety net for workers left behind by the global economy.
Similarly, in an attempt to sustain users’ attention, YouTube's autoplay
function leads viewers to extremist video content knowing such content will
keep users glued to their screens. The deleterious impact this has on civic
discourse is a result of a business model that trades user attention for
advertising dollars and not the inevitable result of using an algorithm to
filter and curate content on the video platform.
So while acknowledging that “the digital is political” might be an
important step in reining in tech’s power, we might go too far by buying into
the concept of tech companies as non-traditional economic actors. Disrupting
our political and economic lives often seems to be an inherent part of the
innovations put forward by today’s leading tech companies, but it is also
simply the tack tech companies have taken in their quest for profits. While the
ability of today’s tech to exert control over our behavior and thought is cause
for concern, digital technologies have, in many ways, simply enabled new ways
of doing old things, from running political campaigns to organizing protests to
driving taxis. Human history is one long story of adapting to this kind of
disruptive change, and it is premature to say what will or won’t work when it
comes to managing our current period of transition. But as Susskind reminds us,
“we have more control over [the future] than we realize.”
The Digital is Political. By Clara Hendrickson. Boston Review, October 15, 2018.
Nothing is as remote as yesterday’s utopias. From the 1990s until the end
of the last decade, the explosion in computing power was seen by wide-eyed
optimists as a force for liberation that would lay low unaccountable authority.
Their eyes have narrowed now. Democracy, justice, our very ability to earn a
living, feel precarious. “All that is solid melts into air,” said Marx of
19th-century capitalism. In our times, not only do economic systems feel
unstable, but so do basic assumptions about how humans live together.
Now, and ever more so in the future, how we perceive the world will be
determined by what is revealed and concealed by social media and search
services, affective computing and virtual reality platforms. The distinction
between cyberspace and real space is becoming redundant. The two are merging,
and as they come together, companies and states will have the power to control
our perceptions. The fragmentation social media promotes has been discussed to
death. But it is worth stressing that automated systems are placing us in
silos. It is their choice, not ours, to create a world where the people who most
need to hear opposing views are the least likely to hear them. Meanwhile, the
scandal of the Brexit campaign is setting the pattern for all campaigns,
showing how politicians and their agents can harvest data and target propaganda,
tailored to meet its recipients’ prejudices, without any public authority
regulating it or even knowing about it.
“You are entitled to your own opinions but not your own facts,” said
senator Daniel Patrick Moynihan in the 20th century. If that were ever true, it
is not now, as Jamie Susskind shows in this superb and necessary book. Highly
unusually, Susskind, a young British lawyer, combines knowledge of technology
with knowledge of political theory. He is as comfortable discussing Athenian
democracy as the moral problems of having “sex” with a virtual child. His
breadth of knowledge allows him to avoid replacing techno-utopianism with
fashionable dystopianism, and gives us a work that emphasises that the future
depends not just on technical advance but on political choices.
For who can argue that the introduction of technologies capable of
rapidly disseminating lies does not require a political response? It is not
good enough to say Facebook, Google or Twitter are like other private
companies, when they have become a modern agora that democratic society has
every right to regulate as it regulates other essentials of life. I still
subscribe to John Milton and John Stuart Mill’s belief that we should allow
free speech to be tested in the marketplace of ideas. But it needs
reinterpreting when the marketplace has been partitioned and privatised.
If virtual monopolies are to be tamed, their algorithms cannot be
treated as commercially confidential when they order how debates are conducted
and information is received. Society must ensure that their rules meet the
principles of consent and fairness. As Susskind states, so long as tech firms
keep their algorithms hidden, their data policies obscure, and their values
undefined, they cannot possibly claim these forms of legitimacy.
The need for openness goes far beyond the tech giants. Already most CVs are
never seen by human eyes. Algorithms scan them and decide who should work. If
you find a job, algorithms will monitor your performance and determine whether
you stay hired or are fired. Your chances of getting a loan or insurance
depend on computer programs, as increasingly will every essential of life.
For nearly all of the history of humanity we have lived as an unmonitored
species whose deeds were largely forgotten. We are becoming a watched species
whose every action, at least in theory, is becoming recordable and retrievable.
Those who do the monitoring wield enormous power: a power that is increasingly
hard to contest. Imagine the horror of trying to interact with a
face-recognition system that does not recognise that you are you. Or look at
how the ability to record your past, to rate you as if you were on TripAdvisor,
is used in China where local governments are compiling digital records of
citizens so the trustworthy can roam where they will while malcontents are
confined.
In the west, the algorithms that are ranking people are said to be
neutral. Yet on Airbnb, visitors with African American names are
significantly less likely to be accepted than white people with identical
credentials. Sentencing algorithms in the US are once again predicting that
black offenders are more likely to reoffend than whites. And here lies a
distinctly modern arbitrariness: no one can say why. The judge does not give
his or her reasons. The code is a commercial secret. The prisoner is sent down
by a system that is more Kafkaesque than Orwellian.
Susskind is too intelligent a writer to say that state intervention is the
only answer. As China is already
suggesting, the new technologies are giving the state powers the dictators of
the past could only dream of. Rather, his work leads to demands for
transparency. As a working principle it ought to be a given that any form of
artificial intelligence that can harm the public must be publicly accountable.
If that means opening up commercially confidential information to independent
auditors, so be it.
The great debate of the 20th century was how much of our collective life
should be controlled by the market. The great debate of the 21st century will
be how much should be directed and controlled by digital systems. “We cannot
allow ourselves to become the playthings of alien powers, subject always to
decisions made for us by entities and systems beyond our control,” concludes
Susskind. It is a tribute to him that his work makes that future a little less
likely.
Future Politics: Living Together in a World Transformed By Tech – review,
By Nick Cohen. The Guardian, September 17, 2018.