In the
film The Big Sleep (1946), the private eye Philip Marlowe (played by Humphrey
Bogart) calls at the house of General Sternwood to discuss his two daughters.
They sit in the greenhouse as the wealthy widower recounts an episode of
blackmail involving his younger daughter. At one point, Marlowe interjects with
an interested and knowing ‘hmm’.
‘What
does that mean?’ Sternwood asks suspiciously.
Marlowe
lets out a clipped chuckle and says: ‘It means, “Hmm”.’
Marlowe’s
reply is impertinent and evasive, but it’s also accurate. ‘Hmm’ does mean
‘hmm’. Our language is full of interjections and verbal gestures that don’t
necessarily mean anything beyond themselves. Most of our words – ‘baseball’,
‘thunder’, ‘ideology’ – seem to have a meaning outside themselves – to
designate or stand for some concept. The way the word looks and sounds is only
arbitrarily connected to the concept that it represents.
But the
meanings of other expressions – including our hmms, hars and huhs – seem much
more closely tied to the individual utterance. The meaning is inseparable from
or immanent in the expression. These kinds of expressions seem to have meaning
more in the way that a particular action might have meaning.
Are
these two ways of meaning – designative and immanent – simply different things?
Or are they related to one another? And if so, how? These questions might seem
arcane, but they lead us back to some of the most basic puzzles about the world
and our place in it.
Human
beings are brazen animals. We have lifted ourselves out of the world – or we
think we have – and now gaze back upon it detached, like researchers examining
a focus group through one-way glass. Language is what allows us to entertain
this strange, but extraordinarily productive, thought. It is the ladder we use to
climb out of the world.
In this
way, human detachment seems to depend on the detachment of words. If words are
to keep the world at arm’s length, they must also be uninvolved in what they
mean – they must designate it arbitrarily. But if words fail to completely
detach, that failure should tell us something about the peculiar – and humble –
position we occupy ‘between gods and beasts’, as Plotinus put it.
In his
Philosophical Investigations (1953), Ludwig Wittgenstein draws a distinction
that mirrors the one between these two ways of meaning. ‘We speak of
understanding a sentence,’ he writes, ‘in the sense in which it can be replaced
by another which says the same; but also in the sense in which it cannot be
replaced by any other.’ (Marlowe evidently felt his ‘hmm’ could not be
replaced.)
The
first kind of understanding points to a peculiar aspect of words and sentences:
two of them can mean the same thing. As Wittgenstein points out, we’d never
think of replacing one musical theme with another as if they amounted to the
same thing. Nor would we equate two different paintings or two different
screams. But with many other sentences, understanding is demonstrated by putting the meaning in other words.
However,
the meanings of the music, the painting and the scream seem to be immediately
there. ‘A picture tells me itself,’ Wittgenstein writes. There is no way to
replace one expression with another without changing the meaning. In these
cases, there isn’t really a sense of a meaning apart from the expression
itself. It would be perverse to ask someone who has just let loose a chilling
scream: ‘What exactly did you mean by that?’ or ‘Could you put that another
way?’
Although
these two examples of ‘understanding’ might seem of completely different kinds,
Wittgenstein insists that they not be divorced from one another. Together, they
make up his ‘concept of understanding’. And, indeed, most of our language does
seem to lie somewhere along a spectrum between simply designating its meaning
and actually embodying it.
On one
end of the spectrum, we can imagine, as Wittgenstein does, people who speak a
language consisting only of ‘vocal gestures’ – expressions such as ‘hmm’ that
communicate only themselves. On the other end lies ‘a language in whose use the
“soul” of the words played no part’. Here, ‘meaning-blind’ people, Wittgenstein
writes, would use words without experiencing the meanings as connected to the
words at all. They would use them the way a mathematician uses an ‘x’ to
designate the side of a triangle, without the word seeming to embody the
meaning in any way.
But
neither of these imaginary languages seems capable of anything like the range
and expressive richness of actual human language. The former seems to place
human language (and our world) closer to that of animals and infants; the
latter, closer to that of computers, for whom it couldn’t matter less how
something is said.
Still,
the examples might provide some clue as to how these ways of meaning relate to
each other. The language of gesture would seem to have to come before the
language of signs. It’s difficult to imagine a little girl first learning to
communicate her needs with arbitrary signs, and only later learning how to
communicate by gesture.
Even
once we do come to use words in an arbitrary, designative manner, they – at
least, many of them – still seem to have their meanings in themselves. When I
first learn that the French ‘livre’ means book, the word is associated with its
meaning only in a mediated manner. I remain, at this stage, meaning-blind with
respect to the word. I know what it means, but its meaning doesn’t resonate in
the material aspects of the word. As I become more fluent in French, however,
the word’s meaning becomes sedimented in it. ‘Livre’ begins to sound like it means
what it means.
Full
understanding, in Wittgenstein’s sense, seems to involve not just being able to
replace ‘livre’ with ‘book’, but also experiencing the meaning in the
word. To put it another way, ‘livre’ might mean book but it doesn’t mean it the
way ‘book’ does.
We can
of course imagine a person (or machine) using words competently without having
this experience of meaning, but is what we imagine really human language use?
It’s hard to see how such a person would have access to the whole range of
practices in which we use words. Subtleties in certain jokes or emotional
expressions would escape them. Meaning is more sunken into words than the
practice of replacing one term with another suggests.
The idea
that words themselves might harbour meaning used to be more intellectually
respectable. Deciphering the relationship between what words mean and how they
sound, which seems absurd with all but a small subset of our vocabulary, used
to be of great interest. In Plato’s Cratylus, the title character indulges in
the speculation, common at the time, that certain words are correct: that they
name the things they refer to accurately. Etymology can therefore provide
insight. ‘Anyone who knows a thing’s name also knows the thing,’ Cratylus says.
Plato’s
Socrates prefers to gain insight into things by grasping the ‘forms’ behind
them, instead of through the contingent, and often mistaken, names given to
them. The production of names or words – ‘onomatopoieo’ in Ancient Greek –
tells us only how an individual name-giver saw things, Socrates tells Cratylus.
There’s no way to adjudicate the ‘civil war among names’ and decide which get
at the truth.
Today,
we use the concept of onomatopoeia in a more restricted way. It is applied only
to words for sounds – ‘boom’, ‘gasp’, ‘splash’ – that bear a mimetic
relationship to a sound in nature. The connection might be more indirect and
tenuous in other cases, as in apparently onomatopoetic words for motions such
as ‘slither’ or ‘wobble’ that seem, through a kind of synaesthesia, to imitate a
sound that might accompany the motion.
But
Socrates and Cratylus were also talking about what we now call sound symbolism,
a much wider range of connections between sounds and what they mean. These
include things such as the association in English and related languages between
the ‘gl-’ sound and light, as in ‘glisten’, ‘glint’, ‘glimmer’ and ‘glow’.
Does
this sound have an onomatopoetic connection to light? Or is it just an
arbitrary connection that has come to ‘feel’ nonarbitrary to native speakers?
The question is difficult to even ponder. Asking how ‘gl-’ relates to light is
a little like enquiring after the connection between sad music and sadness. We
can point to feelings and sensations that suggest they belong together, but we
struggle to come up with an objective arbiter of the connection outside our own
experience. This, as I’ll come to, is because the articulation itself produces
or constitutes the connection.
Studies
have established that the connections between things and sounds are
nonarbitrary in many more of these cases than it would seem at first glance.
There seem to be universal or near-universal synaesthetic connections between
particular shapes and sounds. But, in a certain sense, the objectivity of the
connection is beside the point. These connections still won’t underwrite the
kinds of hopes that Cratylus had for etymology. At most, they indicate certain
affinities between aspects of things – shape, size, motion – and particular
sounds that the human vocal apparatus can produce. But even if the connection
between ‘glow’ and glowing were not based on any verifiable affinity, the
word’s meaning is still accompanied by its sound. ‘Glow’ would still glow with
glowing.
What is
noteworthy here is the human capacity not only to recognise but to produce,
transform and extend similarities as a way of communicating meaning. In a short
and opaque essay from 1933, Walter Benjamin refers to this capacity as the
‘mimetic faculty’ and suggests it is the foundation of human language. Language
is an archive of what he calls ‘nonsensuous similarities’ – similarities
produced by human practices and human language that don’t exist independently
of them. Nonsensuous similarity, Benjamin writes, establishes ‘the ties between
what is said and what is meant’. He suggests that fully developed human
language emerges out of more basic imitative practices both in the life of the
child and in the evolution of language itself.
The
suggestion that all language is onomatopoeic becomes here not a thesis about
any independent relationship obtaining between words and the world, as it was
for Cratylus, but one about human creativity and understanding: our ability to
produce and see correspondences or, as Benjamin discusses elsewhere, to
translate into words the meaning communicated to us through experience.
This
mimetic faculty is not just active in the ‘name-giving’ that establishes
connections between language and objects, but in the ways in which established
language is extended. The philosopher Charles Taylor writes in The Language
Animal (2016) about ‘the figuring dimension’ of language: the way we use
language from one domain to articulate another. Physical language of surface
and depth, for example, permeates our emotional and intellectual language –
‘deep thoughts’, ‘shallow people’. Words of physical motion and action permeate
more complex and abstract operations. We ‘grasp’ ideas, ‘get out of’ social
obligations, ‘bury’ emotions.
These
produced similarities between the physical and the social or intellectual world occupy
a similar space to the ‘gl-’ of ‘glow’. They are certainly not arbitrary,
and yet they can’t really be justified by any criteria outside the figuration
itself.
In many
cases, they are much more than metaphors, since they are indispensable for our
very conception of the matters they describe. Can we describe the depth of a
deep thought without drawing in some way on the concept of depth? Language,
here, as Taylor puts it, constitutes meanings: ‘The phenomenon swims into our ken
along with its attribution.’ These cases suggest that, not only is meaning
sunken into words, it is simply unavailable without their articulation.
This
kind of articulation is more familiar in arts like painting and music. Words
such as ‘deep’ and ‘glow’ can be thought of as analogous to particular notes
that figure a particular kind of experience in a particular way, and so bring
it into greater relief. Wittgenstein writes that ‘understanding a sentence in
language is much more akin to understanding a theme in music than one may
think’.
Moreover,
as they become conventional, both linguistic and musical phrases open up new
avenues for variation and combination that enable ever more fine-grained
articulation and even the expression of entirely new phenomena and feelings. As
Herman Melville wrote in his novel Pierre (1852): ‘The trillionth part has not
yet been said; and all that has been said, but multiplies the avenues to what
remains to be said.’
There
is, of course, something very different about understanding a sentence and
understanding a theme in music. We can replace the words ‘glow’ and ‘deep’ in
most contexts in which they appear without much fuss – with, say, ‘shine’ and
‘profound’. We can even imagine using utterly different words in their place,
stipulating, for example, that ‘rutmol’ will replace ‘deep’ in the dictionary.
After a period of consistent use, ‘rutmol’ might even be as expressive of depth
as ‘deep’ is now. We might begin to ponder ‘rutmol thoughts’ and be shaken by
‘rutmol wellsprings of emotion’.
Now,
imagine a film scene portraying a cheerful family gathering around a sun-soaked
table. Instead of the customary bright melody, the soundtrack consists of random
musical notes. No matter how many times we watch it and try to make the score express
‘cheerfulness’, it would never feel right. The music couldn’t be heard as
cheerful if it wasn’t. It might convey, instead, that not everything is as it
seems with this family.
The
meaning of ‘deep’ is detachable from ‘deep’ in a way that the meaning of the
melody is not. Words can stand for things in a way that music can’t. This is
what drove Socrates in Cratylus to stop wondering how words such as ‘deep’
related to deep things, and ask instead what deep things could tell us about
the idea of depth. Once we get the messy words out of the way – which might
barely have anything to do with the things anyway – then we can contemplate the
things abstractly.
This
ability to put the expressions for things to the side to focus on the things
themselves seems integral to the unique relationship we have to the world.
While animals (and small children) might be able to respond to signs as
stimuli, and even use them in a rudimentary way to advance particular ends,
they don’t seem to have the objects the way we do. They are too close to
things, unable to step back and abstract from their concrete appearance. This
makes them, as Heidegger puts it, ‘poor in world’.
When I
tell my dog to get his leash, what he ‘understands’ remains an element of the
immediate environment in which the expression appears, indicating a path to a
particular end – in this case, a walk. For the dog, words have meaning the same
way that the sound of my car pulling into the driveway has meaning. The word
‘leash’ might have only an arbitrary connection to the object for me, but for
my dog it is inseparable from the situation. If I ask him to get the leash in
the wrong context – when we’re already on our walk, for example – it means
nothing.
But what
is it exactly for ‘leash’ to symbolise a leash? This might seem self-evident:
human beings come along, find the world and its furniture sitting there, and
simply start tagging it arbitrarily with signs. But we forget the role the
words played in lifting us into this perspective on things. Would we be able to
conceive of the leash the way we do if we couldn’t call it anything else? In
other words, what role does the arbitrariness of the word – the fact that we
can replace it – play in the constitution of the things as independent objects?
And,
moreover, how exactly do we get from the kind of immanent engagement with the
concrete world that dogs seem unable to tear themselves out of to the position
of disengaged spectator naming things willy-nilly?
In
another difficult essay, Benjamin reads the story of Adam and Eve’s exile from
the Garden of Eden as a kind of parable about the ‘externalisation’ of meaning.
‘On Language as Such and on the Language of Man’ (1916) begins by positing an
absolutely general definition of meaning: ‘we cannot imagine a total absence of
language in anything’. And so Benjamin understands ‘language as such’ as a very
basic sense of meaning – which Genesis construes as God’s creative speech. It
is a fundamental given and constituent aspect of reality, of being itself.
‘Language as such’ means in the way that pictures and ‘hmm’ mean. It means
itself.
Human
language begins by naming this always-already meaningful reality, engaging with
it by imitating it in onomatopoeia and figurative articulation. Adam’s naming
of the animals is the biblical parallel. But in the Fall, this immanent and
expressive meaning – which means in the same way that literally everything else
means – is externalised into the human word. We abandon ‘the communication of
the concrete’ and immediate, Benjamin writes, for abstract and mediate words
that vainly purport to stand for things instead of just imitating aspects of
them. As a result of this Fall, we exile ourselves from Eden – immediate
engagement with the natural world – and from our own bodies, which we now also
experience, to our great shame, as objects.
For
Benjamin, designative language and the world of objects that it brings about
are made possible by a kind of forgetfulness. Our language is composed of words
that have undergone countless transformations and whose original mimetic
connections to reality have been lost. The immediate meaning of figurative
imitations and metaphors dies and detaches from the concrete contexts in which
it originally communicated.
Language
is dead art, still connected to the things but so withered it now appears as
only an arbitrary and abstract sign. Words appear to us like a faded landscape
painting appears to a half-blind man. He no longer makes out the figure
depicted but, remembering what it signifies, he instead takes the shapes as
tokens. Using them, he can now refer to mountains and streams in the
abstract, no longer constrained by the immediacy of the scene, and free to
replace the shapes with another expression if he likes. He gains a world only
by losing his ability to really see it.
Does
this new kind of interchangeable expression constitute a different way of
meaning from the kind of meaning that ‘tells me itself’? Wittgenstein is wary
of the very concept of meaning detached from expression, not only in the latter
case but also in the case of interchangeable words. He doesn’t refer to the two
sentences as meaning the same thing, but only to the practice we have of
replacing one with another.
Whenever
we enquire after the meaning of a word, we never get the thing that is meant –
a permanent definition that underlies the word – but only another way of saying
it. Despite its pretensions, the dictionary is no more than a pedantic and
overexacting thesaurus. It doesn’t offer meaning, only other words.
Dictionary
definitions can encourage in us a sense of words as signs representing fuller
meanings or content that are in some sense ‘inside’ or ‘underneath’ them. But
when we analyse meaning, we are usually making only lateral moves, not
‘excavating’ anything. These are in reality interpretations that exist ‘on the
same level’ as what is interpreted. ‘Every interpretation,’ Wittgenstein
writes, ‘hangs in the air together with what it interprets, and cannot give it
any support.’
When we
understand – despite how the word sounds – we don’t get at something beneath
what is understood, but are simply able to provide another, perhaps better, way
of saying it. The way the intellectual domain has been figured by words
originating in our interaction with the physical environment can mislead us
into the abstruse, multilevel ontologies that plague philosophy.
Wittgenstein
is not suggesting that we discard these metaphors of our intellectual life that
are constituted by our language. It is unclear that we even can, or what it
would mean if we did. They are part of what Wittgenstein calls our ‘form of
life’. Likewise, the distance from the world that language provides and reinforces
is indispensable, both in everyday communication and in modelling the world
scientifically, but it can lead us astray if we take it too seriously, as we
often do. ‘The best I can propose,’ Wittgenstein says of one of the pictures
arising from our designative language, ‘is that we yield to the temptation to
use this picture, but then investigate what the application of the picture
looks like.’ If we look at how these words are used, Wittgenstein thinks, the
question of what they really mean or refer to will dissolve.
This
solution distinguishes Wittgenstein from postmodern theorists who take the
limitations of our language and the impossibility of pure objectivity as reason
to reject ‘Enlightenment’ reason. Those who pretend to see through ‘the myth’
of objectivity are on no firmer ground than those who cling to it. If anything,
the former pull themselves out even further than the latter, pretending to
watch the watchers.
One
version of this view sees us stuck in what Friedrich Nietzsche called the
‘prison-house of language’. To Nietzsche and others, we are confined within our
own meagre language and its presumptuous abstractions, which fall short of the
real world even while they purport to describe it truthfully. Language is
deemed inadequate to the world, an implausible instrument for pursuing and
expressing truth.
But this
view assumes exactly the same division between language and world as the one it
criticises: it’s just less sanguine about reaching across the divide. To both
ways of thinking, whether we can reach it or not, there is something out there:
the way things are, which language is meant to designate. But ‘the great
difficulty here’, Wittgenstein writes, ‘is not to represent the matter as if
there were something that one couldn’t do.’ For him, it is the divide itself,
which places language on one side and the world on the other, that needs to be
questioned, not whether the divide can be bridged.
This is
not to say that the divide should be regarded as a fiction. It is, rather, an
achievement, but one with certain limits that are easily forgotten.
Wittgenstein’s later writing takes on the aspect of therapy because it tries to
draw attention to the moments, in philosophy especially, where removing
language from the contexts in which it has a use lends that language a kind of
magical power and leads to confusion. We begin to puzzle about what the word
refers to out there in the world, instead of attending to what it actually does
in particular linguistic practices – what it tells us.
These
problems are not only philosophical. In all kinds of domains – science,
technology, politics, religion – we are prone to taking useful interpretations
and turning them into frozen and potentially dangerous ideologies. Instead of
looking at the concrete application of the words, we disengage them from
practice, and endow them and the pictures they generate with greater reality
than reality itself. We side with the words even when they begin to contradict
the reality.
There
is, in the end, only one kind of meaning. As Wittgenstein puts it, if the
abstractions of philosophy are to have a use, ‘it must be as humble as that of
the words “table”, “lamp”, “door”.’ He might have added ‘hmm’.
The way words mean. By Alexander Stern. Aeon, September 3, 2019.
The Fall of Language: Benjamin and Wittgenstein on Meaning. Harvard University Press.
In a
little-watched 1947 comedy, The Sin of Harold Diddlebock, directed by Preston
Sturges, the title character, an accountant, takes shelter from the
bureaucratic drudgery of his existence behind a literal wall of clichés.
Diddlebock covers every inch of the space next to his desk with wooden tiles
engraved with cautious slogans like “Success is just around the corner” and
“Look before you leap.” Head down, green visor shielding him from the light, he
passes the years under the aegis of cliché, until one day he’s fired, forced to
pry the clichés off his wall one by one, and face the world — that is,
Sturges’s screwball world — as it is.
The
French word “cliché,” as Sturges must have known, originally referred to a
metal cast from which engravings could be made. Diddlebock’s tiles point to one
of the primary virtues of cliché: it is reliable, readymade language to calm
our nerves and get us through the day. Clichés assure us that this has all
happened before — in fact, it’s happened so often that there’s a word, phrase,
or saying for it.
This
points to one of the less remarked upon uses of language: the way it can,
rather than interpret the world or make it available, cover it up. As George
Orwell and Hannah Arendt recognized, this can be socially and politically
dangerous. Arendt’s Eichmann spoke almost exclusively in clichés, which, she
wrote, “have the socially recognizable function of protecting us against
reality.” And, referring to go-to political phrases like “free peoples of the
world” and “stand shoulder to shoulder,” Orwell wrote:
A
speaker who uses that kind of phraseology has gone some distance towards
turning himself into a machine. The appropriate noises are coming out of his
larynx, but his brain is not involved as it would be if he were choosing his
words for himself. If the speech he is making is one that he is accustomed to
make over and over again, he may be almost unconscious of what he is saying, as
one is when one utters the responses in church. And this reduced state of
consciousness, if not indispensable, is at any rate favorable to political
conformity.
Orwell’s
critique of political language was grounded in a belief that thinking and
language are intertwined. Words, in his view, are not just vehicles for
thoughts but make them possible, condition them, and can therefore distort
them. Where words stagnate, thought does too. Thus, ultimate control in 1984 is
control over the dictionary. “Every year fewer and fewer words, and the range
of consciousness always a little smaller.”
Orwell
worried not just about the insipid thinking conditioned and expressed by
cliché, but also the damaging policies justified by euphemism and inflated,
bureaucratic language in both totalitarian and democratic countries. “Such
phraseology,” he wrote — such as “pacification” or “transfer of population” — “is needed
if one wants to name things without calling up mental pictures of them.”
In our
democracies words may not mask mass murder or mass theft, but they do become a
means of skirting or skewing difficult issues. Instead of talking about the
newest scheme to pay fewer workers less money, “job creators” talk about
“disruption.” Instead of talking about the end of fairly compensated and
dignified manual labor, politicians and journalists talk about the “challenges
of the global economy.” Instead of talking about a campaign of secret
assassinations that kill innocent civilians, military leaders talk about the
“drone program” and its “collateral damage.”
The
problem is not that these phrases are meaningless or can’t be useful, but that,
Diddlebock’s desperate clichés, they direct our thinking along a predetermined
track that avoids unpleasant details. They take us away from the concrete and
into their own abstracted realities, where policies can be contemplated and
justified without reference to their real consequences.
Even
when we set out to challenge these policies, as soon as we adopt their prefab
language we channel our thoughts into streams of recognizable opinion. The
“existing dialect,” as Orwell puts it, “come[s] rushing in” to “do the job for
you.” Language, in these cases, “speaks us,” to borrow a phrase from Heidegger.
Our ideas sink under the weight of their own vocabulary into a swamp of jargon
and cant.
Of
course politics is far from the only domain where language can serve to protect
us from reality. There are many unsightly truths we screen with chatter. But
how exactly is language capable of this? How do words come to forestall
thought?
I.
The
tendency of abstracted and clichéd political language to hide the phenomena to
which it refers is not unique. All language is reductive, at least to a degree.
In order to say anything at all of reality, language must distill and thereby
deform the world’s infinite complexity into finite words. Our names bring
objects into view, but never fully and always at a price.
Before
the term “mansplaining,” for example, the frustration a woman felt before a
yammering male egotist might have remained vague, isolated, impossible to
articulate, unnameable (and all the more frustrating therefore). With the term,
the egotist can be classed under a broader phenomenon, which can then be
further articulated, explained, argued over, denied, et cetera. The word opens
up a part, however small, of the world. (This domain, unconscious
micro-behavior and its power dynamics, has received concentrated semantic
attention of late: “microaggression,” “manspreading,” “power poses,” “implicit
bias”.)
But,
like all language, the word is doomed to fall short of the individual cases. It
will fit some instances better than others, and, applied too broadly, it will
run the risk of distortion. Like pop stars and pop songs, words tend to
devolve, at varying speeds, from breakthrough to banality, from innovative
truth to insipid generalization we can scarcely bear to listen to. As words
age, we tend to forget the work they did to wrench open a door into the world
in the first place. In certain cases, this means the door effectively closes,
sealing off the reality behind the word, even as it names it.
When a
word is new, the reality it opens up is too, and it remains fluid to us. We can
still experience the poetry of the word — we can see the reality as the coiner
of the word might have. We can also see how the word approaches its object, and
for that matter that it does approach the object from a certain perspective.
But over time, the door can close and leave us only with a word, one that was
never capable of completely capturing the thing to begin with.
Nietzsche,
for example, rejected the notion that the German word for snake, Schlange,
could live up to its object, since it “refers only to its winding [schlingen
means “to wind”] movements and could just as easily refer to a worm.” The idea
that a medium evolving in such a partial, anthropocentric, and haphazard manner
might be able to speak truth about the world, when it could only ever take a
sliver of its objects into account, struck him as laughable. He famously wrote:
What is
truth? A mobile army of metaphors, metonymies, anthropomorphisms — in short a
sum of human relations, which are poetically and rhetorically intensified,
passed down, and embellished and which, after long use, strike a people as
firm, canonical, and binding.
Language
is made up of countless corrupted, abused, misused, retooled, coopted, and
eroded figures of speech. It is a museum of disfigured insights (arrayed
alongside trendy innovations) that eventually congeal into the kind of insipid
prattle we consider suitable for stating facts about the world.
Still,
we may not want to go as far as Nietzsche in denying language its pretension to
objective truth simply because of its unseemly evolution. Language, in a
certain sense, needs to be disfigured. In order to function at all, it must be
disengaged and its origins forgotten. If using words constantly required the
kind of poetic awareness and insight with which Adam named the animals — or
anything approaching it — we’d be in perpetual thrall to the singularity of
creation. We might have that kind of truth, but not the useful facts we need to
manage our calendars and pay our bills. If the word for snake really did
measure up to its object, it would be no less unique than an individual snake —
more like a work of art than a word — and conceptually useless. Poetry, like
nature, must die so we can put it to use.
II.
Take the
phrase the “mouth of the river” — an example Elizabeth Barry discusses in a
book on Samuel Beckett’s use of cliché. We certainly don’t have an anatomical
mouth in mind anymore when we use that phrase. The analogy that gave the river
its mouth is, in this sense, no longer active; it has died and, in death,
become an unremarkable, unconscious, and useful part of our language.
This
kind of metaphor death is typical of, perhaps even central to, the history of
language. As Nietzsche’s remarks on the word “snake” suggest, our sense that
when we use “mouth” to refer to the cavity between our lips we are speaking
more literally than when we use the phrase “mouth of a river” is something of
an illusion. It is a product of time and amnesia, rather than some hard and fast
distinction between the literal and the figurative. The oral “mouth” was once
figurative too; it has just been dead longer.
Clichés,
as Orwell writes of them, are one step closer to life than river “mouths.” They
are “dying,” he says. Not quite dead, since they still function as metaphor,
neither are they quite alive, since they are far from vibrant figurative
characterizations of an object. Orwell’s examples include “toe the line,”
“fishing in troubled waters,” “hotbed,” “swan song.”
These
metaphors are dying since we use them without seeing them. We don’t see, except
under special conditions, the line to be toed, the unhatched chickens we should
resist counting, the dead horse we may have been flogging. We may even use
these phrases without understanding the metaphor behind them. Thus, Orwell
points out, we often find the phrase spelled “tow the line.”
But they
haven’t become ordinary words, like the river’s “mouth.” We’re much more aware
of their metaphorical character and they are much more easily brought back to
life. They remain metaphorical even if their metaphors usually lie dormant.
This gives them the peculiar ability to express thoughts that haven’t, strictly
speaking, been thought. We don’t so much use this kind of language as submit to
it.
This
in-between character is what leaves cliché susceptible to the uses of ideology
and self-deception. We come to use cliché like it’s ordinary language with
obvious, unproblematic meanings, but in reality, the language remains a
particular and often skewed or softened interpretation of the phenomena.
In
politics and marketing, spin-doctors invent zombie language like this in a
relatively transparent way — phrases that present themselves as names when they
really serve to pre-interpret an issue. “Entitlement programs.” “A more
connected world.” These phrases answer a question before it is asked; they try
to forestall alternative ways of looking at what is referred to. Cliché is the
bane of journalism not just for stylistic reasons, but also because it can
betray a news story’s pretense of objectivity — its tendency to insinuate its
own view into the reader.
Euphemistic
clichés are particularly good at preventing us from thinking about what they
designate. Sometimes this is harmless: we go from “deadborn” to “stillborn.”
Sometimes it isn’t. We go, as George Carlin documented, from “shell shock” to
“battle fatigue” to “post-traumatic stress disorder,” to simply “PTSD.” By the
time we’re done, the horrors of war have been replaced by the jargon of a car
manual, and a social pathology is made into an individual one. To awaken the
death and mutilation beneath the phrase takes an effort most users of the
phrase won’t make.
This is
how a stylistic failure can pass into an ethical one. Language stands in the
way of a clear view into an object. The particular gets buried beneath its
name.
We might
broaden the definition of cliché here to include not just particular overused,
unoriginal phrases and words, but a tendency in language in general to get
stuck in this coma between life and death. Cliché is neither useful, “dead”
literal language that we use unthinkingly to get us through the day, nor
vibrant, living language meant to name something new or interpret something old
in a new way. This latter type of language figures or articulates its object,
translating something in the world into words, modeling reality in language and
making it available to us in a way it wasn’t before. The former, “dead”
language need only point to its object.
The
trouble with cliché is that it plays dead. We use it as dead language, simply
pointing at the world, when it is really figuring it in a particular way. It is
language that has ceased to seem figurative without ceasing to figure, and it
is this that accounts for its enormous usefulness in covering up the truth or
reinforcing ideology. In cliché, blatant distortions wear the guise of mundane
realities. In the process, it is not just language that is deadened, but
experience itself.
III.
I want
to suggest now that in popular language today irony has become clichéd.
Consider “binge-watching.” The phrase might seem too new to really qualify as a
cliché. But, in our ever-accelerating culture a clever tweet can become a
hackneyed phrase with a single heave of the internet. In “binge-watching” what
has become clichéd is not a metaphor, but a particular tone — irony.
“Binge-watching”
is of course formed on the model of “binge-drinking” and “binge-eating.”
According to the OED, our sense of “binge” comes from a word originally meaning
“to soak.” It has been used as slang to refer to getting drunk (“soaked”) since
the 19th century. A Glossary of Northamptonshire Words and Phrases from 1854
helpfully reports that “a man goes to the alehouse to get a good binge, or to
binge himself.” In the early 20th century the word was extended to cover any
kind of spree, and even briefly, as a verb, to mean “pep up” as in “be
encouraging to the others and binge them up.”
Like
most everything else, binging started to become medicalized in the 1950s.
“Binge” was used in clinical contexts as a prefix to “eating” and “drinking,”
and both concepts underwent diagnostic refinement in the second half of the
century. Binge-eating first appeared in the DSM in 1987. Binge drinking is now
defined by the NIH as five drinks for men and four for women in a two-hour
period.
By the
time “binge-watch” was coined and popularized (Collins Dictionary’s 2015 word
of the year), the phrase “binge-drinking” had moved from the emergency room to
the dorm room. The binge-drinkers themselves now refer to binge-drinking as
“binge-drinking” while they’re binge-drinking. In a way, they’re taking the
word back to its slang origins, but, in the meantime, it has accumulated some
serious baggage.
The
usage is ironic, since when the binge-drinkers call binge-drinking
“binge-drinking,” they don’t mean it literally, the way the nurse does in the
hospital the next morning. Take this example from Twitter. “I’m so blessed to
formally announce that I will continue my career of binge drinking and self
loathing at Louisiana state university!” The tweet is a familiar kind of
self-deprecating irony that aims, most of all, to express self-awareness.
It
elevates the user above the concept of binge-drinking. If I actually had a
problem with binge-drinking, the writer suggests, would I be talking about it
like this? By ironically calling binge-drinking “binge-drinking,”
binge-drinkers deflect anxiety that they might have a problem with
binge-drinking. They preempt criticism or concern by seeming to confess everything
up front.
The
phenomenon is part of a larger incursion of the language of pathology into
everyday life, where it has, in many contexts, replaced the language of
morality. Disease is no longer just for sick people. Where we might have once
worried about doing wrong, we now worry about being ill. One result is that a
wealth of clinical language enters our everyday lexicon.
These
words are often not said with the clinical definition in mind. “I’m addicted to
Breaking Bad/donuts/Zumba.” “I’m so OCD/ADD/anorexic.” “I’m such an alcoholic/a
hoarder.” “I was stalking you on Facebook and I noticed…” They are uttered
half-ironically to assure you that the utterer is aware that his or her
behavior lies on a pathological continuum. But that very awareness is marshaled
here as evidence that the problem is under control. It would only really be a
problem if I didn’t recognize it, if I were in denial.
This is
a kind of automatic version of the talking cure. If I’m not embarrassed to call
my binge-drinking “binge-drinking,” it must mean it’s not a problem. In effect,
we hide these problems in plain sight, admitting to them so readily, with just
the right dose of irony, that we don’t have to confront them. This
self-awareness of the compulsive character of much of our behavior becomes in
essence a way to cover it up.
“Binge-watching”
accomplishes this move even more effectively, since it beats the clinic to the
punch. It doesn’t ironize an extant clinical term, but instills irony in the
very name of the activity — watching an obscene amount of television to block
out the existence of the outside world. It can’t even be referred to without
the term. Irony is thus built into any discussion of the activity.
“Binge-watching” neutralizes critique. Taking it seriously is made reactionary
and moralizing by the term itself.
Irony,
in effect, becomes clichéd here. In the same way as cliché, the irony in the
phrase “binge-watching” hovers comatose between life and death. Just as the
dead horse in the cliché stops being really experienced as a metaphor,
“binge-watching” stops being experienced as irony. The irony that animated the
initial use of the term fades away and it is used as a simple name for an
activity. But the irony doesn’t disappear completely. It still serves to automatically
undercut the possibility of critique through the ironic admission of a
compulsion. We no longer really experience the irony when we use the term, but
it insinuates itself into our expressions and becomes thereby all the more
effective.
This means
that we get the benefits of irony — its feeling of distance and superiority —
without having done any of the work of critical negation. Actual literal
engagement with the activity gets covered over by a clichéd irony. When this
pseudo-confessional irony is embedded in the name for the thing itself, we
prevent it from really being seen. The name conceals the named. We cease to see
people sitting in front of a screen for hours at a time, unable to tear
themselves away, infantilized, stupefied by cheap cliffhangers and next episode
autoplay, some rendered so impatient they feel compelled to consume the content
on fast forward. Instead, they are simply “binge-watching” — engaged in a
socially sanctioned activity with a cute name.
IV.
This
clichéing of irony undercuts its social significance. The Greek eironeia meant
“feigned ignorance.” Socrates was ironic because he affected not to know when
he questioned accepted wisdom. It was a way of undermining thoughtlessly held
beliefs. Irony has come (“ironically”) to mean almost the opposite. We now
engage in clichéd mass irony in order to feign awareness of things we really
aren’t aware of. We pay lip service to critique in order not to confront it
head-on. Irony props up thoughtlessly held beliefs — clichés that we stamp on
experiences before we have a chance to really experience them.
There is
an ongoing complaint, periodically renewed by a spate of books or think-pieces,
that sees us plagued by ironic language, attitudes, dress, et cetera. These are
taken as signs of cynicism, alienation, disaffection. But, in reality, irony
has become a coping mechanism, built-in critique that is not cynical,
alienated, or disaffected enough. Half-sensing that something is wrong, we
gesture toward critique, toward an external, alienated view of our
circumstances, so we don’t have to take it up.
The
pseudo-awareness that this engenders comes to conceal not just personal
failings, but also the moral compromises extracted by modern bureaucracies.
Indeed it is one of the remarkable and terrifying things about a bureaucracy —
perhaps about our culture as a whole — that everyone in it seems to know
better.
Diddlebock’s
self-delusion looks naïve, but our protective clichés — not imprinted on
woodblocks that hang above our heads, but integrated into our everyday speech —
are just as damning. We deliver a series of pre-packaged postures and
ready-made ironies on cue to keep the realities of our culture from making
their repeated, compulsive appearance in full force. They serve the same
purpose as Diddlebock’s tiles: to assure us that what we’re doing is all right
— what else can we do? — and to keep us from seeing things as they are. This
refuge quickly becomes a prison.
Bingespeak.
By Alexander Stern. Los Angeles Review of Books, April 1, 2018.
"[Trump]
is like the blond alien in the 1995 movie ‘Species,’ who mutates from ova to
adult in months, regenerating and reconfiguring at warp speed to escape the
establishment, kill everyone in sight and eliminate the human race."
—
Maureen Dowd
"Australia
is like the Robert Horry or Philip Baker Hall of continents — it just makes
everything it touches a little bit better." — Bill Simmons
An
analogy is, according to Webster’s, "a comparison of two things based on
their being alike in some way." The definition seems to capture exactly
what Simmons, a sports commentator, and Dowd, a New York Times columnist, are
doing in the sentences above: comparing two things and explaining how they’re
alike. Being a dictionary, however, Webster’s has little to say about why we
use analogies, where they come from, or what role they really play in human
life.
Analogies
need not, of course, all have the same aim. They’re used in different contexts
to varying effect. Still, it is evident that we use analogies for mainly
rhetorical reasons: to shed light, to explain, to reveal a new aspect of
something, to draw out an unseen affinity, to drive home a point. As
Wittgenstein wrote, "A good simile refreshes the mind."
This,
Simmons’s and Dowd’s analogies demonstrably fail to do. Our understanding of
Trump is unlikely to benefit from an attentive viewing of Species. The careers
of the basketball player Robert Horry and the actor Philip Baker Hall,
admirable though they may be, leave Australia similarly unilluminated. This
kind of analogy — which often consists of an ostensibly funny pop-culture
reference or of objects between which certain equivalences can be drawn (x is
the y of z’s) — has become increasingly common.
You also
find it in academic writing. For example, from the journal Cultural Critique:
"Attempting to define multiculturalism is like trying to pick up a
jellyfish — you can do it, but the translucent, free-floating entity turns
almost instantly into an unwieldy blob of amorphous abstraction." The
analogy aims not to enlighten, but to enliven, adorn, divert.
Of
course there’s nothing wrong with this, as far as it goes, but its increasing
prominence reflects more general changes in the way we relate to the world
around us.
The
virtue of analogies for Wittgenstein consists in "changing our way of
seeing." Experience is diffuse, fragmented, and isolated — modern
experience increasingly so. A good analogy leaps across a wide terrain of
experience to reveal connections between domains that we wouldn’t have thought
had anything to do with one another. In so doing, the analogy produces the
feeling of renewal to which Wittgenstein refers. It brings us up for air,
elevating us into a broader expressive context that allows us to see a given
phenomenon in the light of another.
Analogy,
in this sense, is not just a useful technique that colors some component of an
explanation or a topping for an argument. It is often the explanation itself.
Analogical reasoning is, furthermore, fundamental to the way we get around in
the world. When we’re confronted with something new, we resort to analogy to
try to come to terms with it.
In their
book on analogy, Surfaces and Essences (Basic Books, 2013), Douglas Hofstadter
and Emmanuel Sander write of Galileo’s discovery of the moons of Jupiter as the
complex analogical application of the idea of the Moon (at the time the word
only referred to our moon) to the bits of light reaching his eye through his
telescope. This analogy, quite literally, changed Galileo’s way of seeing, and
in so doing, changed the Moon into a moon.
Like
Wittgenstein, Aristotle emphasized the connection between analogy and
perception. "To use metaphor well," he wrote, "is to discern
similarities." A good analogy implies an imaginative and attentive grasp
of the world — an ability, as Wittgenstein puts it, to "see
connections."
One of
Wittgenstein’s own more illuminating analogies sees language as an old city —
"a maze of little streets and squares, of old and new houses, of houses
with extensions from various periods, and all this surrounded by a multitude of
new suburbs with straight and regular streets and uniform houses." The
analogy is remarkable because it gives us a physical representation of
language. It puts the irregular verbs, baroque rules for tense formation, and
irrational particles of the old city side by side with the subdivisions of
scientific notation and M.B.A.-speak. What we see, Wittgenstein hopes, is no
longer a means of expression riddled with ancient inefficiencies that could be
refined or formalized, but a complicated medium where various eras, cultures,
and visions of human life interact with one another to give a language its
expressive life.
The
affinity between language and a city is importantly not an accident. Both are
built out of human expression and ingenuity. Both progress by way of the aging
and sedimentation of these initial sparks of invention. Like a new coinage, a
new building excites, inspires, offends, and then, if it is built to last,
fades into the skyline, becomes cliché. Both language and the city are
contrivance become convention. Of course, we can always take a step back and
recognize their artifice and contingency. With effort we can once again see
them as they were seen at birth. We can estrange ourselves, and tend to when
we’re traveling or learning another language. We can ask, for example, why we
say "cut to the chase," or why they chose those colors for the stoplight,
or why "deep thinking" is thought of as deep.
Analogies
like Wittgenstein’s accomplish a similar kind of estrangement. They lift the
reader into a higher register, militate against the biases that naturally arise
from sealed, prolonged existence inside particular languages, communities,
ideological formations — ways of thinking. These analogies are discoveries,
rather than inventions. They light up a single pathway across an expansive
terrain — instead of leaping haphazardly from one thing to another. They
resemble the way a translator finds an apt way to render a foreign expression
in his native tongue, or how a physicist, scales falling from her eyes,
suddenly sees an old problem in a new way.
It would
be manifestly unfair to hold Simmons-and-Dowd-like analogies to such a
standard. They don’t pretend to seek this level of illumination. In fact, they
don’t seek illumination at all — just entertainment. They follow on a long
tradition of analogy in comedy that has made its way into public discourse.
The fake
news in particular makes heavy (sometimes tiresome) use of this kind of
analogy. Jon Stewart, for example, compared the NSA phone-surveillance program
being ruled in violation of the Patriot Act to someone getting thrown out of an
Olive Garden for eating too many breadsticks. At its best, this kind of analogy
denigrates self-important individuals or institutions by partnering them with
the mundane. George Carlin once compared organized religion to an orthotic
insole. At its worst, though, this kind of analogy simply serves as distraction
for an audience conditioned to check their phones at the first sign of boredom.
Here, the point is not to bring the self-important down to earth but to keep
the audience from looking at Facebook.
This is
especially evident on the HBO show of the Stewart acolyte John Oliver. There
the analogies are a formal element of the show’s structure. Even when they’re
good, they have a paint-by-numbers quality. His show quite openly takes the
form: Please listen to a modicum of political analysis, and I will show you a
video of a bucket of sloths. Oliver’s show, at times, seems little more than a
streamlined summary of a week’s worth of web-surfing in the writers’ room.
This is,
not incidentally, the logic according to which college classes are increasingly
taught. To maintain a semblance of student engagement, professors must offer
digressions on pop culture, or audiovisual sugar to help the medicine go down.
But these digressions rarely serve to bring the material to life or help
students look at it differently. They leave the material behind, providing
relief from what now appears even more like drudgery than it did before. The
material is not meant to "go down," as its saccharine accompaniment
suggests, but to be questioned, criticized, struggled over. Both on Oliver’s
show and in college classrooms, pandering diversion reinforces a model of
consumption that applies just as much to the substance as to the fluff that
makes it bearable. Ideas — if they’re ingested at all — are swallowed whole.
In the
end, the analogies of Simmons and Dowd fulfill no role except, like the
Stewart-Oliver analogies at their worst, to keep an audience entertained and
their authors self-satisfied. They’re all invention and no discovery, an
embedded distraction that anticipates the readers’ own impulse to change the
channel or succumb to sidebar clickbait or check Instagram to instantly fill a
momentary but intolerable void in their experience. The Simmons-Dowd analogies
fill the void for us without our even having to leave the article.
They
draw attention, in this sense, to wholesale changes in the structure of reading
itself, which has become more like watching TV. Where before we scanned a
newspaper or table of contents for something to read, now we scroll down a
constantly updating feed, freed from the constraints of time and paper, and
privileging the new quite literally above all else. We are more likely to swipe
up and wait for our app to refresh than to keep swiping down to read about a
yesterday that’s happening much longer ago than it used to.
Once
we’ve chosen an article, moreover, we can rest assured that a number of escape
routes will be available should it try our ravaged attention spans. At the very
least there will be links, preferably to videos. More than likely we will be
able to maneuver left and right as well, and, at the impetuous jerk of a thumb,
leave midparagraph and consign the article to the heap of the half-read.
And of
course, the whole reading enterprise teeters on the brink, since the home
button and its embarrassment of distractions preside silently over the entire
process. This is a reading environment no longer conducive to fortuitous
discoveries, close reading, or reflective pauses. How often do you see someone
look up from an iPhone and stare contemplatively into the middle distance?
What
exactly has changed here is hard to say. The endurance of our attention has
certainly suffered, but so has its character. We are in danger of losing not
just our ability to concentrate, but a way of seeing. The items of experience
no longer occupy regions of a coherent whole, comprising a terrain that, as
Wittgenstein says, we can "crisscross in all directions." Instead
they pass before us like the shadows in Plato’s cave — images disconnected from
each other and the world at large.
The
Simmons and Dowd analogies are evidence of the impact of our experience of the
internet and other media on our language. These analogies are like the links
that often accompany them. They remove us from the established context and load
new content for our perusal, underwriting in the process an image of a world
that is nothing more than an agglomeration of isolated, arbitrarily connected
items. We hop from one to another, using them up and then moving on.
When
Analogies Fail. By Alexander Stern. The Chronicle of Higher Education, September 11, 2016.