This article was first published by Arc Digital on December 10th 2018.
There are few ideals as central to the life of liberal democracies as that of stable and rewarding work. Political parties of every stripe make promises and boasts about job creation; even Donald Trump is not so eccentric that he does not brag about falling rates of unemployment. Preparing individuals for the job market is seen as the main purpose of education, and a major responsibility of parents too.
But all of this is starting to ring hollow. Today it is an open secret that, whatever the headline employment figures say, the future of work is beset by uncertainty.
Since the 1980s, the share of national income going to wages has declined in almost every advanced economy (the socially democratic Nordic countries are the exception). The decade since the financial crisis of 2007–8 has seen a stubborn rise in youth unemployment, and an increase in “alternative arrangements” characteristic of the gig economy: short-term contracts, freelancing and part-time work. Graduates struggle to find jobs to match their expectations. In many places the salaried middle-class is shrinking, leaving a workforce increasingly polarized between low- and high-earners.
Nor do we particularly enjoy our work. A 2013 Gallup survey found that in Western countries only a fifth of people say they are “engaged” at work, with the rest “not engaged” or “actively disengaged.”
The net result is an uptick of resentment, apathy, and despair. Various studies suggest that younger generations are less likely to identify with their career, or profess loyalty to their employer. In the United States, a worrying number of young men have dropped out of work altogether, with many apparently devoting their time to video games or taking prescription medication. And that’s without mentioning the ongoing automation revolution, which will exacerbate these trends. Robotics and artificial intelligence will likely wipe out whole echelons of the current employment structure.
So what to do? Given the complexity of these problems — social, cultural, and economic — we should not expect any single, perfect solution. Yet it would be reckless to hope that, as the economy changes, it will reinvent a model of employment resembling what we have known in the past.
We should be thinking in broad terms about two related questions: in the short term, how could we reduce the strains of precarious or unfulfilling employment? And in the long term, what will we do if work grows increasingly scarce?
One answer involves a limited intervention by the state, aimed at revitalizing the habits of a free-market society — encouraging individuals to be independent, mobile, and entrepreneurial. American entrepreneur Andrew Yang proposes a Universal Basic Income (UBI) paid to all citizens, a policy he dubs “the freedom dividend.” Alternatively, Harvard economist Lawrence Katz suggests improving labor rights for part-time and contracted workers, while encouraging a middle-class “artisan economy” of creative entrepreneurs, whose greatest asset is their “personal flair.”
There are valid intuitions here about what many of us desire from work — namely, autonomy, and useful productivity. We want some control over how our labor is employed, and ideally to derive some personal fulfillment from its results. These values are captured in what political scientist Ian Shapiro has termed “the workmanship ideal”: the tendency, remarkably persistent in Western thought since the Enlightenment, to recognize “the sense of subjective satisfaction that attaches to the idea of making something that one can subsequently call one’s own.”
But if technology becomes as disruptive as many foresee, then independence may come at a steep price in terms of unpredictability and stress. For your labor — or, for that matter, your artisan products — to be worth anything in a constantly evolving market, you will need to dedicate huge amounts of time and energy to retraining. According to some upbeat advice from the World Economic Forum, individuals should now be aiming to “skill, reskill, and reskill again,” perhaps as often as every 2–3 years.
Is it time, then, for more radical solutions? There is a strand of thinking on the left which sees the demise of stable employment very differently. It argues that by harnessing technological efficiency in an egalitarian way, we could all work much less and still have the means to lead more fulfilling lives.
This “post-work” vision, as it is now called, has been gaining traction in the United Kingdom especially. Its advocates — a motley group of Marx-inspired journalists and academics — found an unexpected political platform in Jeremy Corbyn’s Labour Party, which has recently proposed cutting the working week to four days. It has also established a presence in mainstream progressive publications such as The Guardian and New Statesman.
To be sure, there is no coherent, long-term program here. Rather, there is a great deal of blind faith in the prospects of automation, common ownership and cultural revolution. Many in the post-work camp see liberation from employment, usually accompanied by UBI, as the first step in an ill-defined plan to transcend capitalism. Typical in that respect are Alex Williams and Nick Srnicek, authors of Inventing the Future: Postcapitalism and a World Without Work. Their blueprint includes open borders and a pervasive propaganda network, and flirts with the possibility of “synthetic forms of biological reproduction” to enable “a newfound equality between the sexes.”
We don’t need to buy into any of this, though, to appreciate the appeal of enabling people to work less. Various thinkers, including Bertrand Russell and John Maynard Keynes, took this to be an obvious goal of technological development. And since employment does not provide many of us with the promised goods of autonomy, fulfillment, productive satisfaction and so on, why shouldn’t we make the time to pursue them elsewhere?
Now, one could say that even this proposition is based on an unrealistic view of human nature. Arguably the real value of work is not enjoyment or even wealth, but purpose: people need routine, structure, a reason to get up in the morning, otherwise they would be adrift in a sea of aimlessness. Or at least some of them would – for another thing employment currently provides is a relatively civilized way for ambitious individuals to compete for resources and social status. Nothing in human history suggests that, even in conditions of superabundance, that competition would stop.
According to this pessimistic view, freedom and fulfillment are secondary concerns. The real question is, in the absence of employment, what belief systems, political mechanisms, and social institutions would make work for all of those idle thumbs?
But the way things are headed, it looks like we are going to need to face that question anyway, in which case our work-centric culture is a profound obstacle to generating good solutions. With so much energy committed to long hours and career success (the former being increasingly necessary for the latter), there is no space for other sources of purpose, recognition, or indeed fulfilment to emerge in an organic way.
The same goes for the economic side of the problem. I am no supporter of UBI – a policy whose potential benefits are dwarfed by the implications of a society where every individual is a client of the state. But if we want to avoid that future, it would be better to explore other arrangements now than to cling to our current habits until we end up there by default. Thus, if for no other reason than to create room for such experiments, the idea of working less is worth rescuing from the margins of the debate.
More to the point, there needs to be a proper debate. Given how deeply rooted our current ideas about employment are, politicians will continue appealing to them. We shouldn’t accept such sedatives. Addressing this problem will likely be a messy and imperfect process however we go about it, and the sooner we acknowledge that the better.
I normally can’t stand hearing about the working habits of famous artists. Whether by sheer talent or some fiendish work ethic, they tend to be hyper-productive in a way that I could never be. Thankfully, there are counter-examples – like the painter Pierre Bonnard. As you can read in the first room of the Bonnard exhibition now at Tate Modern, he often took years to finish a painting, putting it to one side before coming back to it and reworking it multiple times. He was known to continue tinkering with his paintings when he came across them hanging on the wall of somebody’s house. At the very end of his life, no longer able to paint, he instructed his nephew to change a section of his final work Almond Tree in Blossom (1947).
Maybe this is wishful thinking, but I find things that have been agonised over to acquire a special kind of depth. In many ways Bonnard is not my kind of painter, but his work rewards close attention. There is hardly an inch of his canvases where you do not find different tones layered over each other – layers not only of paint, but of time and effort – creating a luminous sea of brushstrokes which almost swarms in front of your eyes. And this belaboured quality is all the more intriguing given the transience of his subject matter: gardens bursting with euphoric colour, interiors drenched in vibrant light, domestic scenes that capture the briefest of moments during the day.
Nowhere is this tension more pronounced than in The Bowl of Milk (1919). Pictured is a room with a window overlooking the sea, and two tables ranged with items of crockery and a vase of flowers. In the foreground stands a woman wearing a long gown and holding a bowl, presumably for the cat which approaches in the shadows at her feet. Yet there is something nauseating, almost nightmarish about this image. Everything swims with indeterminacy, vanishing from our grasp. So pallid is the light pouring through the window that at first I assumed it was night outside. The objects and figures crowding the room shimmer as though on the point of dissolving into air. The woman’s face is a vague, eyeless mask. The painting is composed so that if you focus on one particular passage, everything else recedes into a shapeless soup in the periphery of your vision. It is a moment of such vivid intensity that one is forced to realise it has been conjured from the depths of fantasy.
* * *
The woman in The Bowl of Milk is almost certainly Marthe de Méligny, formerly Maria Boursin, Bonnard’s lifelong model and spouse. They met in Paris in 1893, where de Méligny was employed manufacturing artificial flowers for funerals. Some five years later, Bonnard began to exhibit paintings that revealed their intimate domestic life together. These would continue throughout his career, with de Méligny portrayed in various bedrooms, bathrooms and hallways, usually alone, usually nude, and often in front of a mirror.
Pierre Bonnard, “Nude in the Bath” (1936). Oil paint on canvas. Paris, musée d’Art moderne.
It was not an uncomplicated relationship: Bonnard is thought to have had affairs, and when the couple eventually married in 1925 de Méligny revealed she had lied about her name and age (she had broken off contact with her family before moving to Paris). They were somewhat isolated. De Méligny is described as having a silent and unnerving presence, and later developed a respiratory disease which forced them to spend periods on the Atlantic coast. Yet Bonnard’s withdrawal from the Parisian art scene, where he had been prominent during his twenties, allowed him to develop his exhaustive, time-leaden painting process, and to forge his own style. The paintings of de Méligny seem to relish the freedom enabled by familiarity and seclusion. One of the gems of the current Tate exhibition is a series of nude photographs that the couple took of one another in their garden in the years 1899-1901. In each of these unmistakeably Edenic pictures, we see a bright-skinned body occupying a patch of sunlight, securely framed by shadowy thickets of grass and leaves.
The female figure in The Bowl of Milk is far from familiar: she is a flicker of memory, a robed phantasm. But like other portrayals of de Méligny, this painting revels in the erotics of space, whereby the proximity and secrecy of the domestic setting are charged with the presence of a human subject – an effect only heightened by our voyeuristic discomfort at gaining access to this private world. There is no nudity, but a disturbing excess of sensual energy in the gleaming white plates, the crimson anemones, the rich shadows and the luxurious stride of the cat. To describe these details as sexual is to lessen their true impact: they are demonic, signalling the capacity of imagination to terrorise us with our own senses.
* * *
In 1912 Bonnard bought a painting by Henri Matisse, The Open Window at Collioure (1905). Matisse would soon emerge as one of the leading figures of modern painting, but the two were also friends, maintaining a lively correspondence over several decades. And one can see what inspired Bonnard to make this purchase: doors and windows appear continually in his own work, allowing interior space to be animated by the vitality of the outside world.
Henri Matisse, “The Open Window at Collioure” (1905). Oil paint on canvas. National Gallery of Art, Washington.
Pierre Bonnard, “The Studio with Mimosas” (1939-46). Oil paint on canvas. Musée National d’Art Moderne – Centre Pompidou, Paris.
More revealing, though, are the differences we can glean from The Open Window at Collioure. Matisse’s painting, with its flat blocks of garish colour, is straining towards abstraction. As a formal device, the window merely facilitates a jigsaw of squares and rectangles. Such spatial deconstruction and pictorial simplification were intrinsic to the general direction of modernism at this time. This, however, was the direction from which the patient and meticulous Bonnard had partly stepped aside. For he remained under the influence of impressionist painting, which emphasised the subtlety and fluidity of light and colour as a means of capturing the immediacy of sensory experience. Thus, as Juliette Rizzi notes, Bonnard’s use of “framing devices such as doors, mirrors, and horizontal and vertical lines” allows him a compromise of sorts. They do not simplify his paintings so much as provide an angular scaffolding around which he can weave his nebulous imagery.
The window and its slanted rectangles of light are crucial to the strange drama of The Bowl of Milk. Formally, this element occupies the very centre of the composition, holding it in place. But it is also a source of ambiguity. The window is seemingly a portal to another world, flooding the room with uncanny energy. The woman appears stiff, frozen at the edge of a spotlight. It’s as though the scene has been illuminated just briefly – before being buried in darkness again.
In his latest book Enlightenment Now: The Case for Reason, Science, Humanism and Progress, Steven Pinker heaps a fair amount of scorn on Romanticism, the movement in art and philosophy which spread across Europe during the late-18th and 19th centuries. In Pinker’s Manichean reading of history, Romanticism was the malign counterstroke to the Enlightenment: its goal was to quash those values listed in his subtitle. Thus, the movement’s immense diversity and ambiguity are reduced to a handful of ideas, which show that the Romantics favored “the heart over the head, the limbic system over the cortex.” This provides the basis for Pinker to label “Romantic” various irrational tendencies that are still with us, such as nationalism and reverence for nature.
In the debates following Enlightenment Now, many have continued to use Romanticism simply as a suitcase term for “counter-Enlightenment” modes of thought. Defending Pinker in Areo, Bo Winegard and Benjamin Winegard do produce a concise list of Romantic propositions. But again, their version of Romanticism is deliberately anachronistic, providing a historical lineage for the “modern romantics” who resist Enlightenment principles today.
As it happens, this dichotomy does not appeal only to defenders of the Enlightenment. In his book Age of Anger, published last year, Pankaj Mishra explains various 21st century phenomena — including right-wing populism and Islamism — as reactions to an acquisitive, competitive capitalism that he traces directly back to the 18th century Enlightenment. This, says Mishra, is when “the unlimited growth of production . . . steadily replaced all other ideas of the human good.” And who provided the template for resisting this development? The German Romantics, who rejected the Enlightenment’s “materialist, individualistic and imperialistic civilization in the name of local religious and cultural truth and spiritual virtue.”
Since the Second World War, it has suited liberals, Marxists, and postmodernists alike to portray Romanticism as the mortal enemy of Western rationalism. This can convey the impression that history has long consisted of the same struggle we are engaged in today, with the same teams fighting over the same ideas. But even a brief glance at the Romantic era suggests that such narratives are too tidy. These were chaotic times. Populations were rising, people were moving into cities, the industrial revolution was occurring, and the first mass culture emerging. Europe was wracked by war and revolution, nations won and lost their independence, and modern politics was being born.
So I’m going to try to explain Romanticism and its relationship with the Enlightenment in a bit more depth. And let me say this up front: Romanticism was not a coherent doctrine, much less a concerted attack on or rejection of anything. Put simply, the Romantics were a disparate constellation of individuals and groups who arrived at similar motifs and tendencies, partly by inspiration from one another, partly due to underlying trends in European culture. In many instances, their ideas were incompatible with, or indeed hostile towards, the Enlightenment and its legacy. On the other hand, there was also a good deal of mutual inspiration between the two.
Sour grapes
The narrative of Romanticism as a “counter-Enlightenment” often begins in the mid-18th century, when several forerunners of the movement appeared. The first was Jean-Jacques Rousseau, whose Social Contract famously asserts “Man is born free, and everywhere he is in chains.” Rousseau portrayed civilization as decadent and morally compromised, proposing instead a society of minimal interdependence where humanity would recover its natural virtue. Elsewhere in his work he also idealized childhood, and celebrated the outpouring of subjective emotion.
In fact, various Enlightenment thinkers, Immanuel Kant in particular, admired Rousseau’s ideas; after all, Rousseau was arguing that, left to their own devices, ordinary people would use reason to discover virtue. Nonetheless, he was clearly attacking the principle of progress, and his apparent motivations for doing so were portentous. Rousseau had been associated with the French philosophes — men such as Thiry d’Holbach, Denis Diderot, Claude Helvétius and Jean d’Alembert — who were developing the most radical strands of Enlightenment thought, including materialist philosophy and atheism. But crucially, they were doing so within a rather glamorous, cosmopolitan milieu. Though they were monitored and harassed by the French ancien régime, many of the philosophes were nonetheless wealthy and well-connected figures, their Parisian salons frequented by intellectuals, ambassadors and aristocrats from across Europe.
Rousseau decided the Enlightenment belonged to a superficial, hedonistic elite, and essentially styled himself as a god-fearing voice of the people. This turned out to be an important precedent. In Prussia, where a prolific Romantic movement would emerge, such antipathy towards the effete culture of the French was widespread. For much to the frustration of Prussian intellectuals and artists — many of whom were Pietist Christians from lowly backgrounds — their ruler Frederick the Great was an “Enlightened despot” and dedicated Francophile. He subscribed to Melchior Grimm’s Correspondance Littéraire, which brought the latest ideas from Paris; he hosted Voltaire at his court as an Enlightenment mascot; he conducted affairs in French, his first language.
This is the background against which we find Johann Gottfried Herder, whose ideas about language and culture were deeply influential to Romanticism. He argued that one can only understand the world via the linguistic concepts that one inherits, and that these reflect the contingent evolution of one’s culture. Hence in moral terms, different cultures occupy significantly different worlds, so their values should not be compared to one another. Nor should they be replaced with rational schemes dreamed up elsewhere, even if this means that societies are bound to come into conflict.
Rousseau and Herder anticipated an important cluster of Romantic themes. Among them are the sanctity of the inner-life, of folkways and corporate social structures, of belonging, of independence, and of things that cannot be quantified. And given the apparent bitterness of Herder and some of his contemporaries, one can see why Isaiah Berlin declared that all this amounted to “a very grand form of sour grapes.” Berlin takes this line too far, but there is an important insight here. During the 19th century, with the rise of the bourgeoisie and of government by utilitarian principles, many Romantics will show a similar resentment towards “sophisters, economists, and calculators,” as Edmund Burke famously called them. Thus Romanticism must be seen in part as coming from people denied status in a changing society.
Then again, Romantic critiques of excessive uniformity and rationality were often made in the context of developments that were quite dramatic. During the 1790s, it was the French Revolution’s degeneration into tyranny that led first-generation Romantics in Germany and England to fear the so-called “machine state,” or government by rational blueprint. Similarly, the appalling conditions that marked the first phase of the industrial revolution lay behind some later Romantics’ revulsion at industrialism itself. John Ruskin celebrated medieval production methods because “men were not made to work with the accuracy of tools,” with “all the energy of their spirits . . . given to make cogs and compasses of themselves.”
And ultimately, it must be asked if opposition to such social and political changes was opposition to the Enlightenment itself. The answer, of course, depends on how you define the Enlightenment, but with regards to Romanticism we can only make the following generalization. Romantics believed that ideals such as reason, science, and progress had been elevated at the expense of values like beauty, expression, or belonging. In other words, they thought the Enlightenment paradigm established in the 18th century was limited. This is well captured by Percy Shelley’s comment in 1821 that although humanity owed enormous gratitude to philosophers such as John Locke and Voltaire, only Rousseau had been more than a “mere reasoner.”
And yet, in perhaps the majority of cases, this did not make Romantics hostile to science, reason, or progress as such. For it did not seem to them, as it can seem to us in hindsight, that these ideals must inevitably produce arrangements such as industrial capitalism or technocratic government. And for all their sour grapes, they often had reason to suspect those whose ascent to wealth and power rested on this particular vision of human improvement.
“The world must be romanticized”
One reason Romanticism is often characterized as against something — against the Enlightenment, against capitalism, against modernity as such — is that it seems like the only way to tie the movement together. In the florescence of 19th century art and thought, Romantic motifs were arrived at from a bewildering array of perspectives. In England during the 1810s, for instance, radical, progressive liberals such as Shelley and Lord Byron celebrated the crumbling of empires and of religion, and glamorized outcasts and oppressed peoples in their poetry. They were followed by arch-Tories like Thomas Carlyle and Ruskin, whose outlook is fundamentally paternalistic. Other Romantics migrated across the political spectrum during their lifetimes, bringing their themes with them.
All this is easier to understand if we note that a new sensibility appeared in European culture during this period, remarkable for its idealism and commitment to principle. Disparaged in England as “enthusiasm,” and in Germany as Schwärmerei or fanaticism, we get a flavor of it by looking at some of the era’s celebrities. There was Beethoven, celebrated as a model of the passionate and impoverished genius; there was Byron, the rebellious outsider who received locks of hair from female fans; and there was Napoleon, seen as an embodiment of untrammeled willpower.
Curiously, though, while this Romantic sensibility was a far cry from the formality and refinement which had characterized the preceding age of Enlightenment, it was inspired by many of the same ideals. To illustrate this, and to expand on some key Romantic concepts, I’m going to focus briefly on a group that came together in Prussia at the turn of the 19th century, known as the Jena Romantics.
The Jena circle — centred around Ludwig Tieck, Friedrich and August Schlegel, Friedrich Hölderlin, and the writer known as Novalis — have often been portrayed as scruffy bohemians, a conservative framing that seems to rest largely on their liberal attitudes to sex. But this does give us an indication of the group’s aims: they were interested in questioning convention, and pursuing social progress (their journal Das Athenäum was among the few to publish female writers). They were children of the Enlightenment in other respects, too. They accepted that rational skepticism had ruled out traditional religion and superstition, and that science was a tool for understanding reality. Their philosophy, however, shows an overriding desire to reconcile these capacities with an inspiring picture of culture, creativity, and individual fulfillment. And so they began by adapting the ideas of two major Enlightenment figures: Immanuel Kant and Benedict Spinoza.
Kant, who like the Romantics spent his entire life in Prussia, had impressed on them the importance of one dilemma in particular: how was human freedom possible given that nature was determined? But rather than follow Kant down the route of transcendental freedom, the Jena school tried to update the universe Spinoza had described a century earlier, which was a single deterministic entity governed by a mechanical sequence of cause and effect. Conveniently, this mechanistic model had been called into doubt by contemporary physics. So they kept the integrated, holistic quality of Spinoza’s nature, but now suggested that it was suffused with another Kantian idea — that of organic force or purpose.
Consequently, the Jena Romantics arrived at an organic conception of the universe, in which nature expressed the same omnipresent purpose in all its manifestations, up to and including human consciousness. Thus there was no discrepancy between mental activity and matter, and the Romantic notion of freedom as a channelling of some greater will was born. After all, nature must be free because, as Spinoza had argued, there is nothing outside nature. Therefore, in Friedrich Schlegel’s words, “Man is free because he is the highest expression of nature.”
Various concepts flowed from this, the most consequential being a revolutionary theory of art. Whereas the existing neo-classical paradigm had assumed that art should hold a mirror up to nature, reflecting its perfection, the Romantics now stated that the artist should express nature, since he is part of its creative flow. What this entails, moreover, is something like a primitive notion of the unconscious. For this natural force comes to us through the profound depths of language and myth; it cannot be definitively articulated, only grasped at through symbolism and allegory.
Such longing for the inexpressible, the infinite, the unfathomable depth thought to lie beneath the surface of ordinary reality, is absolutely central to Romanticism. And via the Jena school, it produces an ideal which could almost serve as a Romantic program: being-through-art. The modern condition, August Schlegel says, is the sensation of being adrift between two idealized figments of our imagination: a lost past and an uncertain future. So ultimately, we must embrace our frustrated existence by making everything we do a kind of artistic expression, allowing us to move forward despite knowing that we will never reach what we are aiming for. This notion that you can turn just about anything into a mystery, and thus into a field for action, is what Novalis alludes to in his famous statement that “the world must be romanticized.”
It appears there’s been something of a detour here: we began with Spinoza and have ended with obscurantism and myth. But as Frederick Beiser has argued, this baroque enterprise was in many ways an attempt to radicalize the 18th century Enlightenment. Indeed, the central thesis that our grip on reality is not certain, but we must embrace things as they seem to us and continue towards our aims, was almost a parody of the skepticism advanced by David Hume and by Kant. Moreover, and more ominously, the Romantics amplified the Enlightenment principle of self-determination, producing the imperative that individuals and societies must pursue their own values.
The Romantic legacy
It is beyond doubt that some Romantic ideas had pernicious consequences, the most demonstrable being a contribution to German nationalism. By the end of the 19th century, when Prussia had become the dominant force in a unified Germany and Richard Wagner’s feverish operas were being performed, the Romantic fascination with national identity, myth, and the active will had evolved into something altogether menacing. Many have taken the additional step, which is not a very large one, of implicating Romanticism in the fascism of the 1930s.
A more tenuous claim is that Romanticism (and German Romanticism especially) contains the origins of the postmodern critique of the Enlightenment, and of Western civilization itself, which is so current among leftist intellectuals today. As we have seen, there was in Romanticism a strong strain of cultural relativism — which is to say, relativism about values. But postmodernism has at its core a relativism about facts, a denial of the possibility of reaching objective truth by reason or observation. This nihilistic stance is far from the skepticism of the Jena school, which was fundamentally a means for creative engagement with the world.
But whatever we make of these genealogies, remember that we are talking about developments, progressions over time. We are not saying that Romanticism was in any meaningful sense fascistic, postmodernist, or whichever other adjective appears downstream. I emphasize this because if we identify Romanticism with these contentious subjects, we will overlook its myriad more subtle contributions to the history of thought.
Many of these contributions come from what I described earlier as the Romantic sensibility: a variety of intuitions that seem to have taken root in Western culture during this era. For instance, that one should remain true to one’s own principles at any cost; that there is something tragic about the replacement of the old and unusual with the uniform and standardized; that different cultures should be appreciated on their own terms, not on a scale of development; that artistic production involves the expression of something within oneself. Whether these intuitions are desirable is open to debate, but the point is that the legacy of Romanticism cannot be compartmentalized, for it has colored many of our basic assumptions.
This is true even of ideas that we claim to have inherited from the Enlightenment. For some of these were modified, and arguably enriched, as they passed through the Romantic era. An explicit example comes from John Stuart Mill, the founding figure of classical liberalism. Mill inherited from his father and from Jeremy Bentham a very austere version of utilitarian ethics. This posited as its goal the greatest good for the greatest number of people; but its notion of the good did not account for the value of culture, spirituality, and a great many other things we now see as intrinsic to human flourishing. As Mill recounts in his autobiography, he realized these shortcomings by reading England’s first-generation Romantics, William Wordsworth and Samuel Taylor Coleridge.
This is why, in 1840, Mill bemoaned the fact that his fellow progressives thought they had nothing to learn from Coleridge’s philosophy, warning them that “the besetting danger is not so much of embracing falsehood for truth, as of mistaking part of the truth for the whole.” We are committing a similar error today when we treat Romanticism simply as a “counter-Enlightenment.” Ultimately this limits our understanding not just of Romanticism but of the Enlightenment as well.
This essay was first published in Areo Magazine on June 10, 2018.
This essay was first published by Little Atoms on 9 August 2018. The image on my homepage is a detail from an original illustration by Jacob Stead.
Until recently it seemed safe to assume that what most people wanted on social media was to appear attractive. Over the last decade, the major concerns about self-presentation online have been focused on narcissism and, for women especially, unrealistic standards of beauty. But just as it is becoming apparent that some behaviours previously interpreted as narcissistic – selfies, for instance – are simply new forms of communication, it is also no longer obvious that the rules of this game will remain those of the beauty contest. In fact, as people derive an ever-larger proportion of their social interaction here, the aesthetics of social media are moving distinctly towards the grotesque.
When I use the term grotesque, I do so in a technical sense. I am referring to a manner of representing things – the human form especially – which is not just bizarre or unsettling, but which creates a sense of indeterminacy. Familiar features are distorted, and conventional boundaries dissolved.
Instagram, notably, has become the site of countless bizarre makeup trends among its large demographic of young women and girls. These transformations range from the merely dramatic to the carnivalesque, including enormous lips, nose-hair extensions, eyebrows sculpted into every shape imaginable, and glitter coated onto everything from scalps to breasts. Likewise, the popularity of Snapchat has led to a proliferation of face-changing apps which revel in cartoonish distortions of appearance. Eyes are expanded into enormous saucers, faces are ghoulishly elongated or squashed, and animal features are tacked onto heads. These images, interestingly, are also making their way onto dating app profiles.
Of course for many people such tools are simply a way, as one reviewer puts it, “to make your face more fun.” There is something singularly playful in embracing such plasticity: see for instance the creative craze “#slime”, which features videos of people playing with colourful gooey substances, and has over eight million entries on Instagram. But if you follow the threads of garishness and indeterminacy through the image-oriented realms of the internet, deeper resonances emerge.
The pop culture embraced by Millennials and the so-called Generation C (born after 2000) reflects a fascination with brightly adorned, shape-shifting and sexually ambiguous personae. If performers like Miley Cyrus and Lady Gaga were forerunners of this tendency, they are now joined by darker and more refined figures such as Sophie and Arca from the dance music scene. Meanwhile fashion, photography and video abound with kitsch, quasi-surreal imagery of the kind popularised by Dazed magazine. Celebrated subcultures such as Japan’s “genderless Kei,” who are characterised by bright hairstyles and makeup, are also part of this picture.
But the most striking examples of this turn towards the grotesque come from art forms emerging within digital culture itself. It is especially well illustrated by Porpentine, a game designer working with the platform Twine, whose disturbing interactive poems have achieved something of a cult status. They typically place readers in the perspective of psychologically and socially insecure characters, leading them through violent urban futurescapes reminiscent of William Burroughs’s Naked Lunch. The New York Times aptly describes her games as “dystopian landscapes peopled by cyborgs, intersectional empresses and deadly angels,” teeming with “garbage, slime and sludge.”
These are all manifestations both of a particular sensibility which is emerging in parts of the internet, and more generally of a new way of projecting oneself into public space. To spend any significant time in the networks where such trends appear is to become aware of a certain model of identity being enacted, one that is mercurial, effervescent, and boldly expressive. And while the attitudes expressed vary from anxious subjectivity to humorous posturing – as well as, at times, both simultaneously – in most instances one senses that the online persona has become explicitly artificial, plastic, or even disposable.
* * *
Why, though, would a paradigm of identity such as this invite expression as the grotesque? Interpreting these developments is not easy given that digital culture is so diffuse and rapidly evolving. One approach that seems natural enough is to view them as social phenomena, arising from the nature of online interaction. Yet to take this approach is immediately to encounter a paradox of sorts. If “the fluid self” represents “identity as a vast and ever-changing range of ideas that should all be celebrated” (according to trend forecaster Brenda Milis), then why does it seem to conform to generic forms at all? It is a contradiction that might, in fact, prove enlightening.
One frame which has been widely applied to social media is sociologist Erving Goffman’s “dramaturgical model,” as outlined in his 1959 book The Presentation of Self in Everyday Life. According to Goffman, identity can be understood in terms of a basic dichotomy, which he explains in terms of “Front Stage” and “Back Stage.” Our “Front Stage” identity, when we are interacting with others, is highly responsive to context. It is preoccupied with managing impressions and assessing expectations so as to present what we consider a positive view of ourselves. In other words, we are malleable in the degree to which we are willing to tailor our self-presentation.
The first thing to note about this model is that it allows for dramatic transformations. If you consider the degree of detachment enabled by projecting ourselves into different contexts through words and imagery, and empathising with others on the same basis, then the stage is set for more or less anything becoming normative within a given peer group. As for why people would want to take this expressive potential to unusual places, it seems reasonable to speculate that in many cases, the role we want to perform is precisely that of someone who doesn’t care what anyone thinks. But since most of us do in fact care, we might end up, ironically enough, expressing this within certain established parameters.
But focusing too much on social dynamics risks underplaying the undoubted sense of freedom associated with the detachment from self in online interaction. Yes, there is peer pressure here, but within these bounds there is also a palpable euphoria in escaping mundane reality. The neuroscientist Susan Greenfield has made this point while commenting on the “alternative identity” embraced by young social media users. The ability to depart from the confines of stable identity, whether by altering your appearance or enacting a performative ritual, essentially opens the door to a world of fantasy.
With this in mind, we could see the digital grotesque as part of a cultural tradition that offers us many precedents. Indeed, this year marks the 200th anniversary of perhaps the greatest precedent of all: Mary Shelley’s iconic novel Frankenstein. The great anti-hero of that story, the monster who is assembled and brought to life by the scientist Victor Frankenstein, was regarded by later generations as an embodiment of all the passions that society requires the individual to suppress – passions that the artist, in the act of creation, has special access to. The uncanny appearance and emotional crises of Frankenstein’s monster thus signify the potential for unknown depths of expression, strange, sentimental, and macabre.
That notion of the grotesque as something uniquely expressive and transformative was and has remained prominent in all of the genres with which Frankenstein is associated – romanticism, science fiction, and the gothic. It frequently aligns itself with the irrational and surreal landscapes of the unconscious, and with eroticism and sexual deviancy; the films of David Lynch are emblematic of this crossover. In modern pop culture a certain glamourised version of the grotesque, which subverts rigid identity with makeup and fashion, appeared in the likes of David Bowie and Marilyn Manson.
Are today’s online avatars potentially incarnations of Frankenstein’s monster, tempting us with unfettered creativity? The idea has been explored by numerous artists over the last decade. Ed Atkins is renowned for his humanoid characters, their bodies defaced by crude drawings, who deliver streams of consciousness fluctuating between the poetic and the absurd. Jon Rafman, meanwhile, uses video and animation to piece together entire composite worlds, mapping out what he calls “the anarchic psyche of the internet.” Reflecting on his years spent exploring cyberspace, Rafman concludes: “We’ve reached a point where we’re enjoying our own nightmares.”
* * *
It is possible that the changing aesthetics of the Internet reflect both the social pressures and the imaginative freedoms I’ve tried to describe, or perhaps even the tension between them. One thing that seems clear, though, is that the new notions of identity emerging here will have consequences beyond the digital world. Even if we accept in some sense Goffman’s idea of a “Backstage” self, which resumes its existence when we are not interacting with others, the distinction is ultimately illusory. The roles and contexts we occupy inevitably feed back into how we think of ourselves, as well as our views on a range of social questions. Some surveys already suggest a generational shift in attitudes to gender, for instance.
That paradigms of identity shift in relation to technological and social changes is scarcely surprising. The first half of the 20th century witnessed the rise of a conformist culture, enabled by mass production, communication, and ideology, and often directed by the state. This then gave way to the era of the unique individual promoted by consumerism. As for the balance of psychological benefits and problems that will arise as online interaction grows, that is a notoriously contentious question requiring more research.
There is, however, a bigger picture here that deserves attention. The willingness of people to assume different identities online is really part of a much broader current being borne along by technology and design – one whose general direction is to enable individuals to modify and customise themselves in a wide range of ways. Whereas throughout the 20th century designers and advertisers were instrumental in shaping how we interpreted and expressed our social identity – through clothing, consumer products, and so on – this function is now increasingly being assumed by individuals within social networks.
Indeed, designers and producers are surrendering control of both the practical and the prescriptive aspects of their trade. 3D printing is just one example of how, in the future, tools and not products will be marketed. In many areas, the traditional hierarchy of ideas has been reversed, as those who used to call the tune are now trying to keep up with and capitalise on trends that emerge from their audiences. One can see this loss of influence in an aesthetic trend that seems to run counter to those I’ve been observing here, but which ultimately reflects the same reality. From fashion to furniture, designers are making neutral products which can be customised by an increasingly identity-conscious, changeable audience.
Currently, the personal transformations taking place online rely for the most part on software; the body itself is not seriously altered. But with scientific fields such as bioengineering expanding in scope, this may not be the case for long. Alice Rawsthorn has considered the implications: “As our personal identities become subtler and more singular, we will wish to make increasingly complex and nuanced choices about the design of many aspects of our lives… We will also have more of the technological tools required to do so.” If this does turn out to be the case, we will face considerable ethical dilemmas regarding the uses, and more generally the purpose, of science and technology.
I have a slightly gloomy but, I think, not unreasonable view of birthdays, which is that they are really all about death. It rests on two simple observations. First, much as they pretend otherwise, people do generally find birthdays to be poignant occasions. And second, a milestone can have no poignancy which does not ultimately come from the knowledge that the journey in question must end. (Would an eternal being find poignancy in ageing, nostalgia, or anything else associated with the passing of time? Surely not in the sense that we use the word). In any case, I suspect most of us are aware that at these moments when our life is quantified, we are in some sense facing our own finitude. What I find interesting, though, is that to acknowledge this is verboten. In fact, we seem to have designed a whole edifice of niceties and diversions – cards, parties, superstitions about this or that age – to avoid saying it plainly.
Well, it was my birthday recently, and it appears at least one of my friends got the memo. He gave me a copy of Hans Holbein’s Dance of Death, a sequence of woodcuts composed in 1523-5. They show various classes in society being escorted away by a Renaissance version of the grim reaper – a somewhat cheeky-looking skeleton who plays musical instruments and occasionally wears a hat. He stands behind The Emperor, hands poised to seize his crown; he sweeps away the coins from The Miser’s counting table; he finds The Astrologer lost in thought, and mocks him with a skull; he leads The Child away from his distraught parents.
Hans Holbein, “The Astrologer” and “The Child,” from “The Dance of Death” (1523-5)
It is striking for the modern viewer to see death out in the open like this. But the “dance of death” was a popular genre that, before the advent of the printing press, had adorned the walls of churches and graveyards. Needless to say, this reflects the fact that in Holbein’s time, death came frequently, often without warning, and was handled (both literally and psychologically) within the community. Historians speculate about what pre-modern societies really believed regarding death, but belief is a slippery concept when death is part of the warp and weft of culture, encountered daily through ritual and artistic representations. It would be a bit like asking the average person today what their “beliefs” are about sex – where to begin? Likewise in Holbein’s woodcuts, death is complex, simultaneously a bringer of humour, justice, grief, and consolation.
Now let me be clear, I am not trying to romanticise a world before antibiotics, germ theory, and basic sanitation. In such a world, with child mortality being what it was, you and I would most likely be dead already. Nonetheless, the contrast with our own time (or at least with certain cultures, and more about that later) is revealing. When death enters the public sphere today – which is to say, fictional and news media – it rarely signifies anything, for there is no framework in which it can do so. It is merely a dramatic device, injecting shock or tragedy into a particular set of circumstances. The best an artist can do now is to expose this vacuum, as the photographer Jo Spence did in her wonderful series The Final Project, turning her own death into a kitsch extravaganza of joke-shop masks and skeletons.
From Jo Spence, “The Final Project,” 1991-2, courtesy of The Jo Spence Memorial Archive and Richard Saltoun Gallery
And yet, to say that modern secular societies ignore or avoid death is, in my view, to miss the point. It is rather that we place the task of interpreting mortality squarely and exclusively upon the individual. In other words, if we lack a common means of understanding death – a language and a liturgy, if you like – it is first and foremost because we regard that as a private affair. This convention is hinted at by euphemisms like “life is short” and “you only live once,” which acknowledge that our mortality has a bearing on our decisions, but also imply that what we make of that is down to us. It is also apparent, I think, in our farcical approach to birthdays.
Could it be that, thanks to this arrangement, we have actually come to feel our mortality more keenly? I’m not sure. But it does seem to produce some distinctive experiences, such as the one described in Philip Larkin’s famous poem “Aubade” (first published in 1977):
Waking at four to soundless dark, I stare.
In time the curtain-edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.
Larkin’s sleepless narrator tries to persuade himself that humanity has always struggled with this “special way of being afraid.” He dismisses as futile the comforts of religion (“That vast moth-eaten musical brocade / Created to pretend we never die”), as well as the “specious stuff” peddled by philosophy over the centuries. Yet in the final stanza, as he turns to the outside world, he nonetheless acknowledges what does make his fear special:
telephones crouch, getting ready to ring
In locked-up offices, and all the uncaring
Intricate rented world begins to rouse.
…
Work has to be done.
Postmen like doctors go from house to house.
There is a dichotomy here, between a personal world of introspection, and a public world of routine and action. The modern negotiation with death is confined to the former: each in our own house.
* * *
When did this internalisation of death occur, and why? Many reasons spring to mind: the decline of religion, the rise of Freudian psychology in the 20th century, the discrediting of a socially meaningful death by the bloodletting of the two world wars, the rise of liberal consumer societies which assign death to the “personal beliefs” category, and would rather people focused on their desires in the here and now. No doubt all of these have had some part to play. But there is also another way of approaching this question, which is to ask if there isn’t some sense in which we actually savour this private relationship with our mortality that I’ve outlined, whatever the burden we incur as a result. Seen from this angle, there is perhaps an interesting story about how these attitudes evolved.
I direct you again to Holbein’s Dance of Death woodcuts. As I’ve said, what is notable from our perspective is that they picture death within a traditional social context. But as it turns out, these images also reflect profound changes that were taking place in Northern Europe during the early modern era. Most notably, Martin Luther’s Protestant Reformation had erupted less than a decade before Holbein composed them. And among the many factors which led to that Reformation was a tendency which had begun emerging within Christianity during the preceding century, and which would be enormously influential in the future. This tendency was piety, which stressed the importance of the individual’s emotional relationship to God.
As Ulinka Rublack notes in her commentary on The Dance of Death, one of the early contributions of piety was the convention of representing death as a grisly skeleton. This figure, writes Rublack, “tested its onlooker’s immunity to spiritual anxiety,” since those who were firm in their convictions “could laugh back at Death.” In other words, buried within Holbein’s rich and varied portrayal of mortality was already, in embryonic form, an emotionally charged, personal confrontation with death. And nor was piety the only sign of this development in early modern Europe.
Hans Holbein, The Ambassadors (1533)
In 1533, Holbein produced another, much more famous work dealing with death: his painting The Ambassadors. Here we see two young members of Europe’s courtly elite standing either side of a table, on which are arrayed various objects that symbolise a certain Renaissance ideal: a life of politics, art, and learning. There are globes, scientific instruments, a lute, and references to the ongoing feud within the church. The most striking feature of the painting, however, is the enormous skull which hovers inexplicably in the foreground, fully perceptible only from a sidelong angle. This remarkable and playful item signals the arrival of another way of confronting death, which I describe as decadent. It is not serving any moral or doctrinal message, but illuminating what is most precious to the individual: status, ambition, accomplishment.
The basis of this decadent stance is as follows: death renders meaningless our worldly pursuits, yet at the same time makes them seem all the more urgent and compelling. This will be expounded in a still more iconic Renaissance artwork: Shakespeare’s Hamlet (1599). It is no coincidence that the two most famous moments in this play are both direct confrontations with death. One is, of course, the “To be or not to be” soliloquy; the other is the graveside scene, in which Hamlet holds a jester’s skull and asks: “Where be your gibes now, your gambols, your songs, your flashes of merriment, that were wont to set the table on a roar?” These moments are indeed crucial, for they suggest why the tragic hero, famously, cannot commit to action. As he weighs up various decisions from the perspective of mortality, he becomes intoxicated by the nuances of meaning and meaninglessness. He dithers because ultimately, such contemplation itself is what makes him feel, as it were, most alive.
All of this is happening, of course, within the larger development that historians like to call “the birth of the modern individual.” But as the modern era progresses, I think there are grounds to say that these two approaches – the pious and the decadent – will be especially influential in shaping how certain cultures view the question of mortality. And although there is an important difference between them insofar as one addresses itself to God, they also share something significant: a mystification of the inner life, of the agony and ecstasy of the individual soul, at the expense of religious orthodoxy and other socially articulated ideas about life’s purpose and meaning.
During the 17th century, piety became the basis of Pietism, a Lutheran movement that enshrined an emotional connection with God as the most important aspect of faith. Just as pre-Reformation piety may have been a response, in part, to the ravages of the Black Death, Pietism emerged from the utter devastation wreaked in Germany by the Thirty Years War. Its worship was based on private study of the bible, alone or in small groups (sometimes called “churches within a church”), and on evangelism in the wider community. In Pietistic sermons, the problem of our finitude – of our time in this world – is often bound up with a sense of mystery regarding how we ought to lead our lives. Everything points towards introspection, a search for duty. We can judge how important these ideas were to the consciousness of Northern Europe and the United States simply by naming two individuals who came strongly under their influence: Immanuel Kant and John Wesley.
It was also from the Central German heartlands of Pietism that, in the late 18th century, Romanticism was born – a movement which took the decadent fascination with death far beyond what we find in Hamlet. Goethe’s novel The Sorrows of Young Werther, in which the eponymous artist shoots himself out of lovesickness, led to a wave of copycat suicides by men dressed in dandyish clothing. As Romanticism spread across Europe and into the 19th century, flirting with death, using its proximity as a kind of emotional aphrodisiac, became a prominent theme in the arts. As Byron describes one of his typical heroes: “With pleasure drugged, he almost longed for woe, / And e’en for change of scene would seek the shades below.” Similarly, Keats: “Many a time / I have been half in love with easeful Death.”
* * *
This is a very cursory account, and I am certainly not claiming there is any direct or inevitable progression between these developments and our own attitudes to death. Indeed, with Pietism and Romanticism, we have now come to the brink of the Great Awakenings and Evangelicalism, of Wagner and mystic nationalism – of an age, in other words, where spirituality enters the public sphere in a dramatic and sometimes apocalyptic way. Nonetheless, I think all of this points to a crucial idea which has been passed on to some modern cultures, perhaps those with a northern European, Protestant heritage: the idea that mortality is an emotional and psychological burden which the individual should willingly assume.
And I think we can now discern a larger principle which is being cultivated here – one that has come to define our understanding of individualism perhaps more than any other. That is the principle of freedom. To take responsibility for one’s mortality – to face up to it and, in a manner of speaking, to own it – is to reflect on life itself and ask: for what purpose, for what meaning? Whether framed as a search for duty or, in the extreme decadent case, as the basis of an aesthetic experience, such questions seem to arise from a personal confrontation with death; and they are very central to our notions of freedom. This is partly, I think, what underlies our convention that what you make of death is your own business.
The philosophy that has explored these ideas most comprehensively is, of course, existentialism. In the 20th century, Martin Heidegger and Jean-Paul Sartre argued that the individual can only lead an authentic life – a life guided by the values they deem important – by accepting that they are free in the fullest, most terrifying sense. And this in turn requires that the individual honestly accept, or even embrace, their finitude. For the way we see ourselves, these thinkers claim, is future-oriented: it consists not so much in what we have already done, but in the possibility of assigning new meaning to those past actions through what we might do in the future. Thus, in order to discover what our most essential values really are – the values by which we wish to direct our choices as free beings – we should consider our lives from their real endpoint, which is death.
Sartre and Heidegger were eager to portray these dilemmas, and their solutions, as brute facts of existence which they had uncovered. But it is perhaps truer to say that they were signing off on a deal which had been much longer in the making – a deal whereby the individual accepts the burden of understanding their existence as doomed beings, with all the nausea that entails, in exchange for the very expansive sense of freedom we now consider so important. Indeed, there is very little that Sartre and Heidegger posited in this regard which cannot be found in the work of the 19th-century Danish philosopher Søren Kierkegaard; and Kierkegaard, it so happens, can also be placed squarely within the traditions of both Pietism and Romanticism.
To grasp how deeply engrained these ideas have become, consider again Larkin’s poem “Aubade:”
Most things may never happen: this one will,
And realisation of it rages out
In furnace-fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no one off the grave.
Death is no different whined at than withstood.
Here is the private confrontation with death framed in the most neurotic and desperate way. Yet bound up with all the negative emotions, there is undoubtedly a certain lugubrious relish in that confrontation. There is, in particular, something titillating in the rejection of all illusions and consolations, clearing the way for chastisement by death’s uncertainty. This, in other words, is the embrace of freedom taken to its most masochistic limit. And if you find something strangely uplifting about this bleak poem, it may be that you share some of those intuitions.
In recent years, a great deal has been written on the subject of group identity in politics, much of it aiming to understand how people in Western countries have become more likely to adopt a “tribal” or “us-versus-them” perspective. Naturally, the most scrutiny has fallen on the furthest ends of the spectrum: populist nationalism on one side, and certain forms of radical progressivism on the other. We are by now familiar with various economic, technological, and psychological accounts of these group-based belief systems, which are to some extent analogous throughout Europe and in North America. Something that remains little discussed, though, is the role of ideas and attitudes regarding the past.
When I refer to the past here, I am not talking about the study of history – though as a source of information and opinion, it is not irrelevant either. Rather, I’m talking about the past as a dimension of social identity; a locus of narratives and values that individuals and groups refer to as a means of understanding who they are, and with whom they belong. This strikes me as a vexed issue in Western societies generally, and one which has had a considerable bearing on politics of late. I can only provide a generic overview here, but I think it’s notable that movements and tendencies which emphasise group identity do so partly through a particular, emotionally salient conception of the past.
First consider populism, in particular the nationalist, culturally conservative kind associated with the Trump presidency and various anti-establishment movements in Europe. Common to this form of politics is a notion that Paul Taggart has termed “heartland” – an ill-defined earlier time in which “a virtuous and unified population resides.” It is through this temporal construct that individuals can identify with said virtuous population and, crucially, seek culprits for its loss: corrupt elites and, often, minorities. We see populist leaders invoking “heartland” by brandishing passports, or promising to make America great again; France’s Marine Le Pen has even sought comparison to Joan of Arc.
Meanwhile, parts of the left have embraced an outlook well expressed by Faulkner’s adage that the past is never dead – it isn’t even past. Historic episodes of oppression and liberating struggle are treated as continuous with, and sometimes identical to, the present. While there is often an element of truth in this view, its practical effect has been to spur on a new protest movement. A rhetorical fixation on slavery, colonialism, and patriarchy not only implies urgency, but adds moral force to certain forms of identification such as race, gender, or general antinomianism.
Nor are these tendencies entirely confined to the fringes. Being opposed to identity politics has itself become a basis for identification, albeit less distinct, and so we see purposeful conceptions of the past emerging among professed rationalists, humanists, centrists, classical liberals and so on. In their own ways, figures as disparate as Jordan Peterson and Steven Pinker define the terra firma of reasonable discourse by a cultural narrative of Western values or Enlightened liberal ideals, while everything outside these bounds invites comparison to one or another dark episode from history.
I am not implying any moral or intellectual equivalence between these different outlooks and belief systems, and nor am I saying their views are just figments of ideology. I am suggesting, though, that in all these instances, what could plausibly be seen as looking to history for understanding or guidance tends to shade into something more essential: the sense that a given conception of the past can underpin a collective identity, and serve as a basis for the demarcation of the political landscape into friends and foes.
* * *
These observations appear to be supported by recent findings in social psychology, where “collective nostalgia” is now being viewed as a catalyst for inter-group conflict. In various contexts, including populism and liberal activism, studies suggest that self-identifying groups can respond to perceived deprivation or threat by evoking a specific, value-laden conception of the past. This appears to bolster solidarity within the group and, ultimately, to motivate action against out-groups. We might think of the past here as becoming a kind of sacred territory to be defended; consequently, it serves as yet another mechanism whereby polarisation drives further polarisation.
This should not, I think, come as a surprise. After all, nation states, religious movements and even international socialism have always found narratives of provenance and tradition essential to extracting sacrifices from their members (sometimes against the grain of their professed beliefs). Likewise, as David Potter noted, separatist movements often succeed or fail on the basis of whether they can establish a more compelling claim to historical identity than that of the larger entity from which they are trying to secede.
In our present context, though, politicised conceptions of the past have emerged from cultures where this source of meaning or identity has largely disappeared from the public sphere. Generally speaking, modern Western societies allow much less of the institutional transmission of stories which has, throughout history, brought an element of continuity to religious, civic, and family life. People associate with one another on the basis of individual preference, and institutions which emerge in this way usually have no traditions to refer to. In popular culture, the lingering sense that the past withholds some profound quality is largely confined to historical epics on the screen, and to consumer fads recycling vintage or antiquated aesthetics. And most people, it should be said, seem perfectly happy with this state of affairs.
Nonetheless, if we want to understand how the past is involved with the politics of identity today, it is precisely this detachment that we should scrutinise more closely. For ironically enough, we tend to forget that our sense of temporality – or indeed lack thereof – is itself historically contingent. As Francis O’Gorman details in his recent book Forgetfulness: Making the Modern Culture of Amnesia, Western modernity is the product of centuries’ worth of philosophical, economic, and cultural paradigms that have fixated on the future, driving us towards “unknown material and ideological prosperities to come.” Indeed, from capitalism to Marxism, from the Christian doctrine of salvation to the liberal doctrine of progress, it is remarkable how many of the Western world’s apparently diverse strands of thought regard the future as the site of universal redemption.
But more to the point, and as the intellectual historian Isaiah Berlin never tired of pointing out, this impulse towards transcending the particulars of time and space has frequently provoked, or at times merged with, its opposite: ethnic, cultural, and national particularism. Berlin made several important observations by way of explaining this. One is that universal and future-oriented ideals tend to be imposed by political and cultural elites, and are thus resented as an attack on common customs. Another is that many people find something superficial and alienating about being cut off from the past; consequently, notions like heritage or historical destiny become especially potent, since they offer both belonging and a form of spiritual superiority.
I will hardly be the first to point out that the most recent apotheosis of progressive and universalist thought came in the era immediately following the Cold War (not for nothing has Francis Fukuyama’s The End of History become its most iconic text). In this moment, energetic voices in Western culture – including capitalists and Marxists, Christians and liberals – were preoccupied with cutting loose from existing norms. And so, from the post-national rhetoric of the EU to postmodern academia and the champions of the service economy and global trade, they all defined the past by outdated modes of thought, work, and indeed social identity.
I should say that I’m too young to remember this epoch before the war on terror and the financial crisis, but the more I’ve tried to learn about it, the more I am amazed by its teleological overreach. This modernising discourse, or so it appears to me, was not so much concerned with constructing a narrative of progress leading up to the present day as with portraying the past as inherently shameful and of no use whatsoever. To give just one example, consider that as late as 2005, Britain’s then Prime Minister Tony Blair did not even bother to clothe his vision of the future in the language of hope, simply stating: “Unless we ‘own’ the future, unless our values are matched by a completely honest understanding of the reality now upon us and the next about to hit us, we will fail.”
Did such ways of thinking store up the divisive attachments to the past we see in politics today? Arguably, yes. The populist impulse towards heartland has doubtless been galvanised by the perception that elites have abandoned provenance as a source of common values. Moreover, as the narrative of progress has become increasingly unconvincing in the twenty-first century, its latent view of history as a site of backwardness and trauma has been seized upon by a new cult of guilt. What were intended as reasons to dissociate from the past have become reasons to identify with it as victims or remorseful oppressors.
* * *
Even if you accept all of this, there remains a daunting question: namely, what is the appropriate relationship between a society and its past? Is there something to be gained from cultivating some sense of a common background, or should we simply refrain from undermining that which already exists? It’s important to state, firstly, that there is no perfect myth which every group in a polity can identify with equally. History is full of conflict and tension, as well as genuine injustice, and to suppress this fact is inevitably to sow the seeds of resentment. Such was the case, for instance, with the Confederate monuments which were the focus of last year’s protests in the United States: many of these were erected as part of a campaign for national unity in the early 20th century, one that denied the legacy of African American slavery.
Moreover, a strong sense of tradition is easily co-opted by rulers to sacralise their own authority and stifle dissent. The commemoration of heroes and the vilification of old enemies are today common motifs of state propaganda in Russia, India, China, Turkey, Poland and elsewhere. Indeed, many of the things we value about modern liberal society – free thought, scientific progress, political equality – have been won largely by intransigence towards the claims of the past. None of them sit comfortably in societies that afford significant moral authority to tradition. And this is to say nothing of the inevitable sacrificing of historical truth when the past is used as an agent of social cohesion.
But notwithstanding the partial resurgence of nationalism, it is not clear there exists in the West today any vehicle for such comprehensive, overarching myths. As with “tribal” politics in general, the politicisation of the past has been divergent rather than unifying because social identity is no longer confined to traditional concepts and categories. A symptom of this, at least in Europe, is that people who bemoan the absence of shared historical identity – whether politicians such as Emmanuel Macron or critics like Douglas Murray – struggle to express what such a thing might actually consist in. Thus they resort to platitudes like “sovereignty, unity and democracy” (Macron), or a rarefied high culture of cathedrals and composers (Murray).
The reality which needs to be acknowledged, in my view, is that the past will never be an inert space reserved for mere curiosity or the measurement of progress. The human desire for group membership is such that it will always be seized upon as a buttress for identity. The problem we have encountered today is that, when society at large loses its sense of the relevance and meaning of the past, the field is left open to the most divisive interpretations; there is, moreover, no common ground from which to moderate between such conflicting narratives. How to broaden out this conversation, and restore some equanimity to it, might in the present circumstances be an insoluble question. It certainly bears thinking about though.
What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water
– T.S. Eliot, The Waste Land
I had a professor who used to say there was an exact moment when modernism arrived in English poetry, and it was the third line of T.S. Eliot’s “The Love Song of J. Alfred Prufrock.” This is when Eliot, in a sudden and disturbing image, describes the evening sky as “Like a patient etherised upon a table.” The simile still carries a punch today, but this is just an echo of what it signified on publication in 1915. Drawing heavily on the jaundiced outlook of French symbolism, Eliot was making poetry confront the emotional register of modern life, with its lurking anxieties and peculiar sense of estrangement.
But the awkward young émigré from St Louis, Missouri, did not end his contribution to modernist poetry there. Seven years later, in 1922, he published what is widely seen as its landmark work. The Waste Land, written while Eliot was recuperating from a nervous breakdown, is a poem in five parts which considers from a mythological perspective the febrile and traumatised civilisation that had barely emerged from the First World War. Using an innovative collage technique, it splices together desolate scenes of ordinary life with references to cultures distant in time and space. It thus portrays a world haunted by the wellsprings of meaning from which it has experienced a terminal rupture.
At the heart of the poem, thematically speaking, is the “waste land” itself – a series of barren terrains whose most prominent features are absence, infertility and confusion:
The river’s tent is broken; the last fingers of leaf
Clutch and sink into the wet bank. The wind
Crosses the brown land, unheard. The nymphs are departed.
Around this void-like centre are layered a multitude of different voices or perspectives, all expressing the same anxieties, but isolated from one another by the poem’s abrupt, fragmented structure. Hence the “waste land” is echoed not just in the mundane suffering of individuals (“He’ll want to know what you done with that money he gave you / To get yourself some teeth”) and at the level of civilisational uncertainty (“Falling towers / Jerusalem Athens Alexandria / Vienna London / Unreal”), but also in the impossibility of piecing it all together. And strewn throughout, we find an eclectic array of characters and quotations from world literature, including Plutarch, Ovid, St Augustine, Dante, Spenser, and the Buddha.
Of course neither abstruse experimentation nor pessimism was unusual in interwar literature. But even so, The Waste Land is remarkable for its overwrought intensity. Eliot himself made light of this when asked for some explanatory notes to help baffled readers, producing an index of intellectual arcana that discusses everything from ancient vegetation ceremonies to the price of raisins. Indeed, it’s difficult to pin down exactly how much conviction Eliot had in his more apocalyptic pronouncements, and ultimately, whether you find the poem a compelling diagnosis of the modern condition or something akin to intellectual masturbation will probably depend on your own demeanour.
However there is one area where The Waste Land has undoubtedly proved prophetic, and that is in the arts themselves. I was reminded of this recently by an exhibition at the Turner Contemporary gallery in Margate, called “Journeys with The Waste Land,” which explores how Eliot’s poem has resonated with visual art across the last century. The show is worth discussing, because it does indeed manage to illustrate some of The Waste Land’s most prescient insights – only, not in the way it actually intends to.
The exhibition is enormous. With almost a hundred artworks, and too many big names to list here, there is bound to be something you will enjoy (for me this was Käthe Kollwitz, Paula Rego, Tacita Dean, and four huge paintings by the Eliot-inspired abstract expressionist Cy Twombly). It is also stimulating to see how non-Western artists, despite very different contexts, have echoed The Waste Land’s vision of modernity. But ultimately, these brief insights are diminished by the exhibition’s sprawling incoherence. Besides being curated around big baggy topics like identity, myth, and technology, it presents such a smorgasbord of concepts and of media – from painting, photography and textiles to installation, printmaking and video – that you eventually feel like you’re winding through an enormous out-of-town supermarket.
There are also unconvincing attempts to assign to The Waste Land the preoccupations of the 21st-century art world. In the first room, we read that the “key themes of the poem” are “gender, myth and journeying” – I must have read it fifty times and never has its concern for gender struck me as anything but incidental. Later, The Waste Land is portrayed as an eco-poem, “reminding us of our interference with and damage to cycles of nature.” It’s fitting, then, that the show occupies the same beach where Eliot wrote the lines “On Margate Sands. / I can connect / Nothing with nothing.” With its aimless approach, connecting nothing with nothing is exactly what this show has done.
But inadvertently, “Journeys with The Waste Land” does illustrate something important about art over the last century, and to greater and lesser degrees, about many areas of contemporary culture. For what we see reflected in the exhibition’s radical diversity of expression, and in the tenuous attempts to glue it all together, is the absence of any stable or enduring framework for artistic value. It is a labyrinth of niches and paradigms which, though perfectly capable of aspiring to value on their own terms, can only be appreciated together if one adopts a detached, scholarly relativism. By failing to make this explicit, the curators have missed a trick; for here is a situation to which The Waste Land really is pertinent.
As we have seen, Eliot’s “waste land” is an allegorical landscape, a disorientation at once cultural, spiritual, and psychological. Yet underpinning this, and in a sense embodied by the poem itself, is also a treatment of the uncertain purpose and meaning of art in modern society. When Eliot asks, in a crucial passage, “What are the roots that clutch, what branches grow / Out of this stony rubbish?” the question is partly self-referential. For as is suggested by the poem’s ephemeral, obscure and disjointed allusions to lapsed literary traditions, art can no longer be part of some holistic cultural and religious whole. This must be true because culture itself has become a shattered prism without any central axis.
This realisation, in turn, casts a revealing light on the poem’s own experiments in form, structure, and idiom. Such innovations, however dazzling, are of only conditional value insofar as they do not issue from the roots and branches of a coherent metaphysical structure, but from its breakdown. Indeed, if The Waste Land is anything to go by, all that remains for the artist at this point is sifting through “a heap of broken images,” and seeking a new way of establishing continuity between them. Presumably, any attempt to invent new purpose will end up in the same position as the poem’s various characters: isolated and plagued by anxieties over their impermanence.
Eliot’s contemporaries could not miss this message, for in the first two decades of the 20th century, the same atmosphere of social and cultural unraveling which inspired The Waste Land had caused something to snap in the realm of artistic production. This was the heyday of movement-based art, with its multitude of “isms:” Cubism, Futurism, Surrealism, Expressionism, Dadaism, Constructivism, Suprematism, and so on. These inter-disciplinary, avant-garde networks advanced not just new formal approaches but, more fundamentally, new and conflicting ideas about art’s purpose and value. Gone was the rigid art world of the late 19th century, in which a single curmudgeonly critic (John Ruskin) attacking a single painter (James Whistler) could produce a scandalous libel case.
Within these new milieus, art was being variously imagined as a vehicle for revolutionary politics, as a specialist branch of aesthetic experimentation and contemplation, as a celebration of technology, and as a channel for the unhindered (and often unhinged) expression of the individual psyche. Such divergence, moreover, was self-perpetuating, since it dramatically accelerated the withdrawal of the arts into a separate sphere of discourse, detached from culture at large. This only heightened the nagging uncertainty about what artistic products are actually for, and whether they have anything of real use or relevance to offer society – questions which in turn guaranteed a further profusion of answers.
Nor was Eliot a remote observer of these developments. Like so many authors of the period, he owed his breakthrough to Ezra Pound, the flamboyant and fanatical cultural broker who personally initiated a string of movements such as Imagism and Vorticism. In fact, Pound was so instrumental in crafting the iconic structure of The Waste Land that he should probably be credited as co-author. In the manuscript we see him stripping away any semblance of convention, with comments like “verse [i.e. traditional poetic form] not interesting enough as verse to warrant so much of it.”
But it is precisely The Waste Land’s unflinchingly avant-garde posture that makes its recognition of the crisis of artistic value so compelling. Eliot was disdainful of nostalgia; remember his earlier “Prufrock” was partly responsible for dragging poetry out of the corpse of Victorian romanticism. Moreover, as he pointed out in his 1919 essay “Tradition and the Individual Talent,” artistic genealogies are always in a kind of flux, as each new addition forces a fresh perspective on what has gone before. Eliot simply acknowledged that the modern perspective was defined by a kind of radical disjuncture, and wanted to explore the implications of that. This meant confronting the insecurity inherent to the modern artist’s task of, as it were, inventing his own values.
Most forms of artistic production have been insulated from the full force of this dilemma by simple practicalities: novels, plays, music, film and architecture have limited materials to make use of and specific markets to target. I would argue these natural boundaries allow us to appreciate the creative freedom that modern culture has brought, without being too concerned that as a consequence, there is something arbitrary about the goals instantiated in any particular work. This is why the best illustration of The Waste Land’s insights can today be found in art galleries and magazines. Having been subject to ever-fewer conventional constraints and popular expectations, this expanding ragbag of purposes and practices has come to embody the profound uncertainties that entered culture a century ago.
One of my favourite moments in cinema comes from Paolo Sorrentino’s film The Great Beauty. The scene is a fashionable get-together on a summer evening, and as the guests gossip over aperitifs, we catch a woman uttering: “Everybody knows Ethiopian jazz is the only kind worth listening to.” The brilliance of this line is not just that it shows the speaker to be a pretentious fool. More than that, it manages to demonstrate the slipperiness of a particular ideal. For what this character is implying, with her reference to Ethiopian jazz, is that she and her tastes are authentic. She appreciates artistic integrity, meaningful expression, and maybe a certain virtuous naivety. And the irony, of course, is that by setting out to be authentic she has merely stumbled into cliché.
I find myself recalling this dilemma when I pass through the many parts of London that seem to be suffering an epidemic of authenticity today. Over the past decade or so, life here and in many other cities has become crammed with nostalgic, sentimental objects and experiences. We’ve seen retro décor in cocktail bars and diners, the return of analogue formats like vinyl and film photography, and a fetishism of the vintage and the hand-made in everything from fashion to crockery. Meanwhile restaurants, bookshops, and social media feeds offer a similarly quaint take on customs from around the globe.
Whether looking back to a 1920s Chicago of leather banquettes and Old Fashioned cocktails, or the wholesome cuisine of a traditional Balkan home, these are so many tokens of an idealised past – attempts to signify that simple integrity which, paradoxically, is the mark of cosmopolitan sophistication. These motifs have long since passed into cliché themselves. Yet the generic bars and coffee shops keep appearing, the LPs are still being reissued, and urban neighbourhoods continue being regenerated to look like snapshots of times and places that never quite existed.
The Discount Suit Company, one of London’s many “Prohibition-style cocktail dens” according to TimeOut
There is something jarring about this marriage of the authentic with the commercial and trendy, just as there is when someone announces their love of Ethiopian jazz to burnish their social credentials. We understand there is more to authenticity than just an aura of uniqueness, a vague sense of being true to something, which a product or experience might successfully capture. Authenticity is also defined by what it isn’t: shallow conformity. Whether we find it in the charmingly traditional or in the unusual and eccentric, authenticity implies a defiance of those aspects of our culture that strike us as superficial or contrived.
Unsurprisingly then, most commentators have concluded that what surrounds us today is not authenticity at all. Rather, in these “ready-made generic spaces,” what we see is no less than “the triumph of hive mind aesthetics to the expense of spirit and of soul.” The authentic has become a mere pretense, a façade behind which a homogenized, soulless modernity has consolidated its hold. And this says something about us of course. To partake in such a fake culture suggests we are either unfortunate dupes or, perhaps, something worse. As one critic rather dramatically puts it: “In cultural markets that are all too disappointingly accessible to the masses, the authenticity fetish disguises and renders socially acceptable a raw hunger for hierarchy and power.”
These responses echo a line of criticism going back to the 1970s, which sees the twin ideals of the authentic self and the authentic product as mere euphemisms for the narcissistic consumer and the passing fad. And who can doubt that the prerogative of realising our unique selves has proved susceptible to less-than-unique commercial formulas? This cosmetic notion of authenticity is also applied easily to cultures as a whole. As such, it is well suited to an age of sentimental relativism, when all are encouraged to be tourists superficially sampling the delights of the world.
And yet, if we are too sceptical, we risk accepting the same anaemic understanding of authenticity that the advertisers and trendsetters foist on us. Is there really no value in authenticity beyond the affirmation it gives us as consumers? Is there no sense in which we can live up to this ideal? Does modern culture offer us nothing apart from illusions? If we try to grasp where our understanding of authenticity comes from, and how it governs our relationship with culture, we might find that for all its fallibility it remains something that is worth aiming for. More importantly perhaps, we’ll see that for better or for worse, it’s not a concept we can be rid of any time soon.
Authenticity vs. mass culture
In the narrowest sense of the word, authenticity applies to things like banknotes and paintings by Van Gogh: it describes whether they are genuine or fake. What do we mean, though, when we say that an outfit, a meal, or a way of life is authentic? Maybe it’s still a question of provenance and veracity – where they originate and whether they are what they claim – but now these properties have taken on a quasi-spiritual character. Our aesthetic intuitions have lured us into much deeper waters, where we grope at values like integrity, humility, and self-expression.
Clearly authenticity in this wider sense cannot be determined by an expert with a magnifying glass. In fact, if we want to grasp how such values can seem to be embodied in our cultural environment – and how this relates to the notion of being an authentic person – we should take a step back. The most basic answers can be found in the context from which the ideal of authenticity emerged, and in which it continues to operate today: Western mass culture.
That phrase – mass culture – might strike you as modern sounding, recalling as it does a world of consumerism, Hollywood and TV ads. But it simply means a culture in which beliefs and habits are shaped by exposure to the same products and media, rather than by person-to-person interaction. In Europe and elsewhere, this was clearly emerging in the 18th and 19th centuries, in the form of mass media (journals and novels), mass-produced goods, and a middle class seeking novelties and entertainments. During the industrial revolution especially, information and commodities began to circulate at a distinctly modern tempo and scale.
Gradually, these changes heralded a new and somewhat paradoxical experience. On the one hand, the content of this culture – whether business periodicals, novels and plays, or department store window displays – inspired people to see themselves as individuals with their own ambitions and desires. Yet those individuals also felt compelled to keep up with the latest news, fashions and opinions. Ensconced in a technologically driven, commercially-minded society, culture became the site of constant change, behind which loomed an inscrutable mass of people. The result was an anxiety which has remained a feature of art and literature ever since: that of the unique subject being pulled along, puppet-like, by social expectations, or caught up in the gears of an anonymous system.
And one product of that anxiety was the ideal of authenticity. Philosophers like Jean-Jacques Rousseau in the 18th century, Søren Kierkegaard in the 19th, and Martin Heidegger in the 20th, developed ideas of what it meant to be an authentic individual. Very broadly speaking, they were interested in the distinction between the person who conforms unthinkingly, and the person who approaches life on his or her own terms. This was never a question of satisfying the desire for uniqueness vis-à-vis the crowd, but an insistence that there were higher concepts and goals in relation to which individuals, and perhaps societies, could realise themselves.
John Ruskin’s illustrations of Gothic architecture, published in The Stones of Venice (1851)
Others, though, approached the problem from the opposite angle. The way to achieve an authentic way of being, they thought, was collectively, through culture. They emphasised the need for shared values that are not merely instrumental – values more meaningful than making money, saving time, or seeking social status. The most famous figures to attempt this in the 19th century were John Ruskin and William Morris, and the way they went about it was very telling indeed. They turned to the past and, drawing a direct link between aesthetics and morality, sought forms of creativity and production that seemed to embody a more harmonious existence among individuals.
For Morris, the answer was a return to small-scale, pre-industrial crafts. For Ruskin, medieval Gothic architecture was the model to be emulated. Although their visions of the ideal society differed greatly, both men praised loving craftsmanship, poetic expressiveness, imperfection and integrity – and viewed them as social as well as artistic virtues. The contrast with the identical commodities coming off factory production lines could hardly be more emphatic. In Ruskin’s words, whereas cheap wholesale goods forced workers “to make cogs and compasses of themselves,” the contours of the Gothic cathedral showed “the life and liberty of every workman who struck the stone.”
The authentic dilemma
In Ruskin and Morris we can see the outlines of our own understanding of authenticity today. Few of us share their moral and social vision (Morris was a utopian socialist, Ruskin a paternalist Christian), but they were among the first to articulate a particular intuition that arises from the experience of mass culture – one that leads us to idealise certain products and pastimes as embodiments of a more free-spirited and nourishing, often bygone world. Our basic sense of what it means to be an authentic individual is rooted in this same ground: a defiance of the superficial and materialistic considerations that the world seems to impose on us.
Thanks to ongoing technological change, mass culture has impressed each new generation with these same tensions. The latest installment, of course, has been the digital revolution. Many of us find something impersonal in cultural products that exist only as binary code and appear only on a screen – a coldness somehow worsened by their convenience. The innocuous branding of digital publishing companies, with cuddly names like Spotify and Kindle, struggles to hide the bloodless efficiency of the algorithm. This is stereotypically contrasted with the soulful pleasures of, say, the authentic music fan, poring over the sleeve notes of his vinyl record on the top deck of the bus.
But this hackneyed image immediately recalls the dilemma we started with, whereby authenticity itself gets caught up in the web of fashion and consumerist desire. So when did ideals become marketing tools? The prevailing narrative emphasises the commodification of leisure in the early 20th century, the expansion of mass media into radio and cinema, and the development of modern advertising techniques. Yet, on a far more basic level, authenticity was vulnerable to this contradiction from the very beginning.
Ideals are less clear-cut in practice than they are on the page. For Ruskin and Morris, the authenticity of certain products and aesthetics stemmed from their association with a whole other system of values and beliefs. To appreciate them was effectively to discard the imperatives of mass culture and commit yourself to a different way of being. But no such clear separation exists in reality. We are quite capable of recognising and appreciating authenticity when it is served to us by mass culture itself – and we can do so without even questioning our less authentic motives and desires.
Hi-tech Victorian entertainment: the Panorama. (Source: Wikimedia commons)
Thus, by the time Ruskin published “On the Nature of Gothic” in 1851, Britain had long been in the grip of a mass phenomenon known as the Gothic Revival – a fascination with Europe’s Christian heritage manifest in everything from painting and poetry to fashion and architecture. Its most famous monument would be the building from which the new industrial society was managed and directed: the Houses of Parliament in Westminster. Likewise, nodding along to Ruskin’s noble sentiments did not prevent bourgeois readers from enjoying modern conveniences and entertainments, and merely justified their disdain for mass-produced goods as cheap and common.
From then until now, to be “cultured” has to some degree implied a mingling of nostalgia and novelty, efficiency and sentimentality. Today’s middle classes might resent their cultural pursuits becoming generic trends, but they also know that their own behaviour mirrors this duplicity. The artisanal plate of food is shared on Facebook, a yoga session begins a day of materialistic ambition, and the Macbook-toting creative expresses in their fashion an air of old-fashioned simplicity. It’s little wonder boutique coffee shops the world over look depressingly similar, seeing as most of their customers happily share the same environment on their screens.
Given this tendency to pursue conflicting values simultaneously, there is really nothing to stop authentic products and ideas becoming fashionable in their own right. And once they do so, of course, they have started their inevitable descent into cliché. But crucially, this does not mean that authenticity is indistinguishable from conformity and status seeking itself. In fact, it can remain meaningful even alongside these tendencies.
Performing the authentic
A few years ago, I came across a new, elaborately designed series of Penguin books. With their ornate frontispieces and tactile covers, these “Clothbound Classics” seemed to be recalling the kind of volume that John Ruskin himself might have read. On closer inspection, though, these objects really reflected the desires of the present. The antique design elements were balanced with modern ones, so as to produce a carefully crafted simulacrum: a copy for which no original has ever existed. Deftly straddling the nostalgia market and the world of contemporary visuals, these were books for people who now did most of their reading from screens.
Volumes from Penguin’s “Clothbound Classics” series
As we’ve seen, to be authentic is to aspire to a value more profound than mere expediency – one that we often situate in the obsolete forms of the past. This same sentimental quality, however, also makes for a very good commodity. We often find that things are only old or useless insofar as this allows them to be used as novelties or fashion statements. And such appropriation is only too easy when the aura of authenticity can be summoned, almost magically, by the manipulation of symbols: the right typeface on a menu, the right degree of saturation in a photograph, the right pattern on a book cover.
This is where our self-deceiving relationship with culture comes into closer focus. How is it we can be fooled by what are clearly just token gestures towards authenticity, couched in ulterior motives like making money or grabbing our attention? The reason is that, in our everyday interactions with culture, we are not going around as judges but as imaginative social beings who appreciate such gestures. We recognise that they have a value simply as reminders of ideals that we hold in common, or that we identify with personally. Indeed, buying into hints and suggestions is how ideals remain alive amidst the disappointments and limitations of lived reality.
In his essay “A is for Authentic,” Design Museum curator Deyan Sudjic expands this idea by portraying culture as a series of choreographed rituals and routines, which demonstrate not so much authenticity as our aspirations towards it. From the homes we inhabit to the places we shop and the clothes we wear, Sudjic suggests, “we live much of our lives on a sequence of stage sets, modeled on dreamlike evocations of the world that we would like to live in rather than the world as it is.”
This role-play takes us away from the realities of profit and loss, necessity and compromise, and into a realm where those other notions like humility and integrity have the place they deserve. For Sudjic, the authentic charm of a period-themed restaurant, for instance, allows us to “toy with the idea that the rituals of everyday life have more significance than, in truth, we suspect that they really do.” We know we are not going to find anything like pure, undiluted authenticity, free from all pretense. But we can settle for something that acknowledges the value of authenticity in a compelling way – something “authentic in its artistic sincerity.” That is enough for us to play along.
Steven Poole makes a similar point about the ideal of being an authentic person, responding to the uncompromising stance that Jean-Paul Sartre takes on this issue. In Sartre’s Being and Nothingness, there is a humorous vignette in which he caricatures the mannerisms of a waiter in a café. In Sartre’s eyes, this man’s contrived behaviour shows that he is performing a role rather than being his authentic self. But Poole suggests that, “far from being deluded that he really is a waiter,” maybe Sartre’s dupe is aware that he is acting, and is just enjoying it.
Social life is circumscribed by performance and gesture to the extent that, were we to dig down in an effort to find some authentic bedrock, we would simply be taking up another role. Our surroundings and possessions are part of that drama too – products like books and Gothic cathedrals are ultimately just props we use to signal towards a hypothetical ideal. So yes, authenticity is a fiction. But insofar as it allows us to express our appreciation of values we regard as important, it can be a useful one.
Between thought and expression
Regardless of the benefits, though, our willingness to relax judgment for the sake of gesture has obvious shortcomings. The recent craze for the authentic, with its countless generic trends, has demonstrated them clearly. Carried away by the rituals of consumerism, we can end up embracing little more than a pastiche of authenticity, apparently losing sight of the bigger picture of sterile conformity in which those interactions are taking place. Again, the suspicion arises that authenticity itself is a sham. For how can it be an effective moral standard if, when it comes to actually consuming culture, we simply accept whatever is served up to us?
I don’t think this picture is entirely right, though. Like most of our ideals, authenticity has no clear and permanent outline, but exists somewhere between critical thought and social conventions. Yet these two worlds are not cut off from each other. We do still possess some awareness when we are immersed in everyday life, and the distinctions we make from a more detached perspective can, gradually and unevenly, sharpen that awareness. Indeed, even the most aggressive criticism of authenticity today is, at least implicitly, grounded in this possibility.
One writer, for instance, describes the vernacular of “reclaimed wood, Edison bulbs, and refurbished industrial lighting” which has become so ubiquitous in modern cities, calling it “a hipster reduction obsessed with a superficial sense of history and the remnants of industrial machinery that once occupied the neighbourhoods they take over.” The pretense of authenticity has allowed the emergence of zombie-like cultural forms: deracinated, fake, and sinister in their social implications. “From Bangkok to Beijing, Seoul to San Francisco,” he writes, this “tired style” is catering to “a wealthy, mobile elite, who want to feel like they’re visiting somewhere ‘authentic’ while they travel.”
This is an effective line of attack because it clarifies a vague unease that many will already feel in these surroundings. But crucially, it can only do this by appealing to a higher standard of authenticity. Like most recent critiques of this kind, it combines aesthetic revulsion at a soulless, monotonous landscape, with moral condemnation of the social forces responsible, and thus reads exactly like an updated version of John Ruskin’s arguments. In other words, the same intuitions that lead consumers, however erroneously, to find certain gestures and symbols appealing, are being leveraged here to clarify those intuitions.
This is the fundamental thing to understand about authenticity: it is so deeply ingrained in our ways of thinking about culture, and in our worldview generally, that it is both highly corruptible and impossible to dispense with. Since our basic desire for authenticity doesn’t come from advertisers or philosophers, but from the experience of mass culture itself, we can manipulate and refine that desire but we can’t suppress it. And almost regardless of what we do, it will continue to find expression in any number of ways.
A portrait posted by socialite Kendall Jenner on Instagram in 2015, typical of the new mannerist, sentimental style
This has been vividly demonstrated, for instance, in the relatively new domain of social media. Here the tensions of mass culture have, in a sense, risen afresh, with person-to-person interaction taking place within the same apparatus that circulates mass media and social trends. Thus a paradigm of authentic expression has emerged which in some places verges on outright romanticism: consider the phenomenon of baring your soul to strangers on Facebook, or the mannerist yet sentimental style of portrait that is so popular on Instagram. Yet this paradigm still functions precisely along the lines we identified earlier. Everybody knows it is ultimately a performance, but everyone is willing to go along with it.
Authenticity has also become “the stardust of this political age.” The sprouting of a whole crop of unorthodox, anti-establishment politicians on both sides of the Atlantic is taken to mean that people crave conviction and a human touch. Yet even here it seems we are dealing not so much with authentic personas as with authentic products. For their followers, such leaders are an ideal standard against which culture can be judged, as well as symbolic objects that embody an ideology – much as handcrafted goods were for William Morris’ socialism, or Gothic architecture was for Ruskin’s Christianity.
Moreover, where these figures have broadened their appeal beyond their immediate factions, it is again because mass culture has allowed them to circulate as recognisable and indeed fashionable symbols of authenticity. One of the most intriguing objects I’ve come across recently is a “bootlegged” Nike t-shirt, made by the anonymous group Bristol Street Wear in support of the politician Jeremy Corbyn. Deliberately or not, their use of one of the most iconic commercial designs in history is an interesting comment on that trade-off between popularity and integrity which is such a feature of authenticity in general.
The bootleg t-shirt produced by Bristol Street Wear during the 2017 General Election campaign. Photograph: Victoria & Albert Museum, London
These are just cursory observations; my point is that the ideal of authenticity is pervasive, and that for this very reason, any expression of it risks being caught up in the same system of superficial motives and ephemeral trends that it seeks to oppose. This does not make authenticity an empty concept. But it does mean that, ultimately, it should be seen as a form of aspiration, rather than a goal which can be fully realised.
In the afternoon our house settles into a decadent air. My sisters’ children are asleep, there is the lingering smell of coffee, the corridors are in shade with leaves moving silently outside the windows. In my room light still pours in from the electric blue sky of the Eastern Cape. There is a view of the town, St Francis Bay, clustered picturesquely in the orthodox Dutch style, thatched roofs and gleaming whitewashed walls hugging the turquoise of the Indian Ocean.
This town, as I hear people say, is not really like South Africa. Most of its occupants are down over Christmas from the northern Highveld cities. During these three weeks the town’s population quadruples, the shopping centres, bars and beaches filling with more or less wealthy holidaymakers. They are white South Africans – English and Afrikaans-speaking – a few African millionaires, and recently, a number of integrated middle-class Africans too. The younger generations are Americanised, dressing like it was Orange County. There are fun runs and triathlons on an almost daily basis, and dance music drifts across the town every night.
But each year it requires a stronger act of imagination, or repression, to ignore the realities of the continent to which this place is attached. Already, the first world ends at the roadside, where families of pigs and goats tear open trash bags containing health foods and House and Leisure. Holidaymakers stock up on mineral water at the vast Spar supermarket, no longer trusting their taps. At night, the darkness of power cuts is met with the reliable whirring of generators.
And from where I sit at my desk I can make out, along the worn-out roads, impoverished African men loping in twos or threes towards the margins of town after their day of construction work, or of simply waiting at the street corner to be picked up for odd jobs. Most of them are headed to Sea Vista, or KwaNomzamo, third-world townships like those that gather around all of South Africa’s towns and cities, like the faded edges of a photograph.
When I visited South Africa as a child, this ragged frontier seemed normal, even romantic. Then, as I grew used to gazing at the world from London, my African insights became a source of tension. The situation felt rotten, unaccountable. But if responsibility comes from proximity, how can the judgment that demands it come from somewhere far removed? And who is being judged, anyway? In a place where the only truly shared experience is instability, judicial words like ‘inequality’ must become injudicious ones like ‘headfuck’. That is what South Africa is to an outsider: an uncanny dream where you feel implicated yet detached, unable to ignore or to understand.
–––––––––
Ethics is an inherently privileged pursuit, requiring objectivity, critical distance from a predicament. If, as Thomas Nagel says, objective judgment is ‘a set of concentric spheres, progressively revealed as we detach from the contingencies of the self,’ then ethics assume the right to reside in some detached outer sphere, a non-person looking down at the human nuclei trapped in their lesser orbits.
In his memoir Lost and Found in Johannesburg, Mark Gevisser uses another aerial view, a 1970s street guide, to recollect the divisions of apartheid South Africa. Areas designated for different races are placed on separate pages, or the offending reality of a black settlement is simply left blank. These omissions represented the outer limits of ethical awareness, as sanctioned by the state.
Gevisser, raised as a liberal, English-speaking South African, had at least some of the detachment implied by his map. Apartheid was the creation of the Afrikaner people, whose insular philosophy became bureaucratic reality in 1948, by virtue of their being just over half of South Africa’s white voters. My parents grew up within its inner circle, a world with no television and no loose talk at parties, tightly embraced by the National Party and by God himself through his Dutch Reformed church.
It was a prison of memory – the Afrikaners had never escaped their roots as the hopeless dregs of Western Europe that had coalesced on the tip of Africa in the 17th century. Later, the British colonists would call them ‘rock spiders’. They always respected a leader who snubbed the outside world, like Paul Kruger, who in the late 19th century called someone a liar for claiming to have sailed around the earth, which of course was flat. Their formation of choice was the laager, a circular fort of settlers’ wagons, with guns trained at the outside.
By the time my father bought the house in St Francis in 1987, the world’s opinions had long been flooding in. Apartheid’s collapse was under way, brought about, ironically, by dependence on African labour and international trade. My family lived in Pretoria, where they kept a revolver in the glove compartment. We left seven years later, when I was three, part of the first wave of a great diaspora of white South Africans to the English-speaking world.
–––––––––
From my half-detached perspective, the rhythms of South African history appear deep and unbending. The crude patchwork of apartheid dissolved only to reform as a new set of boundaries, distinct spheres of experience sliding past each other. Even as places like St Francis boomed, the deprived rural population suddenly found itself part of a global economy, and flooded into peripheral townships and squatter camps. During the year, when there is no work in St Francis, these are the ghosts who break into empty mansions to steal taps, kettles, and whatever shred of copper they can find.
This is how Patricia and her family moved to KwaNomzamo, near the poor town of Humansdorp, about 20 minutes’ drive from St Francis. Patricia is our cleaner, a young woman with bright eyes. She is Coloured, an ethnicity unique to South Africa, which draws its genes from African and Malay slaves, the indigenous San and Khoikhoi people of the Cape, and the Afrikaners, whose language they share. This is the deferential language of the past – ‘ja Mevrou,’ Patricia says in her lilting accent.
I have two images of Patricia. The first is a mental one of her home in KwaNomzamo, one of the tin boxes they call ‘disaster housing’, planted neatly in rows beside the sprawl of the apartheid-era ‘location’. This image is dominated by Patricia’s disabled mother, who spends her days here, mute and motionless like a character from an absurdist drama. Beside this is the actual photograph Patricia asked us to take at her boyfriend’s house, where they assumed a Madonna-like pose with their three-month-old child.
These memories drive apart the different perspectives in me like nothing else. The relationships between middle-class South Africans and their domestic staff today are a genuine strand of solidarity in an otherwise confusing picture. But from my European viewpoint, always aware of history and privilege, even empathy is just another measure of injustice, of difference. This mindset is calibrated from a distance: someone who brings it to actual relationships is not an attractive prospect, nor an ethical one. Self-aware is never far from self-absorbed.
–––––––––
The danger usually emphasised by ethics is becoming trapped in a subjective viewpoint, seeing the world from too narrow an angle. But another problem is the philosophical shrinking act sometimes known as false objectivity. If you already have a detached perspective, the most difficult part of forming a judgment is understanding the personal motives of those involved. ‘Reasons for action,’ as Nagel says, ‘have to be reasons for individuals’. The paradox is that a truly objective judgment has to be acceptable from any viewpoint, otherwise it is just another subjective judgment.
In Britain, hardship seems to exist for our own judicial satisfaction. Ethics are a spectator sport mediated by screens, a televised catharsis implying moral certainty. War, natural disasters, the boats crossing the Mediterranean – there’s not much we can offer these images apart from such Manichean responses as blind sympathy or outrage, and these we offer largely to our consciences. Looking out becomes another way of looking in.
The journalist R.W. Johnson noted that after liberation, foreign papers lost interest in commissioning stories about South Africa. Just as well, since it soon became a morass of competing anxieties, the idealism of the ‘rainbow nation’ corroded by grotesque feats of violence and corruption: I am not unusual in having relatives who have been murdered. Against this background, the pigs and potholes among the mansions of St Francis are like blood coughed into a silk handkerchief, signs of a hidden atrophy already far progressed.
Alison and Tim are the sort of young South Africans – and there remain many – whose optimism has always been the antidote to all this. They are Johannesburgers proud of their cosmopolitan city. One evening last Christmas, I sat with Tim in a St Francis bar that served craft beer and staged an indie band in the corner. This is not really like South Africa, he said, pointing to the entirely white crowd. Then he told me he, too, is thinking of leaving.
South Africa’s currency, the Rand, crashed in December after President Jacob Zuma fired his Finance Minister on a whim. You could not go anywhere without hearing about this. Everyone is looking for something to export, Tim said, a way to earn foreign currency before it becomes impossible to leave. He has a family to think of – and yes, he admitted several drinks later, it bothers him that you could wake any night with a gun to your head.
‘More often in the first world / one wakes from not to the nightmare’, writes the American poet Kathy Fagan. There is such a thing as a shared dream, but even nightmares that grow from the same source tend to grow apart. They are personal, invisible from outside.
This article was first published by The Junket on 29 Feb 2016
This text was written for a screening of Camille Summers-Valli’s film, Big Mountain, Diné Bikéyah, on 4 Jun 2015
If the basis of documentary is to give a sense of reality, then moving film must be the best medium. Yes, we know the camera can be very good at lying, and that the questions of when and what to film, and how it is edited, lead to highly subjective answers. But nonetheless, filming is a mechanised recording process, and watching film remains the best representation of what it’s actually like in the world captured by the camera.
The problem for documentary film is that from birth it has shared the screen with the most overbearing and manipulative of partners: fictional narrative. As we know, people love stories. So much so that, for most, the immediate connotations of the word ‘film’ have nothing to do with reality. They are of beginnings and endings, twists in the plotline, suspense, characterisation, dénouement, tragedy and redemption.
We are, on so many levels, wired for stories. Stories arrange events to pluck meaning from utter chaos. We compose them every day. Memory is a storytelling device that crafts a cascade of incidental moments into the personal narrative that is identity. Collectively, too: religion, ideology, morality and ambition are all narrative frameworks within which we live, stories that give meaning to the enigma of existence.
In film, narrative reduces our need for sense and meaning to a mechanical process. Narrative takes us for a ride. It sets about constructing a puzzle: there is a progression of scenes, development of characters, the convergence of unrelated events and, somewhere in the basement of our minds, a series of clicks as things fall into place.
Given how addictive this can be, it’s unsurprising that documentary film has often taken refuge in the notion of the ‘true story’ – after all, we like those most of all. But a true story is not the same as truth or reality – nowhere near it. It is a composition and, despite our proficiency with stories, we can feel that it is one, because it excludes so much of the background noise that we know reality contains. Furthermore, narrative conventions tend to universalise their subject matter. When the mind finds a narrative path that it recognises, the particulars become almost incidental. The story is a dream which makes its contents vivid and memorable, only to strip them of their reality.
Even if the audience can be kept out of this stream that it so longs to drift down, there is the issue of the medium itself – film, and the screen – which over time has become contaminated by the aura of fiction. It exists inside what WH Auden called ‘the magic circle’ – a parallel world that is deeply engrossing but, to our relief, makes no demands of us. When the credits roll, we collect our coats and go home. When we watch something classed as documentary, we are aware that it is supposedly ‘real life,’ and yet mentally it tends to end up right alongside fiction in a much bigger box marked ‘media.’
The challenges for documentary film are many, especially if – as is the case with Camille Summers-Valli’s film, Big Mountain, Diné Bikéyah – the reality portrayed is so different to our own. The stories to which we cleave emerge over centuries from the depths of our culture. In part, the suffering inflicted on the people of this remote corner of Northern Arizona stems from a failure to understand the fundamentally different narratives which give meaning to their existence. For the various authorities who have hounded them for generations, they have been an anomaly, a digression from the greater American story, and a minor obstacle in various tales of individual advancement.
To package these circumstances for easy consumption in our own narrative terms would be to repeat this disastrous misunderstanding and, what is worse, reduce it to a state of pseudo-fictional entertainment. The answer of Big Mountain, Diné Bikéyah is to kick us out of our passive state by representing the lived human experience in all its unresolved chaos. This offers us, momentarily, an escape from fiction, and the rest is up to us.