Notes on “Why Liberalism Failed”

Patrick Deneen’s Why Liberalism Failed was one of the most widely discussed political books last year. In a crowded field of authors addressing the future of liberalism, Deneen stood out as a lightning rod, thanks to his withering, full-frontal attack on the core principles and assumptions of liberal philosophy. And yet, when I recently went back and read the many reviews of Why Liberalism Failed, I came out feeling slightly dissatisfied. Critics of the book seemed all too able to shrug off its most interesting claims, and to argue instead on grounds more comfortable to them.

Part of the problem, perhaps, is that Deneen’s book is not all that well written. His argument is more often a barrage of polemical statements than a carefully constructed analysis. Still, the objective is clear enough. He is taking aim at the liberal doctrine of individual freedom, which prioritises the individual’s right to do, be, and choose as he or she wishes. This “voluntarist” notion of freedom, Deneen argues, has shown itself to be not just destructive, but in certain respects illusory. On that basis he claims we would be better off embracing the constraints of small-scale community life.

Most provocatively, Deneen claims that liberal societies, while claiming merely to create conditions in which individuals can exercise their freedom, in fact mould people to see themselves and to act in a particular way. Liberalism, he argues, grew out of a particular idea of human nature, which posited, above all, that people want to pursue their own ends. It imagined our natural and ideal condition as that of freely choosing individual actors without connection to any particular time, place, or social context. For Deneen, this is a dangerous distortion – human flourishing also requires things at odds with personal freedom, such as self-restraint, committed relationships, and membership of a stable and continuous community. But once our political, economic, and cultural institutions are dedicated to individual choice as the highest good, we ourselves are encouraged to value that freedom above all else. As Deneen writes:

Liberalism began with the explicit assertion that it merely describes our political, social, and private decision making. Yet… what it presented as a description of human voluntarism in fact had to displace a very different form of human self-understanding and experience. In effect, liberal theory sought to educate people to think differently about themselves and their relationships.

Liberal society, in other words, shapes us to behave more like the human beings imagined by its political and economic theories.

It’s worth reflecting for a moment on what is being argued here. Deneen is saying our awareness of ourselves as freely choosing agents is, in fact, a reflection of how we have been shaped by the society we inhabit. It is every bit as much of a social construct as, say, a view of the self that is defined by religious duties, or by membership of a particular community. Moreover, valuing choice is itself a kind of constraint: it makes us less likely to adopt decisions and patterns of life which might limit our ability to choose in the future – even if we are less happy as a result. Liberalism makes us unfree, in a sense, to do anything apart from maximise our freedom.

*   *   *

 

Reviewers of Why Liberalism Failed did offer some strong arguments in defence of liberalism, and against Deneen’s communitarian alternative. These tended to focus on material wealth, and on the various forms of suffering and oppression inherent to non-liberal ways of life. But they barely engaged with his claims that our reverence for individual choice amounts to a socially determined and self-defeating idea of freedom. Rather, they tended to take the freely choosing individual as a given, which often meant they failed to distinguish between the kind of freedom Deneen is criticizing – that which seeks to actively maximise choice – and simply being free from coercion.

Thus, writing in the New York Times, Jennifer Szalai didn’t see what Deneen was griping about. She pointed out that

nobody is truly stopping Deneen from doing what he prescribes: finding a community of like-minded folk, taking to the land, growing his own food, pulling his children out of public school. His problem is that he apparently wants everyone to do these things

Meanwhile, at National Review, David French argued that liberalism in the United States actually incentivises individuals to “embrace the most basic virtues of self-governance – complete your education, get married, and wait until after marriage to have children.” And how so? With the promise of greater “opportunities and autonomy.” Similarly Deirdre McCloskey, in a nonetheless fascinating rebuttal of Why Liberalism Failed, jumped between condemnation of social hierarchy and celebration of the “spontaneous order” of the liberal market, without acknowledging that she seemed to be describing two systems which shape individuals to behave in certain ways.

So why does this matter? Because it matters, ultimately, what kind of creatures we are – which desires we can think of as authentic and intrinsic to our flourishing, and which ones stem largely from our environment. The desire, for instance, to be able to choose new leaders, new clothes, new identities, new sexual partners – do these reflect the unfolding of some innate longing for self-expression, or could we in another setting do just as well without them?

There is no hard and fast distinction here, of course; the desire for a sports car is no less real and, at bottom, no less natural than the desire for friendship. Yet there is a moral distinction between the two, and a system which places a high value on the freedom to fulfil one’s desires has to remain conscious of such distinctions. The reason is, first, that many kinds of freedom conflict with other personal and social goods; and second, that there may come a time when a different system offers more by way of prosperity and security. In both cases, it is important to be able to say what amounts to an essential form of freedom, and what does not.

*   *   *

 

Another common theme among Deneen’s critics was to question his motivation. His Catholicism, in particular, was widely implicated, with many reviewers insinuating that his promotion of close-knit community was a cover for a reactionary social and moral order. Here’s Hugo Drochon writing in The Guardian:

it’s clear that what he wants… is a return to “updated Benedictine forms” of Catholic monastic communities. Like many who share his worldview, Deneen believes that if people returned to such communities they would get back on a moral path that includes the rejection of gay marriage and premarital sex, two of Deneen’s pet peeves.

Similarly, Deirdre McCloskey:

We’re to go back to preliberal societies… with the church triumphant, closed corporate communities of lovely peasants and lords, hierarchies laid out in all directions, gays back in the closet, women in the kitchen, and so forth.

Such insinuations strike me as unjustified – these views do not actually appear in Why Liberalism Failed – but they are also understandable. For Deneen does not clarify the grounds of his argument. His critique of liberalism is made in the language of political philosophy, and seems to be consequentialist: liberalism has failed, because it has destroyed the conditions necessary for human flourishing. And yet whenever Deneen is more specific about just what has been lost, one hears the incipient voice of religious conservatism. In sexual matters, Deneen looks back to “courtship norms” and “mannered interaction between the sexes”; in education, to “comportment” and “the revealed word of God.”

I don’t doubt that Deneen’s religious beliefs colour his views, but nor do I think his entire case springs from some dastardly deontological commitment to Catholic moral teaching. Rather, I would argue that these outbursts point to a much more interesting tension in his argument.

My sense is that the underpinnings of Why Liberalism Failed come from virtue ethics – a philosophy whose stock has fallen somewhat since the Enlightenment, but which reigned supreme in antiquity and medieval Christendom. In Deneen’s case, what is important to grasp is Aristotle’s linking of three concepts: virtue, happiness, and the polis or community. The highest end of human life, says Aristotle, is happiness (or flourishing). And the only way to attain that happiness is through consistent action in accordance with virtue – in particular, through moderation and honest dealing. But note, virtues are not rules governing action; they are principles that one must possess at the level of character and, especially, of motivation. Also, it is not that virtue produces happiness as a consequence; the two are coterminous – to be virtuous is to be happy. Finally, the pursuit of virtue/happiness can only be successful in a community whose laws and customs are directed towards this same goal. For according to Aristotle:

to obtain a right training for goodness from an early age is a hard thing, unless one has been brought up under right laws. For a temperate and hardy way of life is not a pleasant thing to most people, especially when they are young.

The problem comes, though, when one has to provide a more detailed account of what the correct virtues are. For Aristotle, and for later Christian thinkers, this was provided by a natural teleology – a belief that human beings, as part of a divinely ordained natural order, have a purpose which is intrinsic to them. But this crutch is not really available in a modern philosophical discussion. And so more recent virtue ethicists, notably Alasdair MacIntyre, have shifted the emphasis away from a particular set of virtues with a particular purpose, and towards virtue and purpose as such. What matters for human flourishing, MacIntyre argued, is that individuals be part of a community or tradition which offers a deeply felt sense of what it is to lead a good life. Living under a shared purpose, as manifest in the social roles and duties of the polis, is ultimately more important than the purpose itself.

This seems to me roughly the vision of human flourishing sketched out in Why Liberalism Failed. Yet I’m not sure Deneen has fully reconciled himself to the relativism that is entailed by abandoning the moral framework of a natural teleology. This is a very real problem – for why should we not accept, say, the Manson family as an example of virtuous community? – but one which is difficult to resolve without overtly metaphysical concepts. And in fact, Deneen’s handling of human nature does strain in that direction, as when he looks forward to

the only real form of diversity, a variety of cultures that is multiple yet grounded in human truths that are transcultural and hence capable of being celebrated by many peoples.

So I would say that Deneen’s talk of “courtship norms” and “comportment” is similar to his suggestion that the good life might involve “cooking, planting, preserving, and composting.” Such specifics are needed to refine what is otherwise a dangerously vague picture of the good life.

 

 

 

 

Addressing the crisis of work

This article was first published by Arc Digital on December 10th 2018.

There are few ideals as central to the life of liberal democracies as that of stable and rewarding work. Political parties of every stripe make promises and boasts about job creation; even Donald Trump is conventional enough to brag about falling rates of unemployment. Preparing individuals for the job market is seen as the main purpose of education, and a major responsibility of parents too.

But all of this is starting to ring hollow. Today it is an open secret that, whatever the headline employment figures say, the future of work is beset by uncertainty.

Since the 1980s, the share of national income going to wages has declined in almost every advanced economy (the social democratic Nordic countries are the exception). The decade since the financial crisis of 2007–8 has seen a stubborn rise in youth unemployment, and an increase in “alternative arrangements” characteristic of the gig economy: short-term contracts, freelancing and part-time work. Graduates struggle to find jobs to match their expectations. In many places the salaried middle class is shrinking, leaving a workforce increasingly polarized between low- and high-earners.

Nor do we particularly enjoy our work. A 2013 Gallup survey found that in Western countries only a fifth of people say they are “engaged” at work, with the rest “not engaged” or “actively disengaged.”

The net result is an uptick of resentment, apathy, and despair. Various studies suggest that younger generations are less likely to identify with their career, or profess loyalty to their employer. In the United States, a worrying number of young men have dropped out of work altogether, with many apparently devoting their time to video games or taking prescription medication. And that’s without mentioning the ongoing automation revolution, which will exacerbate these trends. Robotics and artificial intelligence will likely wipe out whole echelons of the current employment structure.

So what to do? Given the complexity of these problems — social, cultural, and economic — we should not expect any single, perfect solution. Yet it would be reckless to hope that, as the economy changes, it will reinvent a model of employment resembling what we have known in the past.

We should be thinking in broad terms about two related questions: in the short term, how could we reduce the strains of precarious or unfulfilling employment? And in the long term, what will we do if work grows increasingly scarce?

One answer involves a limited intervention by the state, aimed at revitalizing the habits of a free-market society — encouraging individuals to be independent, mobile, and entrepreneurial. American entrepreneur Andrew Yang proposes a Universal Basic Income (UBI) paid to all citizens, a policy he dubs “the freedom dividend.” Alternatively, Harvard economist Lawrence Katz suggests improving labor rights for part-time and contracted workers, while encouraging a middle-class “artisan economy” of creative entrepreneurs, whose greatest asset is their “personal flair.”

There are valid intuitions here about what many of us desire from work — namely, autonomy, and useful productivity. We want some control over how our labor is employed, and ideally to derive some personal fulfillment from its results. These values are captured in what political scientist Ian Shapiro has termed “the workmanship ideal”: the tendency, remarkably persistent in Western thought since the Enlightenment, to recognize “the sense of subjective satisfaction that attaches to the idea of making something that one can subsequently call one’s own.”

But if technology becomes as disruptive as many foresee, then independence may come at a steep price in terms of unpredictability and stress. For your labor — or, for that matter, your artisan products — to be worth anything in a constantly evolving market, you will need to dedicate huge amounts of time and energy to retraining. According to some upbeat advice from the World Economic Forum, individuals should now be aiming to “skill, reskill, and reskill again,” perhaps as often as every 2–3 years.

Is it time, then, for more radical solutions? There is a strand of thinking on the left which sees the demise of stable employment very differently. It argues that by harnessing technological efficiency in an egalitarian way, we could all work much less and still have the means to lead more fulfilling lives.

This “post-work” vision, as it is now called, has been gaining traction in the United Kingdom especially. Its advocates — a motley group of Marx-inspired journalists and academics — found an unexpected political platform in Jeremy Corbyn’s Labour Party, which has recently proposed cutting the working week to four days. It has also established a presence in mainstream progressive publications such as The Guardian and New Statesman.

To be sure, there is no coherent, long-term program here. Rather, there is a great deal of blind faith in the prospects of automation, common ownership and cultural revolution. Many in the post-work camp see liberation from employment, usually accompanied by UBI, as the first step in an ill-defined plan to transcend capitalism. Typical in that respect are Alex Williams and Nick Srnicek, authors of Inventing the Future: Postcapitalism and a World Without Work. Their blueprint includes open borders and a pervasive propaganda network, and flirts with the possibility of “synthetic forms of biological reproduction” to enable “a newfound equality between the sexes.”

We don’t need to buy into any of this, though, to appreciate the appeal of enabling people to work less. Various thinkers, including Bertrand Russell and John Maynard Keynes, took this to be an obvious goal of technological development. And since employment does not provide many of us with the promised goods of autonomy, fulfillment, productive satisfaction and so on, why shouldn’t we make the time to pursue them elsewhere?

Now, one could say that even this proposition is based on an unrealistic view of human nature. Arguably the real value of work is not enjoyment or even wealth, but purpose: people need routine, structure, a reason to get up in the morning, otherwise they would be adrift in a sea of aimlessness. Or at least some of them would – for another thing employment currently provides is a relatively civilized way for ambitious individuals to compete for resources and social status. Nothing in human history suggests that, even in conditions of superabundance, that competition would stop.

According to this pessimistic view, freedom and fulfillment are secondary concerns. The real question is, in the absence of employment, what belief systems, political mechanisms, and social institutions would make work for all of those idle thumbs?

But the way things are headed, it looks like we are going to need to face that question anyway, in which case our work-centric culture is a profound obstacle to generating good solutions. With so much energy committed to long hours and career success (the former being increasingly necessary for the latter), there is no space for other sources of purpose, recognition, or indeed fulfilment to emerge in an organic way.

The same goes for the economic side of the problem. I am no supporter of UBI – a policy whose potential benefits are dwarfed by the implications of a society where every individual is a client of the state. But if we want to avoid that future, it would be better to explore other arrangements now than to cling to our current habits until we end up there by default. Thus, if for no other reason than to create room for such experiments, the idea of working less is worth rescuing from the margins of the debate.

More to the point, there needs to be a proper debate. Given how deeply rooted our current ideas about employment are, politicians will continue appealing to them. We shouldn’t accept such sedatives. Addressing this problem will likely be a messy and imperfect process however we go about it, and the sooner we acknowledge that the better.

Notes on “The Bowl of Milk”

I normally can’t stand hearing about the working habits of famous artists. Whether by sheer talent or some fiendish work ethic, they tend to be hyper-productive in a way that I could never be. Thankfully, there are counter-examples – like the painter Pierre Bonnard. As you can read in the first room of the Bonnard exhibition now at Tate Modern, he often took years to finish a painting, putting it to one side before coming back to it and reworking it multiple times. He was known to continue tinkering with his paintings when he came across them hanging on the wall of somebody’s house. At the very end of his life, no longer able to paint, he instructed his nephew to change a section of his final work Almond Tree in Blossom (1947).

Maybe this is wishful thinking, but I find that things which have been agonised over acquire a special kind of depth. In many ways Bonnard is not my kind of painter, but his work rewards close attention. There is hardly an inch of his canvases where you do not find different tones layered over each other – layers not only of paint, but of time and effort – creating a luminous sea of brushstrokes which almost swarms in front of your eyes. And this belaboured quality is all the more intriguing given the transience of his subject matter: gardens bursting with euphoric colour, interiors drenched in vibrant light, domestic scenes that capture the briefest of moments during the day.

Nowhere is this tension more pronounced than in The Bowl of Milk (1919). Pictured is a room with a window overlooking the sea, and two tables ranged with items of crockery and a vase of flowers. In the foreground stands a woman wearing a long gown and holding a bowl, presumably for the cat which approaches in the shadows at her feet. Yet there is something nauseating, almost nightmarish about this image. Everything swims with indeterminacy, vanishing from our grasp. So pallid is the light pouring through the window that at first I assumed it was night outside. The objects and figures crowding the room shimmer as though on the point of dissolving into air. The woman’s face is a vague, eyeless mask. The painting is composed so that if you focus on one particular passage, everything else recedes into a shapeless soup in the periphery of your vision. It is a moment of such vivid intensity that one is forced to realise it has been conjured from the depths of fantasy.

*     *     *

 

The woman in The Bowl of Milk is almost certainly Marthe de Méligny, formerly Maria Boursin, Bonnard’s lifelong model and spouse. They met in Paris in 1893, where de Méligny was employed manufacturing artificial flowers for funerals. Some five years later, Bonnard began to exhibit paintings that revealed their intimate domestic life together. These would continue throughout his career, with de Méligny portrayed in various bedrooms, bathrooms and hallways, usually alone, usually nude, and often in front of a mirror.

Pierre Bonnard “Nude in the Bath” (1936). Oil paint on canvas. Paris, musée d’Art moderne.

It was not an uncomplicated relationship: Bonnard is thought to have had affairs, and when the couple eventually married in 1925 de Méligny revealed she had lied about her name and age (she had broken off contact with her family before moving to Paris). They were somewhat isolated. De Méligny is described as having a silent and unnerving presence, and later developed a respiratory disease which forced them to spend periods on the Atlantic coast. Yet Bonnard’s withdrawal from the Parisian art scene, where he had been prominent during his twenties, allowed him to develop his exhaustive, time-leaden painting process, and to forge his own style. The paintings of de Méligny seem to relish the freedom enabled by familiarity and seclusion. One of the gems of the current Tate exhibition is a series of nude photographs that the couple took of one another in their garden in the years 1899-1901. In each of these unmistakeably Edenic pictures, we see a bright-skinned body occupying a patch of sunlight, securely framed by shadowy thickets of grass and leaves.

Pierre Bonnard, photographs of Marthe in the Jardin de Montval (1900-1901).
(Source: https://dantebea.com/category/peintures-dessins/pierre-bonnard/page/2/)

The female figure in The Bowl of Milk is far from familiar: she is a flicker of memory, a robed phantasm. But like other portrayals of de Méligny, this painting revels in the erotics of space, whereby the proximity and secrecy of the domestic setting are charged with the presence of a human subject – an effect only heightened by our voyeuristic discomfort at gaining access to this private world. There is no nudity, but a disturbing excess of sensual energy in the gleaming white plates, the crimson anemones, the rich shadows and the luxurious stride of the cat. To describe these details as sexual is to lessen their true impact: they are demonic, signalling the capacity of imagination to terrorise us with our own senses.

*     *     *

 

In 1912 Bonnard bought a painting by Henri Matisse, The Open Window at Collioure (1905). Matisse would soon emerge as one of the leading figures of modern painting, but the two were also friends, maintaining a lively correspondence over several decades. And one can see what inspired Bonnard to make this purchase: doors and windows appear continually in his own work, allowing interior space to be animated by the vitality of the outside world.

Henri Matisse, “The Open Window at Collioure” (1905). Oil paint on canvas. National Gallery of Art, Washington
Pierre Bonnard, “The Studio with Mimosas” (1939-46). Oil paint on canvas. Musée National d’Art Moderne – Centre Pompidou, Paris.

More revealing, though, are the differences we can glean from The Open Window at Collioure. Matisse’s painting, with its flat blocks of garish colour, is straining towards abstraction. As a formal device, the window merely facilitates a jigsaw of squares and rectangles. Such spatial deconstruction and pictorial simplification were intrinsic to the general direction of modernism at this time. This, however, was the direction from which the patient and meticulous Bonnard had partly stepped aside. For he remained under the influence of impressionist painting, which emphasised the subtlety and fluidity of light and colour as a means of capturing the immediacy of sensory experience. Thus, as Juliette Rizzi notes, Bonnard’s use of “framing devices such as doors, mirrors, and horizontal and vertical lines” allows him a compromise of sorts. They do not simplify his paintings so much as provide an angular scaffolding around which he can weave his nebulous imagery.

The window and its slanted rectangles of light are crucial to the strange drama of The Bowl of Milk. Formally, this element occupies the very centre of the composition, holding it in place. But it is also a source of ambiguity. The window is seemingly a portal to another world, flooding the room with uncanny energy. The woman appears stiff, frozen at the edge of a spotlight. It’s as though the scene has been illuminated just briefly – before being buried in darkness again.

Testing the limits of universalism in science

This essay was first published by Areo magazine on 23 November 2018. 

Science traditionally aspires to be universal in two respects. First, it seeks fundamental knowledge—facts which are universally true. Second, it aims to be impersonal in practice; identity should be irrelevant to the process by which a scientific claim is judged.

Since the era following the Second World War, a great deal has come to rest on these aspirations. For not only does universalism make science a reliable means of understanding the world; it also makes scientific institutions an obvious basis for cooperation in response to various grim and complex challenges facing humanity. Today, these challenges include environmental damage, infectious diseases, biotechnology and food and energy insecurity. Surely, if anyone can rise above conflicts of culture and interest—and maybe even help governments do the same—it is the people in the proverbial white coats.

And yet, lately we find the very principle of universalism being called into doubt. Armed with the tools of critical theory, scholars in the social sciences and humanities assert that science is just one knowledge system among many, relative to the western context in which it evolved. In this view, the universalism that enables science to inform other peoples and cultures is really a form of unjust hegemony.

So far, this trend has mostly been discussed in an educational setting, where there have been calls to decolonize scientific curricula and to address demographic imbalances among students. But how will it affect those institutions seeking to foster scientific collaboration on critical policy issues?

An argument erupted this year in the field of ecology, centered on a body called the IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services). I suspect few readers have heard of this organization, but then, such is the unglamorous business of saving the world. The IPBES is one of the few vehicles for drawing governments’ attention to the rapid global decline of biodiversity, and of animal and plant populations generally.

In January, leading members of the panel published an article in the journal Science, announcing a “paradigm shift” in how it would approach its mission. They claim the scientific model on which the IPBES was founded is “dominated by knowledge from the natural sciences and economics,” and prone to adopt the “generalizing perspective” of “western science.” Consequently, they argue, it does not provide space for the humanities and social sciences, nor does it recognize the knowledge and values of local and indigenous peoples.

The article, which sparked an acrimonious row within the research community, came after several years in which IPBES papers and reports had developed “a pluralistic approach to recognizing the diversity of values.” The panel has now officially adopted a new paradigm that “resist[s] the scientific goal of attaining a universally applicable schema,” while seeking to “overcome existing power asymmetries between western science and local and indigenous knowledge, and among different disciplines within western science.”

 

Science, Policy, Politics

 It is easy to dismiss such terminology as mere jargon, and that is what some critics have done. They claim the “paradigm shift” amounts to “a political compromise, and not a new scientific concept.” In other words, labeling a universal outlook western science is a diplomatic gesture to placate skeptics. Recognizing “a diversity of values” does not alter the pertinent data, because, however you frame them, the data are the data.

But here is the problem. When it comes to organizations whose role is to inform policy, this neat separation between science and politics is misleading; they often have their own political goals that guide their scientific activity. For the IPBES, that goal is persuading policymakers to conserve the natural world. Consequently, the panel does not merely gather data about the health of ecosystems. It gathers data showing how humans benefit from healthy ecosystems, so as to emphasize the costs of not conserving them.

This strategy, however, forces the IPBES to make value judgments which are not straightforwardly amenable to scientific methods. To assess the benefits of nature, one must consider not just clean air and soil nutrients, but also nonmaterial factors such as religious inspiration and cultural identity that vary widely around the world. Can all of this really be incorporated into a universal, objective system of measurements?

The IPBES’ original paradigm tried to do so, but, inevitably, the result was a crude framework of utilitarian metrics. It sought to categorize and quantify all of nature’s benefits (including the religious and cultural) and convert them into monetary values—this being, after all, the language policy makers understand best. As the Science article states, drawing on a substantial literature, this reductive approach alienated a great many scientists, as well as local people, whose participation is crucial for conservation.

All of this illustrates some general problems with universalism as a basis for cooperation. Firstly, when a scientific institution directs its work towards certain policy outcomes, its claims to objectivity become more questionable. It might still produce knowledge that is universally true; but which knowledge it actually seeks, and how it translates that knowledge into policy tools are more contentious questions.

This problem arises even in cases of solid scientific consensus, such as climate change. Rising temperatures are one thing, but which consequences should scientists investigate to grab the attention of policymakers or even voters? Which economic policies should they endorse? Such judgments will inevitably be political and ideological in nature.

Moreover, some subjects are simply more politically and culturally contentious than others. There are many areas where, even if a universalist approach can be devised, it will nonetheless be regarded as an unwelcome and foreign way of thinking. As we have seen, nature is one of these areas. Another obvious example is gene editing, which Japan has recently allowed in human embryos. Any attempts to regulate this technology will likely require a debate about religious and cultural mores as much as hard science.

 

The Limits of Pluralism

The question is, however, does the pluralism now advocated by IPBES offer a viable solution to these problems? It is highly doubtful. The influence of critical theory, as seen in a fixation with knowledge as a proxy for power, is itself antithetical to productive cooperation. Rather than merely identifying the practical limitations of the scientific worldview, it pits science in zero-sum competition with other perspectives.

The problem begins with a slide from cultural pluralism into epistemological relativism. In the literature that laid the groundwork for the IPBES “paradigm shift,” knowledge systems are treated as “context specific,” each containing “its own processes of validity.” As a result, the prospect of compromise recedes into the distance, the priority being to “equitably bridge different value systems, eventually allowing processes of social learning.”

As critics have warned, there is a danger here of losing clarity and focus, leading to less effective advocacy. IPBES papers and reports now bulge with extensive discussions of cultural particularism and equity, threatening at times to become an altogether parallel mission. Yet in 2016, when the panel delivered its most comprehensive assessment to date, the summary for policymakers included barely any information about the economic costs of ecological damage.

Indeed, despite its supposed skepticism, there is an air of fantasy surrounding this discourse. Even if there are areas where it is inappropriate to impose a purely scientific outlook, it is disingenuous to pretend that, with a particular goal in view, all perspectives are equally useful. Likewise, no amount of consultation and mediation can negate the reality that, with limited resources, different values and interests must be traded off against one another. If scientists absolve themselves of this responsibility, they simply pass it on to policymakers.

Universalism has practical limits of its own: it cannot dissolve cultural differences, or remove the need to make political decisions. But, provided such limitations are understood, it surely remains the most useful default principle for collaborative work. Even diverse institutions need common goals: to treat values as fully incommensurable is to invite paralysis. And to politicize knowledge itself is to risk unraveling the scientific enterprise altogether.

Yuval Noah Harari’s half-baked guide to the 21st century

This review was first published by Arc Digital on 25 October 2018.

There is something immensely comforting about Yuval Noah Harari. In an era when a writer’s success often depends on a willingness to provoke, Harari’s calling cards are politeness and equanimity. In the new class of so-called “rock star intellectuals,” he is analogous to Coldplay: accessible, inoffensive, and astoundingly popular. I find no other writer so frequently referenced by friends who don’t generally read. On YouTube he is a man for all seasons, discussing #MeToo with Natalie Portman, contemplating the nature of money with Christine Lagarde, and considering “Who Really Runs the World?” with Russell Brand.

Harari, a historian at the Hebrew University of Jerusalem, is by no means undeserving of this success. His first book, Sapiens: A Brief History of Humankind, displayed a rare talent for condensing vast epochs of history into simple narratives. In his second, Homo Deus, he showed all the imagination of a science fiction writer in presenting the dystopian possibilities of artificial intelligence and biotechnology.

But now Harari has abandoned the speculative realms of past and future, turning his attention to the thorny problems of the present. And here we find that his formula has its limits. 21 Lessons for the 21st Century is a collection of essays taking on everything from culture and politics to technology and spirituality. Undoubtedly, it offers plenty of thought-provoking questions and insights. By and large though, the very thing that made his previous works so engaging — an insistence on painting in broad, simple brushstrokes — makes this latest effort somewhat superficial.

Many of Harari’s essays are just not very illuminating. They circle their subjects ponderously, never quite making contact. Take his chapter on the immigration debate in Europe. Harari begins by identifying three areas of disagreement: borders, integration, and citizenship. Then he walks us through some generic and largely hypothetical pro- and anti-immigration stances, guided mainly by a desire not to offend anyone. Finally, after explaining that “culturism” is not the same as racism, he simply concludes: “If the European project fails…it would indicate that belief in the liberal values of freedom and tolerance is not enough to resolve the cultural conflicts of the world.”

Here we glimpse one of the book’s main questions: whether liberalism can unite the world and overcome the existential challenges facing humanity. But what is liberalism? According to Harari, all social systems, whether religious or political, are “stories.” By this he means that they are psychological software packages, allowing large-scale cooperation while providing individuals with identity and purpose. Thus, liberalism is a “global story” which boils down to the belief that “all authority ultimately stems from the free will of individual humans.” Harari gives us three handy axioms: “the voter knows best,” “the customer is always right,” and “follow your heart.”

This certainly makes matters crystal clear. But political systems are not just ideological dogmas to which entire populations blindly subscribe. They are institutional arrangements shaped by the clashes and compromises of differing values and interests. Historically, liberalism’s commitment to individualism was less important than its preference for democratic means to resolve such conflicts. Harari’s individualist, universalist liberalism has certainly been espoused in recent decades; but as a more perceptive critic such as John Gray or Shadi Hamid would point out, it is only for sections of Western society that this has offered a meaningful worldview.

Overlooking this basic degree of complexity leads Harari to some bizarre judgments. He claims that “most people who voted for Trump and Brexit didn’t reject the liberal package in its entirety — they lost faith mainly in its globalizing part.” Does he really think these voters were once enthusiastic about globalism? Likewise, to illustrate the irrational character of liberal customs, Harari states: “If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights.” Did he not consider that a key purpose of the ballot is to secure the legitimacy of government?

Harari is frequently half-sighted, struggling to acknowledge that phenomena can have more than one explanation. I confess I chuckled at his reading of Ex Machina, the 2015 sci-fi about a cyborg femme fatale. “This is not a movie about the human fear of intelligent robots,” he writes. It is about “the male fear…that female liberation might lead to female domination.” To support his interpretation, Harari poses a question: “For why on earth would an AI have a sexual or a gender identity?” This in a book which argues extensively that artificial intelligence will be used to exploit human desires.

Nor are such hiccups merely incidental. Rather, they stem from Harari’s failure to connect his various arguments into a coherent world-view. This is perhaps the most serious shortcoming of 21 Lessons. Reading this book is like watching a one-man kabuki play, whereby Harari puts on different masks as the situation demands. But these characters are not called on to complement each other so much as to prevent the stage from collapsing.

We have already encountered Harari’s first mask: postmodern cynicism. He is at pains to deconstruct the grand narratives of the past, whether religious, political, or national. He argues that the human subject, too, is a social construct — an amalgam of fictions, bound by context and largely incapable of rational thought.

However, this approach tends to invite relativism and apathy. And so, to provide some moral ballast, Harari picks up the mask of secularist polemic. Though never abandoning his light-hearted tone, he spends a great deal of time eye-poking and shin-kicking any tradition that indulges the human inclination for sanctity, ritual, and transcendence. But not to worry: you can keep your superstitions, “provided you adhere to the secular ethical code.” This consists of truth, compassion, equality, freedom, courage, and responsibility.

What, then, of our darker impulses? And what of our yearning to identify with something larger than ourselves? Enter Harari in his third mask: neo-Buddhist introspection. This is an especially useful guise, for whenever Harari encounters a difficult knot, he simply cuts it with a platitude. “If you really understand how an action causes unnecessary suffering to yourself and to others,” he writes, “you will naturally abstain from it.” Moreover: “If you really know the truth about yourself and the world, nothing can make you miserable.”

I am not saying these outlooks cannot be reconciled. My point is that Harari does not attempt to do so, leaving us instead with an array of loose ends. If the imperative is to deconstruct, why should secular shibboleths be left standing? Why should we worry about technology treating us as “little more than biochemical algorithms,” when Harari already thinks that “your core identity is a complex illusion created by neural networks”? And given that “both the ‘self’ and freedom are mythological chimeras,” what does Harari mean when he advises us to “work very hard…to know what you are, and what you want from life”?

You might object that I’m being ungenerous; that the most popular of popular intellectuals must necessarily deal in outlines, not details. But this is a slippery slope that leads to lazy assumptions about the incuriousness of a general audience. When it comes to current political and philosophical dilemmas, being a good popularizer does not consist in doling out reductive formulas. It consists in giving a flavor of the subtlety which makes these matters worth exploring. In that respect, 21 Lessons falls short of the mark.

What was Romanticism? Putting the “counter-Enlightenment” in context

In his latest book Enlightenment Now: The Case for Reason, Science, Humanism and Progress, Steven Pinker heaps a fair amount of scorn on Romanticism, the movement in art and philosophy which spread across Europe during the late-18th and 19th centuries. In Pinker’s Manichean reading of history, Romanticism was the malign counterstroke to the Enlightenment: its goal was to quash those values listed in his subtitle. Thus, the movement’s immense diversity and ambiguity are reduced to a handful of ideas, which show that the Romantics favored “the heart over the head, the limbic system over the cortex.” This provides the basis for Pinker to label “Romantic” various irrational tendencies that are still with us, such as nationalism and reverence for nature.

In the debates following Enlightenment Now, many have continued to use Romanticism simply as a suitcase term for “counter-Enlightenment” modes of thought. Defending Pinker in Areo, Bo Winegard and Benjamin Winegard do produce a concise list of Romantic propositions. But again, their version of Romanticism is deliberately anachronistic, providing a historical lineage for the “modern romantics” who resist Enlightenment principles today.

As it happens, this dichotomy does not appeal only to defenders of the Enlightenment. In his book Age of Anger, published last year, Pankaj Mishra explains various 21st century phenomena — including right-wing populism and Islamism — as reactions to an acquisitive, competitive capitalism that he traces directly back to the 18th century Enlightenment. This, says Mishra, is when “the unlimited growth of production . . . steadily replaced all other ideas of the human good.” And who provided the template for resisting this development? The German Romantics, who rejected the Enlightenment’s “materialist, individualistic and imperialistic civilization in the name of local religious and cultural truth and spiritual virtue.”

Since the Second World War, it has suited liberals, Marxists, and postmodernists alike to portray Romanticism as the mortal enemy of Western rationalism. This can convey the impression that history has long consisted of the same struggle we are engaged in today, with the same teams fighting over the same ideas. But even a brief glance at the Romantic era suggests that such narratives are too tidy. These were chaotic times. Populations were rising, people were moving into cities, the industrial revolution was occurring, and the first mass culture emerging. Europe was wracked by war and revolution, nations won and lost their independence, and modern politics was being born.

So I’m going to try to explain Romanticism and its relationship with the Enlightenment in a bit more depth. And let me say this up front: Romanticism was not a coherent doctrine, much less a concerted attack on or rejection of anything. Put simply, the Romantics were a disparate constellation of individuals and groups who arrived at similar motifs and tendencies, partly by inspiration from one another, partly due to underlying trends in European culture. In many instances, their ideas were incompatible with, or indeed hostile towards, the Enlightenment and its legacy. On the other hand, there was also a good deal of mutual inspiration between the two.

 

Sour grapes

The narrative of Romanticism as a “counter-Enlightenment” often begins in the mid-18th century, when several forerunners of the movement appeared. The first was Jean-Jacques Rousseau, whose Social Contract famously asserts “Man is born free, but everywhere he is in chains.” Rousseau portrayed civilization as decadent and morally compromised, proposing instead a society of minimal interdependence where humanity would recover its natural virtue. Elsewhere in his work he also idealized childhood, and celebrated the outpouring of subjective emotion.

In fact various Enlightenment thinkers, Immanuel Kant in particular, admired Rousseau’s ideas; Rousseau was arguing, after all, that left to their own devices, ordinary people would use reason to discover virtue. Nonetheless, he was clearly attacking the principle of progress, and his apparent motivations for doing so were portentous. Rousseau had been associated with the French philosophes — men such as Thiry d’Holbach, Denis Diderot, Claude Helvétius and Jean d’Alembert — who were developing the most radical strands of Enlightenment thought, including materialist philosophy and atheism. But crucially, they were doing so within a rather glamorous, cosmopolitan milieu. Though they were monitored and harassed by the French ancien régime, many of the philosophes were nonetheless wealthy and well-connected figures, their Parisian salons frequented by intellectuals, ambassadors and aristocrats from across Europe.

Rousseau decided the Enlightenment belonged to a superficial, hedonistic elite, and essentially styled himself as a god-fearing voice of the people. This turned out to be an important precedent. In Prussia, where a prolific Romantic movement would emerge, such antipathy towards the effete culture of the French was widespread. For much to the frustration of Prussian intellectuals and artists — many of whom were Pietist Christians from lowly backgrounds — their ruler Frederick the Great was an “Enlightened despot” and dedicated Francophile. He subscribed to Melchior Grimm’s Correspondance Littéraire, which brought the latest ideas from Paris; he hosted Voltaire at his court as an Enlightenment mascot; he conducted affairs in French, his first language.

This is the background against which we find Johann Gottfried Herder, whose ideas about language and culture were deeply influential to Romanticism. He argued that one can only understand the world via the linguistic concepts that one inherits, and that these reflect the contingent evolution of one’s culture. Hence in moral terms, different cultures occupy significantly different worlds, so their values should not be compared to one another. Nor should they be replaced with rational schemes dreamed up elsewhere, even if this means that societies are bound to come into conflict.

Rousseau and Herder anticipated an important cluster of Romantic themes. Among them are the sanctity of the inner life, of folkways and corporate social structures, of belonging, of independence, and of things that cannot be quantified. And given the apparent bitterness of Herder and some of his contemporaries, one can see why Isaiah Berlin declared that all this amounted to “a very grand form of sour grapes.” Berlin takes this line too far, but there is an important insight here. During the 19th century, with the rise of the bourgeoisie and of government by utilitarian principles, many Romantics would show a similar resentment towards “sophisters, economists, and calculators,” as Edmund Burke famously called them. Thus Romanticism must be seen in part as coming from people denied status in a changing society.

Then again, Romantic critiques of excessive uniformity and rationality were often made in the context of developments that were quite dramatic. During the 1790s, it was the French Revolution’s degeneration into tyranny that led first-generation Romantics in Germany and England to fear the so-called “machine state,” or government by rational blueprint. Similarly, the appalling conditions that marked the first phase of the industrial revolution lay behind some later Romantics’ revulsion at industrialism itself. John Ruskin celebrated medieval production methods because “men were not made to work with the accuracy of tools,” with “all the energy of their spirits . . . given to make cogs and compasses of themselves.”

And ultimately, it must be asked if opposition to such social and political changes was opposition to the Enlightenment itself. The answer, of course, depends on how you define the Enlightenment, but with regards to Romanticism we can only make the following generalization. Romantics believed that ideals such as reason, science, and progress had been elevated at the expense of values like beauty, expression, or belonging. In other words, they thought the Enlightenment paradigm established in the 18th century was limited. This is well captured by Percy Shelley’s comment in 1821 that although humanity owed enormous gratitude to philosophers such as John Locke and Voltaire, only Rousseau had been more than a “mere reasoner.”

And yet, in perhaps the majority of cases, this did not make Romantics hostile to science, reason, or progress as such. For it did not seem to them, as it can seem to us in hindsight, that these ideals must inevitably produce arrangements such as industrial capitalism or technocratic government. And for all their sour grapes, they often had reason to suspect those whose ascent to wealth and power rested on this particular vision of human improvement.

 

“The world must be romanticized”

One reason Romanticism is often characterized as against something — against the Enlightenment, against capitalism, against modernity as such — is that it seems like the only way to tie the movement together. In the florescence of 19th century art and thought, Romantic motifs were arrived at from a bewildering array of perspectives. In England during the 1810s, for instance, radical, progressive liberals such as Shelley and Lord Byron celebrated the crumbling of empires and of religion, and glamorized outcasts and oppressed peoples in their poetry. They were followed by arch-Tories like Thomas Carlyle and Ruskin, whose outlook was fundamentally paternalistic. Other Romantics migrated across the political spectrum during their lifetimes, bringing their themes with them.

All this is easier to understand if we note that a new sensibility appeared in European culture during this period, remarkable for its idealism and commitment to principle. Disparaged in England as “enthusiasm,” and in Germany as Schwärmerei or fanaticism, we get a flavor of it by looking at some of the era’s celebrities. There was Beethoven, celebrated as a model of the passionate and impoverished genius; there was Byron, the rebellious outsider who received locks of hair from female fans; and there was Napoleon, seen as an embodiment of untrammeled willpower.

Curiously, though, while this Romantic sensibility was a far cry from the formality and refinement which had characterized the preceding age of Enlightenment, it was inspired by many of the same ideals. To illustrate this, and to expand on some key Romantic concepts, I’m going to focus briefly on a group that came together in Prussia at the turn of the 19th century, known as the Jena Romantics.

The Jena circle — centred around Ludwig Tieck, Friedrich and August Schlegel, Friedrich Hölderlin, and the writer known as Novalis — have often been portrayed as scruffy bohemians, a conservative framing that seems to rest largely on their liberal attitudes to sex. But this does give us an indication of the group’s aims: they were interested in questioning convention, and pursuing social progress (their journal Das Athenäum was among the few to publish female writers). They were children of the Enlightenment in other respects, too. They accepted that rational skepticism had ruled out traditional religion and superstition, and that science was a tool for understanding reality. Their philosophy, however, shows an overriding desire to reconcile these capacities with an inspiring picture of culture, creativity, and individual fulfillment. And so they began by adapting the ideas of two major Enlightenment figures: Immanuel Kant and Benedict Spinoza.

Kant, who spent his entire life in Prussia, had impressed on the Romantics the importance of one dilemma in particular: how was human freedom possible given that nature was determined? But rather than follow Kant down the route of transcendental freedom, the Jena school tried to update the universe Spinoza had described a century earlier, which was a single deterministic entity governed by a mechanical sequence of cause and effect. Conveniently, this mechanistic model had been called into doubt by contemporary physics. So they kept the integrated, holistic quality of Spinoza’s nature, but now suggested that it was suffused with another Kantian idea — that of organic force or purpose.

Consequently, the Jena Romantics arrived at an organic conception of the universe, in which nature expressed the same omnipresent purpose in all its manifestations, up to and including human consciousness. Thus there was no discrepancy between mental activity and matter, and the Romantic notion of freedom as a channelling of some greater will was born. After all, nature must be free because, as Spinoza had argued, there is nothing outside nature. Therefore, in Friedrich Schlegel’s words, “Man is free because he is the highest expression of nature.”

Various concepts flowed from this, the most consequential being a revolutionary theory of art. Whereas the existing neo-classical paradigm had assumed that art should hold a mirror up to nature, reflecting its perfection, the Romantics now stated that the artist should express nature, since he is part of its creative flow. What this entails, moreover, is something like a primitive notion of the unconscious. For this natural force comes to us through the profound depths of language and myth; it cannot be definitively articulated, only grasped at through symbolism and allegory.

Such longing for the inexpressible, the infinite, the unfathomable depth thought to lie beneath the surface of ordinary reality, is absolutely central to Romanticism. And via the Jena school, it produces an ideal which could almost serve as a Romantic program: being-through-art. The modern condition, August Schlegel says, is the sensation of being adrift between two idealized figments of our imagination: a lost past and an uncertain future. So ultimately, we must embrace our frustrated existence by making everything we do a kind of artistic expression, allowing us to move forward despite knowing that we will never reach what we are aiming for. This notion that you can turn just about anything into a mystery, and thus into a field for action, is what Novalis alludes to in his famous statement that “the world must be romanticized.”

It appears there’s been something of a detour here: we began with Spinoza and have ended with obscurantism and myth. But as Frederick Beiser has argued, this baroque enterprise was in many ways an attempt to radicalize the 18th century Enlightenment. Indeed, the central thesis that our grip on reality is not certain, but we must embrace things as they seem to us and continue towards our aims, was almost a parody of the skepticism advanced by David Hume and by Kant. Moreover, and more ominously, the Romantics amplified the Enlightenment principle of self-determination, producing the imperative that individuals and societies must pursue their own values.

 

The Romantic legacy

It is beyond doubt that some Romantic ideas had pernicious consequences, the most demonstrable being a contribution to German nationalism. By the end of the 19th century, when Prussia had become the dominant force in a unified Germany and Richard Wagner’s feverish operas were being performed, the Romantic fascination with national identity, myth, and the active will had evolved into something altogether menacing. Many have taken the additional step, which is not a very large one, of implicating Romanticism in the fascism of the 1930s.

A more tenuous claim is that Romanticism (and German Romanticism especially) contains the origins of the postmodern critique of the Enlightenment, and of Western civilization itself, which is so current among leftist intellectuals today. As we have seen, there was in Romanticism a strong strain of cultural relativism — which is to say, relativism about values. But postmodernism has at its core a relativism about facts, a denial of the possibility of reaching objective truth by reason or observation. This nihilistic stance is far from the skepticism of the Jena school, which was fundamentally a means for creative engagement with the world.

But whatever we make of these genealogies, remember that we are talking about developments, progressions over time. We are not saying that Romanticism was in any meaningful sense fascistic, postmodernist, or whichever other adjective appears downstream. I emphasize this because if we identify Romanticism with these contentious subjects, we will overlook its myriad more subtle contributions to the history of thought.

Many of these contributions come from what I described earlier as the Romantic sensibility: a variety of intuitions that seem to have taken root in Western culture during this era. For instance, that one should remain true to one’s own principles at any cost; that there is something tragic about the replacement of the old and unusual with the uniform and standardized; that different cultures should be appreciated on their own terms, not on a scale of development; that artistic production involves the expression of something within oneself. Whether these intuitions are desirable is open to debate, but the point is that the legacy of Romanticism cannot be compartmentalized, for it has colored many of our basic assumptions.

This is true even of ideas that we claim to have inherited from the Enlightenment. For some of these were modified, and arguably enriched, as they passed through the Romantic era. An explicit example comes from John Stuart Mill, the founding figure of classical liberalism. Mill inherited from his father and from Jeremy Bentham a very austere version of utilitarian ethics. This posited as its goal the greatest good for the greatest number of people; but its notion of the good did not account for the value of culture, spirituality, and a great many other things we now see as intrinsic to human flourishing. As Mill recounts in his autobiography, he realized these shortcomings by reading England’s first-generation Romantics, William Wordsworth and Samuel Taylor Coleridge.

This is why, in 1840, Mill bemoaned the fact that his fellow progressives thought they had nothing to learn from Coleridge’s philosophy, warning them that “the besetting danger is not so much of embracing falsehood for truth, as of mistaking part of the truth for the whole.” We are committing a similar error today when we treat Romanticism simply as a “counter-Enlightenment.” Ultimately this limits our understanding not just of Romanticism but of the Enlightenment as well.

 

This essay was first published in Areo Magazine on June 10 2018. See it here.

Social media’s turn towards the grotesque

This essay was first published by Little Atoms on 09 August 2018. The image on my homepage is a detail from an original illustration by Jacob Stead. You can see the full work here.

Until recently it seemed safe to assume that what most people wanted on social media was to appear attractive. Over the last decade, the major concerns about self-presentation online have been focused on narcissism and, for women especially, unrealistic standards of beauty. But just as it is becoming apparent that some behaviours previously interpreted as narcissistic – selfies, for instance – are simply new forms of communication, it is also no longer obvious that the rules of this game will remain those of the beauty contest. In fact, as people conduct an ever-larger proportion of their social interaction on these platforms, the aesthetics of social media are moving distinctly towards the grotesque.

When I use the term grotesque, I do so in a technical sense. I am referring to a manner of representing things – the human form especially – which is not just bizarre or unsettling, but which creates a sense of indeterminacy. Familiar features are distorted, and conventional boundaries dissolved.

Instagram, notably, has become the site of countless bizarre makeup trends among its large demographic of young women and girls. These transformations range from the merely dramatic to the carnivalesque, including enormous lips, nose-hair extensions, eyebrows sculpted into every shape imaginable, and glitter coated onto everything from scalps to breasts. Likewise, the popularity of Snapchat has led to a proliferation of face-changing apps which revel in cartoonish distortions of appearance. Eyes are expanded into enormous saucers, faces are ghoulishly elongated or squashed, and animal features are tacked onto heads. These images, interestingly, are also making their way onto dating app profiles.

Of course for many people such tools are simply a way, as one reviewer puts it, “to make your face more fun.” There is something singularly playful in embracing such plasticity: see for instance the creative craze “#slime”, which features videos of people playing with colourful gooey substances, and has over eight million entries on Instagram. But if you follow the threads of garishness and indeterminacy through the image-oriented realms of the internet, deeper resonances emerge.

The pop culture embraced by Millennials and the so-called Generation C (born after 2000) reflects a fascination with brightly adorned, shape-shifting and sexually ambiguous personae. If performers like Miley Cyrus and Lady Gaga were forerunners of this tendency, they have now been joined by darker, more refined figures such as Sophie and Arca from the dance music scene. Meanwhile fashion, photography and video abound with kitsch, quasi-surreal imagery of the kind popularised by Dazed magazine. Celebrated subcultures such as Japan’s “genderless Kei,” who are characterised by bright hairstyles and makeup, are also part of this picture.

But the most striking examples of this turn towards the grotesque come from art forms emerging within digital culture itself. It is especially well illustrated by Porpentine, a game designer working with the platform Twine, whose disturbing interactive poems have achieved something of a cult status. They typically place readers in the perspective of psychologically and socially insecure characters, leading them through violent urban futurescapes reminiscent of William Burroughs’s Naked Lunch. The New York Times aptly describes her games as “dystopian landscapes peopled by cyborgs, intersectional empresses and deadly angels,” teeming with “garbage, slime and sludge.”

These are all manifestations both of a particular sensibility which is emerging in parts of the internet, and more generally of a new way of projecting oneself into public space. To spend any significant time in the networks where such trends appear is to become aware of a certain model of identity being enacted, one that is mercurial, effervescent, and boldly expressive. And while the attitudes expressed vary from anxious subjectivity to humorous posturing – as well as, at times, both simultaneously – in most instances one senses that the online persona has become explicitly artificial, plastic, or even disposable.

*   *   *

Why, though, would a paradigm of identity such as this invite expression as the grotesque? Interpreting these developments is not easy given that digital culture is so diffuse and rapidly evolving. One approach that seems natural enough is to view them as social phenomena, arising from the nature of online interaction. Yet to take this approach is immediately to encounter a paradox of sorts. If “the fluid self” represents “identity as a vast and ever-changing range of ideas that should all be celebrated” (according to trend forecaster Brenda Milis), then why does it seem to conform to generic forms at all? This is a contradiction that might, in fact, prove enlightening.

One frame which has been widely applied to social media is sociologist Erving Goffman’s “dramaturgical model,” as outlined in his 1959 book The Presentation of Self in Everyday Life. According to Goffman, identity can be understood in terms of a basic dichotomy, which he explains in terms of “Front Stage” and “Back Stage.” Our “Front Stage” identity, when we are interacting with others, is highly responsive to context. It is preoccupied with managing impressions and assessing expectations so as to present what we consider a positive view of ourselves. In other words, we are malleable in the degree to which we are willing to tailor our self-presentation.

The first thing to note about this model is that it allows for dramatic transformations. If you consider the degree of detachment enabled by projecting ourselves into different contexts through words and imagery, and empathising with others on the same basis, then the stage is set for more or less anything becoming normative within a given peer group. As for why people would want to take this expressive potential to unusual places, it seems reasonable to speculate that in many cases, the role we want to perform is precisely that of someone who doesn’t care what anyone thinks. But since most of us do in fact care, we might end up, ironically enough, expressing this within certain established parameters.

But focusing too much on social dynamics risks underplaying the undoubted sense of freedom associated with the detachment from self in online interaction. Yes, there is peer pressure here, but within these bounds there is also a palpable euphoria in escaping mundane reality. The neuroscientist Susan Greenfield has made this point while commenting on the “alternative identity” embraced by young social media users. The ability to depart from the confines of stable identity, whether by altering your appearance or enacting a performative ritual, essentially opens the door to a world of fantasy.

With this in mind, we could see the digital grotesque as part of a cultural tradition that offers us many precedents. Indeed, this year marks the 200th anniversary of perhaps the greatest precedent of all: Mary Shelley’s iconic novel Frankenstein. The great anti-hero of that story, the monster who is assembled and brought to life by the scientist Victor Frankenstein, was regarded by later generations as an embodiment of all the passions that society requires the individual to suppress – passions that the artist, in the act of creation, has special access to. The uncanny appearance and emotional crises of Frankenstein’s monster thus signify the potential for unknown depths of expression, strange, sentimental, and macabre.

That notion of the grotesque as something uniquely expressive and transformative was and has remained prominent in all of the genres with which Frankenstein is associated – romanticism, science fiction, and the gothic. It frequently aligns itself with the irrational and surreal landscapes of the unconscious, and with eroticism and sexual deviancy; the films of David Lynch are emblematic of this crossover. In modern pop culture a certain glamourised version of the grotesque, which subverts rigid identity with makeup and fashion, appeared in the likes of David Bowie and Marilyn Manson.

Are today’s online avatars potentially incarnations of Frankenstein’s monster, tempting us with unfettered creativity? The idea has been explored by numerous artists over the last decade. Ed Atkins is renowned for his humanoid characters, their bodies defaced by crude drawings, who deliver streams of consciousness fluctuating between the poetic and the absurd. Jon Rafman, meanwhile, uses video and animation to piece together entire composite worlds, mapping out what he calls “the anarchic psyche of the internet.” Reflecting on his years spent exploring cyberspace, Rafman concludes: “We’ve reached a point where we’re enjoying our own nightmares.”

*   *   *

It is possible that the changing aesthetics of the Internet reflect both the social pressures and the imaginative freedoms I’ve tried to describe, or perhaps even the tension between them. One thing that seems clear, though, is that the new notions of identity emerging here will have consequences beyond the digital world. Even if we accept in some sense Goffman’s idea of a “Backstage” self, which resumes its existence when we are not interacting with others, the distinction is ultimately illusory. The roles and contexts we occupy inevitably feed back into how we think of ourselves, as well as our views on a range of social questions. Some surveys already suggest a generational shift in attitudes to gender, for instance.

That paradigms of identity shift in relation to technological and social changes is scarcely surprising. The first half of the 20th century witnessed the rise of a conformist culture, enabled by mass production, communication, and ideology, and often directed by the state. This then gave way to the era of the unique individual promoted by consumerism. As for the balance of psychological benefits and problems that will arise as online interaction grows, that is a notoriously contentious question requiring more research.

There is, however, a bigger picture here that deserves attention. The willingness of people to assume different identities online is really part of a much broader current being borne along by technology and design – one whose general direction is to enable individuals to modify and customise themselves in a wide range of ways. Whereas throughout the 20th century designers and advertisers were instrumental in shaping how we interpreted and expressed our social identity – through clothing, consumer products, and so on – this function is now increasingly being assumed by individuals within social networks.

Indeed, designers and producers are surrendering control of both the practical and the prescriptive aspects of their trade. 3D printing is just one example of how, in the future, tools and not products will be marketed. In many areas, the traditional hierarchy of ideas has been reversed, as those who used to call the tune are now trying to keep up with and capitalise on trends that emerge from their audiences. One can see this loss of influence in an aesthetic trend that seems to run counter to those I’ve been observing here, but which ultimately reflects the same reality. From fashion to furniture, designers are making neutral products which can be customised by an increasingly identity-conscious, changeable audience.

Currently, the personal transformations taking place online rely for the most part on software; the body itself is not seriously altered. But with scientific fields such as bioengineering expanding in scope, this may not be the case for long. Alice Rawsthorn has considered the implications: “As our personal identities become subtler and more singular, we will wish to make increasingly complex and nuanced choices about the design of many aspects of our lives… We will also have more of the technological tools required to do so.” If this does turn out to be the case, we will face considerable ethical dilemmas regarding the uses and more generally the purpose of science and technology.

When did death become so personal?

 

I have a slightly gloomy but, I think, not unreasonable view of birthdays, which is that they are really all about death. It rests on two simple observations. First, much as they pretend otherwise, people do generally find birthdays to be poignant occasions. And second, a milestone can have no poignancy which does not ultimately come from the knowledge that the journey in question must end. (Would an eternal being find poignancy in ageing, nostalgia, or anything else associated with the passing of time? Surely not in the sense that we use the word). In any case, I suspect most of us are aware that at these moments when our life is quantified, we are in some sense facing our own finitude. What I find interesting, though, is that to acknowledge this is verboten. In fact, we seem to have designed a whole edifice of niceties and diversions – cards, parties, superstitions about this or that age – to avoid saying it plainly.

Well it was my birthday recently, and it appears at least one of my friends got the memo. He gave me a copy of Hans Holbein’s Dance of Death, a sequence of woodcuts composed in 1523-5. They show various classes in society being escorted away by a Renaissance version of the grim reaper – a somewhat cheeky-looking skeleton who plays musical instruments and occasionally wears a hat. He stands behind The Emperor, hands poised to seize his crown; he sweeps away the coins from The Miser’s counting table; he finds The Astrologer lost in thought, and mocks him with a skull; he leads The Child away from his distraught parents.

Hans Holbein, “The Astrologer” and “The Child,” from “The Dance of Death” (1523-5)

It is striking for the modern viewer to see death out in the open like this. But the “dance of death” was a popular genre that, before the advent of the printing press, had adorned the walls of churches and graveyards. Needless to say, this reflects the fact that in Holbein’s time, death came frequently, often without warning, and was handled (both literally and psychologically) within the community. Historians speculate about what pre-modern societies really believed regarding death, but belief is a slippery concept when death is part of the warp and weft of culture, encountered daily through ritual and artistic representations. It would be a bit like asking the average person today what their “beliefs” are about sex – where to begin? Likewise in Holbein’s woodcuts, death is complex, simultaneously a bringer of humour, justice, grief, and consolation.

Now let me be clear, I am not trying to romanticise a world before antibiotics, germ theory, and basic sanitation. In such a world, with child mortality being what it was, you and I would most likely be dead already. Nonetheless, the contrast with our own time (or at least with certain cultures, and more about that later) is revealing. When death enters the public sphere today – which is to say, fictional and news media – it rarely signifies anything, for there is no framework in which it can do so. It is merely a dramatic device, injecting shock or tragedy into a particular set of circumstances. The best an artist can do now is to expose this vacuum, as the photographer Jo Spence did in her wonderful series The Final Project, turning her own death into a kitsch extravaganza of joke-shop masks and skeletons.

From Jo Spence, “The Final Project,” 1991-2, courtesy of The Jo Spence Memorial Archive and Richard Saltoun Gallery

And yet, to say that modern secular societies ignore or avoid death is, in my view, to miss the point. It is rather that we place the task of interpreting mortality squarely and exclusively upon the individual. In other words, if we lack a common means of understanding death – a language and a liturgy, if you like – it is first and foremost because we regard that as a private affair. This convention is hinted at by euphemisms like “life is short” and “you only live once,” which acknowledge that our mortality has a bearing on our decisions, but also imply that what we make of that is down to us. It is also apparent, I think, in our farcical approach to birthdays.

Could it be that, thanks to this arrangement, we have actually come to feel our mortality more keenly? I’m not sure. But it does seem to produce some distinctive experiences, such as the one described in Philip Larkin’s famous poem “Aubade” (first published in 1977):

Waking at four to soundless dark, I stare.
In time the curtain-edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.

Larkin’s sleepless narrator tries to persuade himself that humanity has always struggled with this “special way of being afraid.” He dismisses as futile the comforts of religion (“That vast moth-eaten musical brocade / Created to pretend we never die”), as well as the “specious stuff” peddled by philosophy over the centuries. Yet in the final stanza, as he turns to the outside world, he nonetheless acknowledges what does make his fear special:

telephones crouch, getting ready to ring
In locked-up offices, and all the uncaring
Intricate rented world begins to rouse.

Work has to be done.
Postmen like doctors go from house to house.

There is a dichotomy here, between a personal world of introspection, and a public world of routine and action. The modern negotiation with death is confined to the former: each in our own house.

 

*     *     *

 

When did this internalisation of death occur, and why? Many reasons spring to mind: the decline of religion, the rise of Freudian psychology in the 20th century, the discrediting of a socially meaningful death by the bloodletting of the two world wars, the rise of liberal consumer societies which assign death to the “personal beliefs” category, and would rather people focused on their desires in the here and now. No doubt all of these have had some part to play. But there is also another way of approaching this question, which is to ask if there isn’t some sense in which we actually savour this private relationship with our mortality that I’ve outlined, whatever the burden we incur as a result. Seen from this angle, there is perhaps an interesting story about how these attitudes evolved.

I direct you again to Holbein’s Dance of Death woodcuts. As I’ve said, what is notable from our perspective is that they picture death within a traditional social context. But as it turns out, these images also reflect profound changes that were taking place in Northern Europe during the early modern era. Most notably, Martin Luther’s Protestant Reformation had erupted less than a decade before Holbein composed them. And among the many factors which led to that Reformation was a tendency which had begun emerging within Christianity during the preceding century, and which would be enormously influential in the future. This tendency was piety, which stressed the importance of the individual’s emotional relationship to God.

As Ulinka Rublack notes in her commentary on The Dance of Death, one of the early contributions of piety was the convention of representing death as a grisly skeleton. This figure, writes Rublack, “tested its onlooker’s immunity to spiritual anxiety,” since those who were firm in their convictions “could laugh back at Death.” In other words, buried within Holbein’s rich and varied portrayal of mortality was already, in embryonic form, an emotionally charged, personal confrontation with death. And nor was piety the only sign of this development in early modern Europe.

Hans Holbein, The Ambassadors (1533)

In 1533, Holbein produced another, much more famous work dealing with death: his painting The Ambassadors. Here we see two young members of Europe’s courtly elite standing either side of a table, on which are arrayed various objects that symbolise a certain Renaissance ideal: a life of politics, art, and learning. There are globes, scientific instruments, a lute, and references to the ongoing feud within the church. The most striking feature of the painting, however, is the enormous skull which hovers inexplicably in the foreground, fully perceptible only from a sidelong angle. This remarkable and playful item signals the arrival of another way of confronting death, which I describe as decadent. It is not serving any moral or doctrinal message, but illuminating what is most precious to the individual: status, ambition, accomplishment.

The basis of this decadent stance is as follows: death renders meaningless our worldly pursuits, yet at the same time makes them seem all the more urgent and compelling. This will be expounded in a still more iconic Renaissance artwork: Shakespeare’s Hamlet (1599). It is no coincidence that the two most famous moments in this play are both direct confrontations with death. One is, of course, the “To be or not to be” soliloquy; the other is the graveside scene, in which Hamlet holds a jester’s skull and asks: “Where be your gibes now, your gambols, your songs, your flashes of merriment, that were wont to set the table on a roar?” These moments are indeed crucial, for they suggest why the tragic hero, famously, cannot commit to action. As he weighs up various decisions from the perspective of mortality, he becomes intoxicated by the nuances of meaning and meaninglessness. He dithers because ultimately, such contemplation itself is what makes him feel, as it were, most alive.

All of this is happening, of course, within the larger development that historians like to call “the birth of the modern individual.” But as the modern era progresses, I think there are grounds to say that these two approaches – the pious and the decadent – will be especially influential in shaping how certain cultures view the question of mortality. And although there is an important difference between them insofar as one addresses itself to God, they also share something significant: a mystification of the inner life, of the agony and ecstasy of the individual soul, at the expense of religious orthodoxy and other socially articulated ideas about life’s purpose and meaning.

During the 17th century, piety became the basis of Pietism, a Lutheran movement that enshrined an emotional connection with God as the most important aspect of faith. Just as pre-Reformation piety may have been a response, in part, to the ravages of the Black Death, Pietism emerged from the utter devastation wreaked in Germany by the Thirty Years War. Its worship was based on private study of the Bible, alone or in small groups (sometimes called “churches within a church”), and on evangelism in the wider community. In Pietistic sermons, the problem of our finitude – of our time in this world – is often bound up with a sense of mystery regarding how we ought to lead our lives. Everything points towards introspection, a search for duty. We can judge how important these ideas were to the consciousness of Northern Europe and the United States simply by naming two individuals who came strongly under their influence: Immanuel Kant and John Wesley.

It was also from the central German heartlands of Pietism that, in the late 18th century, Romanticism was born – a movement which took the decadent fascination with death far beyond what we find in Hamlet. Goethe’s novel The Sorrows of Young Werther, in which the eponymous artist shoots himself from lovesickness, led to a wave of copycat suicides by men dressed in dandyish clothing. As Romanticism spread across Europe and into the 19th century, flirting with death, using its proximity as a kind of emotional aphrodisiac, became a prominent theme in the arts. As Byron describes one of his typical heroes: “With pleasure drugged, he almost longed for woe, / And e’en for change of scene would seek the shades below.” Similarly, Keats: “Many a time / I have been half in love with easeful Death.”

 

*     *     *

 

This is a very cursory account, and I am certainly not claiming there is any direct or inevitable progression between these developments and our own attitudes to death. Indeed, with Pietism and Romanticism, we have now come to the brink of the Great Awakenings and Evangelism, of Wagner and mystic nationalism – of an age, in other words, where spirituality enters the public sphere in a dramatic and sometimes apocalyptic way. Nonetheless, I think all of this points to a crucial idea which has been passed on to some modern cultures, perhaps those with a northern European, Protestant heritage; the idea that mortality is an emotional and psychological burden which the individual should willingly assume.

And I think we can now discern a larger principle which is being cultivated here – one that has come to define our understanding of individualism perhaps more than any other. That is the principle of freedom. To take responsibility for one’s mortality – to face up to it and, in a manner of speaking, to own it – is to reflect on life itself and ask: for what purpose, for what meaning? Whether framed as a search for duty or, in the extreme decadent case, as the basis of an aesthetic experience, such questions seem to arise from a personal confrontation with death; and they are very central to our notions of freedom. This is partly, I think, what underlies our convention that what you make of death is your own business.

The philosophy that has explored these ideas most comprehensively is, of course, existentialism. In the 20th century, Martin Heidegger and Jean-Paul Sartre argued that the individual can only lead an authentic life – a life guided by the values they deem important – by accepting that they are free in the fullest, most terrifying sense. And this in turn requires that the individual honestly accept, or even embrace, their finitude. For the way we see ourselves, these thinkers claim, is future-oriented: it consists not so much in what we have already done, but in the possibility of assigning new meaning to those past actions through what we might do in the future. Thus, in order to discover what our most essential values really are – the values by which we wish to direct our choices as free beings – we should consider our lives from their real endpoint, which is death.

Sartre and Heidegger were eager to portray these dilemmas, and their solutions, as brute facts of existence which they had uncovered. But it is perhaps truer to say that they were signing off on a deal which had been much longer in the making – a deal whereby individuals accept the burden of understanding their existence as doomed beings, with all the nausea that entails, in exchange for the very expansive sense of freedom we now consider so important. Indeed, there is very little that Sartre and Heidegger posited in this regard which cannot be found in the work of the 19th-century Danish philosopher Søren Kierkegaard; and Kierkegaard, it so happens, can also be placed squarely within the traditions of both Pietism and Romanticism.

To grasp how deeply engrained these ideas have become, consider again Larkin’s poem “Aubade:”

Most things may never happen: this one will,
And realisation of it rages out
In furnace-fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no one off the grave.
Death is no different whined at than withstood.

Here is the private confrontation with death framed in the most neurotic and desperate way. Yet bound up with all the negative emotions, there is undoubtedly a certain lugubrious relish in that confrontation. There is, in particular, something titillating in the rejection of all illusions and consolations, clearing the way for chastisement by death’s uncertainty. This, in other words, is the embrace of freedom taken to its most masochistic limit. And if you find something strangely uplifting about this bleak poem, it may be that you share some of those intuitions.

 

 

 

The Price of Success: Britain’s Tumultuous 19th Century

In 1858, an exclusive Soho dining society known simply as “the Club” – attended by former and future Prime Ministers, prominent clergymen, poets and men of letters – debated the question of “the highest period of civilization” ever reached. It was, they decided, “in London at the present moment.” The following year, several books were published which might, at first glance, appear to support this grandiose conclusion. They included On Liberty by John Stuart Mill, now a cornerstone of political philosophy; Adam Bede, the first novel by the great George Eliot; and Charles Darwin’s On the Origin of Species, which presented the most comprehensive argument yet for the theory of evolution.

Certainly, all of these works were products of quintessentially Victorian seams of thought. Yet they also revealed the fragility of what most members of “the Club” considered the very pillars of their “highest period of civilization.” Mill’s liberalism was hostile to the widespread complacency which held the British constitution to be perfect. George Eliot, aka Marian Evans, was a formidably educated woman living out of wedlock with the writer George Henry Lewes; as such, she was an affront to various tenets of contemporary morality. And Darwin’s work, of course, would fatally undermine the Victorian assumption that theirs was a divinely ordained greatness.

These are just some of the insecurities, tensions, and contradictions which lie at the heart of Britain’s history in the 19th century, and which provide the central theme of David Cannadine’s sweeping (and somewhat ironically titled) new volume, Victorious Century: The United Kingdom 1800-1906. This was a period when Britain’s global hegemony in economic, financial, and imperial terms was rendered almost illusory by an atmosphere of entropy and flux at home. It was a period when the state became more proactive and informed than ever before, yet could never fully comprehend the challenges of its rapidly industrialising economy. And it was a period when Britain’s Empire continued incessantly to expand, despite no one in Westminster finding a coherent plan of how, or for what purpose, to govern it.

Cannadine’s interest in discomfort and dilemma also explains the dates which bookend his narrative. In 1800 William Pitt’s administration enacted the Union with Ireland, bringing into existence the “United Kingdom” of the book’s title. Throughout the ensuing century, the “Irish question” would periodically overwhelm British politics through religious tension, famine, and popular unrest (indeed, I refer mainly to Britain in this review because Ireland was never assimilated into its cultural or political life). The general election of 1906, meanwhile, was the last hurrah of the Liberal Party, a coalition of progressive aristocrats, free traders and radical reformers whose internal conflicts in many ways mirrored those of Victorian Britain at large.

Cannadine’s approach is not an analytical one, and so there is little discussion of the great, complex question which looms over Britain’s 19th century: namely, why that seismic shift in world history, the industrial revolution, happened here. He does make clear, however, the importance of victory in the Napoleonic Wars which engulfed Europe until 1815. Without this hard-won success, Britain could not have exploited its geographical and cultural position in between its two largest export markets, Europe and the United States. Moreover, entrepreneurial industrial activity was directly stimulated by the state’s demand for materiel, and the wheels of international finance greased by government borrowing for the war effort.

From the outset, the volatility of this new model of capitalism was painfully clear. Until mid-century, Britain’s population, industrial output, investment and trade expanded at a dizzying rate, only to stumble repeatedly into prolonged and wrenching economic crises. The accompanying urban deprivation was brutal – life expectancy for a working-class man in 1840s Liverpool was 22 – though arguably no worse than the rural deprivation which had preceded it. Nonetheless, these realities, together with the regular outbreaks of revolution on the continent, meant that from the 1830s onwards the British state assumed a radically new role of “legislative engagement with contemporary issues”: regulating industry, enhancing local government and public services, and gauging public opinion to judge whether political concessions, particularly electoral reform, were necessary.

The second half of the century, by contrast, hatched anxieties which were less dramatic but more insidious. Rising giants such as the United States and Germany, with their superior resources and higher standards of science, technology, and education, foretold the end of British preeminence long before it came to pass. Certainly, the price of global competition was paid largely by landlords, farmers, and manufacturers; working-class living standards steadily improved. But declinism permeated the culture as a whole, manifesting itself in a range of doubts which may sound familiar to us today: immigration and loss of national identity, intractable inequality, military unpreparedness, the spiritual and physical decrepitude of the masses, and the depravity of conspicuous consumption among the upper classes.

Cannadine recounts all of this with lucidity, verve, and a dazzling turn of phrase. He is, however, committed to a top-down view of history which places Westminster politics at the centre of events. This has its benefits: we gain an understanding not just of such fascinating figures as Robert Peel, Benjamin Disraeli and William Gladstone, but also a detailed grasp of the evolution of modern government. This perspective does, however, run counter to the real story of the 19th century, which is precisely the redistribution of historical agency through expanding wealth, literacy, technology and political participation. Cannadine might have reassessed his priorities in light of his own book’s epigraph, from Marx’s Eighteenth Brumaire: “Men make their own history, but they do not do so freely, not under conditions of their own choosing.”

How The Past Became A Battlefield

 

In recent years, a great deal has been written on the subject of group identity in politics, much of it aiming to understand how people in Western countries have become more likely to adopt a “tribal” or “us-versus-them” perspective. Naturally, the most scrutiny has fallen on the furthest ends of the spectrum: populist nationalism on one side, and certain forms of radical progressivism on the other. We are by now familiar with various economic, technological, and psychological accounts of these group-based belief systems, which are to some extent analogous throughout Europe and in North America. Something that remains little discussed, though, is the role of ideas and attitudes regarding the past.

When I refer to the past here, I am not talking about the study of history – though as a source of information and opinion, it is not irrelevant either. Rather, I’m talking about the past as a dimension of social identity; a locus of narratives and values that individuals and groups refer to as a means of understanding who they are, and with whom they belong. This strikes me as a vexed issue in Western societies generally, and one which has had a considerable bearing on politics of late. I can only provide a generic overview here, but I think it’s notable that movements and tendencies which emphasise group identity do so partly through a particular, emotionally salient conception of the past.

First consider populism, in particular the nationalist, culturally conservative kind associated with the Trump presidency and various anti-establishment movements in Europe. Common to this form of politics is a notion that Paul Taggart has termed “heartland” – an ill-defined earlier time in which “a virtuous and unified population resides.” It is through this temporal construct that individuals can identify with said virtuous population and, crucially, seek culprits for its loss: corrupt elites and, often, minorities. We see populist leaders invoking “heartland” by brandishing passports, or promising to make America great again; France’s Marine Le Pen has even sought comparison to Joan of Arc.

Meanwhile, parts of the left have embraced an outlook well expressed by Faulkner’s adage that the past is never dead – it isn’t even past. Historic episodes of oppression and liberating struggle are treated as continuous with, and sometimes identical to, the present. While there is often an element of truth in this view, its practical effect has been to spur on a new protest movement. A rhetorical fixation with slavery, colonialism, and patriarchy not only implies urgency, but adds moral force to certain forms of identification such as race, gender, or general antinomianism.

Nor are these tendencies entirely confined to the fringes. Being opposed to identity politics has itself become a basis for identification, albeit less distinct, and so we see purposeful conceptions of the past emerging among professed rationalists, humanists, centrists, classical liberals and so on. In their own ways, figures as disparate as Jordan Peterson and Steven Pinker define the terra firma of reasonable discourse by a cultural narrative of Western values or Enlightened liberal ideals, while everything outside these bounds invites comparison to one or another dark episode from history.

I am not implying any moral or intellectual equivalence between these different outlooks and belief systems, and nor am I saying their views are just figments of ideology. I am suggesting, though, that in all these instances, what could plausibly be seen as looking to history for understanding or guidance tends to shade into something more essential: the sense that a given conception of the past can underpin a collective identity, and serve as a basis for the demarcation of the political landscape into friends and foes.

 

*     *     *

 

These observations appear to be supported by recent findings in social psychology, where “collective nostalgia” is now being viewed as a catalyst for inter-group conflict. In various contexts, including populism and liberal activism, studies suggest that self-identifying groups can respond to perceived deprivation or threat by evoking a specific, value-laden conception of the past. This appears to bolster solidarity within the group and, ultimately, to motivate action against out-groups. We might think of the past here as becoming a kind of sacred territory to be defended; consequently, it serves as yet another mechanism whereby polarisation drives further polarisation.

This should not, I think, come as a surprise. After all, nation states, religious movements and even international socialism have always found narratives of provenance and tradition essential to extracting sacrifices from their members (sometimes against the grain of their professed beliefs). Likewise, as David Potter noted, separatist movements often succeed or fail on the basis of whether they can establish a more compelling claim to historical identity than that of the larger entity from which they are trying to secede.

In our present context, though, politicised conceptions of the past have emerged from cultures where this source of meaning or identity has largely disappeared from the public sphere. Generally speaking, modern Western societies allow much less of the institutional transmission of stories which has, throughout history, brought an element of continuity to religious, civic, and family life. People associate with one another on the basis of individual preference, and institutions which emerge in this way usually have no traditions to refer to. In popular culture, the lingering sense that the past withholds some profound quality is largely confined to historical epics on the screen, and to consumer fads recycling vintage or antiquated aesthetics. And most people, it should be said, seem perfectly happy with this state of affairs.

Nonetheless, if we want to understand how the past is involved with the politics of identity today, it is precisely this detachment that we should scrutinise more closely. For ironically enough, we tend to forget that our sense of temporality – or indeed lack thereof – is itself historically contingent. As Francis O’Gorman details in his recent book Forgetfulness: Making the Modern Culture of Amnesia, Western modernity is the product of centuries’ worth of philosophical, economic, and cultural paradigms that have fixated on the future, driving us towards “unknown material and ideological prosperities to come.” Indeed, from capitalism to Marxism, from the Christian doctrine of salvation to the liberal doctrine of progress, it is remarkable how many of the Western world’s apparently diverse strands of thought regard the future as the site of universal redemption.

But more to the point, and as the intellectual historian Isaiah Berlin never tired of pointing out, this impulse towards transcending the particulars of time and space has frequently provoked, or at times merged with, its opposite: ethnic, cultural, and national particularism. Berlin made several important observations by way of explaining this. One is that universal and future-oriented ideals tend to be imposed by political and cultural elites, and are thus resented as an attack on common customs. Another is that many people find something superficial and alienating about being cut off from the past; consequently, notions like heritage or historical destiny become especially potent, since they offer both belonging and a form of spiritual superiority.

I will hardly be the first to point out that the most recent apotheosis of progressive and universalist thought came in the era immediately following the Cold War (not for nothing has Francis Fukuyama’s The End of History become its most iconic text). In this moment, energetic voices in Western culture – including capitalists and Marxists, Christians and liberals – were preoccupied with cutting loose from existing norms. And so, from the post-national rhetoric of the EU to postmodern academia and the champions of the service economy and global trade, they all defined the past by outdated modes of thought, work, and indeed social identity.

I should say that I’m too young to remember this epoch before the war on terror and the financial crisis, but the more I’ve tried to learn about it, the more I am amazed by its teleological overreach. This modernising discourse, or so it appears to me, was not so much concerned with constructing a narrative of progress leading up to the present day as with portraying the past as inherently shameful and of no use whatsoever. To give just one example, consider that as late as 2005, Britain’s then Prime Minister Tony Blair did not even bother to clothe his vision of the future in the language of hope, simply stating: “Unless we ‘own’ the future, unless our values are matched by a completely honest understanding of the reality now upon us and the next about to hit us, we will fail.”

Did such ways of thinking lay the ground for the divisive attachments to the past we see in politics today? Arguably, yes. The populist impulse towards heartland has doubtless been galvanised by the perception that elites have abandoned provenance as a source of common values. Moreover, as the narrative of progress has become increasingly unconvincing in the twenty-first century, its latent view of history as a site of backwardness and trauma has been seized upon by a new cult of guilt. What were intended as reasons to dissociate from the past have become reasons to identify with it as victims or remorseful oppressors.

 

*     *     *

 

Even if you accept all of this, there remains a daunting question: namely, what is the appropriate relationship between a society and its past? Is there something to be gained from cultivating some sense of a common background, or should we simply refrain from undermining that which already exists? It’s important to state, firstly, that there is no perfect myth which every group in a polity can identify with equally. History is full of conflict and tension, as well as genuine injustice, and to suppress this fact is inevitably to sow the seeds of resentment. Such was the case, for instance, with the Confederate monuments which were the focus of last year’s protests in the United States: many of these were erected as part of a campaign for national unity in the early 20th century, one that denied the legacy of African American slavery.

Moreover, a strong sense of tradition is easily co-opted by rulers to sacralise their own authority and stifle dissent. The commemoration of heroes and the vilification of old enemies are today common motifs of state propaganda in Russia, India, China, Turkey, Poland and elsewhere. Indeed, many of the things we value about modern liberal society – free thought, scientific progress, political equality – have been won largely by intransigence towards the claims of the past. None of them sit comfortably in societies that afford significant moral authority to tradition. And this is to say nothing of the inevitable sacrificing of historical truth when the past is used as an agent of social cohesion.

But notwithstanding the partial resurgence of nationalism, it is not clear there exists in the West today any vehicle for such comprehensive, overarching myths. As with “tribal” politics in general, the politicisation of the past has been divergent rather than unifying because social identity is no longer confined to traditional concepts and categories. A symptom of this, at least in Europe, is that people who bemoan the absence of shared historical identity – whether politicians such as Emmanuel Macron or critics like Douglas Murray – struggle to express what such a thing might actually consist in. Thus they resort to platitudes like “sovereignty, unity and democracy” (Macron), or a rarefied high culture of Cathedrals and composers (Murray).

The reality which needs to be acknowledged, in my view, is that the past will never be an inert space reserved for mere curiosity or the measurement of progress. The human desire for group membership is such that it will always be seized upon as a buttress for identity. The problem we have encountered today is that, when society at large loses its sense of the relevance and meaning of the past, the field is left open to the most divisive interpretations; there is, moreover, no common ground from which to moderate between such conflicting narratives. How to broaden out this conversation, and restore some equanimity to it, might in the present circumstances be an insoluble question. It certainly bears thinking about though.