Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both of which, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017). I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no “deep further fact” or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn’t refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit’s agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity. In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transfer of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood have become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: “There is still a difference between my life and the lives of other people. But the difference is less.” I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don’t get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and as I say, I don’t think that commitment could be anything more than theoretical). In fact, it’s possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit’s claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.

Reading Antigone in an age of resistance

The play opens with two sisters, Antigone and Ismene, arguing about their duties to family versus those to the state. Their two brothers have just killed each other while leading opposing sides of a civil war in Thebes. Their uncle Creon has now taken charge of the city, and has decreed that one of the brothers, Polynices, is to be denied a funeral: “he must be left unburied, his corpse / carrion for the birds and dogs to tear, / an obscenity for the citizens to behold.”

Ismene chooses obedience to Creon, but Antigone decides to rebel. She casts a symbolic handful of dust over Polynices’ corpse, and when brought before Creon, affirms her action in the name of “the great unwritten, unshakeable traditions” demanding funeral rites for the dead. So begins a confrontation between two headstrong, unflinching protagonists. It will end with Antigone hanging herself in her jail cell, an act that leads to the suicide of Creon’s son (who was engaged to Antigone) and, in turn, of Creon’s wife.

*   *   *


“When I see that king in that play, the first name that came to mind was Donald Trump: arrogance, misogyny, tunnel vision.” This was reportedly one audience member’s response to Antigone in Ferguson, a 2018 theatre piece that brought a famous Greek tragedy into the context of US race relations. That tragedy is Sophocles’ Antigone, which I have summarised above. The play is now frequently being used to explore contemporary politics, especially in relation to the theme of resistance. “It’s a story of a woman who finds the courage of her convictions to speak truth to power,” said Carl Cofield, who directed another production of Antigone in New York last year. Cofield drew parallels with the #MeToo movement, Black Lives Matter, and “the resistance to the outcome of the presidential race.”

This reading of Antigone has become increasingly common since the post-war era. Its originator was perhaps Bertolt Brecht’s 1948 adaptation, which imagined a scenario where the German people had risen against Hitler. Since the 1970s Antigone has often been portrayed as a feminist heroine, and the play has served as a call-to-arms in countless non-western contexts too. As Fanny Söderbäck proudly notes: “Whenever and wherever civil liberties are endangered, when the rights or existence of aboriginal peoples are threatened, when revolutions are underway, when injustices take place – wherever she is needed, Antigone appears.”

Such appropriation of a classical figure is by no means unique. It echoes the canonisation of Socrates as a martyr for free speech and civil disobedience, most notably by John Stuart Mill, Mohandas Gandhi and Martin Luther King. And just as this image of Socrates rests on Plato’s Apology of Socrates, but ignores the quite different portrait in the Crito, the “resistance” reading of Antigone bears little resemblance to how the play was originally intended and received.

An audience in 5th century Athens would not have regarded Antigone as subversive towards the authority of the state. In fact, if you accept the conventional dating of the play (441 BC), the Athenian people elected Sophocles to serve as a general immediately after its first performance. Rather, the dramatic impact of Antigone lay in the clash of two traditional visions of justice. Creon’s position at the outset – “whoever places a friend / above the good of his own country, he is nothing” – was not a cue for booing and hissing, but a statement of conventional wisdom. Likewise, Antigone’s insistence on burying her brother was an assertion of divine law, and more particularly, her religious duties as a woman. Thus Creon’s error is not that he defends the prerogatives of the state, but that he makes them incompatible with the claims of the gods.

Sophocles’ protagonists were not just embodiments of abstract principles, though. He was also interested in what motivates individuals to defend a particular idea of justice. Creon, it seems, is susceptible to megalomania and paranoia. And as Antigone famously admits in her final speech, her determination to bury her brother was a very personal obsession, born from her uniquely wretched circumstances.

*   *   *


It’s hardly surprising that our intuitive reading of Antigone has changed over more than two millennia. The world we inhabit, and the moral assumptions that guide us through it, are radically different. Moreover, Antigone is one of those works that seem to demand a new interpretation in every epoch. Hegel, for instance, used the play to illustrate his theory of dialectical progress in history. The moral claims of Antigone and Creon – or in Hegel’s scheme, family and state – are both inadequate, but the need to synthesise them cannot be grasped until they have clashed and been found wanting. Simone de Beauvoir also identified both protagonists with flawed outlooks, though in her reading Antigone is a “moral idealist” and Creon a “political realist” – two ways, according to de Beauvoir, of avoiding moral responsibility.

So neither Hegel nor de Beauvoir recognised Antigone as the obvious voice of justice. Then again, they were clearly reading the play with the templates provided by their own moments in history. Hegel’s historical forces belong to the tumultuous conflicts of the early 19th century, in which he had staked out a position as both a monarchist and a supporter of the French Revolution. De Beauvoir’s archetypes belong to Nazi-occupied France – a world of vicious dilemmas in which pacifists, collaborators and resisters had all claimed to act for the greater good, and were all, in her eyes, morally compromised.

Thus, each era tries to understand Antigone using the roles and narratives particular to its own moral universe. And this, I would argue, is a natural part of artistic and political discourse. Such works cannot be quarantined in their original context – they have different resonances for different audiences. Moreover, the question of how one interprets something is always preceded by the question of why one bothers to interpret it at all, and that second question is inevitably bound up with what we consider important in the here and now. Our own moral universe, as I’ve already suggested, is largely defined by the righteousness of resistance and the struggle for freedom. Consequently, works from the past tend to be interpreted according to a narrative where one agent or category of agent suppresses the autonomy of another.

Nonetheless, there are pitfalls here. I think it is important for us to remain aware that our intuitive reading of a play like Antigone is precisely that – our intuitive reading. Otherwise, we may succumb to a kind of wishful thinking. We may end up being so comfortable projecting our values across time that we forget they belong to a contingent moment in history. We might forget, in other words, that our values are the product of a particular set of circumstances, not of some divine edict, and so cannot simply be accepted as right.

Of course we can always try to reason about right and wrong. But if we unthinkingly apply our worldview to people in other eras, we are doing precisely the opposite. We are turning history itself into a vast echo chamber, relieving us of the need to examine or defend our assumptions.

*   *   *


The task of guarding against such myopia has traditionally fallen to academic scholarship. And in a sense, this institution has never been better equipped to do it. Since the advent of New Historicism in the 1980s, the importance of the context in which works are made, as well as the context in which they are read, has been widely acknowledged in the humanities. But this has had a peculiarly inverse effect. The apparent impossibility of establishing any objective or timeless lesson in a play like Antigone has only heightened the temptation to claim it for ourselves.

Consider the approach taken by the influential gender theorist Judith Butler in her book Antigone’s Claim (2000). Using modern psychoanalytic concepts, Butler delves into the murky world of family and sexuality in the play (Antigone is the daughter of the infamously incestuous Oedipus, whose “curse” she is said to have inherited). Butler thus unearths “a classical western dilemma” about the treatment of those who do not fit within “normative versions of kinship.”

However, Butler is not interested in establishing any timeless insights about Antigone. As she makes clear throughout her analysis, she is interested in Antigone “as a figure for politics,” and in particular, for the contemporary politics of resistance. “I began to think about Antigone a few years ago,” she says, “as I wondered what had happened to those feminist efforts to confront and defy the state.” She then sets out her aim of using the play to examine contemporary society, asking

what the conditions of intelligibility could have been that would have made [Antigone’s] life possible, indeed, what sustaining web of relations makes our lives possible, those of us who confound kinship in the rearticulation of its terms?

This leads her to compare Antigone’s plight to that of AIDS victims and those in alternative parenting arrangements, while also hinting at “the direction for a psychoanalytic theory” which avoids “heterosexual closure.”

Butler is clearly not guilty, then, of forgetting her own situatedness in history. However, this does raise the question: if one is only interested in the present, why use a work from the past at all? Butler may well answer that such texts are an integral part of the political culture she is criticising. And that is fine, as far as it goes. But this approach seems to risk undermining the whole point of historicism. For although it does not pretend that people in other times had access to the same ideas and beliefs as we do, it does imply that the past is only worth considering in terms of our own ideas and beliefs. And the result is very similar: Antigone becomes, effectively, a play about us.

In other words, Butler’s way of appropriating the past subtly makes it conform to contemporary values. And in doing so, it lays the ground for that echo chamber I described earlier, whereby works from the past merely serve as opportunities to give our own beliefs a sheen of eternal truth. Indeed, elsewhere in the recent scholarship on Antigone, one finds that an impeccably historicist reading can nonetheless end like this:

Thus is the nature of political activism bent on the expansion of human rights and the extension of human dignity. … Antigone is a charter member of a small human community that is “la Résistance,” wherever it pops up in the history of human civilisation. (My emphasis)

Such statements are not just nonsensical, but self-defeating. However valuable ideas like human rights, human dignity, and resistance might be, they do not belong to “the history of human civilisation.” Moreover, it is impossible to understand their value unless one realises this.

The crucial question here is what we do with the knowledge that values differ across time. There is, perhaps, a natural tendency to see this as demanding an assertion of the ultimate validity of our own worldview. In this sense, our desire to portray Antigone as a figure of resistance recalls those theologians who used to scour classical texts for foreshadowings of Christ. I would argue, however, that we should treat the contingency of our beliefs as a warning against excessive certainty. Ideas are always changing in relation to circumstances, and as such, need to be constantly questioned.

The Forgotten Books of Dorothea Tanning

This article was first published by MutualArt on 4 April 2019

It has often been said that Dorothea Tanning had two careers in her exceptionally long life: first as a visual artist, then as a writer. At the current Tate Modern exhibition of Tanning’s paintings and sculptures, you can read her statement that it was after the death of her husband Max Ernst in 1976 that she “gave full rein to her long felt compulsion to write.” The decades before her own death in 2012 were increasingly dedicated to literature, as she produced two memoirs, a novel, and two well-regarded collections of poetry.

Nonetheless, it would be truer to say that word and image went hand-in-hand throughout Tanning’s career. She published a steady stream of texts during the height of her visual output from the 1940s until the 1970s. Moreover, as the wealth of literary allusions in her paintings suggests, she drew constant inspiration from the hoard of books she and Ernst kept in their home. Tanning told the New York Times in 1995: “All my life I’ve been on the fence about whether to be an artist or writer.”

But the most overlooked aspect of Tanning’s literary-artistic career is her involvement in numerous books of poetry and printmaking in France from the 1950s onwards. These include collaborations with several French authors, and two books of Tanning’s own French poetry and prints – Demain (1963) and En chair et en or (1974).

These works deserve more attention. For one thing, the etchings and lithographs Tanning produced for these books amount to a significant and distinctive part of her oeuvre. According to Clare Elliott, curator of an upcoming show of Tanning’s graphic works at the Menil Collection in Houston, her prints “achieve a variety of visual effects impossible to achieve with other materials. Ranging from dreamlike representation to near total abstraction, they reveal the breadth of her formal innovation.”

What is more, a closer look at Tanning’s bookmaking years can give us a unique perspective on her as an artist – her working methods, her outlook, and her relationship to the movement she was most influenced by, Surrealism.


Book mania

Arriving in Paris in 1950, Tanning discovered a thriving scene around the beau livre, or limited edition artist’s book. “Paris in the first fifty years of our century spawned more beau livres than the rest of the world together,” she recalled in 1983. “To call it mania would not have surprised or displeased anyone.” Mostly these books were collaborations between an artist and a poet, “with mutual admiration as the basic glue that held them together,” as well as an editor who normally bankrolled the project.

Tanning dove straight into this milieu. In 1950 she produced a series of lithographs, Les 7 Périls Spectraux (The 7 Spectral Perils), to accompany text by the Surrealist poet André Pieyre de Mandiargues. Here we can recognise several motifs from Tanning’s early paintings – most notably in Premier peril, where a female figure with a dishevelled mask of hair presses herself against an open door, which is also the cover of a book. But with her combination of visual textures, Tanning achieves a new depth in these images, showing her embrace of the lithographic process in all its layered intricacy.

As the collaborations continued during the 1950s and 60s, Tanning’s printmaking ambitions grew. Like many artists before her, she discovered in etching and lithography a seemingly limitless arena for experimentation, attempting a wide range of techniques and compositions. And in 1963 she went a step further, replacing the poetry of other authors with her own.

Dorothea Tanning, “Frontispiece for Demain” and “Untitled for Demain” (1963). Courtesy of the Dorothea Tanning Foundation.

The result was Demain (Tomorrow), a book of six etchings and a poem in French dispersed across several pages. Though modest in size – just ten centimetres square – it is a punchy work of Surrealism. The poem progresses through a series of menacing images, as language breaks down in the presence of time and memory. It concludes: “The night chews its bone / My house asks itself / And deplores / Tonight, bath of mud / Evening fetish of a hundred thousand years, / My vampire.” The etchings convey a similar sense of dissolution, with vague forms emerging from a fog of aquatint.

Making Demain involved frustrations any printmaker could recognise. She would later describe watching her printer, Georges Visat, “wiping colours on the little plates while I stood by, always imploring for another try. There must have been fifty of these.” She was, however, thrilled by the result: “For my own words my own images – what more could one ask?”

Eleven years later Tanning produced En chair et en or (Of flesh and gold), a more substantial and, in every respect, more accomplished book. Its ten etchings, in which curvaceous, almost-human figures are suspended above landscapes of pale yellow and blue, show us what to expect from the accompanying poem. Everything expresses a sense of poise, a dazzling, enigmatic tension:

Body and face drift
Down with nightfall, unnoticed.
Draw near, draw nearer
Your destination.

Gradually, Tanning introduces notes of violence and desire, culminating in the striking final stanza: “Death on a weekend / Opened the dance like a vein / Flaming flesh and gold.”


Second languages

Dorothea Tanning, “Quoi de plus,” from “En chair et en or” (1974). Courtesy of the Dorothea Tanning Foundation.

By the time of En chair et en or, we can identify some characteristic features in Tanning’s printmaking and poetry. Her etchings typically present coarse background textures, ghostly colours, and loosely organic forms. Her poems, meanwhile, reveal her exposure to the international Surrealist movement during the 1940s. (In Demain, for instance, there are direct echoes of the Mexican poet Octavio Paz.)

But this is not the most insightful way to approach Tanning’s books. For what really appealed to her, an English-speaking painter, about printmaking and French poetry was the opportunity to escape familiar forms of expression.

“Much of this work, and etchings that follow, have to do with chance,” she wrote about one of her collaborations, “for so many things can happen to a copper plate, depending on how you treat it, that implications are myriad.” Very few artists master the printmaking process to the degree that they know exactly what they are going to get at the end of it, but for Tanning this was part of its allure. In her comments about printmaking, she often used words like “discovery” and “adventure.” Unpredictability, in other words, was a creative asset.

The same can be said of her poetry in this period. The Irish playwright Samuel Beckett claimed that he wrote in French precisely because he did not know it as well as English, and so was less confined by conventional style and idiom. Likewise, it is striking how raw and immediate Tanning’s French poetry is by comparison with her later work in English.

All of this resonates with what originally drew Tanning to Surrealism – in her often quoted phrase from 1936, “the limitless expanse of POSSIBILITY.” In its earliest and most dramatic phase, an important aim of Surrealism had been for artists to loosen their control over expression, thus allowing more spontaneous, expansive forms of communication and meaning. This is what printmaking and French – both, in a sense, second languages – allowed Tanning to do.

Notes on The Artist’s Studio

The series of paintings known as Concetto spaziale, by the Argentine-Italian artist Lucio Fontana, marks one of those moments in art history whose significance is easily overlooked today. It is difficult to imagine how radical these works must have looked during the 1960s: plain white canvases presenting nothing more than one or a few slits where Fontana slashed the surface with a blade. Moreover, as I realised when I reviewed an exhibition featuring Fontana in 2015 (you can read that review here), it is only by considering the atmosphere of post-war Europe that one can grasp how freighted with purpose and symbolism this simple gesture had been.

But there are always new ways of looking at an artwork. The other evening I was visiting some galleries near Piccadilly and found myself, unexpectedly, confronted by one of the Concetto spaziale paintings once more. Only I wasn’t looking at the painting itself, but at a series of photographs that showed Fontana in his studio making it. Where previously there had been the stark aura of an iconic artwork, now there was melodrama and a wry sense of humour. The images, taken by the Italian photographer Ugo Mulas, were arranged in a climactic sequence. First we see Fontana poised at some distance from the canvas, Stanley knife in hand, his tense wrist and neatly folded sleeve suggesting the commencement of a long-anticipated act. There is a mood of ritual silence in the room, heightened by the soft light pouring through a large window. Then Fontana is approaching the canvas uncertainly, and making the first incision on its white surface – a moment pictured first in wide-angle, then close-up. Finally, the deed done, he lingers in a ceremonious bowing posture, the canvas now divided by a metre-long cleft.

Installation shot of Ugo Mulas, Lucio Fontana, L’Attesa, Milano 1-6, 1964 (2019). Modern print. Gelatin silver print on baritated paper. Edition of 8. Courtesy of Robilant+Voena.

These are just some of the photographs Mulas took of artists in their studios during the 1960s and 70s, which can be seen at Robilant+Voena gallery on Dover Street. Much like Fontana’s paintings, Mulas’ photographs require one to step imaginatively backwards in time; they now appear so classical in style, and so gorgeous in tone, that one can overlook their more subtle aspects. In particular, I get the sense Mulas was aware of his role as a myth-maker. His images playfully pander to the romance surrounding the artist’s studio – the setting where, in the popular imagination, unusual individuals go to perform some exotic and mysterious process of magic.

*   *   *


I have always been fascinated by studios, probably because I grew up with one at home. This was my mother’s studio. It was located between the kitchen and my brother’s bedroom, but I was always aware that it was a different kind of room from the others in the house. A place of inspiration, yes: a realm of coffee, bookshelves, and classical music. But also a site of labour, which smelled of turpentine and had a cold cement floor, a place where my old clothes became rags to wipe etching plates. Above all it was (and remains) a very particular setting, shaped by the contingencies of one person’s working life as it had evolved over many years.

Insofar as artists’ studios really are special, mysterious places, it is because of this particularity. This is rarely reflected, though, in the photography and journalism that surrounds them. Rather, studios tend to attract attention according to how well they embody a particular conception of the artist as an outsider, an unconventional or even otherworldly being. One studio that fits this template belongs to the monk-like painter Frank Auerbach, who has worked in the same dank cell in Mornington Crescent more or less every day since 1954 (Auerbach once quipped that age had finally forced him to reduce his working year, from 365 days to 364). Not only is the room cramped and barely furnished, but to the delight of various photographers over the years, Auerbach’s scraping technique has left the floor coated in layer upon layer of calcified paint. This is nothing, however, compared to the iconic lair of Francis Bacon – a disaster zone that resembled a trash-heap more closely than a studio, and captured perfectly Bacon’s persona as a chaotic, doomed madman.

Jorge Lewinski, “Frank Auerbach,” 1965. © The Lewinski Archive at Chatsworth.
Perry Ogden, “Francis Bacon’s 7 Reece Mews studio, London, 1998.”

The fact is, of course, that studios are often highly utilitarian spaces – clean, carefully organised, with most consideration going to practical questions such as storage and lighting. Of course some artists are messy, but their clutter is not qualitatively different from that which exists in many workspaces. And yet, even the apparently humdrum reality of a studio can produce a mystifying effect. Journalists and visitors often dwell precisely on the most ordinary, relatable aspects of an artist’s working life, thereby implicitly reinforcing the idea that an artist is something other than ordinary. In one feature on “Secrets of the Studio,” for instance, we learn that Grayson Perry likes to “collapse in an armchair and listen to the Archers,” while George Shaw “pretty much work[s] office hours.”

This paradox was observed by Roland Barthes in his wonderful essay “The Writer on Holiday.” After noting the tendency of the press to dwell on such domestic aspects of a writer’s life as their holidays, diet, and the colour of their pyjamas, Barthes concludes:

Far from the details of his daily life bringing nearer to me the nature of his inspiration and making it clearer, it is the whole mythical singularity of his condition which the writer emphasises by such confidences. For I cannot but ascribe to some superhumanity the existence of beings vast enough to wear blue pyjamas at the very moment when they manifest themselves as universal conscience […].

Sometimes artists themselves appear to use this trick. Wolfgang Tillmans’ photograph Studio still life, c, 2014 shows a very ordinary desk spread with several computers, a keyboard, Sellotape, Post-it notes, and so on. There is just a suggestion of bohemia conveyed by the beer bottle, cigarette packs and ashtray. It is tempting to interpret this image, especially when shown alongside Tillmans’ other works, as a subtle piece of self-glorification – a gesture of humility that makes the artist seem all the more remarkable for being a real human being.

Wolfgang Tillmans, “Studio still life, c, 2014.”

*   *   *


We shouldn’t be too cynical, though. The various romantic tropes that surround artists are not always and entirely tools of mystification, and nor do they show, as Barthes suggested, “the glamorous status bourgeois society liberally grants its spiritual representatives” in order to render them harmless. Such “myths” also offer a way of pointing towards, and navigating around, a deeper reality of which we are aware: that artistic production, at least in its modern form, is a very personal thing. This is why we will always have the sense, when seeing or entering a studio, that we are intruders in a place of esoteric ritual.

As I said, the beauty of a studio lies in its particularity. Does this mean, then, that one cannot appreciate a studio without becoming familiar with it? Not entirely. I was recently lent a copy of the architect MJ Long’s book Artists’ Studios, in which she chronicles the numerous spaces she designed for artists during her career. These include some of the most colourful and, indeed, most widely mythologised studios out there. But as an architect, Long is uniquely well placed to explain the specific practical and personal considerations behind them. As such, she is able to bring out their genuinely poetic aspects without falling into cliché.

That poetry is captured, I think, in some notes left by Long’s husband and partner, Sandy Wilson, to encourage her to write her book. He briefly summarises a few of their studio projects, and the artists who commissioned them, as follows:

Kitaj, scholar-artist worked surrounded by books and the works of his friends. In his studio books lie open on the floor at the foot of each easel like paving stones in a Japanese garden.

Blake works in a sort of wonderland mirroring and embodying his magical mystery world of icons that feed into his imagination.

A dance photographer required a pure vacuum charged with light but no physical sense of place whatsoever.

Auerbach’s studio is the locked cell of the dedicated solitary.

Ben Johnson requires the clinical conditions of the operating theatre shared with meticulous operatives in a planned programme of execution.


Notes on “Why Liberalism Failed”

Patrick Deneen’s Why Liberalism Failed was one of the most widely discussed political books last year. In a crowded field of authors addressing the future of liberalism, Deneen stood out like a lightning-rod for his withering, full-frontal attack on the core principles and assumptions of liberal philosophy. And yet, when I recently went back and read the many reviews of Why Liberalism Failed, I came out feeling slightly dissatisfied. Critics of the book seemed all too able to shrug off its most interesting claims, and to argue instead on grounds more comfortable to them.

Part of the problem, perhaps, is that Deneen’s book is not all that well written. His argument is more often a barrage of polemical statements than a carefully constructed analysis. Still, the objective is clear enough. He is taking aim at the liberal doctrine of individual freedom, which prioritises the individual’s right to do, be, and choose as he or she wishes. This “voluntarist” notion of freedom, Deneen argues, has shown itself to be not just destructive, but in certain respects illusory. On that basis he claims we would be better off embracing the constraints of small-scale community life.

Most provocatively, Deneen claims that liberal societies, while claiming merely to create conditions in which individuals can exercise their freedom, in fact mould people to see themselves and to act in a particular way. Liberalism, he argues, grew out of a particular idea of human nature, which posited, above all, that people want to pursue their own ends. It imagined our natural and ideal condition as that of freely choosing individual actors without connection to any particular time, place, or social context. For Deneen, this is a dangerous distortion – human flourishing also requires things at odds with personal freedom, such as self-restraint, committed relationships, and membership of a stable and continuous community. But once our political, economic, and cultural institutions are dedicated to individual choice as the highest good, we ourselves are encouraged to value that freedom above all else. As Deneen writes:

Liberalism began with the explicit assertion that it merely describes our political, social, and private decision making. Yet… what it presented as a description of human voluntarism in fact had to displace a very different form of human self-understanding and experience. In effect, liberal theory sought to educate people to think differently about themselves and their relationships.

Liberal society, in other words, shapes us to behave more like the human beings imagined by its political and economic theories.

It’s worth reflecting for a moment on what is being argued here. Deneen is saying our awareness of ourselves as freely choosing agents is, in fact, a reflection of how we have been shaped by the society we inhabit. It is every bit as much of a social construct as, say, a view of the self that is defined by religious duties, or by membership of a particular community. Moreover, valuing choice is itself a kind of constraint: it makes us less likely to adopt decisions and patterns of life which might limit our ability to choose in the future – even if we are less happy as a result. Liberalism makes us unfree, in a sense, to do anything apart from maximise our freedom.

*   *   *


Reviewers of Why Liberalism Failed did offer some strong arguments in defence of liberalism, and against Deneen’s communitarian alternative. These tended to focus on material wealth, and on the various forms of suffering and oppression inherent to non-liberal ways of life. But they barely engaged with his claims that our reverence for individual choice amounts to a socially determined and self-defeating idea of freedom. Rather, they tended to take the freely choosing individual as a given, which often meant they failed to distinguish between the kind of freedom Deneen is criticizing – that which seeks to actively maximise choice – and simply being free from coercion.

Thus, writing in the New York Times, Jennifer Szalai didn’t see what Deneen was griping about. She pointed out that

nobody is truly stopping Deneen from doing what he prescribes: finding a community of like-minded folk, taking to the land, growing his own food, pulling his children out of public school. His problem is that he apparently wants everyone to do these things

Meanwhile, at National Review, David French argued that liberalism in the United States actually incentivises individuals to “embrace the most basic virtues of self-governance – complete your education, get married, and wait until after marriage to have children.” And how so? With the promise of greater “opportunities and autonomy.” Similarly Deirdre McCloskey, in a nonetheless fascinating rebuttal of Why Liberalism Failed, jumped between condemnation of social hierarchy and celebration of the “spontaneous order” of the liberal market, without acknowledging that she seemed to be describing two systems which shape individuals to behave in certain ways.

So why does this matter? Because it matters, ultimately, what kind of creatures we are – which desires we can think of as authentic and intrinsic to our flourishing, and which ones stem largely from our environment. The desire, for instance, to be able to choose new leaders, new clothes, new identities, new sexual partners – do these reflect the unfolding of some innate longing for self-expression, or could we in another setting do just as well without them?

There is no hard and fast distinction here, of course; the desire for a sports car is no less real and, at bottom, no less natural than the desire for friendship. Yet there is a moral distinction between the two, and a system which places a high value on the freedom to fulfil one’s desires has to remain conscious of such distinctions. This is firstly because many kinds of freedom are in conflict with other personal and social goods, and secondly because there may come a time when a different system offers more by way of prosperity and security. In both cases, it is important to be able to say what amounts to an essential form of freedom, and what does not.

*   *   *


Another common theme among Deneen’s critics was to question his motivation. His Catholicism, in particular, was widely implicated, with many reviewers insinuating that his promotion of close-knit community was a cover for a reactionary social and moral order. Here’s Hugo Drochon writing in The Guardian:

it’s clear that what he wants… is a return to “updated Benedictine forms” of Catholic monastic communities. Like many who share his worldview, Deneen believes that if people returned to such communities they would get back on a moral path that includes the rejection of gay marriage and premarital sex, two of Deneen’s pet peeves.

Similarly, Deirdre McCloskey:

We’re to go back to preliberal societies… with the church triumphant, closed corporate communities of lovely peasants and lords, hierarchies laid out in all directions, gays back in the closet, women in the kitchen, and so forth.

Such insinuations strike me as unjustified – these views do not actually appear in Why Liberalism Failed – but they are also understandable. For Deneen does not clarify the grounds of his argument. His critique of liberalism is made in the language of political philosophy, and seems to be consequentialist: liberalism has failed, because it has destroyed the conditions necessary for human flourishing. And yet whenever Deneen is more specific about just what has been lost, one hears the incipient voice of religious conservatism. In sexual matters, Deneen looks back to “courtship norms” and “mannered interaction between the sexes”; in education, to “comportment” and “the revealed word of God.”

I don’t doubt that Deneen’s religious beliefs colour his views, but nor do I think his entire case springs from some dastardly deontological commitment to Catholic moral teaching. Rather, I would argue that these outbursts point to a much more interesting tension in his argument.

My sense is that the underpinnings of Why Liberalism Failed come from virtue ethics – a philosophy whose stock has fallen somewhat since the Enlightenment, but which reigned supreme in antiquity and medieval Christendom. In Deneen’s case, what is important to grasp is Aristotle’s linking of three concepts: virtue, happiness, and the polis or community. The highest end of human life, says Aristotle, is happiness (or flourishing). And the only way to attain that happiness is through consistent action in accordance with virtue – in particular, through moderation and honest dealing. But note, virtues are not rules governing action; they are principles that one must possess at the level of character and, especially, of motivation. Also, it is not that virtue produces happiness as a consequence; the two are coterminous – to be virtuous is to be happy. Finally, the pursuit of virtue/happiness can only be successful in a community whose laws and customs are directed towards this same goal. For according to Aristotle:

to obtain a right training for goodness from an early age is a hard thing, unless one has been brought up under right laws. For a temperate and hardy way of life is not a pleasant thing to most people, especially when they are young.

The problem comes, though, when one has to provide a more detailed account of what the correct virtues are. For Aristotle, and for later Christian thinkers, this was provided by a natural teleology – a belief that human beings, as part of a divinely ordained natural order, have a purpose which is intrinsic to them. But this crutch is not really available in a modern philosophical discussion. And so more recent virtue ethicists, notably Alasdair MacIntyre, have shifted the emphasis away from a particular set of virtues with a particular purpose, and towards virtue and purpose as such. What matters for human flourishing, MacIntyre argued, is that individuals be part of a community or tradition which offers a deeply felt sense of what it is to lead a good life. Living under a shared purpose, as manifest in the social roles and duties of the polis, is ultimately more important than the purpose itself.

This seems to me roughly the vision of human flourishing sketched out in Why Liberalism Failed. Yet I’m not sure Deneen has fully reconciled himself to the relativism that is entailed by abandoning the moral framework of a natural teleology. This is a very real problem – for why should we not accept, say, the Manson family as an example of virtuous community? – but one which is difficult to resolve without overtly metaphysical concepts. And in fact, Deneen’s handling of human nature does strain in that direction, as when he looks forward to

the only real form of diversity, a variety of cultures that is multiple yet grounded in human truths that are transcultural and hence capable of being celebrated by many peoples.

So I would say that Deneen’s talk of “courtship norms” and “comportment” is similar to his suggestion that the good life might involve “cooking, planting, preserving, and composting.” Such specifics are needed to refine what is otherwise a dangerously vague picture of the good life.





Notes on “The Bowl of Milk”

I normally can’t stand hearing about the working habits of famous artists. Whether by sheer talent or some fiendish work ethic, they tend to be hyper-productive in a way that I could never be. Thankfully, there are counter-examples – like the painter Pierre Bonnard. As you can read in the first room of the Bonnard exhibition now at Tate Modern, he often took years to finish a painting, putting it to one side before coming back to it and reworking it multiple times. He was known to continue tinkering with his paintings when he came across them hanging on the wall of somebody’s house. At the very end of his life, no longer able to paint, he instructed his nephew to change a section of his final work Almond Tree in Blossom (1947).

Maybe this is wishful thinking, but I find that things which have been agonised over acquire a special kind of depth. In many ways Bonnard is not my kind of painter, but his work rewards close attention. There is hardly an inch of his canvases where you do not find different tones layered over each other – layers not only of paint, but of time and effort – creating a luminous sea of brushstrokes which almost swarms in front of your eyes. And this belaboured quality is all the more intriguing given the transience of his subject matter: gardens bursting with euphoric colour, interiors drenched in vibrant light, domestic scenes that capture the briefest of moments during the day.

Nowhere is this tension more pronounced than in The Bowl of Milk (1919). Pictured is a room with a window overlooking the sea, and two tables ranged with items of crockery and a vase of flowers. In the foreground stands a woman wearing a long gown and holding a bowl, presumably for the cat which approaches in the shadows at her feet. Yet there is something nauseating, almost nightmarish about this image. Everything swims with indeterminacy, vanishing from our grasp. So pallid is the light pouring through the window that at first I assumed it was night outside. The objects and figures crowding the room shimmer as though on the point of dissolving into air. The woman’s face is a vague, eyeless mask. The painting is composed so that if you focus on one particular passage, everything else recedes into a shapeless soup in the periphery of your vision. It is a moment of such vivid intensity that one is forced to realise it has been conjured from the depths of fantasy.

*     *     *


The woman in The Bowl of Milk is almost certainly Marthe de Méligny, formerly Maria Boursin, Bonnard’s lifelong model and spouse. They met in Paris in 1893, where de Méligny was employed manufacturing artificial flowers for funerals. Some five years later, Bonnard began to exhibit paintings that revealed their intimate domestic life together. These would continue throughout his career, with de Méligny portrayed in various bedrooms, bathrooms and hallways, usually alone, usually nude, and often in front of a mirror.

Pierre Bonnard “Nude in the Bath” (1936). Oil paint on canvas. Paris, musée d’Art moderne.

It was not an uncomplicated relationship: Bonnard is thought to have had affairs, and when the couple eventually married in 1925 de Méligny revealed she had lied about her name and age (she had broken off contact with her family before moving to Paris). They were somewhat isolated. De Méligny is described as having a silent and unnerving presence, and later developed a respiratory disease which forced them to spend periods on the Atlantic coast. Yet Bonnard’s withdrawal from the Parisian art scene, where he had been prominent during his twenties, allowed him to develop his exhaustive, time-leaden painting process, and to forge his own style. The paintings of de Méligny seem to relish the freedom enabled by familiarity and seclusion. One of the gems of the current Tate exhibition is a series of nude photographs that the couple took of one another in their garden in the years 1899-1901. In each of these unmistakeably Edenic pictures, we see a bright-skinned body occupying a patch of sunlight, securely framed by shadowy thickets of grass and leaves.


The female figure in The Bowl of Milk is far from familiar: she is a flicker of memory, a robed phantasm. But like other portrayals of de Méligny, this painting revels in the erotics of space, whereby the proximity and secrecy of the domestic setting are charged with the presence of a human subject – an effect only heightened by our voyeuristic discomfort at gaining access to this private world. There is no nudity, but a disturbing excess of sensual energy in the gleaming white plates, the crimson anemones, the rich shadows and the luxurious stride of the cat. To describe these details as sexual is to lessen their true impact: they are demonic, signalling the capacity of imagination to terrorise us with our own senses.

*     *     *


In 1912 Bonnard bought a painting by Henri Matisse, The Open Window at Collioure (1905). Matisse would soon emerge as one of the leading figures of modern painting, but the two were also friends, maintaining a lively correspondence over several decades. And one can see what inspired Bonnard to make this purchase: doors and windows appear continually in his own work, allowing interior space to be animated by the vitality of the outside world.

Henri Matisse, “The Open Window at Collioure” (1905). Oil paint on canvas. National Gallery of Art, Washington
Pierre Bonnard, “The Studio with Mimosas” (1939-46). Oil paint on canvas. Musée National d’Art Moderne – Centre Pompidou, Paris.

More revealing, though, are the differences we can glean from The Open Window at Collioure. Matisse’s painting, with its flat blocks of garish colour, is straining towards abstraction. As a formal device, the window merely facilitates a jigsaw of squares and rectangles. Such spatial deconstruction and pictorial simplification were intrinsic to the general direction of modernism at this time. This, however, was the direction from which the patient and meticulous Bonnard had partly stepped aside. For he remained under the influence of impressionist painting, which emphasised the subtlety and fluidity of light and colour as a means of capturing the immediacy of sensory experience. Thus, as Juliette Rizzi notes, Bonnard’s use of “framing devices such as doors, mirrors, and horizontal and vertical lines” allows him a compromise of sorts. They do not simplify his paintings so much as provide an angular scaffolding around which he can weave his nebulous imagery.

The window and its slanted rectangles of light are crucial to the strange drama of The Bowl of Milk. Formally, this element occupies the very centre of the composition, holding it in place. But it is also a source of ambiguity. The window is seemingly a portal to another world, flooding the room with uncanny energy. The woman appears stiff, frozen at the edge of a spotlight. It’s as though the scene has been illuminated just briefly – before being buried in darkness again.

Yuval Noah Harari’s half-baked guide to the 21st century

This review was first published by Arc Digital on 25 October 2018.

There is something immensely comforting about Yuval Noah Harari. In an era when a writer’s success often depends on a willingness to provoke, Harari’s calling cards are politeness and equanimity. In the new class of so-called “rock star intellectuals,” he is analogous to Coldplay: accessible, inoffensive, and astoundingly popular. I find no other writer so frequently referenced by friends who don’t generally read. On YouTube he is a man for all seasons, discussing #MeToo with Natalie Portman, contemplating the nature of money with Christine Lagarde, and considering “Who Really Runs the World?” with Russell Brand.

Harari, a historian at the Hebrew University of Jerusalem, is by no means undeserving of this success. His first book, Sapiens: A Brief History of Humankind, displayed a rare talent for condensing vast epochs of history into simple narratives. In his second, Homo Deus, he showed all the imagination of a science fiction writer in presenting the dystopian possibilities of artificial intelligence and biotechnology.

But now Harari has abandoned the speculative realms of past and future, turning his attention to the thorny problems of the present. And here we find that his formula has its limits. 21 Lessons for the 21st Century is a collection of essays taking on everything from culture and politics to technology and spirituality. Undoubtedly, it offers plenty of thought-provoking questions and insights. By and large though, the very thing that made his previous works so engaging — an insistence on painting in broad, simple brushstrokes — makes this latest effort somewhat superficial.

Many of Harari’s essays are just not very illuminating. They circle their subjects ponderously, never quite making contact. Take his chapter on the immigration debate in Europe. Harari begins by identifying three areas of disagreement: borders, integration, and citizenship. Then he walks us through some generic and largely hypothetical pro- and anti-immigration stances, guided mainly by a desire not to offend anyone. Finally, after explaining that “culturism” is not the same as racism, he simply concludes: “If the European project fails…it would indicate that belief in the liberal values of freedom and tolerance is not enough to resolve the cultural conflicts of the world.”

Here we glimpse one of the book’s main questions: whether liberalism can unite the world and overcome the existential challenges facing humanity. But what is liberalism? According to Harari, all social systems, whether religious or political, are “stories.” By this he means that they are psychological software packages, allowing large-scale cooperation while providing individuals with identity and purpose. Thus, liberalism is a “global story” which boils down to the belief that “all authority ultimately stems from the free will of individual humans.” Harari gives us three handy axioms: “the voter knows best,” “the customer is always right,” and “follow your heart.”

This certainly makes matters crystal clear. But political systems are not just ideological dogmas to which entire populations blindly subscribe. They are institutional arrangements shaped by the clashes and compromises of differing values and interests. Historically, liberalism’s commitment to individualism was less important than its preference for democratic means to resolve such conflicts. Harari’s individualist, universalist liberalism has certainly been espoused in recent decades; but as a more perceptive critic such as John Gray or Shadi Hamid would point out, it is only for sections of Western society that this has offered a meaningful worldview.

Overlooking this basic degree of complexity leads Harari to some bizarre judgments. He claims that “most people who voted for Trump and Brexit didn’t reject the liberal package in its entirety — they lost faith mainly in its globalizing part.” Does he really think these voters were once enthusiastic about globalism? Likewise, to illustrate the irrational character of liberal customs, Harari states: “If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights.” Did he not consider that a key purpose of the ballot is to secure the legitimacy of government?

Harari is frequently half-sighted, struggling to acknowledge that phenomena can have more than one explanation. I confess I chuckled at his reading of Ex Machina, the 2015 sci-fi about a cyborg femme fatale. "This is not a movie about the human fear of intelligent robots," he writes. It is about "the male fear…that female liberation might lead to female domination." To support his interpretation, Harari poses a question: "For why on earth would an AI have a sexual or a gender identity?" This in a book which argues extensively that artificial intelligence will be used to exploit human desires.

Nor are such hiccups merely incidental. Rather, they stem from Harari’s failure to connect his various arguments into a coherent world-view. This is perhaps the most serious shortcoming of 21 Lessons. Reading this book is like watching a one-man kabuki play, in which Harari puts on different masks as the situation demands. But these characters are not called on to complement each other so much as to prevent the stage from collapsing.

We have already encountered Harari’s first mask: postmodern cynicism. He is at pains to deconstruct the grand narratives of the past, whether religious, political, or national. He argues that the human subject, too, is a social construct — an amalgam of fictions, bound by context and largely incapable of rational thought.

However, this approach tends to invite relativism and apathy. And so, to provide some moral ballast, Harari picks up the mask of secularist polemic. Though never abandoning his light-hearted tone, he spends a great deal of time eye-poking and shin-kicking any tradition that indulges the human inclination for sanctity, ritual, and transcendence. But not to worry: you can keep your superstitions, “provided you adhere to the secular ethical code.” This consists of truth, compassion, equality, freedom, courage, and responsibility.

What, then, of our darker impulses? And what of our yearning to identify with something larger than ourselves? Enter Harari in his third mask: neo-Buddhist introspection. This is an especially useful guise, for whenever Harari encounters a difficult knot, he simply cuts it with a platitude. “If you really understand how an action causes unnecessary suffering to yourself and to others,” he writes, “you will naturally abstain from it.” Moreover: “If you really know the truth about yourself and the world, nothing can make you miserable.”

I am not saying these outlooks cannot be reconciled. My point is that Harari does not attempt to do so, leaving us instead with an array of loose ends. If the imperative is to deconstruct, why should secular shibboleths be left standing? Why should we worry about technology treating us as “little more than biochemical algorithms,” when Harari already thinks that “your core identity is a complex illusion created by neural networks”? And given that “both the ‘self’ and freedom are mythological chimeras,” what does Harari mean when he advises us to “work very hard…to know what you are, and what you want from life”?

You might object that I’m being ungenerous; that the most popular of popular intellectuals must necessarily deal in outlines, not details. But this is a slippery slope that leads to lazy assumptions about the incuriousness of a general audience. When it comes to current political and philosophical dilemmas, being a good popularizer does not consist in doling out reductive formulas. It consists in giving a flavor of the subtlety which makes these matters worth exploring. In that respect, 21 Lessons falls short of the mark.

What was Romanticism? Putting the “counter-Enlightenment” in context

In his latest book Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, Steven Pinker heaps a fair amount of scorn on Romanticism, the movement in art and philosophy which spread across Europe during the late-18th and 19th centuries. In Pinker’s Manichean reading of history, Romanticism was the malign counterstroke to the Enlightenment: its goal was to quash those values listed in his subtitle. Thus, the movement’s immense diversity and ambiguity are reduced to a handful of ideas, which show that the Romantics favored “the heart over the head, the limbic system over the cortex.” This provides the basis for Pinker to label “Romantic” various irrational tendencies that are still with us, such as nationalism and reverence for nature.

In the debates following Enlightenment Now, many have continued to use Romanticism simply as a suitcase term for “counter-Enlightenment” modes of thought. Defending Pinker in Areo, Bo Winegard and Benjamin Winegard do produce a concise list of Romantic propositions. But again, their version of Romanticism is deliberately anachronistic, providing a historical lineage for the “modern romantics” who resist Enlightenment principles today.

As it happens, this dichotomy does not appeal only to defenders of the Enlightenment. In his book The Age of Anger, published last year, Pankaj Mishra explains various 21st century phenomena — including right-wing populism and Islamism — as reactions to an acquisitive, competitive capitalism that he traces directly back to the 18th century Enlightenment. This, says Mishra, is when “the unlimited growth of production . . . steadily replaced all other ideas of the human good.” And who provided the template for resisting this development? The German Romantics, who rejected the Enlightenment’s “materialist, individualistic and imperialistic civilization in the name of local religious and cultural truth and spiritual virtue.”

Since the Second World War, it has suited liberals, Marxists, and postmodernists alike to portray Romanticism as the mortal enemy of Western rationalism. This can convey the impression that history has long consisted of the same struggle we are engaged in today, with the same teams fighting over the same ideas. But even a brief glance at the Romantic era suggests that such narratives are too tidy. These were chaotic times. Populations were rising, people were moving into cities, the industrial revolution was occurring, and the first mass culture was emerging. Europe was wracked by war and revolution, nations won and lost their independence, and modern politics was being born.

So I’m going to try to explain Romanticism and its relationship with the Enlightenment in a bit more depth. And let me say this up front: Romanticism was not a coherent doctrine, much less a concerted attack on or rejection of anything. Put simply, the Romantics were a disparate constellation of individuals and groups who arrived at similar motifs and tendencies, partly by inspiration from one another, partly due to underlying trends in European culture. In many instances, their ideas were incompatible with, or indeed hostile towards, the Enlightenment and its legacy. On the other hand, there was also a good deal of mutual inspiration between the two.


Sour grapes

The narrative of Romanticism as a “counter-Enlightenment” often begins in the mid-18th century, when several forerunners of the movement appeared. The first was Jean-Jacques Rousseau, whose Social Contract famously asserts “Man is born free, but everywhere he is in chains.” Rousseau portrayed civilization as decadent and morally compromised, proposing instead a society of minimal interdependence where humanity would recover its natural virtue. Elsewhere in his work he also idealized childhood, and celebrated the outpouring of subjective emotion.

In fact various Enlightenment thinkers, Immanuel Kant in particular, admired Rousseau’s ideas; after all, Rousseau was arguing that, left to their own devices, ordinary people would use reason to discover virtue. Nonetheless, he was clearly attacking the principle of progress, and his apparent motivations for doing so were portentous. Rousseau had been associated with the French philosophes — men such as Thiry d’Holbach, Denis Diderot, Claude Helvétius and Jean d’Alembert — who were developing the most radical strands of Enlightenment thought, including materialist philosophy and atheism. But crucially, they were doing so within a rather glamorous, cosmopolitan milieu. Though they were monitored and harassed by the French ancien régime, many of the philosophes were nonetheless wealthy and well-connected figures, their Parisian salons frequented by intellectuals, ambassadors and aristocrats from across Europe.

Rousseau decided the Enlightenment belonged to a superficial, hedonistic elite, and essentially styled himself as a god-fearing voice of the people. This turned out to be an important precedent. In Prussia, where a prolific Romantic movement would emerge, such antipathy towards the effete culture of the French was widespread. For much to the frustration of Prussian intellectuals and artists — many of whom were Pietist Christians from lowly backgrounds — their ruler Frederick the Great was an “Enlightened despot” and dedicated Francophile. He subscribed to Melchior Grimm’s Correspondence Littéraire, which brought the latest ideas from Paris; he hosted Voltaire at his court as an Enlightenment mascot; he conducted affairs in French, his first language.

This is the background against which we find Johann Gottfried Herder, whose ideas about language and culture were deeply influential to Romanticism. He argued that one can only understand the world via the linguistic concepts that one inherits, and that these reflect the contingent evolution of one’s culture. Hence in moral terms, different cultures occupy significantly different worlds, so their values should not be compared to one another. Nor should they be replaced with rational schemes dreamed up elsewhere, even if this means that societies are bound to come into conflict.

Rousseau and Herder anticipated an important cluster of Romantic themes. Among them are the sanctity of the inner life, of folkways and corporate social structures, of belonging, of independence, and of things that cannot be quantified. And given the apparent bitterness of Herder and some of his contemporaries, one can see why Isaiah Berlin declared that all this amounted to “a very grand form of sour grapes.” Berlin takes this line too far, but there is an important insight here. During the 19th century, with the rise of the bourgeoisie and of government by utilitarian principles, many Romantics would show a similar resentment towards “sophisters, economists, and calculators,” as Edmund Burke famously called them. Thus Romanticism must be seen in part as coming from people denied status in a changing society.

Then again, Romantic critiques of excessive uniformity and rationality were often made in the context of developments that were quite dramatic. During the 1790s, it was the French Revolution’s degeneration into tyranny that led first-generation Romantics in Germany and England to fear the so-called “machine state,” or government by rational blueprint. Similarly, the appalling conditions that marked the first phase of the industrial revolution lay behind some later Romantics’ revulsion at industrialism itself. John Ruskin celebrated medieval production methods because “men were not made to work with the accuracy of tools,” with “all the energy of their spirits . . . given to make cogs and compasses of themselves.”

And ultimately, it must be asked if opposition to such social and political changes was opposition to the Enlightenment itself. The answer, of course, depends on how you define the Enlightenment, but with regard to Romanticism we can only make the following generalization. Romantics believed that ideals such as reason, science, and progress had been elevated at the expense of values like beauty, expression, or belonging. In other words, they thought the Enlightenment paradigm established in the 18th century was limited. This is well captured by Percy Shelley’s comment in 1821 that although humanity owed enormous gratitude to philosophers such as John Locke and Voltaire, only Rousseau had been more than a “mere reasoner.”

And yet, in perhaps the majority of cases, this did not make Romantics hostile to science, reason, or progress as such. For it did not seem to them, as it can seem to us in hindsight, that these ideals must inevitably produce arrangements such as industrial capitalism or technocratic government. And for all their sour grapes, they often had reason to suspect those whose ascent to wealth and power rested on this particular vision of human improvement.


“The world must be romanticized”

One reason Romanticism is often characterized as against something — against the Enlightenment, against capitalism, against modernity as such — is that it seems like the only way to tie the movement together. In the florescence of 19th century art and thought, Romantic motifs were arrived at from a bewildering array of perspectives. In England during the 1810s, for instance, radical, progressive liberals such as Shelley and Lord Byron celebrated the crumbling of empires and of religion, and glamorized outcasts and oppressed peoples in their poetry. They were followed by arch-Tories like Thomas Carlyle and Ruskin, whose outlook was fundamentally paternalistic. Other Romantics migrated across the political spectrum during their lifetimes, bringing their themes with them.

All this is easier to understand if we note that a new sensibility appeared in European culture during this period, remarkable for its idealism and commitment to principle. Disparaged in England as “enthusiasm,” and in Germany as Schwärmerei or fanaticism, we get a flavor of it by looking at some of the era’s celebrities. There was Beethoven, celebrated as a model of the passionate and impoverished genius; there was Byron, the rebellious outsider who received locks of hair from female fans; and there was Napoleon, seen as an embodiment of untrammeled willpower.

Curiously, though, while this Romantic sensibility was a far cry from the formality and refinement which had characterized the preceding age of Enlightenment, it was inspired by many of the same ideals. To illustrate this, and to expand on some key Romantic concepts, I’m going to focus briefly on a group that came together in Prussia at the turn of the 19th century, known as the Jena Romantics.

The Jena circle — centred around Ludwig Tieck, Friedrich and August Schlegel, Friedrich Hölderlin, and the writer known as Novalis — have often been portrayed as scruffy bohemians, a conservative framing that seems to rest largely on their liberal attitudes to sex. But this does give us an indication of the group’s aims: they were interested in questioning convention, and pursuing social progress (their journal Das Athenäum was among the few to publish female writers). They were children of the Enlightenment in other respects, too. They accepted that rational skepticism had ruled out traditional religion and superstition, and that science was a tool for understanding reality. Their philosophy, however, shows an overriding desire to reconcile these capacities with an inspiring picture of culture, creativity, and individual fulfillment. And so they began by adapting the ideas of two major Enlightenment figures: Immanuel Kant and Benedict Spinoza.

Kant, who spent his entire life in Prussia, had impressed on the Romantics the importance of one dilemma in particular: how was human freedom possible given that nature was determined? But rather than follow Kant down the route of transcendental freedom, the Jena school tried to update the universe Spinoza had described a century earlier, which was a single deterministic entity governed by a mechanical sequence of cause and effect. Conveniently, this mechanistic model had been called into doubt by contemporary physics. So they kept the integrated, holistic quality of Spinoza’s nature, but now suggested that it was suffused with another Kantian idea — that of organic force or purpose.

Consequently, the Jena Romantics arrived at an organic conception of the universe, in which nature expressed the same omnipresent purpose in all its manifestations, up to and including human consciousness. Thus there was no discrepancy between mental activity and matter, and the Romantic notion of freedom as a channelling of some greater will was born. After all, nature must be free because, as Spinoza had argued, there is nothing outside nature. Therefore, in Friedrich Schlegel’s words, “Man is free because he is the highest expression of nature.”

Various concepts flowed from this, the most consequential being a revolutionary theory of art. Whereas the existing neo-classical paradigm had assumed that art should hold a mirror up to nature, reflecting its perfection, the Romantics now stated that the artist should express nature, since he is part of its creative flow. What this entails, moreover, is something like a primitive notion of the unconscious. For this natural force comes to us through the profound depths of language and myth; it cannot be definitively articulated, only grasped at through symbolism and allegory.

Such longing for the inexpressible, the infinite, the unfathomable depth thought to lie beneath the surface of ordinary reality, is absolutely central to Romanticism. And via the Jena school, it produces an ideal which could almost serve as a Romantic program: being-through-art. The modern condition, August Schlegel says, is the sensation of being adrift between two idealized figments of our imagination: a lost past and an uncertain future. So ultimately, we must embrace our frustrated existence by making everything we do a kind of artistic expression, allowing us to move forward despite knowing that we will never reach what we are aiming for. This notion that you can turn just about anything into a mystery, and thus into a field for action, is what Novalis alludes to in his famous statement that “the world must be romanticized.”

It appears there’s been something of a detour here: we began with Spinoza and have ended with obscurantism and myth. But as Frederick Beiser has argued, this baroque enterprise was in many ways an attempt to radicalize the 18th century Enlightenment. Indeed, the central thesis that our grip on reality is not certain, but we must embrace things as they seem to us and continue towards our aims, was almost a parody of the skepticism advanced by David Hume and by Kant. Moreover, and more ominously, the Romantics amplified the Enlightenment principle of self-determination, producing the imperative that individuals and societies must pursue their own values.


The Romantic legacy

It is beyond doubt that some Romantic ideas had pernicious consequences, the most demonstrable being a contribution to German nationalism. By the end of the 19th century, when Prussia had become the dominant force in a unified Germany and Richard Wagner’s feverish operas were being performed, the Romantic fascination with national identity, myth, and the active will had evolved into something altogether menacing. Many have taken the additional step, which is not a very large one, of implicating Romanticism in the fascism of the 1930s.

A more tenuous claim is that Romanticism (and German Romanticism especially) contains the origins of the postmodern critique of the Enlightenment, and of Western civilization itself, which is so current among leftist intellectuals today. As we have seen, there was in Romanticism a strong strain of cultural relativism — which is to say, relativism about values. But postmodernism has at its core a relativism about facts, a denial of the possibility of reaching objective truth by reason or observation. This nihilistic stance is far from the skepticism of the Jena school, which was fundamentally a means for creative engagement with the world.

But whatever we make of these genealogies, remember that we are talking about developments, progressions over time. We are not saying that Romanticism was in any meaningful sense fascistic, postmodernist, or whichever other adjective appears downstream. I emphasize this because if we identify Romanticism with these contentious subjects, we will overlook its myriad more subtle contributions to the history of thought.

Many of these contributions come from what I described earlier as the Romantic sensibility: a variety of intuitions that seem to have taken root in Western culture during this era. For instance, that one should remain true to one’s own principles at any cost; that there is something tragic about the replacement of the old and unusual with the uniform and standardized; that different cultures should be appreciated on their own terms, not on a scale of development; that artistic production involves the expression of something within oneself. Whether these intuitions are desirable is open to debate, but the point is that the legacy of Romanticism cannot be compartmentalized, for it has colored many of our basic assumptions.

This is true even of ideas that we claim to have inherited from the Enlightenment. For some of these were modified, and arguably enriched, as they passed through the Romantic era. An explicit example comes from John Stuart Mill, the founding figure of classical liberalism. Mill inherited from his father and from Jeremy Bentham a very austere version of utilitarian ethics. This posited as its goal the greatest good for the greatest number of people; but its notion of the good did not account for the value of culture, spirituality, and a great many other things we now see as intrinsic to human flourishing. As Mill recounts in his autobiography, he realized these shortcomings by reading England’s first-generation Romantics, William Wordsworth and Samuel Taylor Coleridge.

This is why, in 1840, Mill bemoaned the fact that his fellow progressives thought they had nothing to learn from Coleridge’s philosophy, warning them that “the besetting danger is not so much of embracing falsehood for truth, as of mistaking part of the truth for the whole.” We are committing a similar error today when we treat Romanticism simply as a “counter-Enlightenment.” Ultimately this limits our understanding not just of Romanticism but of the Enlightenment as well.


This essay was first published in Areo Magazine on June 10 2018.

When did death become so personal?


I have a slightly gloomy but, I think, not unreasonable view of birthdays, which is that they are really all about death. It rests on two simple observations. First, much as they pretend otherwise, people do generally find birthdays to be poignant occasions. And second, a milestone can have no poignancy which does not ultimately come from the knowledge that the journey in question must end. (Would an eternal being find poignancy in ageing, nostalgia, or anything else associated with the passing of time? Surely not in the sense that we use the word). In any case, I suspect most of us are aware that at these moments when our life is quantified, we are in some sense facing our own finitude. What I find interesting, though, is that to acknowledge this is verboten. In fact, we seem to have designed a whole edifice of niceties and diversions – cards, parties, superstitions about this or that age – to avoid saying it plainly.

Well, it was my birthday recently, and it appears at least one of my friends got the memo. He gave me a copy of Hans Holbein’s Dance of Death, a sequence of woodcuts composed in 1523-5. They show various classes in society being escorted away by a Renaissance version of the grim reaper – a somewhat cheeky-looking skeleton who plays musical instruments and occasionally wears a hat. He stands behind The Emperor, hands poised to seize his crown; he sweeps away the coins from The Miser’s counting table; he finds The Astrologer lost in thought, and mocks him with a skull; he leads The Child away from his distraught parents.

Hans Holbein, “The Astrologer” and “The Child,” from “The Dance of Death” (1523-5)

It is striking for the modern viewer to see death out in the open like this. But the “dance of death” was a popular genre that, before the advent of the printing press, had adorned the walls of churches and graveyards. Needless to say, this reflects the fact that in Holbein’s time, death came frequently, often without warning, and was handled (both literally and psychologically) within the community. Historians speculate about what pre-modern societies really believed regarding death, but belief is a slippery concept when death is part of the warp and weft of culture, encountered daily through ritual and artistic representations. It would be a bit like asking the average person today what their “beliefs” are about sex – where to begin? Likewise in Holbein’s woodcuts, death is complex, simultaneously a bringer of humour, justice, grief, and consolation.

Now let me be clear, I am not trying to romanticise a world before antibiotics, germ theory, and basic sanitation. In such a world, with child mortality being what it was, you and I would most likely be dead already. Nonetheless, the contrast with our own time (or at least with certain cultures, and more about that later) is revealing. When death enters the public sphere today – which is to say, fictional and news media – it rarely signifies anything, for there is no framework in which it can do so. It is merely a dramatic device, injecting shock or tragedy into a particular set of circumstances. The best an artist can do now is to expose this vacuum, as the photographer Jo Spence did in her wonderful series The Final Project, turning her own death into a kitsch extravaganza of joke-shop masks and skeletons.

From Jo Spence, “The Final Project,” 1991-2, courtesy of The Jo Spence Memorial Archive and Richard Saltoun Gallery

And yet, to say that modern secular societies ignore or avoid death is, in my view, to miss the point. It is rather that we place the task of interpreting mortality squarely and exclusively upon the individual. In other words, if we lack a common means of understanding death – a language and a liturgy, if you like – it is first and foremost because we regard that as a private affair. This convention is hinted at by euphemisms like “life is short” and “you only live once,” which acknowledge that our mortality has a bearing on our decisions, but also imply that what we make of that is down to us. It is also apparent, I think, in our farcical approach to birthdays.

Could it be that, thanks to this arrangement, we have actually come to feel our mortality more keenly? I’m not sure. But it does seem to produce some distinctive experiences, such as the one described in Philip Larkin’s famous poem “Aubade” (first published in 1977):

Waking at four to soundless dark, I stare.
In time the curtain-edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.

Larkin’s sleepless narrator tries to persuade himself that humanity has always struggled with this “special way of being afraid.” He dismisses as futile the comforts of religion (“That vast moth-eaten musical brocade / Created to pretend we never die”), as well as the “specious stuff” peddled by philosophy over the centuries. Yet in the final stanza, as he turns to the outside world, he nonetheless acknowledges what does make his fear special:

telephones crouch, getting ready to ring
In locked-up offices, and all the uncaring
Intricate rented world begins to rouse.

Work has to be done.
Postmen like doctors go from house to house.

There is a dichotomy here, between a personal world of introspection, and a public world of routine and action. The modern negotiation with death is confined to the former: each in our own house.


*     *     *


When did this internalisation of death occur, and why? Many reasons spring to mind: the decline of religion, the rise of Freudian psychology in the 20th century, the discrediting of a socially meaningful death by the bloodletting of the two world wars, the rise of liberal consumer societies which assign death to the “personal beliefs” category, and would rather people focused on their desires in the here and now. No doubt all of these have had some part to play. But there is also another way of approaching this question, which is to ask if there isn’t some sense in which we actually savour this private relationship with our mortality that I’ve outlined, whatever the burden we incur as a result. Seen from this angle, there is perhaps an interesting story about how these attitudes evolved.

I direct you again to Holbein’s Dance of Death woodcuts. As I’ve said, what is notable from our perspective is that they picture death within a traditional social context. But as it turns out, these images also reflect profound changes that were taking place in Northern Europe during the early modern era. Most notably, Martin Luther’s Protestant Reformation had erupted less than a decade before Holbein composed them. And among the many factors which led to that Reformation was a tendency which had begun emerging within Christianity during the preceding century, and which would be enormously influential in the future. This tendency was piety, which stressed the importance of the individual’s emotional relationship to God.

As Ulinka Rublack notes in her commentary on The Dance of Death, one of the early contributions of piety was the convention of representing death as a grisly skeleton. This figure, writes Rublack, “tested its onlooker’s immunity to spiritual anxiety,” since those who were firm in their convictions “could laugh back at Death.” In other words, buried within Holbein’s rich and varied portrayal of mortality was already, in embryonic form, an emotionally charged, personal confrontation with death. Nor was piety the only sign of this development in early modern Europe.

Hans Holbein, The Ambassadors (1533)

In 1533, Holbein produced another, much more famous work dealing with death: his painting The Ambassadors. Here we see two young members of Europe’s courtly elite standing either side of a table, on which are arrayed various objects that symbolise a certain Renaissance ideal: a life of politics, art, and learning. There are globes, scientific instruments, a lute, and references to the ongoing feud within the church. The most striking feature of the painting, however, is the enormous skull which hovers inexplicably in the foreground, fully perceptible only from a sidelong angle. This remarkable and playful item signals the arrival of another way of confronting death, which I describe as decadent. It is not serving any moral or doctrinal message, but illuminating what is most precious to the individual: status, ambition, accomplishment.

The basis of this decadent stance is as follows: death renders meaningless our worldly pursuits, yet at the same time makes them seem all the more urgent and compelling. This would be expounded in a still more iconic Renaissance artwork: Shakespeare’s Hamlet (1599). It is no coincidence that the two most famous moments in this play are both direct confrontations with death. One is, of course, the “To be or not to be” soliloquy; the other is the graveside scene, in which Hamlet holds a jester’s skull and asks: “Where be your gibes now, your gambols, your songs, your flashes of merriment, that were wont to set the table on a roar?” These moments are indeed crucial, for they suggest why the tragic hero, famously, cannot commit to action. As he weighs up various decisions from the perspective of mortality, he becomes intoxicated by the nuances of meaning and meaninglessness. He dithers because ultimately, such contemplation itself is what makes him feel, as it were, most alive.

All of this is happening, of course, within the larger development that historians like to call “the birth of the modern individual.” But as the modern era progresses, I think there are grounds to say that these two approaches – the pious and the decadent – would prove especially influential in shaping how certain cultures view the question of mortality. And although there is an important difference between them insofar as one addresses itself to God, they also share something significant: a mystification of the inner life, of the agony and ecstasy of the individual soul, at the expense of religious orthodoxy and other socially articulated ideas about life’s purpose and meaning.

During the 17th century, piety became the basis of Pietism, a Lutheran movement that enshrined an emotional connection with God as the most important aspect of faith. Just as pre-Reformation piety may have been a response, in part, to the ravages of the Black Death, Pietism emerged from the utter devastation wreaked in Germany by the Thirty Years War. Its worship was based on private study of the Bible, alone or in small groups (sometimes called “churches within a church”), and on evangelism in the wider community. In Pietistic sermons, the problem of our finitude – of our time in this world – is often bound up with a sense of mystery regarding how we ought to lead our lives. Everything points towards introspection, a search for duty. We can judge how important these ideas were to the consciousness of Northern Europe and the United States simply by naming two individuals who came strongly under their influence: Immanuel Kant and John Wesley.

It was also from the Central German heartlands of Pietism that, in the late 18th century, Romanticism was born – a movement which took the decadent fascination with death far beyond what we find in Hamlet. Goethe’s novel The Sorrows of Young Werther, in which the eponymous artist shoots himself out of lovesickness, led to a wave of copycat suicides by men dressed in dandyish clothing. As Romanticism spread across Europe and into the 19th century, flirting with death, using its proximity as a kind of emotional aphrodisiac, became a prominent theme in the arts. As Byron describes one of his typical heroes: “With pleasure drugged, he almost longed for woe, / And e’en for change of scene would seek the shades below.” Similarly, Keats: “Many a time / I have been half in love with easeful Death.”


*     *     *


This is a very cursory account, and I am certainly not claiming there is any direct or inevitable progression between these developments and our own attitudes to death. Indeed, with Pietism and Romanticism, we have now come to the brink of the Great Awakenings and Evangelicalism, of Wagner and mystic nationalism – of an age, in other words, where spirituality enters the public sphere in a dramatic and sometimes apocalyptic way. Nonetheless, I think all of this points to a crucial idea which has been passed on to some modern cultures, perhaps those with a northern European, Protestant heritage: the idea that mortality is an emotional and psychological burden which the individual should willingly assume.

And I think we can now discern a larger principle which is being cultivated here – one that has come to define our understanding of individualism perhaps more than any other. That is the principle of freedom. To take responsibility for one’s mortality – to face up to it and, in a manner of speaking, to own it – is to reflect on life itself and ask: for what purpose, for what meaning? Whether framed as a search for duty or, in the extreme decadent case, as the basis of an aesthetic experience, such questions seem to arise from a personal confrontation with death; and they are central to our notions of freedom. This is partly, I think, what underlies our convention that what you make of death is your own business.

The philosophy that has explored these ideas most comprehensively is, of course, existentialism. In the 20th century, Martin Heidegger and Jean-Paul Sartre argued that the individual can only lead an authentic life – a life guided by the values they deem important – by accepting that they are free in the fullest, most terrifying sense. And this in turn requires that the individual honestly accept, or even embrace, their finitude. For the way we see ourselves, these thinkers claim, is future-oriented: it consists not so much in what we have already done, but in the possibility of assigning new meaning to those past actions through what we might do in the future. Thus, in order to discover what our most essential values really are – the values by which we wish to direct our choices as free beings – we should consider our lives from their real endpoint, which is death.

Sartre and Heidegger were eager to portray these dilemmas, and their solutions, as brute facts of existence which they had uncovered. But it is perhaps truer to say that they were signing off on a deal which had been much longer in the making – a deal whereby the individual accepts the burden of understanding their existence as that of a doomed being, with all the nausea that entails, in exchange for the very expansive sense of freedom we now consider so important. Indeed, there is very little that Sartre and Heidegger posited in this regard which cannot be found in the work of the 19th-century Danish philosopher Søren Kierkegaard; and Kierkegaard, it so happens, can also be placed squarely within the traditions of both Pietism and Romanticism.

To grasp how deeply engrained these ideas have become, consider again Larkin’s poem “Aubade”:

Most things may never happen: this one will,
And realisation of it rages out
In furnace-fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no one off the grave.
Death is no different whined at than withstood.

Here is the private confrontation with death framed in the most neurotic and desperate way. Yet part and parcel with all the negative emotions, there is undoubtedly a certain lugubrious relish in that confrontation. There is, in particular, something titillating in the rejection of all illusions and consolations, clearing the way for chastisement by death’s certainty. This, in other words, is the embrace of freedom taken to its most masochistic limit. And if you find something strangely uplifting about this bleak poem, it may be that you share some of those intuitions.




The Price of Success: Britain’s Tumultuous 19th Century

In 1858, an exclusive Soho dining society known simply as “the Club” – attended by former and future Prime Ministers, prominent clergymen, poets and men of letters – debated the question of “the highest period of civilization” ever reached. It was, they decided, “in London at the present moment.” The following year, several books were published which might, at first glance, appear to support this grandiose conclusion. They included On Liberty by John Stuart Mill, now a cornerstone of political philosophy; Adam Bede, the first novel by the great George Eliot; and Charles Darwin’s On the Origin of Species, which presented the most comprehensive argument yet for the theory of evolution.

Certainly, all of these works were products of quintessentially Victorian seams of thought. Yet they also revealed the fragility of what most members of “the Club” considered the very pillars of their “highest period of civilization.” Mill’s liberalism was hostile to the widespread complacency which held the British constitution to be perfect. George Eliot, aka Marian Evans, was a formidably educated woman living out of wedlock with the writer George Henry Lewes; as such, she was an affront to various tenets of contemporary morality. And Darwin’s work, of course, would fatally undermine the Victorian assumption that theirs was a divinely ordained greatness.

These are just some of the insecurities, tensions, and contradictions which lie at the heart of Britain’s history in the 19th century, and which provide the central theme of David Cannadine’s sweeping (and somewhat ironically titled) new volume, Victorious Century: The United Kingdom 1800-1906. This was a period when Britain’s global hegemony in economic, financial, and imperial terms was rendered almost illusory by an atmosphere of entropy and flux at home. It was a period when the state became more proactive and informed than ever before, yet could never fully comprehend the challenges of its rapidly industrialising economy. And it was a period when Britain’s Empire continued incessantly to expand, even though no one in Westminster ever devised a coherent plan for how, or for what purpose, to govern it.

Cannadine’s interest in discomfort and dilemma also explains the dates which bookend his narrative. In 1800 William Pitt’s administration enacted the Union with Ireland, bringing into existence the “United Kingdom” of the book’s title. Throughout the ensuing century, the “Irish question” would periodically overwhelm British politics through religious tension, famine, and popular unrest (indeed, I refer mainly to Britain in this review because Ireland was never assimilated into its cultural or political life). The general election of 1906, meanwhile, was the last hurrah of the Liberal Party, a coalition of progressive aristocrats, free traders and radical reformers whose internal conflicts in many ways mirrored those of Victorian Britain at large.

Cannadine’s approach is not an analytical one, and so there is little discussion of the great, complex question which looms over Britain’s 19th century: namely, why that seismic shift in world history, the industrial revolution, happened here. He does make clear, however, the importance of victory in the Napoleonic Wars which engulfed Europe until 1815. Without this hard-won success, Britain could not have exploited its geographical and cultural position in between its two largest export markets, Europe and the United States. Moreover, entrepreneurial industrial activity was directly stimulated by the state’s demand for materiel, and the wheels of international finance greased by government borrowing for the war effort.

From the outset, the volatility of this new model of capitalism was painfully clear. Until mid-century, Britain’s population, industrial output, investment and trade expanded at a dizzying rate, only to stumble repeatedly into prolonged and wrenching economic crises. The accompanying urban deprivation was brutal – life expectancy for a working-class man in 1840s Liverpool was 22 – though arguably no worse than the rural deprivation which had preceded it. Nonetheless, these realities, together with the regular outbreaks of revolution on the continent, meant that from the 1830s onwards the British state assumed a radically new role of “legislative engagement with contemporary issues”: regulating industry, enhancing local government and public services, and gauging public opinion to judge whether political concessions, particularly electoral reform, were necessary.

The second half of the century, by contrast, hatched anxieties which were less dramatic but more insidious. Rising giants such as the United States and Germany, with their superior resources and higher standards of science, technology, and education, foretold the end of British preeminence long before it came to pass. Certainly, the price of global competition was paid largely by landlords, farmers, and manufacturers; working-class living standards steadily improved. But declinism permeated the culture as a whole, manifesting itself in a range of doubts which may sound familiar to us today: immigration and loss of national identity, intractable inequality, military unpreparedness, the spiritual and physical decrepitude of the masses, and the depravity of conspicuous consumption among the upper classes.

Cannadine recounts all of this with lucidity, verve, and a dazzling turn of phrase. He is, however, committed to a top-down view of history which places Westminster politics at the centre of events. This has its benefits: we gain an understanding not just of such fascinating figures as Robert Peel, Benjamin Disraeli and William Gladstone, but also a detailed grasp of the evolution of modern government. This perspective does, however, run counter to the real story of the 19th century, which is precisely the redistribution of historical agency through expanding wealth, literacy, technology and political participation. Cannadine might have reassessed his priorities in light of his own book’s epigraph, from Marx’s Eighteenth Brumaire: “Men make their own history, but they do not do so freely, not under conditions of their own choosing.”