How Napoleon made the British

In 1803, the poet and philosopher Samuel Taylor Coleridge wrote to a friend about his relish at the prospect of being invaded by Napoleon Bonaparte. “As to me, I think, the Invasion must be a Blessing,” he said, “For if we do not repel it, & cut them to pieces, we are a vile sunken race… And if we do act as Men, Christians, Englishmen – down goes the Corsican Miscreant, & Europe may have peace.”

This was during the great invasion scare, when Napoleon’s Army of England could on clear days be seen across the Channel from Kent. Coleridge’s fighting talk captured the rash of patriotism that had broken out in Britain. The largest popular mobilisation of the entire Hanoverian era was set in motion, as some 400,000 men from Inverness to Cornwall entered volunteer militia units. London’s playhouses were overtaken by anti-French songs and plays, notably Shakespeare’s Henry V. Caricaturists such as James Gillray took a break from mocking King George III and focused on patriotic propaganda, contrasting the sturdy beef-eating Englishman John Bull with a puny, effete Napoleon.

These years were an important moment in the evolution of Britain’s identity, one that resonated through the 19th century and far beyond. The mission identified by Coleridge – to endure some ordeal as a vindication of national character, preferably without help from anyone else, and maybe benefit wider humanity as a by-product – anticipates a British exceptionalism that loomed throughout the Victorian era, reaching its final apotheosis in the Churchillian “if necessary alone” patriotism of the Second World War. Coleridge’s friend William Wordsworth expressed the same sentiment in 1806, after Napoleon had smashed the Prussian army at Jena, leaving the United Kingdom his only remaining opponent. “We are left, or shall be left, alone;/ The last that dare to struggle with the Foe,” Wordsworth wrote, “’Tis well! From this day forward we shall know/ That in ourselves our safety must be sought;/ That by our own right hands it must be wrought.”

As we mark the bicentennial of Napoleon’s death on St Helena in 1821, attention has naturally been focused on his legacy in France. But we shouldn’t forget that in his various guises – conquering general, founder of states and institutions, cultural icon – Napoleon transformed every part of Europe, and Britain was no exception. Yet the apparent national pride of the invasion scare was very far from the whole story. If the experience of fighting Napoleon left the British in important ways more cohesive, confident and powerful, it was largely because the country had previously looked like it was about to fall apart. 

Throughout the 1790s, as the French Revolution followed the twists and turns that eventually brought Napoleon to power, Britain was a tinder box. Ten years before he boasted of confronting Napoleon as “Men, Christians, Englishmen,” Coleridge had burned the words “Liberty” and “Equality” into the lawns of Cambridge university. Like Wordsworth, and like countless other radicals and republicans, he had embraced the Revolution as the dawn of a glorious new age in which the corrupt and oppressive ancien régime, including the Anglican establishment of Britain, would be swept away. 

And the tide of history seemed to be on the radicals’ side. The storming of the Bastille came less than a decade after Britain had lost its American colonies, while in George III the country had an unpopular king, prone to bouts of debilitating madness, whose scandalous sons appeared destined to drag the monarchy into disgrace. 

Support for the Revolution was strongest among Nonconformist Protestant sects – especially Unitarians, the so-called “rational Dissenters” – who formed the intellectual and commercial elite of cities such as Norwich, Birmingham and Manchester, and among the radical wing of the Whig party. But for the first time, educated working men also entered the political sphere en masse. They joined the Corresponding Societies which held public meetings and demonstrations across the country, so named because of their contacts with Jacobin counterparts in France. Influential Unitarian ministers, such as the Welsh philosopher Richard Price and the chemist Joseph Priestley, interpreted the Revolution as the work of providence and possibly a sign of the imminent Apocalypse. In the circle of Whig aristocrats around Charles James Fox, implacable adversary of William Pitt’s Tory government, the radicals had sympathisers at the highest levels of power. Fox famously said of the Revolution “how much the greatest event it is that ever happened in the world, and how much the best.”

From 1793 Britain was at war with revolutionary France, and this mix of new ideals and longstanding religious divides boiled over into mass unrest and fears of insurrection. In 1795 protestors smashed the windows at 10 Downing Street, and at the opening of parliament a crowd of 200,000 jeered at Pitt and George III. The radicals were met by an equally volatile loyalist reaction in defence of church and king. In 1791, a dinner celebrating Bastille Day in Birmingham had sparked three days of rioting, including attacks on Nonconformist chapels and Priestley’s home. Pitt’s government introduced draconian limitations on thought, speech and association, although his attempt to convict members of the London Corresponding Society of high treason was foiled by a jury.

Both sides drew inspiration from an intense pamphlet war that included some of the most iconic and controversial texts in British intellectual history. Conservatives were galvanised by Edmund Burke’s Reflections on the Revolution in France, a defence of England’s time-honoured social hierarchies, while radicals hailed Thomas Paine’s Rights of Man, calling for the abolition of Britain’s monarchy and aristocracy. When summoned on charges of seditious libel, Paine fled to Paris, where he sat in the National Convention and continued to support the revolutionary regime despite almost being executed during the Reign of Terror that began in 1793. Among his supporters were the pioneering feminist Mary Wollstonecraft and the utopian progressive William Godwin, who shared an intellectual circle with Coleridge and Wordsworth.

Britain seemed to be coming apart at the seams. Bad harvests at the turn of the century brought misery and renewed unrest, and the war effort failed to prevent France (under the leadership, from 1799, of First Consul Bonaparte) from dominating the continent. Paradoxically, nothing captures the paralysing divisions of the British state at this moment better than its expansion in 1801 to become the United Kingdom of Great Britain and Ireland. The annexation of Ireland was a symptom of weakness, not strength, since it reflected the threat posed by a bitterly divided and largely hostile satellite off Britain’s west coast. The only way to make it work, as Pitt insisted, was to grant political rights to Ireland’s Catholic majority – but George III refused. So Pitt resigned, and the Revolutionary Wars ended with the Treaty of Amiens in 1802, effectively acknowledging French victory.

Britain’s tensions and weaknesses certainly did not disappear during the ensuing, epic conflict with Napoleon from 1803 to 1815. Violent social unrest continued to flare up, especially at times of harvest failure, financial crisis, and economic hardship resulting from restriction of trade with the continent. There were, at times, widespread demands for peace. The government continued to repress dissent with military force and legal measures; the radical poet and engraver William Blake (later rebranded as a patriotic figure when his words were used for the hymn Jerusalem) stood trial for sedition in 1803, following an altercation with two soldiers. Many of those who volunteered for local military units probably did so out of peer pressure and to avoid being impressed into the navy. Ireland, of course, would prove to be a more intractable problem than even Pitt had imagined.

Nonetheless, Coleridge and Wordsworth’s transition from radicals to staunch patriots was emblematic. Whether the population at large was genuinely loyal or merely quiescent, Britain’s internal divisions lost much of their earlier ideological edge, and the threat of outright insurrection faded away. This process had already started in the 1790s, as many radicals shied away from the violence and militarism of revolutionary France, but it was galvanised by Napoleon. This was not just because he appeared determined and able to crush Britain, but also because of British perceptions of his regime. 

As Yale professor Stuart Semmel has observed, Napoleon did not fit neatly into the dichotomies through which Britain was used to defining itself against France. For the longest time, the opposition had been (roughly) “free Protestant constitutional monarchy” vs “Popish absolutist despotism”; after the Revolution, it had flipped to “Christian peace and order” vs “bloodthirsty atheism and chaos.” Napoleon threw these categories into disarray. The British, says Semmel, had to ask “Was he a Jacobin or a king …; Italian or Frenchman; Catholic, atheist, or Muslim?” The religious uncertainty was especially unsettling, after Napoleon’s “declaration of kinship with Egyptian Muslims, his Concordat with the papacy, his tolerance for Protestants, and his convoking a Grand Sanhedrin of European Jews.”

This may have forced some soul-searching on the part of the British as they struggled to define Napoleonic France, but in some respects the novelty simplified matters. Former radicals could argue Napoleon represented a betrayal of the Revolution, and could agree with loyalists that he was a tyrant bent on personal domination of Europe, thus drawing a line under the ideological passions of the revolutionary period. In any case, loyalist propaganda had no difficulty transferring to Napoleon the template traditionally reserved for the Pope – that of the biblical Antichrist. The simple fact of having a single infamous figure on which to focus patriotic feelings no doubt aided national unity. As the essayist William Hazlitt, an enduring supporter of Napoleon, later noted: “Everybody knows that it is only necessary to raise a bugbear before the English imagination in order to govern it at will.”

More subtly, conservatives introduced the concept of “legitimacy” to the political lexicon, to distinguish the hereditary power of British monarchs from Napoleon’s usurpation of the Bourbon throne. This was rank hypocrisy, given the British elite’s habit of importing a new dynasty whenever it suited them, but it played to an attitude which did help to unify the nation: during the conflict with Napoleon, people could feel that they were defending the British system in general, rather than supporting the current government or waging an ideological war against the Revolution. The resulting change of sentiment could be seen in 1809, when there were vast celebrations to mark the Golden Jubilee of the once unpopular George III. 

Undoubtedly British culture was also transformed by admiration for Napoleon, especially among artists, intellectuals and Whigs, yet even here the tendency was towards calming antagonisms rather than inflaming them. This period saw the ascendance of Romanticism in European culture and ways of thinking, and there was not and never would be a greater Romantic hero than Napoleon, who had turned the world upside down through force of will and what Victor Hugo later called “supernatural instinct.” But ultimately this meant aestheticising Napoleon, removing him from the sphere of politics to that of sentiment, imagination and history. Thus when Napoleon abdicated his throne in 1814, the admiring poet Lord Byron was mostly disappointed he had not fulfilled his dramatic potential by committing suicide.

But Napoleon profoundly reshaped Britain in another way: the long and gruelling conflict against him left a lasting stamp on every aspect of the British state. In short, while no-one could have reasonably predicted victory until Napoleon’s catastrophic invasion of Russia in 1812, the war was nonetheless crucial in forging Britain into the global superpower it would become after 1815.

The British had long been in the habit of fighting wars with ships and money rather than armies, and for the most part this was true of the Napoleonic wars as well. But the unprecedented demands of this conflict led to an equally unprecedented development of Britain’s financial system. This started with the introduction of new property taxes and, in 1799, the first income tax, which were continually raised until by 1814 their yield had increased by a factor of ten. What mattered here was not so much the immediate revenue as the unparalleled fiscal base it gave Britain for the purpose of borrowing money – which it did, prodigiously. In 1804, the year Bonaparte was crowned Emperor, the “Napoleon of finance” Nathan Rothschild arrived in London from Frankfurt, helping to secure a century of British hegemony in the global financial system. 

No less significant were the effects of war in stimulating Britain’s nascent industrial revolution, and its accompanying commercial empire. The state relied on private contractors for most of its materiel, especially that required to build and maintain the vast Royal Navy, while creating immense demand for iron, coal and timber. In 1814, when rulers and representatives of Britain’s European allies came to Portsmouth, they were shown a startling vision of the future: enormous factories where pulley blocks for the rigging of warships were being mass-produced with steam-driven machine tools. Meanwhile Napoleon’s Continental System, by shutting British manufacturers and exporters out of Europe, forced them to develop markets in South Asia, Africa and Latin America. 

Even Britain’s fabled “liberal” constitution – the term was taken from Spanish opponents of Napoleon – did in fact undergo some of the organic adaptation that smug Victorians would later claim as its hallmark. The Nonconformist middle classes, so subversive during the revolutionary period, were courted in 1812-13 with greater political rights and by the relaxation of various restrictions on trade. Meanwhile, Britain discovered what would become its greatest moral crusade of the 19th century. Napoleon’s reintroduction of slavery in France’s Caribbean colonies created the conditions for abolitionism to grow as a popular movement in Britain, since, as William Wilberforce argued, “we should not give advantages to our enemies.” Two bills in 1806-7 effectively ended Britain’s centuries-long participation in the trans-Atlantic slave trade.

Thus Napoleon was not just a hurdle to be cleared en route to the British century – he was, with all his charisma and ruthless determination, a formative element in the nation’s history. And his influence did not end with his death in 1821, of course. He would long haunt the Romantic Victorian imagination as, in Eric Hobsbawm’s words, “the figure every man who broke with tradition could identify himself with.”

The age of mass timber: why we should build in wood

This article was published by The Critic on March 10th 2021.

There are few more evocative images of modernity than the glittering skyscrapers of Tokyo. It’s easy to forget that Japan’s cities consisted largely of timber structures until the mid-twentieth century. It was only after the nightmarish final months of the Second World War, when American B-29 bombers reduced these wooden metropolises to smouldering ash, that Japan embraced concrete, glass and steel.

But luckily Japanese timber expertise did not vanish entirely, for it now appears wood is the future again. Late last year Sumitomo Forestry, a 300-year-old company, announced it was partnering with Kyoto University to design a surprising product: wooden satellites. This innovation aims to stop the dangerous build-up of space junk orbiting the Earth. The ultimate goal of the research, however, is back on terra firma, where Sumitomo hopes to design “ultra-strong, weather resistant wooden buildings”. It has already announced its ambitions to build a skyscraper more than 1,000 feet tall, constructed from 90 per cent wood, by 2041.

Could timber really be a major building material in the dense, vertical cities of the future? In fact, this possibility is well on the way to being realised. In recent years, architects and planners around the world have hailed the coming age of “mass timber”. This term refers to prefabricated wooden building components, such as cross-laminated timber, which can replace concrete and steel in large-scale construction.

Continue reading here.

“Euro-English”: A thought experiment

There was an interesting story in Politico last weekend about “Euro-English,” and a Swedish academic who wants to make it an official language. Marko Modiano, a professor at the University of Gävle, says the European Union should stop using British English for its documents and communications, and replace it with the bastardised English which is actually spoken in Brussels and on the continent more generally.

Politico offers this example of how Euro-English might sound, as spoken by someone at the European Commission: “Hello, I am coming from the EU. Since 3 years I have competences for language policy and today I will eventually assist at a trilogue on comitology.”

Although the EU likes to maintain the pretence of linguistic equality, English is in practice the lingua franca of its bureaucrats, the language in which most laws are drafted, and increasingly the default language of translation for foreign missions. It is also the most common second language across the continent. But according to Modiano, this isn’t the same English used by native speakers, and it’s silly that the EU’s style guides try to make it conform to the latter. (Spare a thought for Ireland and Malta, who under Modiano’s plans would presumably have to conduct EU business in a slightly different form of English).

It’s a wonderful provocation, but could it also be a veiled political strategy? A distinctively continental English might be a way for the EU to cultivate a stronger pan-European identity, thus increasing its authority both in absolute terms and relative to national governments. The way Modiano presents his proposal certainly makes it sound like that: “Someone is going to have to step forward and say, ‘OK, let’s break our ties with the tyranny of British English and the tyranny of American English.’ And instead say… ‘This is our language.’” (My emphasis).

The EU has forever been struggling with the question of whether it can transcend the appeal of nation states and achieve a truly European consciousness. Adopting Euro-English as an official lingua franca might be a good start. After all, a similar process of linguistic standardisation was essential to the creation of the modern nation state itself.   

As Eric Hobsbawm writes in his classic survey of the late-19th and early-20th century, The Age of Empire, the invention of national languages was a deliberate ideological project, part of the effort to forge national identities out of culturally heterogeneous regions. Hobsbawm explains:

Linguistic nationalism was the creation of people who wrote and read, not of people who spoke. And the ‘national languages’ in which they discovered the essential character of their nations were, more often than not, artefacts, since they had to be compiled, standardized, homogenized and modernized for contemporary and literary use, out of the jigsaw puzzle of local or regional dialects which constituted non-literary languages as actually spoken. 

Perhaps the most remarkable example was the Zionist movement’s promotion of Hebrew, “a language which no Jews had used for ordinary purposes since the days of the Babylonian captivity, if then.”

Where this linguistic engineering succeeded, it was thanks to the expansion of state education and the white-collar professions. A codified national language, used in schools, the civil service and public communications like street signs, was an ideal tool for governments to instil a measure of unity and loyalty in their diverse and fragmented populations. This in turn created incentives for the emerging middle class to prefer an official language to their own vernaculars, since it gave access to careers and social status. 

Could the EU not pursue a similar strategy with Euro-English? There could be a special department in Brussels tracking the way English is used by EU citizens on social media, and each year issuing an updated compendium on Euro-English. This emergent language, growing ever more distinctly European, could be mandated in schools, promoted through culture and in the media, and of course used for official EU business. Eventually the language would be different enough to be rebranded simply as “European.”

You’ll notice I’m being facetious now; obviously this would never work. Privileging one language over others would instantly galvanise the patriotism of EU member states, and give politicians a new terrain on which to defend national identity against Brussels. This is pretty much how things played out in multinational 19th century states such as Austria-Hungary, where linguistic hierarchies inflamed the nationalism of minority cultures. One can already see something like this in the longstanding French resentment against the informal dominance of English on the continent.

Conversely, Euro-English wouldn’t work because for Europe’s middle classes and elites, the English language is a gateway not to Europe, but to the world. English is the language of global business and of American cultural output, and so is a prerequisite for membership of any affluent cosmopolitan milieu. 

And this, I think, is the valuable insight to be gained from thought experiments like the one suggested by Modiano. Whenever we try to imagine what the path to a truly European demos might look like, we always encounter these two quite different, almost contradictory obstacles. On the one hand, the structure of the EU seems to have frozen in place the role of the nation state as the rightful locus of imagined community and symbolic attachment. At the same time, among those who identify most strongly with the European project, many are ultimately universalist in their outlook, and unlikely to warm to anything that implies a distinctively European identity. 

What space architecture says about us

With the recent expedition of Nasa’s Perseverance rover to Mars, I’ve taken an interest in space architecture; more specifically, habitats for people on the moon or the Red Planet. The subject first grabbed my attention earlier this year, when I saw that a centuries-old forestry company in Japan is developing wooden structures for future space colonies. Space architecture is not as other-worldly as you might think. In various ways, it holds a revealing mirror to life here on Earth. 

Designing human habitats for Mars is more than just a technical challenge (though protecting against intense radiation and minus 100°C temperatures is, of course, a technical challenge). It’s also an exercise in anthropology. To ask what a group of scientists or pioneers will need from their Martian habitats is to ask what human beings need to be healthy, happy and productive. And we aren’t just talking about the material basics here. 

As Jonathan Morrison reported in the Times last weekend, Nasa is taking inspiration from the latest polar research bases. According to architects like Hugh Broughton, researchers working in these extreme environments need creature comforts. The fundamental problem, says Broughton, is “how architecture can respond to the human condition.” The extreme architect has to consider “how you deal with isolation, how you create a sense of community… how you support people in the darkness.”

I found these words disturbingly relatable; not just in light of the pandemic, which has forced us all into a kind of polar isolation, but in light of the wider problem of anomie in modern societies. Broughton’s questions are the same ones we tend to ask as we observe stubbornly high rates of depression, loneliness, self-medication, and so on. Are we all now living in an extreme environment?

Many architects in the modernist period dreamed that they could tackle such issues through the design of the built environment. But the problem of what people need in order to flourish confronted them in a much harder form. Given the complexity of modern societies, trying to facilitate a vision of human flourishing through architecture started to look a lot like forcing society into a particular mould.

The “master households” designed by Walter Gropius in the 1920s and 30s illustrate the dilemma. Gropius insisted his blueprints, which reduced private family space in favour of communal living, reflected the emerging socialist character of modern individuals. At the same time, he implied that this transformation in lifestyle needed the architect as its midwife. 

Today architecture has largely abandoned the dream of a society engineered by experts and visionaries. But heterotopias like research stations and space colonies still offer something of a paradise for the philosophical architect. In contrast to the messy complexity of society at large, these small communities have a very specific shared purpose. They offer clearly defined parameters for architects to address the problem of what human beings need. 

Sometimes the solutions to this profound question, however, are almost comically mundane. Morrison’s Times report mentions some features of recent polar bases:

At the Scott Base, due to be completed in 2027, up to 100 residents might while away the hours in a cafeteria and even a Kiwi-themed pub, while Halley VI… boasts a gym, library, large canteen, bar and mini cinema.

If this turns out to be the model, then a future Mars colony will be a lot like a cruise ship. This doesn’t reflect a lack of imagination on the architects’ part though. It points to the fact that people don’t just want sociability, stimulation and exercise as such – they want familiar forms of these things. So a big part of designing habitats for space pioneers will involve replicating institutions from their original, earthbound cultures. In this sense, Martian colonies won’t be a fresh start for humanity any more than the colonisation of the Americas was. 

Finally, it’s worth saying something about the politics of space habitats. It seems inevitable that whichever regime sends people to other planets will use the project as a means of legitimation: the government(s) and corporations involved will want us to be awed by their achievement. And this will be done by turning the project into a media spectacle. 

The recent Perseverance expedition has already shown this potential: social media users were thrilled to hear audio of Martian winds, and to see a Martian horizon with Earth sparkling in the distance (the image, alas, turned out to be a fake). The first researchers or colonists on Mars will likely be reality TV stars, their everyday lives an on-going source of fascination for viewers back home. 

The lunar base in Kubrick’s 2001: A Space Odyssey

This means space habitats won’t just be designed for the pioneers living in them, but also for remote visual consumption on Earth. The aesthetics of these structures will not, therefore, be particularly novel. Thanks to Hollywood, we already have established ideas of what space exploration should look like, and space architecture will try to satisfy these expectations. Beyond that, it will simply try to project a more futuristic version of the good life as we know it through pop culture: comfort, luxury and elegance. 

We already see this, I think, in the Mars habitat designed by Xavier De Kestelier of Hassell Studio, which features sweeping open-plan spaces with timber flooring, glass walls and minimalist furniture. It resembles a luxury spa more than a rugged outpost of civilisation. But this was already anticipated, with characteristic flair, by Stanley Kubrick in his 1968 sci-fi classic 2001: A Space Odyssey. In Kubrick’s imagined lunar base, there is a Hilton hotel hosting the stylish denizens of corporate America. The task of space architects will be to design this kind of enchanting fantasy, no less than to meet the needs of our first Martian settlers.  

How much is a high-status meme worth?

This article was published by Unherd on February 25th 2021.

Today one of the most prestigious institutions in the art world, the 250-year-old auction house Christie’s, is selling a collection of Instagram posts. Or in its own more reserved language, Christie’s is now “the first major auction house to offer a purely digital work.”

The work in question is “Everydays: The First 5000 Days” by the South Carolina-based animation artist Beeple (real name Mike Winkelmann), an assemblage of images he has posted online over the last thirteen-odd years. Whoever acquires “Everydays” won’t get a unique product — the image is a digital file which can be copied like any other. They’ll just be paying for a proof of ownership secured through the blockchain.

But more significant than the work’s format is its artistic content. Beeple is opening the way for the traditional art world to embrace internet memes. 

Continue reading here.

The double nightmare of the cat-lawyer

Analysing internet memes tends to be self-defeating: mostly their magic comes from a fleeting, blasé irony which makes you look like a fool if you try to pin it down. But sometimes a gem comes along that’s too good to let pass. Besides, the internet’s endless stream of found objects, jokes and observations are ultimately a kind of glorious collective artwork, somewhere between Dada collage and an epic poem composed by a lunatic. And like all artworks, this one has themes and motifs worth exploring.

Which brings me to cat-lawyer. The clip of the Texas attorney who, thanks to a visual filter, manages to take the form of a fluffy kitten in a Zoom court hearing, has gone superviral. The hapless attorney, Rod Ponton, claims he’s been contacted by news outlets around the world. “I always wanted to be famous for being a great lawyer,” he reflected, “now I’m famous for appearing in court as a cat.”

The video clearly recalls the similarly sensational case of Robert Kelly, the Korea expert whose study was invaded by his two young children during a live interview with the BBC. What makes both clips so funny is the pretence of public formality – already under strain in the video-call format, since people are really just smartly dressed in their homes – being punctured by the frivolity of childhood. Ridiculously, the victims try to maintain a sense of decorum. The punctilious Kelly ignores his rampaging infants and mumbles an apology; the beleaguered Ponton, his saucer-like kitten’s eyes shifting nervously, insists he’s happy to continue the hearing (“I’m not a cat” he reassures the judge, a strong note of desperation in his voice).

These incidents don’t become so famous just because they’re funny, though. Like a lot of comedy, they offer a light-hearted, morally acceptable outlet for impulses that often appear in much darker forms. We are essentially relishing the humiliation of Ponton and Kelly, much as the roaming mobs of “cancel culture” relish the humiliation of their targets, but we expect the victims to recognise their own embarrassment as a public good. The thin line between such jovial mockery and the more malign search for scapegoats is suggested by the fact that people have actually tried to discredit both men. Kelly was criticised for how he handled his daughter during his ordeal, while journalists have dredged up old harassment allegations against Ponton.

But there are other reasons why, in the great collective fiction of internet life, cat-lawyer is an interesting character. As I’ve previously written at greater length, online culture carries a strong strain of the grotesque. The strange act of projecting the self into digital space, both liberating and anxiety-inducing, has spurred forms of expression that blur the boundaries of the human and of social identity. In this way, internet culture joins a long artistic tradition where surreal, monstrous or bizarre beings give voice to repressed aspects of the human imagination. Human/animal transformations like the cat-lawyer have always been a part of this motif.

Of course it’s probably safe to assume that Ponton’s children, and not Ponton himself, normally use the kitten filter. But childhood and adolescence are where we see the implications of the grotesque most clearly. Bodily transformation and animal characters are a staple of adolescent fiction, because teenagers tend to interpret them in light of their growing awareness of social boundaries, and of their own subjectivity. Incidentally, I remember having this response to a particularly cheesy series of pulp novels for teens called Animorphs. But the same ideas are being explored, whether playfully or disturbingly, in gothic classics like Frankenstein and the tales of E.T.A. Hoffmann, in the films of David Lynch, or indeed in the way people use filters and face-changing apps on social media. 

The cat-lawyer pushes these buttons too: his wonderful, mesmerising weirdness is a familiar expression of the grotesque. And this gels perfectly with the comedy of interrupted formality and humiliation. The guilty expression on his face makes it feel like he has, by appearing as a cat, accidentally exposed some embarrassing private fetish in the workplace. 

Perhaps the precedent this echoes most clearly is Kafka’s “Metamorphosis,” where the long-suffering salesman Gregor Samsa finds he has turned into an insect. Recall that Samsa’s family resents his transformation not just because he is ghastly, but because his ghastliness makes him useless in a world which demands respectability and professionalism. It is darkly absurd, but unsettling too: it awakens anxieties about the aspects of ourselves that we conceal from public view. 

The cat-lawyer’s ordeal is a similar kind of double nightmare: a surreal incident of transformation, an anxiety dream about being publicly exposed. Part of its appeal is that it lets us appreciate these strange resonances by cloaking them in humour. 

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter both styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1916 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly science was not just producing strange new ways of seeing the world, but through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.

 

Train-splaining a new world order

This article was originally published by The Critic on August 4th 2020.

“We have great ambitions for night trains in France,” said transport minister Jean-Baptiste Djebbari in June. It was a curious statement. When it comes to infrastructure, the language of ambition is usually reserved for projects that convey scale, speed and technological prowess. Europe’s dwindling network of sleeper trains, by contrast, has long been considered a charming relic in an age of ever cheaper, faster and more atomised travel.

Not any longer. On Bastille Day, president Emmanuel Macron confirmed that sleeper trains would be returning to French rails, and in so doing, he was merely joining a continental trend. In January, the first sleeper service since 2003 departed Vienna’s Westbahnhof for Brussels. Its provider, the Austrian ÖBB network, had already resurrected routes to Germany, Italy and Switzerland. A new night train linking states on the European Union’s eastern periphery commenced in June, and is already increasing services to meet a growing demand – as are sleeper routes connecting the Nordic countries to Germany. The Swedish government last month committed to fund new services linking Stockholm and Malmö with Hamburg and Brussels.

This piqued my interest, because I’ve long felt that railways offer vivid windows into the states across which they roam. They tend to reflect attitudes to public service provision and capital-intensive infrastructure, but they also say a great deal about the nature and extent of a society’s interrelatedness, its pace of life, and indeed its ambition.

On its face, the return of sleeper trains signals the rise of flygskam – a popular Swedish coinage meaning “flight shame,” part of the growing environmental conscience of European governments and consumers. In recent months, Covid-19 has also been boosting demand. And it remains true that continental Europe’s investment in all forms of rail leaves the UK’s patchy, overcrowded and overpriced networks in the shade (let’s not even mention HS2).

But just as Britain’s rail headaches say a great deal about us as a country – our uncertainty over the proper roles of the public and private sector, our incorrigible NIMBYism and our longstanding neglect of the nation beyond London – so it would only be a little facetious to say that sleeper trains capture something deeper about the European Geist today.

At the height of its 19th century confidence, the steam locomotive was the ultimate symbol of Europe’s headlong rush into modernity. Europe’s near-manic desire to control the globe was likewise measured in yards and metres of railway track. Now, as Bruno Maçães eloquently argues, Europe has reached a different inflection point: it is coming to realise that the values it once took to be universal are merely those of its own “civilization state.” Relinquishing any sense of global mission, liberal-minded Europeans now seek to cultivate, in Maçães’ words, ‘a specific way of life: uncommitted, free, detached, aesthetic.’

Surely there’s no better metaphor for this inward turn than the tranquilising comforts of a slow-moving sleeper train. With the world around it growing increasingly chaotic and nasty, I picture Europe seated in the dining car with a Kindle edition of Proust, ordering the vegetarian option, and finally gazing half-drunk into the sunset. Would you not, dear reader, prefer that to the unseemly crush of your 6am Ryanair flight? Would you not prefer it to arriving anywhere at all?

Certainly, writers who step on board a night train cannot help but mention their “nostalgic” or “romantic” appeal – that is, if they don’t simply wallow in kitsch sentimentality. Consider one such account in The Guardian:

“I wake in the pre-dawn light – still inky blue in the compartment. I lie there, feeling the train rock beneath me and then push up the window blind with a foot. I’m rolling through misty flatlands. The landscape spooling past. Austria.”

But perhaps we don’t need to be figurative about this. After all, a quasi-national European consciousness, based around a common purpose like environmentalism, is undoubtedly something the EU would like to foster. And railways, which are to nations what skeletons are to bodies, have always been a choice tool for such unification. So it should not surprise us that the return of sleeper trains comes partly under the auspices of the European Commission’s Green Deal, with 2021 slated as “the European Year of Rail.”

The distinctiveness of train culture in Europe comes into sharper focus when we consider its troubled cousin across the Atlantic, the United States. There too the westwards expansion of the railway was once a crucial component, both practically and symbolically, in the creation of a unified nation. Yet today the railway can be seen, like almost everything in American life, as an emblem of estrangement.

The so-called “flyover states,” those swathes of the continental heartland not visited by coastal elites, are in many cases states crossed by the long-distance Amtrak service. But taking the Amtrak, especially overnight, is viewed as a profound eccentricity. Last year a not entirely ironic New York Times Magazine feature reported the experience as though it belonged to another planet. ‘Train people,’ writes our correspondent, ‘are content to stare out the window for hours, like indoor cats … Train people are also individuals for whom small talk is as invigorating as a rail of cocaine.’

It is largely within Blue America – the coastal strips and the urbanised mid-West around Chicago – that high-speed links after the European fashion are being planned. Meanwhile, Elon Musk and others are racing to complete the first “hyperloop” service: a flashy, futuristic transport project of the kind loved by celebrity entrepreneurs, which will use vacuum technology to send passenger pods through tubes at over 750 mph (destinations San Francisco, Las Vegas, Orlando).

Of course, no discussion of modern rail systems would be complete without China, where the staggering proliferation of high-speed networks in recent decades (think two-thirds of the world's total) illustrates a scale and dynamism of which the west can only dream. These are a typical product of the Chinese economic model, which suppresses consumer spending in favour of state-managed export and investment as an engine of growth. That being said, China's semi-private developers have still borrowed prodigiously, so that a number of rail projects have recently ground to a halt under a crushing debt burden.

Such vaulting ambition seems a world away from European decadence, but in one sense it is not. Railways also comprise a crucial element of the New Silk Road initiative, whereby China’s power is projected across the Eurasian landmass through infrastructure projects and trade. With over thirty Chinese cities already connected with Europe by rail, it may not be long before Chinese freight carriages and European sleeper carriages routinely share the same tracks.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC's chart was a lack of context. Surely the various qualities it ascribes to "white culture" should be read as though followed by a phrase like "as commonly understood in the United States today"? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz's most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo's lucrative career in corporate anti-racism training. Katz too developed a re-education program, "White awareness training," which, according to her 1978 book White Awareness, "strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist."

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch's portrayal of 1970s America as a "culture of narcissism". Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch's analysis. The narcissist of this era retained enough idealism to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still calls disproportionately on those it labels "white" to take up a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called "white" and an institutional framework for administering re-education. Already in 1983, Katz was able to promote her "white awareness training" in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both of which, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017). I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no "deep further fact" or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn't refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit's agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity.  In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transferal of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood have become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: "There is still a difference between my life and the lives of other people. But the difference is less." I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don't get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and, as I say, I don't think that commitment could be anything more than theoretical). In fact, it's possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit's claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.