How Napoleon made the British

In 1803, the poet and philosopher Samuel Taylor Coleridge wrote to a friend about his relish at the prospect of being invaded by Napoleon Bonaparte. “As to me, I think, the Invasion must be a Blessing,” he said, “For if we do not repel it, & cut them to pieces, we are a vile sunken race… And if we do act as Men, Christians, Englishmen – down goes the Corsican Miscreant, & Europe may have peace.”

This was during the great invasion scare, when Napoleon’s Army of England could on clear days be seen across the Channel from Kent. Coleridge’s fighting talk captured the rash of patriotism that had broken out in Britain. The largest popular mobilisation of the entire Hanoverian era was set in motion, as some 400,000 men from Inverness to Cornwall entered volunteer militia units. London’s playhouses were overtaken by anti-French songs and plays, notably Shakespeare’s Henry V. Caricaturists such as James Gillray took a break from mocking King George III and focused on patriotic propaganda, contrasting the sturdy beef-eating Englishman John Bull with a puny, effete Napoleon.

These years were an important moment in the evolution of Britain’s identity, one that resonated through the 19th century and far beyond. The mission identified by Coleridge – to endure some ordeal as a vindication of national character, preferably without help from anyone else, and maybe benefit wider humanity as a by-product – anticipates a British exceptionalism that loomed throughout the Victorian era, reaching its final apotheosis in the Churchillian “if necessary alone” patriotism of the Second World War. Coleridge’s friend William Wordsworth expressed the same sentiment in 1806, after Napoleon had smashed the Prussian army at Jena, leaving the United Kingdom his only remaining opponent. “We are left, or shall be left, alone;/ The last that dare to struggle with the Foe,” Wordsworth wrote, “’Tis well! From this day forward we shall know/ That in ourselves our safety must be sought;/ That by our own right hands it must be wrought.”

As we mark the bicentennial of Napoleon’s death on St Helena in 1821, attention has naturally been focused on his legacy in France. But we shouldn’t forget that in his various guises – conquering general, founder of states and institutions, cultural icon – Napoleon transformed every part of Europe, and Britain was no exception. Yet the apparent national pride of the invasion scare was very far from the whole story. If the experience of fighting Napoleon left the British in important ways more cohesive, confident and powerful, it was largely because the country had previously looked like it was about to fall apart. 

Throughout the 1790s, as the French Revolution followed the twists and turns that eventually brought Napoleon to power, Britain was a tinder box. Ten years before he boasted of confronting Napoleon as “Men, Christians, Englishmen,” Coleridge had burned the words “Liberty” and “Equality” into the lawns of Cambridge University. Like Wordsworth, and like countless other radicals and republicans, he had embraced the Revolution as the dawn of a glorious new age in which the corrupt and oppressive ancien régime, including the Anglican establishment of Britain, would be swept away.

And the tide of history seemed to be on the radicals’ side. The storming of the Bastille came less than a decade after Britain had lost its American colonies, while in George III the country had an unpopular king, prone to bouts of debilitating madness, whose scandalous sons appeared destined to drag the monarchy into disgrace. 

Support for the Revolution was strongest among Nonconformist Protestant sects – especially Unitarians, the so-called “rational Dissenters” – who formed the intellectual and commercial elite of cities such as Norwich, Birmingham and Manchester, and among the radical wing of the Whig party. But for the first time, educated working men also entered the political sphere en masse. They joined the Corresponding Societies, so named because of their contacts with Jacobin counterparts in France, which held public meetings and demonstrations across the country. Influential Unitarian ministers, such as the Welsh philosopher Richard Price and the chemist Joseph Priestley, interpreted the Revolution as the work of providence and possibly a sign of the imminent Apocalypse. In the circle of Whig aristocrats around Charles James Fox, implacable adversary of William Pitt’s Tory government, the radicals had sympathisers at the highest levels of power. Fox famously said of the Revolution: “how much the greatest event it is that ever happened in the world, and how much the best.”

From 1793 Britain was at war with revolutionary France, and this mix of new ideals and longstanding religious divides boiled over into mass unrest and fears of insurrection. In 1795 protestors smashed the windows at 10 Downing Street, and at the opening of parliament a crowd of 200,000 jeered at Pitt and George III. The radicals were met by an equally volatile loyalist reaction in defence of church and king. In 1791, a dinner celebrating Bastille Day in Birmingham had sparked three days of rioting, including attacks on Nonconformist chapels and Priestley’s home. Pitt’s government introduced draconian limitations on thought, speech and association, although his attempt to convict members of the London Corresponding Society of high treason was foiled by a jury.

Both sides drew inspiration from an intense pamphlet war that included some of the most iconic and controversial texts in British intellectual history. Conservatives were galvanised by Edmund Burke’s Reflections on the Revolution in France, a defence of England’s time-honoured social hierarchies, while radicals hailed Thomas Paine’s Rights of Man, calling for the abolition of Britain’s monarchy and aristocracy. When summoned on charges of seditious libel, Paine fled to Paris, where he sat in the National Convention and continued to support the revolutionary regime despite almost being executed during the Reign of Terror that began in 1793. Among his supporters were the pioneering feminist Mary Wollstonecraft and the utopian progressive William Godwin, who shared an intellectual circle with Coleridge and Wordsworth.

Britain seemed to be coming apart at the seams. Bad harvests at the turn of the century brought misery and renewed unrest, and the war effort failed to prevent France (under the leadership, from 1799, of First Consul Bonaparte) from dominating the continent. Paradoxically, nothing captures the paralysing divisions of the British state at this moment better than its expansion in 1801 to become the United Kingdom of Great Britain and Ireland. The annexation of Ireland was a symptom of weakness, not strength, since it reflected the threat posed by a bitterly divided and largely hostile satellite off Britain’s west coast. The only way to make it work, as Pitt insisted, was to grant political rights to Ireland’s Catholic majority – but George III refused. So Pitt resigned, and the Revolutionary Wars ended with the Treaty of Amiens in 1802, effectively acknowledging French victory.

Britain’s tensions and weaknesses certainly did not disappear during the ensuing, epic conflict with Napoleon from 1803-15. Violent social unrest continued to flare up, especially at times of harvest failure, financial crisis, and economic hardship resulting from restriction of trade with the continent. There were, at times, widespread demands for peace. The government continued to repress dissent with military force and legal measures; the radical poet and engraver William Blake (later rebranded as a patriotic figure when his words were used for the hymn Jerusalem) stood trial for sedition in 1803, following an altercation with two soldiers. Many of those who volunteered for local military units probably did so out of peer pressure and to avoid being impressed into the navy. Ireland, of course, would prove to be a more intractable problem than even Pitt had imagined.  

Nonetheless, Coleridge and Wordsworth’s transition from radicals to staunch patriots was emblematic. Whether the population at large was genuinely loyal or merely quiescent, Britain’s internal divisions lost much of their earlier ideological edge, and the threat of outright insurrection faded away. This process had already started in the 1790s, as many radicals shied away from the violence and militarism of revolutionary France, but it was galvanised by Napoleon. This was not just because he appeared determined and able to crush Britain, but also because of British perceptions of his regime. 

As Yale professor Stuart Semmel has observed, Napoleon did not fit neatly into the dichotomies through which Britain was used to defining itself against France. For the longest time, the opposition had been (roughly) “free Protestant constitutional monarchy” vs “Popish absolutist despotism”; after the Revolution, it had flipped to “Christian peace and order” vs “bloodthirsty atheism and chaos.” Napoleon threw these categories into disarray. The British, says Semmel, had to ask “Was he a Jacobin or a king …; Italian or Frenchman; Catholic, atheist, or Muslim?” The religious uncertainty was especially unsettling, after Napoleon’s “declaration of kinship with Egyptian Muslims, his Concordat with the papacy, his tolerance for Protestants, and his convoking a Grand Sanhedrin of European Jews.”

This may have forced some soul-searching on the part of the British as they struggled to define Napoleonic France, but in some respects the novelty simplified matters. Former radicals could argue Napoleon represented a betrayal of the Revolution, and could agree with loyalists that he was a tyrant bent on personal domination of Europe, thus drawing a line under the ideological passions of the revolutionary period. In any case, loyalist propaganda had no difficulty transferring to Napoleon the template traditionally reserved for the Pope – that of the biblical Antichrist. This simple fact of having a single infamous figure on which to focus patriotic feelings no doubt aided national unity. As the essayist William Hazlitt, an enduring supporter of Napoleon, later noted: “Everybody knows that it is only necessary to raise a bugbear before the English imagination in order to govern it at will.”

More subtly, conservatives introduced the concept of “legitimacy” to the political lexicon, to distinguish the hereditary power of British monarchs from Napoleon’s usurpation of the Bourbon throne. This was rank hypocrisy, given the British elite’s habit of importing a new dynasty whenever it suited them, but it played to an attitude which did help to unify the nation: during the conflict with Napoleon, people could feel that they were defending the British system in general, rather than supporting the current government or waging an ideological war against the Revolution. The resulting change of sentiment could be seen in 1809, when there were vast celebrations to mark the Golden Jubilee of the once unpopular George III. 

Undoubtedly British culture was also transformed by admiration for Napoleon, especially among artists, intellectuals and Whigs, yet even here the tendency was towards calming antagonisms rather than inflaming them. This period saw the ascendance of Romanticism in European culture and ways of thinking, and there was not and never would be a greater Romantic hero than Napoleon, who had turned the world upside down through force of will and what Victor Hugo later called “supernatural instinct.” But ultimately this meant aestheticising Napoleon, removing him from the sphere of politics to that of sentiment, imagination and history. Thus when Napoleon abdicated his throne in 1814, the admiring poet Lord Byron was mostly disappointed he had not fulfilled his dramatic potential by committing suicide.

But Napoleon profoundly reshaped Britain in another way: the long and gruelling conflict against him left a lasting stamp on every aspect of the British state. In short, while no-one could have reasonably predicted victory until Napoleon’s catastrophic invasion of Russia in 1812, the war was nonetheless crucial in forging Britain into the global superpower it would become after 1815.

The British had long been in the habit of fighting wars with ships and money rather than armies, and for the most part this was true of the Napoleonic wars as well. But the unprecedented demands of this conflict led to an equally unprecedented development of Britain’s financial system. This started with the introduction of new property taxes and, in 1799, the first income tax, which were continually raised until by 1814 their yield had increased by a factor of ten. What mattered here was not so much the immediate revenue as the unparalleled fiscal base it gave Britain for the purpose of borrowing money – which it did, prodigiously. In 1804, the year Bonaparte was crowned Emperor, the “Napoleon of finance” Nathan Rothschild arrived in London from Frankfurt, helping to secure a century of British hegemony in the global financial system. 

No less significant were the effects of war in stimulating Britain’s nascent industrial revolution, and its accompanying commercial empire. The state relied on private contractors for most of its materiel, especially that required to build and maintain the vast Royal Navy, while creating immense demand for iron, coal and timber. In 1814, when rulers and representatives of Britain’s European allies came to Portsmouth, they were shown a startling vision of the future: enormous factories where pulley blocks for the rigging of warships were being mass-produced with steam-driven machine tools. Meanwhile Napoleon’s Continental System, by shutting British manufacturers and exporters out of Europe, forced them to develop markets in South Asia, Africa and Latin America. 

Even Britain’s fabled “liberal” constitution – the term was taken from Spanish opponents of Napoleon – did in fact undergo some of the organic adaptation that smug Victorians would later claim as its hallmark. The Nonconformist middle classes, so subversive during the revolutionary period, were courted in 1812-13 with greater political rights and by the relaxation of various restrictions on trade. Meanwhile, Britain discovered what would become its greatest moral crusade of the 19th century. Napoleon’s reintroduction of slavery in France’s Caribbean colonies created the conditions for abolitionism to grow as a popular movement in Britain, since, as William Wilberforce argued, “we should not give advantages to our enemies.” Two bills in 1806-7 effectively ended Britain’s centuries-long participation in the trans-Atlantic slave trade.

Thus Napoleon was not just a hurdle to be cleared en route to the British century – he was, with all his charisma and ruthless determination, a formative element in the nation’s history. And his influence did not end with his death in 1821, of course. He would long haunt the Romantic Victorian imagination as, in Eric Hobsbawm’s words, “the figure every man who broke with tradition could identify himself with.”

The age of mass timber: why we should build in wood

This article was published by The Critic on March 10th 2021.

There are few more evocative images of modernity than the glittering skyscrapers of Tokyo. It’s easy to forget that Japan’s cities consisted largely of timber structures until the mid-twentieth century. It was only after the nightmarish final months of the Second World War, when American B-29 bombers reduced these wooden metropolises to smouldering ash, that Japan embraced concrete, glass and steel.

But luckily Japanese timber expertise did not vanish entirely, for it now appears wood is the future again. Late last year Sumitomo Forestry, a 300-year-old company, announced it was partnering with Kyoto University to design a surprising product: wooden satellites. This innovation aims to stop the dangerous build-up of space junk orbiting the Earth. The ultimate goal of the research, however, is back on terra firma, where Sumitomo hopes to design “ultra-strong, weather resistant wooden buildings”. It has already announced its ambitions to build a skyscraper more than 1,000 feet tall, constructed from 90 per cent wood, by 2041.

Could timber really be a major building material in the dense, vertical cities of the future? In fact, this possibility is well on the way to being realised. In recent years, architects and planners around the world have hailed the coming age of “mass timber”. This term refers to prefabricated wooden building components, such as cross-laminated timber, which can replace concrete and steel in large-scale construction.

Continue reading here.

“Euro-English”: A thought experiment

There was an interesting story in Politico last weekend about “Euro-English,” and a Swedish academic who wants to make it an official language. Marko Modiano, a professor at the University of Gävle, says the European Union should stop using British English for its documents and communications, and replace it with the bastardised English which is actually spoken in Brussels and on the continent more generally.

Politico offers this example of how Euro-English might sound, as spoken by someone at the European Commission: “Hello, I am coming from the EU. Since 3 years I have competences for language policy and today I will eventually assist at a trilogue on comitology.”

Although the EU likes to maintain the pretence of linguistic equality, English is in practice the lingua franca of its bureaucrats, the language in which most laws are drafted, and increasingly the default language of translation for foreign missions. It is also the most common second language across the continent. But according to Modiano, this isn’t the same English used by native speakers, and it’s silly that the EU’s style guides try to make it conform to the latter. (Spare a thought for Ireland and Malta, who under Modiano’s plans would presumably have to conduct EU business in a slightly different form of English.)

It’s a wonderful provocation, but could it also be a veiled political strategy? A distinctively continental English might be a way for the EU to cultivate a stronger pan-European identity, thus increasing its authority both in absolute terms and relative to national governments. The way Modiano presents his proposal certainly makes it sound like that: “Someone is going to have to step forward and say, ‘OK, let’s break our ties with the tyranny of British English and the tyranny of American English.’ And instead say… ‘This is our language.’” (My emphasis).

The EU has always struggled with the question of whether it can transcend the appeal of nation states and achieve a truly European consciousness. Adopting Euro-English as an official lingua franca might be a good start. After all, a similar process of linguistic standardisation was essential to the creation of the modern nation state itself.

As Eric Hobsbawm writes in his classic survey of the late-19th and early-20th century, The Age of Empire, the invention of national languages was a deliberate ideological project, part of the effort to forge national identities out of culturally heterogeneous regions. Hobsbawm explains:

Linguistic nationalism was the creation of people who wrote and read, not of people who spoke. And the ‘national languages’ in which they discovered the essential character of their nations were, more often than not, artefacts, since they had to be compiled, standardized, homogenized and modernized for contemporary and literary use, out of the jigsaw puzzle of local or regional dialects which constituted non-literary languages as actually spoken. 

Perhaps the most remarkable example was the Zionist movement’s promotion of Hebrew, “a language which no Jews had used for ordinary purposes since the days of the Babylonian captivity, if then.”

Where this linguistic engineering succeeded, it was thanks to the expansion of state education and the white-collar professions. A codified national language, used in schools, the civil service and public communications like street signs, was an ideal tool for governments to instil a measure of unity and loyalty in their diverse and fragmented populations. This in turn created incentives for the emerging middle class to prefer an official language to their own vernaculars, since it gave access to careers and social status. 

Could the EU not pursue a similar strategy with Euro-English? There could be a special department in Brussels tracking the way English is used by EU citizens on social media, and each year issuing an updated compendium on Euro-English. This emergent language, growing ever more distinctly European, could be mandated in schools, promoted through culture and in the media, and of course used for official EU business. Eventually the language would be different enough to be rebranded simply as “European.”

You’ll notice I’m being facetious now; obviously this would never work. Privileging one language over others would instantly galvanise the patriotism of EU member states, and give politicians a new terrain on which to defend national identity against Brussels. This is pretty much how things played out in multinational 19th century states such as Austria-Hungary, where linguistic hierarchies inflamed the nationalism of minority cultures. One can already see something like this in the longstanding French resentment against the informal dominance of English on the continent.

Conversely, Euro-English wouldn’t work because for Europe’s middle classes and elites, the English language is a gateway not to Europe, but to the world. English is the language of global business and of American cultural output, and so is a prerequisite for membership of any affluent cosmopolitan milieu.

And this, I think, is the valuable insight to be gained from thought experiments like the one suggested by Modiano. Whenever we try to imagine what the path to a truly European demos might look like, we always encounter these two quite different, almost contradictory obstacles. On the one hand, the structure of the EU seems to have frozen in place the role of the nation state as the rightful locus of imagined community and symbolic attachment. At the same time, among those who identify most strongly with the European project, many are ultimately universalist in their outlook, and unlikely to warm to anything that implies a distinctively European identity. 

What space architecture says about us

With the recent expedition of Nasa’s Perseverance rover to Mars, I’ve taken an interest in space architecture; more specifically, habitats for people on the moon or the Red Planet. The subject first grabbed my attention earlier this year, when I saw that a centuries-old forestry company in Japan is developing wooden structures for future space colonies. Space architecture is not as other-worldly as you might think. In various ways, it holds a revealing mirror to life here on Earth.

Designing human habitats for Mars is more than just a technical challenge (though protecting against intense radiation and minus 100°C temperatures is, of course, a technical challenge). It’s also an exercise in anthropology. To ask what a group of scientists or pioneers will need from their Martian habitats is to ask what human beings need to be healthy, happy and productive. And we aren’t just talking about the material basics here.

As Jonathan Morrison reported in the Times last weekend, Nasa is taking inspiration from the latest polar research bases. According to architects like Hugh Broughton, researchers working in these extreme environments need creature comforts. The fundamental problem, says Broughton, is “how architecture can respond to the human condition.” The extreme architect has to consider “how you deal with isolation, how you create a sense of community… how you support people in the darkness.”

I found these words disturbingly relatable; not just in light of the pandemic, which has forced us all into a kind of polar isolation, but in light of the wider problem of anomie in modern societies. Broughton’s questions are the same ones we tend to ask as we observe stubbornly high rates of depression, loneliness, self-medication, and so on. Are we all now living in an extreme environment?

Many architects in the modernist period dreamed that they could tackle such issues through the design of the built environment. But the problem of what people need in order to flourish confronted them in a much harder form. Given the complexity of modern societies, trying to facilitate a vision of human flourishing through architecture started to look a lot like forcing society into a particular mould.

The “master households” designed by Walter Gropius in the 1920s and 30s illustrate the dilemma. Gropius insisted his blueprints, which reduced private family space in favour of communal living, reflected the emerging socialist character of modern individuals. At the same time, he implied that this transformation in lifestyle needed the architect as its midwife.

Today architecture has largely abandoned the dream of a society engineered by experts and visionaries. But heterotopias like research stations and space colonies still offer something of a paradise for the philosophical architect. In contrast to the messy complexity of society at large, these small communities have a very specific shared purpose. They offer clearly defined parameters for architects to address the problem of what human beings need.

Sometimes the solutions to this profound question, however, are almost comically mundane. Morrison’s Times report mentions some features of recent polar bases:

At the Scott Base, due to be completed in 2027, up to 100 residents might while away the hours in a cafeteria and even a Kiwi-themed pub, while Halley VI… boasts a gym, library, large canteen, bar and mini cinema.

If this turns out to be the model, then a future Mars colony will be a lot like a cruise ship. This doesn’t reflect a lack of imagination on the architects’ part though. It points to the fact that people don’t just want sociability, stimulation and exercise as such – they want familiar forms of these things. So a big part of designing habitats for space pioneers will involve replicating institutions from their original, earthbound cultures. In this sense, Martian colonies won’t be a fresh start for humanity any more than the colonisation of the Americas was. 

Finally, it’s worth saying something about the politics of space habitats. It seems inevitable that whichever regime sends people to other planets will use the project as a means of legitimation: the government(s) and corporations involved will want us to be awed by their achievement. And this will be done by turning the project into a media spectacle. 

The recent Perseverance expedition has already shown this potential: social media users were thrilled to hear audio of Martian winds, and to see a Martian horizon with Earth sparkling in the distance (the image, alas, turned out to be a fake). The first researchers or colonists on Mars will likely be reality TV stars, their everyday lives an on-going source of fascination for viewers back home. 

The lunar base in Kubrick’s 2001: A Space Odyssey

This means space habitats won’t just be designed for the pioneers living in them, but also for remote visual consumption on Earth. The aesthetics of these structures will not, therefore, be particularly novel. Thanks to Hollywood, we already have established ideas of what space exploration should look like, and space architecture will try to satisfy these expectations. Beyond that, it will simply try to project a more futuristic version of the good life as we know it through pop culture: comfort, luxury and elegance. 

We already see this, I think, in the Mars habitat designed by Xavier De Kestelier of Hassell Studio, which features sweeping open-plan spaces with timber flooring, glass walls and minimalist furniture. It resembles a luxury spa more than a rugged outpost of civilisation. But this was already anticipated, with characteristic flair, by Stanley Kubrick in his 1968 sci-fi classic 2001: A Space Odyssey. In Kubrick’s imagined lunar base, there is a Hilton hotel hosting the stylish denizens of corporate America. The task of space architects will be to design this kind of enchanting fantasy, no less than to meet the needs of our first Martian settlers.

How much is a high-status meme worth?

This article was published by Unherd on February 25th 2021.

Today one of the most prestigious institutions in the art world, the 250-year-old auction house Christie’s, is selling a collection of Instagram posts. Or in its own more reserved language, Christie’s is now “the first major auction house to offer a purely digital work.”

The work in question is “Everydays: The First 5000 Days” by the South Carolina-based animation artist Beeple (real name Mike Winkelmann), an assemblage of images he has posted online over the last thirteen-odd years. Whoever acquires “Everydays” won’t get a unique product — the image is a digital file which can be copied like any other. They’ll just be paying for a proof of ownership secured through the blockchain.

But more significant than the work’s format is its artistic content. Beeple is opening the way for the traditional art world to embrace internet memes. 

Continue reading here.

The double nightmare of the cat-lawyer

Analysing internet memes tends to be self-defeating: mostly their magic comes from a fleeting, blasé irony which makes you look like a fool if you try to pin it down. But sometimes a gem comes along that’s too good to let pass. Besides, the internet’s endless stream of found objects, jokes and observations are ultimately a kind of glorious collective artwork, somewhere between Dada collage and an epic poem composed by a lunatic. And like all artworks, this one has themes and motifs worth exploring.

Which brings me to cat-lawyer. The clip of the Texas attorney who, thanks to a visual filter, manages to take the form of a fluffy kitten in a Zoom court hearing, has gone superviral. The hapless attorney, Rod Ponton, claims he’s been contacted by news outlets around the world. “I always wanted to be famous for being a great lawyer,” he reflected, “now I’m famous for appearing in court as a cat.”

The video clearly recalls the similarly sensational case of Robert Kelly, the Korea expert whose study was invaded by his two young children during a live interview with the BBC. What makes both clips so funny is the pretence of public formality – already under strain in the video-call format, since people are really just smartly dressed in their homes – being punctured by the frivolity of childhood. Ridiculously, the victims try to maintain a sense of decorum. The punctilious Kelly ignores his rampaging infants and mumbles an apology; the beleaguered Ponton, his saucer-like kitten’s eyes shifting nervously, insists he’s happy to continue the hearing (“I’m not a cat” he reassures the judge, a strong note of desperation in his voice).

These incidents don’t become so famous just because they’re funny, though. Like a lot of comedy, they offer a light-hearted, morally acceptable outlet for impulses that often appear in much darker forms. We are essentially relishing the humiliation of Ponton and Kelly, much as the roaming mobs of “cancel culture” relish the humiliation of their targets, but we expect the victims to recognise their own embarrassment as a public good. The thin line between such jovial mockery and the more malign search for scapegoats is suggested by the fact that people have actually tried to discredit both men. Kelly was criticised for how he handled his daughter during his ordeal, while journalists have dredged up old harassment allegations against Ponton.

But there are other reasons why, in the great collective fiction of internet life, cat-lawyer is an interesting character. As I’ve previously written at greater length, online culture carries a strong strain of the grotesque. The strange act of projecting the self into digital space, both liberating and anxiety-inducing, has spurred forms of expression that blur the boundaries of the human and of social identity. In this way, internet culture joins a long artistic tradition where surreal, monstrous or bizarre beings give voice to repressed aspects of the human imagination. Human/animal transformations like the cat-lawyer have always been a part of this motif.

Of course it’s probably safe to assume that Ponton’s children, and not Ponton himself, normally use the kitten filter. But childhood and adolescence are where we see the implications of the grotesque most clearly. Bodily transformation and animal characters are a staple of adolescent fiction, because teenagers tend to interpret them in light of their growing awareness of social boundaries, and of their own subjectivity. Incidentally, I remember having this response to a particularly cheesy series of pulp novels for teens called Animorphs. But the same ideas are being explored, whether playfully or disturbingly, in gothic classics like Frankenstein and the tales of E.T.A. Hoffmann, in the films of David Lynch, or indeed in the way people use filters and face-changing apps on social media.

The cat-lawyer pushes these buttons too: his wonderful, mesmerising weirdness is a familiar expression of the grotesque. And this gels perfectly with the comedy of interrupted formality and humiliation. The guilty expression on his face makes it feel like he has, by appearing as a cat, accidentally exposed some embarrassing private fetish in the workplace.

Perhaps the precedent this echoes most clearly is Kafka’s “Metamorphosis,” where the long-suffering salesman Gregor Samsa finds he has turned into an insect. Recall that Samsa’s family resents his transformation not just because he is ghastly, but because his ghastliness makes him useless in a world which demands respectability and professionalism. It is darkly absurd, but unsettling too: it awakens anxieties about the aspects of ourselves that we conceal from public view.

The cat-lawyer’s ordeal is a similar kind of double nightmare: a surreal incident of transformation, an anxiety dream about being publicly exposed. Part of its appeal is that it lets us appreciate these strange resonances by cloaking them in humour. 

Gambling on technocrats

The likely appointment of Mario Draghi as Italy’s prime minister has been widely, if nervously, greeted as a necessary step. Draghi, an esteemed economist and central banker, will be the fourth unelected technocrat to fill the post in Italy in the last 30 years. As the Guardian concedes by way of welcoming Draghi’s appointment, a ready embrace of unelected leaders is “not a good look for any self-respecting democracy.” 

Italy’s resort to temporary “technical governments” reflects the fact that its fractious political system, with its multitude of parties and short-lived coalitions, is vulnerable to paralysis at moments of crisis. Such has been the price for a constitution designed to prevent the rise of another Mussolini. Ironically though, the convention of installing technocrats recalls the constitutional role of Dictator in the ancient Roman Republic: a trusted leader who, by consensus among the political class, takes charge for a limited term during emergencies.

During the 1990s, it was the crisis of the European Exchange Rate Mechanism, the vast Mani pulite corruption scandal, and Silvio Berlusconi’s first chaotic administration which formed the backdrop for the technocratic governments of Carlo Ciampi and Lamberto Dini. Now, in the midst of a pandemic and a gathering economic storm, the immediate pretext comes from the collapse of a government led by Giuseppe Conte of the Five Star Movement, amid machinations by Conte’s rivals and accusations of EU emergency funds being deployed for political patronage.

Yet despite its distinctively Italian flavour, this tradition of the technocratic dictator has a much wider European resonance. It reflects the economic and political strains of European integration. And ultimately, the Italian case merely offers a pronounced example of the precarious interplay between depoliticised technocratic governance and democracy which haunts the European Union at large.

The agendas of the Ciampi and Dini cabinets included politically sensitive reforms to state benefits and the public sector, with the purpose of rendering Italy fit for a European economy where Germany called the tune. This pattern was repeated much more emphatically when the next technocratic prime minister, the economist Mario Monti, served from 2011-13. Monti’s mission on behalf of Berlin and Brussels was to temper Italy’s sovereign debt crisis by overseeing harsh austerity measures.

The legacy of that strategy was the rise of Italian populism in the form of the Five Star Movement and, on the right, Matteo Salvini’s Lega Nord. Which brings us to another crucial piece of background for Draghi’s appointment this week. With Italian Euroscepticism making further advances during the disastrous first phase of the pandemic, it seems likely that were an election called now, a right-wing coalition led by Salvini would take power.

For Italy’s financial and administrative class, that prospect is especially scary given how much the country’s stability now depends on support from the EU. It can be hoped that Draghi will calm the nerves of Italy’s northern creditors, and Germany especially, to pave the way for a much-needed second instalment of the coronavirus relief fund. But while all the talk now is of spending and investment, Italy has a public debt worth 160% of GDP and rising, which is only sustainable thanks to the European Central Bank (ECB) continuing to buy its government bonds. It is surely a matter of time before further “structural reforms” are demanded of Italy.

In other words, when the political parties aren’t up to it, technical governments do the dirty work of squeezing Italy into the ever-tightening corset of the EU’s economic model. So this is not simply a pathology of Italian politics, but nor can it be characterised as an imposition. Figures like Monti and Draghi have long been invested in this arrangement: they cut their teeth during the 1990s hammering Italian finances into shape for entry to the Euro, and subsequently held important posts in EU institutions.

Indeed, the basic logic at work here, whereby tasks considered too difficult for democratic politics are handed over to the realm of technocratic expertise, has become a deeply European one. We see it most clearly in the EU’s increasing reliance on the monetary instruments of the ECB as the only acceptable tool with which to respond to economic crises. This goes back to the original political failure of not achieving fiscal integration in the Eurozone, which would have allowed wealth transfers to ailing economies no longer able to negotiate debt reductions or devalue their currencies. But during the Eurozone crisis and its aftermath, politicians avoided confronting their electorates with the need to provide funds for the stricken Club Med states. Instead they relied on the ECB to keep national governments solvent through sovereign bond purchases.

And lest we forget, it was these same bond purchases that made the name of Italy’s incoming prime minister, Mario Draghi. In 2012, when Draghi was ECB president, he appeared to almost magically calm the debt markets by announcing he would do “whatever it takes” to keep the Eurozone afloat. This statement, revealing that Draghi had been empowered to step outside the bounds of rule and precedent, is again suggestive of a kind of constitutionally-mandated technocratic dictator, but at a Europe-wide level. 

Of course to focus on monetary policy is also to highlight that these tensions between technocracy and democracy go far beyond the EU. It is certainly not just in Europe that central bankers have accrued vast power through their ability to provide back-door stimulus and keep huge debt burdens sustainable. The growing importance of central banks points back to an earlier moment of depoliticisation at the dawn of neoliberalism in the early 1980s, when control of interest rates was removed from the realm of democratic politics. More fundamentally, it points to the limitations imposed on democracy by the power of financial markets. 

Still, it is no accident that this tension has appeared in such acute form in the EU. As with Italy’s ready supply of emergency prime ministers, the EU’s dense canopy of technocratic institutions provides an irresistible way for politicians to pass the buck on issues they would otherwise have to subject to democratic conflict. This is all well and good if the technocrats succeed, but as we have seen recently with the EU’s vaccine program, it also raises the stakes of failure. Handing difficult and sensitive matters over to unaccountable administrators means that blame and resentment will be directed against the system as a whole.

Why accusations of vaccine nationalism miss the mark

This article was first published by The Critic magazine on 2nd February 2021.

In the wake of Friday’s decision by the European Union to introduce controls on vaccine exports, there has once again been much alarm about “vaccine nationalism.” This term is meant to pour scorn on governments that prioritise their own citizens’ access to vaccines over that of other countries. It points to the danger that richer parts of the world will squabble for first dibs on limited vaccine supplies – “fighting over the cake,” as a World Health Organisation official aptly described it – while leaving poorer countries trailing far behind in their vaccination efforts.

Certainly, there’s a real danger that the EU’s export controls will end up hampering overall vaccine production by sparking a trade war over raw materials. This is somewhat ironic, given that few have been as outspoken about countries “unduly restricting access to vaccines” as the EU itself. As for global inequalities in vaccine access, make no mistake – they are shaping up to be very ugly indeed. It looks likely that poorer countries, having already faced an economic, social, and public health catastrophe, will struggle to vaccinate their most vulnerable citizens even as richer states give jabs to the majority of their populations.

Wealthy nations undoubtedly have a moral obligation to minimize the impact of these disparities. Nonetheless, wielding vaccine nationalism as a pejorative term is an unhelpful way to diagnose or even to address this problem. Given how the world is structured politically, the best way to ensure that vaccines reach poorer countries is for richer ones to vaccinate a critical mass of their own citizens as quickly as possible.

To condemn vaccine nationalism is to imply that, in the early summer of 2020 when governments began bidding for Advance Purchase Agreements with pharmaceutical companies, a more cooperative global approach would have been feasible. In reality, the political, bureaucratic and logistical structures to meet such a challenge did not exist. Some are still pointing to Covax, the consortium of institutions trying to facilitate global vaccine equality, as a path not taken. But Covax’s proposed strategy was neither realistic nor effective.

The bottom line here is that for governments around the world, whether democratic or not, legitimacy and political stability depend on protecting the welfare of their citizens – a basic principle that even critics of vaccine nationalism struggle to deny. Only slightly less important are the social unrest and geopolitical setbacks that states anticipate if they fall behind in the race to get economies back up and running.

In light of these pressures, Covax never stood a chance. Its task of forging agreement between an array of national, international and commercial players was bound to be difficult, and no state which had the industrial capacity or market access to secure its own vaccines could have afforded to wait and see if it would work. To meet Covax’s aim of vaccinating 20 per cent of the population in every country at the same speed, nations with the infrastructure to deliver vaccines would have had to wait for those that lacked it. They would have surrendered responsibility for the sensitive task of selecting and securing the best vaccines from among the multitude of candidates. (As late as November last year Covax had just nine vaccines in its putative global portfolio; it did not reach a deal with the first successful candidate, Pfizer-BioNTech, until mid-January).

But even if a more equitable approach to global vaccine distribution had been plausible, it wouldn’t necessarily have been more desirable. Seeing some states pull so far ahead in the vaccine race is unsettling, but at least countries with the capacity to roll out vaccines are using it, and just as important, we are getting crucial information about how to organise vaccination campaigns from a range of different models. The peculiarity of the vaccine challenge means that, in the long run, having a few nations serve as laboratories will probably prove more useful to everyone than a more monolithic approach that prioritises equality above all.

The EU’s experience is instructive here. Given its fraught internal politics, it really had no choice but to adopt a collective approach for its 27 member states. To do otherwise would have left less fortunate member states open to offers from Russia and China. Still, the many obstacles and delays it has faced – ultimately driving it to impose its export controls – are illustrative of the costs imposed by coordination. Nor should we overlook the fact that its newfound urgency has come from the example of more successful strategies in Israel, the United States and United Kingdom.

Obviously, richer states should be helping Covax build up its financial and logistical resources as well as ensuring their own populations are vaccinated. Many are doing so already. What is still lacking are the vaccines themselves. Since wealthy states acting alone have been able to order in advance from multiple sources, they have gained access to an estimated 800 million surplus vaccine doses, or more than two billion when options are taken into account.

There’s no denying that if such hoarding continues in the medium-term, it will constitute an enormous moral failing. But rather than condemning governments for having favoured their own citizens in this way, we should focus on how that surplus can reach poorer parts of the world as quickly as possible.

This means, first, scaling up manufacturing to ease the supply bottlenecks which are making governments unsure of their vaccine supply. Most importantly though, it means concentrating on how nations that do have access to vaccines can most efficiently get them into people’s arms. The sooner they can see an end to the pandemic in sight, the sooner they can begin seriously diverting vaccines elsewhere. Obviously this will also require resolving the disputes sparked by the EU’s export controls, if necessary by other nations donating vaccines to the EU.

But we also need to have an urgent discussion about when exactly nations should stop prioritising their citizens. Governments should be pressured to state under what conditions they will deem their vaccine supply sufficient to focus on global redistribution. Personally, not being in a high-risk category, I would like to see a vaccine reach vulnerable people in other countries before it reaches me. Admittedly the parameters of this decision are not yet fully in view, with new strains emerging and the nature of herd immunity still unclear. But it would be a more productive problem to focus our attention on than the issue of vaccine nationalism as such.

What’s really at stake in the fascism debate

This essay was originally published by Arc magazine on January 27th 2021.

Many themes of the Trump presidency reached a crescendo on January 6th, when the now-former president’s supporters rampaged through the Capitol building. Among those themes is the controversy over whether we should label the Trump movement “fascist.”

This argument has flared up at various points since Trump won the Republican nomination in 2016. After the Capitol attack, commentators who warned of a fascist turn in American politics have been rushed back into interview slots and op-ed columns. Doesn’t this attempt by a violent, propaganda-driven mob to overturn last November’s presidential election vindicate their claims?

If Trumpism continues after Trump, then so will this debate. But whether the fascist label is descriptively accurate has always struck me as the least rewarding part. Different people mean different things by the word, and have different aims in using it. Here’s a more interesting question: What is at stake if we choose to identify contemporary politics as fascist?

Many on the activist left branded Trump’s project fascist from the outset. This is not just because they are LARPers trying to re-enact the original anti-fascist struggles of the 1920s and 30s — even if Antifa, the most publicized radicals on the left, derive their name and flag from the communist Antifaschistische Aktion movement of early 1930s Germany. More concretely, the left’s readiness to invoke fascism reflects a longstanding, originally Marxist convention of using “fascist” to describe authoritarian and racist tendencies deemed inherent to capitalism.

From this perspective, the global shift in politics often labeled “populist” — including not just Trump, but also Brexit, the illiberal regimes of Eastern Europe, Narendra Modi’s India, and Jair Bolsonaro’s Brazil — is another upsurge of the structural forces that gave rise to fascism in the interwar period, and therefore deserves the same name.

In mainstream liberal discourse, by contrast, the debates about Trumpism and fascism have a strangely indecisive, unending quality. Journalists and social media pundits often defer to experts, so arguments devolve into bickering about who really counts as an expert and what they’ve actually said. After the Capitol attack, much of the discussion pivoted on brief comments by historians Robert Paxton and Ruth Ben-Ghiat. Paxton claimed in private correspondence that the Capitol attack “crosses the red line” beyond which the “F word” is appropriate, while on Twitter Ben-Ghiat drew a parallel with Mussolini’s 1922 March on Rome.

Meanwhile, even experts who have consistently equated Trumpism and fascism continue adding caveats and qualifications. Historian Timothy Snyder, who sounded the alarm in 2017 with his book On Tyranny, recently described Trump’s politics as “pre-fascist” and his lies about election fraud as “structurally fascist,” leaving for the future the possibility Trump’s Republican enablers could “become the fascist faction.” Philosopher Jason Stanley, who makes a version of the left’s fascism-as-persistent-feature argument, does not claim that the label is definitive so much as a necessary framing, highlighting important aspects of Trump’s politics.

The hesitancy of the fascism debate reflects the difficulty of assigning a banner to movements that don’t claim it. A broad theory of fascism unavoidably relies on the few major examples of avowedly fascist regimes – especially interwar Italy and Germany – even if, as Stanley has detailed in his book How Fascism Works, such regimes drew inspiration from the United States, and inspired Hindu nationalists in India. This creates an awkward relationship between fascism as empirical phenomenon and fascism as theoretical construct, and means there will always be historians stepping in, as Richard Evans recently did, to point out all the ways that 1920s-30s fascism was fundamentally different from the 21st century movements which are compared to it.

But there’s another reason the term “fascism” remains shrouded in perpetual controversy, one so obvious it’s rarely explored: The concept has maintained an aura of seriousness, of genuine evil, such that acknowledging its existence seems to represent a moral and political crisis. The role of fascism in mainstream discourse is like the hammer that sits in the box marked “in case of emergency break glass” — we might point to it and talk about breaking the glass one day, but actually doing so would signify a kind of rupture in the fabric of politics, opening up a world where extreme measures would surely be justified.

We see this in the impulse to ask “do we really want to call everyone who voted for him a fascist?” “Aren’t we being alarmist?” And “if we use that word now, what will we use when things get much worse?” Stanley has acknowledged this trepidation, suggesting it shows we’ve become accustomed to things that should be considered a crisis. I would argue otherwise. It reflects the crucial place of fascism in the grand narrative of liberal democracy, especially after the Cold War — a narrative that relies on the idea of fascism as a historical singularity.

This first occurred to me when I visited Holocaust memorials in Berlin, and realized, to my surprise, that they had all been erected quite recently. The first were the Jewish Museum and the Memorial to the Murdered Jews of Europe, both disturbingly beautiful, evocative structures, conceived during the 1990s, after the collapse of communist East Germany, and opened between 2000 and 2005. Over the next decade, these were followed by smaller memorials to various other groups the Nazis persecuted: homosexuals, the Sinti and Roma, the disabled.

There were obvious reasons for these monuments to appear at this time and place. Post-reunification, Germany was reflecting on its national identity, and Berlin had been the capital of the Third Reich. But they still strike me as an excellent representation of liberal democracies’ need to identify memories and values that bind them together, especially when they could no longer contrast themselves to the USSR.

Vanquishing fascist power in the Second World War was and remains a foundational moment. Even as they recede into a distant, mythic past, the horrors overcome at that moment still grip the popular imagination. We saw this during the Brexit debate, when the most emotionally appealing argument for European integration referred back to its original, post-WWII purpose: constraining nationalism. And as the proliferation of memorials in Berlin suggests, fascism can retroactively be defined as the ultimate antithesis to what has, from the 1960s onwards, become liberalism’s main moral purpose: protection and empowerment of traditionally marginalized groups in society.

The United States plays a huge part in maintaining this narrative throughout the West and the English-speaking world, producing an endless stream of books, movies, and documentaries about the Second World War. The American public’s appetite for it seems boundless. That war is infused with a sense of heroism and tragedy unlike any other. But all of this stems from the unique certainty regarding the evil nature of 20th century European fascism.

This is why those who want to identify fascism in the present will always encounter skepticism and reluctance. Fascism is a moral singularity, a point of convergence in otherwise divided societies, because it is a historical singularity, the fixed source from which our history flows. To remove fascism from this foundational position – and worse, to implicate us in tolerating it – is morally disorientating. It raises the suspicion that, while claiming to separate fascism from the European historical example, those who invoke the term are actually trading off the emotional impact of that very example.

I don’t think commentators like Snyder and Stanley have such cynical intentions, nor do I believe it’s a writer’s job to respect the version of history held dear by the public. Nonetheless, those who try to be both theorists and passionate opponents of fascism must recognize that they are walking a tightrope.

By making fascism a broader, more abstract signifier, and thereby bringing the term into the grey areas of semantic and historiographical bickering, they risk diminishing the aura of singular evil that surrounds fascism in the popular consciousness. But this is an aura which, surely, opponents of fascism should want to maintain.

After the Capitol, the battle for the dream machine

Sovereign is he who decides on the exception. In a statement on Wednesday afternoon, Facebook’s VP of integrity Guy Rosen declared: “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video.” This came as Trump’s supporters, like a horde of pantomime barbarians, were carrying out their surreal sacking of the Capitol in Washington, and the US president attempted to publish a video which, in Rosen’s words, “contributes to rather than diminishes the risk of ongoing violence.” In the video, Trump had told the mob to go home, but continued to insist that the election of November 2020 had been fraudulent.

The following day Mark Zuckerberg announced that the sitting president would be barred from Facebook and Instagram indefinitely, and at least “until the peaceful transition of power is complete.” Zuckerberg reflected that “we have allowed President Trump to use our platform consistent with our own rules,” so as to give the public “the broadest possible access to political speech,” but that “the current context is now fundamentally different.”

Yesterday Trump’s main communication platform, Twitter, went a step further and suspended the US president permanently (it had initially suspended Trump’s account for 12 hours during the Capitol riot). Giving its rationale for the decision, Twitter also insisted its policy was to “enable the public to hear from elected officials” on the basis that “the people have a right to hold power to account in the open.” It stated, however, that “In the context of horrific events this week,” it had decided “recent Tweets from the @realDonaldTrump account and the context around them – specifically how they are being received and interpreted” (my emphasis) amounted to a violation of its rules against incitement to violence.

These emergency measures by the big tech companies, not the attack on the Capitol itself, were the most significant development in the United States this week. In the language used to justify them, we hear the unmistakable echoes of a constitutional sovereign claiming its authority to decide how the rules should be applied – for between the rules and their application there is always judgment and discretion – and, more importantly, to decide that a crisis demands an exceptional interpretation of the rules. With that assertion of authority, Silicon Valley has reminded us – even if it would have preferred not to – where ultimate power lies in a new era of American politics. It does not lie in the ability to raise a movement of brainwashed followers, but in the ability to decide who is allowed the means to do so.

The absurd assault on the Capitol was an event perfectly calibrated to demonstrate this configuration of power. First, the seriousness of the event – a violent attack against an elected government, however spontaneous – forced the social media companies to reveal their authority by taking decisive action. In doing so, of course, they also showed the limits of their authority (no sovereignty is absolute, after all). The tech giants are eager to avoid being implicated in a situation that would justify greater regulation, or perhaps even dismemberment, by a Democratic government. Hence their increasing willingness over the last six months, as a Democratic victory in the November elections loomed, to actively regulate the circulation of pro-Trump propaganda with misinformation warnings, content restrictions and occasional bans on outlets such as the New York Post, following its Hunter Biden splash on the eve of the election.

It should be remembered that the motivations of companies like Facebook and Twitter are primarily commercial rather than political. They must keep their monopolistic hold on the public sphere intact to safeguard their data harvesting and advertising mechanisms. This means they need to show lawmakers that they will wield authority over their digital fiefdoms in an appropriate fashion.

Trump’s removal from these platforms was therefore overdetermined, especially after Wednesday’s debacle in Washington. Yes, the tech companies want to signal their political allegiance to the Democrats, but they also need to show that their virtual domains will not destabilize the United States to the extent that it is no longer an inviting place to do business – for that too would end in greater regulation. They were surely looking for an excuse to get rid of Trump, but from their perspective, the Capitol invasion merited action by itself. It was never going to lead to the overturning of November’s election, still less the toppling of the regime; but it could hardly fail to impress America’s allies, not to mention the global financial elite, as an obvious watershed in the disintegration of the country’s political system.

But it was also the unseriousness of Wednesday’s events that revealed why control of the media apparatus is so important. A popular take on the Capitol invasion itself – and, given the many surreal images of the buffoonish rioters, a persuasive one – is that it was the ultimate demonstration of the United States’ descent into a politics of fantasy; what the theorist Bruno Maçães calls “Dreampolitik.” Submerged in the alternative realities of partisan media and infused with the spirit of Hollywood, Americans have come to treat political action as a kind of role-play, a stage where the iconic motifs of history are unwittingly reenacted as parody. Who could be surprised that an era when a significant part of America has convinced itself that it is fighting fascism, and another that it is ruled by a conspiracy of pedophiles, has ended with men in horned helmets, bird-watching camouflage and MAGA merchandise storming the seat of government with chants of “U-S-A”?

At the very least, it is clear that Trump’s success as an insurgent owes a great deal to his embrace of followers whose view of politics is heavily colored by conspiracy theories, if not downright deranged. The Capitol attack was the most remarkable evidence to date of how such fantasy politics can be leveraged for projects with profound “real world” implications. It was led, after all, by members of the QAnon conspiracy theory movement, and motivated by elaborate myths of a stolen election. Barack Obama was quite right to call it the product of a “fantasy narrative [which] has spiraled further and further from reality… [building] upon years of sown resentments.”

But while there is justifiably much fascination with this new form of political power, it must be remembered that such fantasy narratives are a superstructure. They can only operate through the available technological channels – that is, through the media, all of which is today centred on the major social media platforms. The triumph of Dreampolitik at the Capitol therefore only emphasises the significance of Facebook and Twitter’s decisive action against Trump. For whatever power is made available through the postmodern tools of partisan narrative and alternative reality, an even greater power necessarily belongs to those who can grant or deny access to these tools.

And this week’s events are, of course, just the beginning. The motley insurrection of the Trumpists will serve as a justification, if one were needed, for an increasingly strict regime of surveillance and censorship by the major social media platforms, answering to their investors and to the political class in Washington. Already the incoming president, Joe Biden, has stated his intention to introduce new legislation against “domestic terrorism,” which will no doubt involve the tech giants maintaining their commercial dominance in return for carrying out the required surveillance and reporting of those deemed subversive. Meanwhile, Google and Apple yesterday issued an ultimatum to the platform Parler, which offers the same basic model as Twitter but with laxer content rules, threatening to banish it from their app stores if it did not police conversation more strictly.

But however disturbing the implications of this crackdown, we should welcome the clarity we got this week. For too long, the tech giants have been able to pose as neutral arbiters of discussion, cloaking their authority in corporate euphemisms about the public interest. Consequently, they have been able to set the terms of communication over much of the world according to their own interests and political calculations. Whether or not they were right to banish Trump, the key fact is that it was they who had the authority to do so, for their own reasons. The increasing regulation of social media – which was always inevitable, in one form or another, given its incendiary potential – will now proceed according to the same logic. Hopefully the dramatic nature of their decisions this week will make us question whether this is really a tolerable situation.