Disaster Junkies

We live in an era where catastrophe looms large in the political imagination. On the one side, we find hellacious visions of climate crisis and ecological collapse; on the other, grim warnings of social disintegration through plummeting birth rates, mass immigration and crime. Popular culture’s vivid post-apocalyptic worlds, from Cormac McCarthy’s The Road to Margaret Atwood’s The Handmaid’s Tale, increasingly echo in political discourse – most memorably in Donald Trump’s 2016 inauguration speech on the theme of “American Carnage.” For more imaginative doom-mongers there are various technological dystopias to contemplate, whether AI run amok, a digital surveillance state, or simply the replacement of physical experience with virtual surrogates. Then in 2020, with the eruption of a global pandemic, catastrophe crossed from the silver screen to the news studio, as much of the world sat transfixed by a profusion of statistics, graphs and harrowing reports of sickness and death.

If you are anything like me, the role of catastrophe in politics and culture raises endless fascinating questions. How should we explain our visceral revulsion at fellow citizens dying en masse from an infectious disease, and our contrasting apathy to other forms of large-scale suffering and death? Why can we be terrified by climate change without necessarily feeling a commensurate urgency to do something about it? Why do certain political tribes obsess over certain disasters?

It was questions like these that led me to pick up Niall Ferguson’s new book, Doom: The Politics of Catastrophe. I did this somewhat nervously, it must be said. I found one of Ferguson’s previous books extremely boring, and tend to cringe at his use of intellectual gimmicks – like his idea that the past success of Western civilisation can be attributed to six “killer apps.” Then again, Ferguson’s contrarianism does occasionally produce an interesting perspective, such as his willingness to weigh the negative aspects of the British Empire against the positive, as historians do with most other empires. But as I say, it was really the subject of this latest book that drew me in.

I might as well say upfront that I found it very disappointing. This is going to be a bad review – though hopefully not a pointless one. The flaws of this book can, I think, point us towards a richer understanding of catastrophe than Ferguson himself offers.

Firstly, Doom is not really about “the politics of catastrophe” as I understand that phrase. A few promising questions posed in the introduction – “Why do some societies and states respond to catastrophe so much better than others? Why do some fall apart, most hold together, and a few emerge stronger? Why does politics sometimes cause catastrophe?” – are not addressed in any sustained way. What this book is really about is the difficulty of predicting and mitigating statistically irregular events which cause excess deaths. That sounds interesting enough, to be sure, but there’s just one fundamental problem: Ferguson never gets to grips with what actually makes such events catastrophic, leaving a rather large hole where the subject of the book should be. 

The alarm bells start ringing when Ferguson introduces the book as “a general history of catastrophe” and, in case we didn’t grasp how capacious that sounds, tells us it will include:

not just pandemics but all kinds of disasters, from the geological (earthquakes) to the geopolitical (wars), from the biological (pandemics) to the technological (nuclear accidents). Asteroid strikes, volcanic eruptions, extreme weather events, famines, catastrophic accidents, depressions, revolutions, wars, and genocides: all life – and much death – is here.

You may be asking if there is really much of a relationship, throughout all the ages of history, between asteroid strikes, nuclear accidents and revolutions – and I’d say this gets to a pretty basic problem with tackling a subject like this. Writing about catastrophe (or disaster – the two are used as synonyms) requires finding a way to coherently group together the extremely diverse phenomena that might fall into this category. It requires, in other words, developing an understanding of what catastrophe actually means, in a way that allows for useful parallels between its different manifestations.

Ferguson seems to acknowledge this when he rounds off his list by asking “For how else are we to see our disaster [i.e. Covid] – any disaster – in proper perspective?” Yet his concept of catastrophe turns out to be circular, inconsistent and inadequate. Whatever aspect of catastrophe Ferguson happens to be discussing in a particular chapter becomes, temporarily, his definition of catastrophe as such. When he is talking about mortality, mortality becomes definitive of catastrophe (“disaster, in the sense of excess mortality, can take diverse forms and yet pose similar challenges”). Likewise when he is showing how infrequent and therefore hard to predict catastrophes are (“the rare, large scale disasters that are the subject of this book”). In Ferguson’s chapter seeking similarities between smaller and larger disasters, he seems happy to simply accept whatever is viewed as a disaster in the popular memory: the Titanic, Chernobyl, the failed launch of NASA’s space shuttle Challenger.

This is not nitpicking. I’m not expecting the metaphysical rigor of Immanuel Kant. I like an ambitious, wide-ranging discussion, even if that means sacrificing some depth. But attempting this without any real thesis, or even a firm conceptual framework, risks descending into a series of aimless and confusing digressions which don’t add up to anything. And that is more or less what happens in this book.

Consider Ferguson’s chapter on “The Psychology of Political Incompetence.” After a plodding and not especially relevant summary of Tolstoy’s concluding essay in War and Peace, Ferguson briefly introduces the idea that political leaders’ power is curtailed by the bureaucratic structures they inhabit. He then cuts to a discussion of the role of ideology in creating disastrous food shortages, by way of supporting Amartya Sen’s argument that democratic regimes respond better to famines than non-democratic ones. It’s not clear how this relates to the theme of bureaucracy and leadership, but this is one of the few sections where Ferguson is actually addressing something like “the politics of catastrophe;” and when he poses the interesting question of “why Sen’s theory does not apply to all forms of disaster” it feels like we are finally getting somewhere.

Alas, as tends to be the case in this book, Ferguson doesn’t answer the question, but embarks on a series of impromptu arguments against straw men. A winding discussion of British ineptness during the two World Wars brings him to the conclusion that “Democracy may insure a country against famine; it clearly does not insure against military disaster.” Who said that it does? Then Ferguson has suddenly returned to the issue of individual leadership, arguing that “it makes little sense” to hold Churchill solely responsible for the fall of Singapore to the Japanese in 1942. Again, who said we should? Ferguson then rounds off the chapter with an almost insultingly cursory discussion of “How Empires Fall,” cramming eight empires into less than five pages, to make the highly speculative argument that imperial collapse is as unpredictable as various other kinds of disaster.

Insofar as anything holds this book together, it is the thin sinews of statistical probability models and network science. These do furnish a few worthwhile insights. Many of the events Ferguson classes as disasters follow power-law distributions rather than bell curves: there is no typical scale for such events, and the largest of them, though rare, occur far more often than a normal distribution would suggest. So big disasters are essentially impossible to predict. In many cases, this is because they emerge from complex systems – natural, economic and social – which can unexpectedly amplify small events into enormous ones. In hindsight, these often seem to have been entirely predictable, and the Cassandras who warned of them are vindicated. But a regime that listened to every Cassandra would incur significant political costs in preparing for disasters that usually won’t materialize.

I also liked Ferguson’s observation that the key factor determining the scale of a disaster, in terms of mortality, is “whether or not there is contagion – that is, some way of propagating the initial shock through the biological networks of life or the social networks of humanity.” But his other useful comments about networks come in a single paragraph, and can be quoted without much further explanation:

If Cassandras had higher centrality [in the network], they might be more often heeded. If erroneous doctrines [i.e. misinformation] spread virally through a large social network, effective mitigation of disaster becomes much harder. Finally… hierarchical structures such as states exist principally because, while inferior to distributed networks when it comes to innovation, they are superior when it comes to defence.

I’m not sure it was necessary to have slogged through an entire chapter on network science, recycled from Ferguson’s last book, The Square and the Tower, to understand these points.

But returning to my main criticism, statistical and network analysis doesn’t really allow for meaningful parallels between different kinds of catastrophe. This is already evident in the introduction, when Ferguson states that “disaster takes too many forms for us to process with conventional approaches to risk mitigation. No sooner have we focused our minds on the threat of Salafi jihad than we find ourselves in a financial crisis originating in subprime mortgages.” As this strange comment suggests, the implied perspective of the book is that of a single government agency tasked with predicting everything from financial crises and terrorist attacks to volcanic eruptions and genocides. But no such agency exists, of course, for the simple reason that when you zoom in from lines plotted on a graph, the illusion that these risks are similar dissolves into a range of totally different phenomena attached to various concrete situations. The problem is absurdly illustrated when, having cited a statistical analysis of 315 conflicts between 1820 and 1950, Ferguson declares that in terms of predictability, “wars do indeed resemble pandemics and earthquakes. We cannot know in advance when or where a specific event will strike, nor on what scale.” Which makes it sound like we simply have no way of knowing whether the next conflict is more likely to break out in Gaza or Switzerland.

In any case, there is something patently inadequate about measuring catastrophe in terms of mortality figures and QALYs (quality-adjusted life years), as though the only thing we have in common is a desire to live for as long as possible. Not once is the destruction of culture or ways of life mentioned in the book, despite the fact that throughout history these forms of loss have loomed large in people’s sense of catastrophe. Ferguson even mentions several times that the most prolific causes of mortality are often not recognised as catastrophes – but does not seem to grasp the corollary that catastrophe is about something more than large numbers of deaths. 

Indeed, maybe the best thing that can be said about Doom is that its shortcomings help us to realise what does need to be included in an understanding of catastrophe. Throughout the book, we see such missing dimensions flicker briefly into view. In his discussion of the flu pandemic of the late 1950s, Ferguson notes in passing that the Soviet launch of the Sputnik satellite in October 1957 “may help to explain why the memory of the Asian flu has faded” in the United States. This chimes with various other hints that this pandemic was not really perceived as a catastrophe. But why? And in what sense was it competing with the Cold War in the popular imagination? Likewise, Ferguson mentions that during the 1930s the lawyer Basil O’Connor used “the latest techniques in advertising and fundraising” to turn the “horrific but relatively rare disease” of polio into “the most feared affliction of the age.” This episode is briefly contrasted to the virtual silence of the American media and political class over AIDS during the 1980s.

In fact, unacknowledged catastrophes are an unacknowledged theme of the book, one that re-emerges in several intriguing mentions of the opioid epidemic in the United States, with its associated “deaths of despair.” At the same time as there was “obsessive discussion” of global warming among the American elite, Ferguson points out, “the chance of dying from an overdose was two hundred times greater than the chance of being killed by a cataclysmic storm.” He also describes the opioid crisis as “the biggest disaster of the Obama presidency,” and suggests that although “the media assigned almost no blame to Obama” for it, “such social trends did much to explain Donald J. Trump’s success.” Finally, Ferguson notes that during the current Covid crisis, the relative importance of protecting the vulnerable from the disease versus maintaining economic activity became an active front in the American culture war.

The obvious implication of all this is that, while Ferguson does not really engage with “the politics of catastrophe,” the concept and reality of catastrophe is inherently political. There isn’t really an objective measure of catastrophe: the concept implies judging the nature and consequences of an event to be tragic. Whether or not something meets this standard often depends on who it affects and whether it fits into the emotionally compelling narratives of the day. The AIDS and opioid epidemics initially went unrecognized because their victims were homosexuals and working-class people respectively. To take another example, the 1921 pogrom against the affluent African American community in Tulsa, Oklahoma, was for the longest time barely known about, let alone mourned (except of course by African Americans themselves); yet a hundred years later it is being widely recognised as an atrocity. Last week’s volcanic eruption in the Democratic Republic of Congo, which may have left 20,000 people homeless, would probably be acknowledged as catastrophic by a Westerner who happened to read about it in the news. But we are much more likely to be aware of, and emotionally invested in, the disastrous Israeli-Palestinian conflict of recent weeks.

Catastrophe, in other words, is inextricably bound up with popular perception and imagination. It is rooted in the emotions of fear, anger, sadness, horror and titillation with which certain events are experienced, remembered or anticipated. This is how we can make sense of apathy to the late-1950s flu pandemic: such hazards, as Ferguson mentions, were still considered a normal part of life rather than an exceptional danger, and people’s minds were focused on the potential escalation of the Cold War. Hence also the importance of the media in determining whether and how disasters become embedded in public discourse. While every culture has its religious and mythical visions of catastrophe (a few are mentioned in a typically fleeting discussion near the start of Doom), today Netflix and the news media have turned us into disaster junkies, giving form and content to our apocalyptic impulses. The Covid pandemic has been a fully mediated experience, an epic rollercoaster of the imagination, its personal and social significance shaped by a constant drumbeat of new information. It is because climate change cannot be made to fit this urgent tempo that it has been cast instead as a source of fatalism and dread, always looming on the horizon and inspiring millions with a sense of terrified helplessness.

Overlooking the central role of such cultural and political narratives probably meant that Ferguson’s Doom was doomed from the start. For one thing, this missing perspective immediately shows the problem with trying to compare catastrophes across all human history. Yes, there are fascinating patterns even at this scale, like the tendency of extreme ideological movements to emerge in the midst of disasters – whether the flagellant orders that sprang from the 14th century Black Death, or the spread of Bolshevism in the latter part of the First World War. But to really understand any catastrophe, we have to know what it meant to the people living through it, and this means looking at the particulars of culture, politics and religion which vary enormously between epochs. This, I would argue, is why Ferguson’s attempt to compare the Athenian plague of the late 5th century BC to the Black Death in medieval England feels rather superficial. 

And whatever the historical scope, statistics simply don’t get close to the imaginative essence of catastrophe. Whether or not a disaster actually happens is incidental to its significance in our lives; many go unnoticed, others transform culture through mere anticipation. Nor do we experience catastrophes as an aggregate of death-fearing individuals. We do so as social beings whose concerns are much more elaborate and interesting than mere life and death.

How the Celebs Rule Us

Who should we call the first “Instagram billionaire”? It’s a mark of the new Gilded Age we’ve entered that both women vying for that title belong to the same family, the illustrious Kardashian-Jenner clan. In 2019, it looked like Kylie Jenner had passed the ten-figure mark, only for Forbes to revise its estimates, declaring that Jenner had juiced her net worth with “white lies, omissions and outright fabrications.” (Her real wealth, the magazine thought, was a paltry $900 million). So, as of April this year, the accolade belongs to Jenner’s no less enterprising sister, Kim Kardashian West.

Social media has ushered in a new fusion of celebrity worship and celebrity entrepreneurship, giving rise to an elite class of “influencers” like Jenner and Kardashian West. Reality TV stars who were, in that wonderful phrase, “famous for being famous,” they now rely on their vast social media followings to market advertising space and fashion and beauty products. As such, they are closely entwined with another freshly minted elite, the tech oligarchs whose platforms are the crucial instruments of celebrity today. Word has it the good people at Instagram are all too happy to offer special treatment to the likes of the Kardashians, Justin Bieber, Taylor Swift and Lady Gaga – not to mention His Holiness the Supreme Pontiff of the Universal Church (that’s @franciscus to you and me). And there’s every reason for social media companies to accommodate their glamorous accomplices: in 2018, Jenner managed to wipe $1.3 billion off the market value of Snapchat with a single tweet questioning the platform’s popularity. 

It’s perfectly obvious, of course, what hides behind the embarrassingly thin figleaf of “influence,” and that is power. Not just financial power but social status, cultural clout and, on the tech companies’ side of the bargain, access to the eyeballs and data of huge audiences. The interesting question is where this power ultimately stems from. The form of capital being harvested is human attention; but how does the tech/influencer elite monopolise this attention? One well-known answer is through the addictive algorithms and user interfaces that turn us into slaves of our own brain chemistry; another invokes those dynamics of social rivalry, identified by the philosopher René Girard, whereby we look to others to tell us what we should want. 

But I think there’s a further factor here which needs to be explored, and it begins with the idea of charisma. In a recent piece for Tablet magazine, I argued that social media had given rise to a new kind of charismatic political leader, examples of which include Donald Trump, Jeremy Corbyn, Jordan Peterson and Greta Thunberg. My contention was that the charisma of these individuals, so evident in the intense devotion of their followers, does not stem from any innate quality of their personalities. Instead, charisma is assigned to them by online communities which, in the process of rallying around a leader, galvanise themselves into political movements.

Here I was drawing on the great German sociologist Max Weber, whose concept of “charismatic authority” describes how groups of people find coherence and structure by recognising certain individuals as special. And yet, the political leaders I discussed in the Tablet piece are far from the only examples showing the relevance of Weber’s ideas today. If anything, they are interlopers: accidental beneficiaries of a media system that is calibrated for a different type of charismatic figure, pursuing a different kind of power. I’m referring, of course, to the Kardashians, Biebers, and countless lesser “influencers” of this world. It is the twin elite of celebrities and tech giants, not the leaders of political movements, who have designed the template of charismatic authority in the social media age. 


When Weber talks about charismatic authority, he is talking about the emotional and ideological inspiration we find in other people. We are compelled to emulate or follow those individuals who issue us with a “calling” – a desire to lead our lives a certain way or aspire towards a certain ideal. To take an obvious example, think about the way members of a cult are often transfixed by a leader, dropping everything in their lives to enter his or her service; some of you will recall the scarlet-clad followers of the guru Bhagwan Shree Rajneesh in the 2018 Netflix documentary Wild Wild Country. Weber’s key observation is that this intensely subjective experience is always part of a wider social process: the “calling” of charisma, though it feels like an intimate connection with an exceptional person, is really the calling of our own urge to fit in, to grasp an identity, to find purpose and belonging. There’s a reason charismatic figures attract followers, plural. They are charismatic because they represent a social phenomenon we want to be a part of, or an aspiration our social context has made appealing. Whatever Rajneesh’s personal qualities, his cult was only possible thanks to the appeal of New Age philosophy and collectivist ways of life to a certain kind of disillusioned Westerner during the 1960s and ’70s. 

Today there’s no shortage of Rajneesh-like figures preaching homespun doctrines to enraptured audiences on YouTube. But in modern societies, charismatic authority really belongs to the domain of celebrity culture; the domain, that is, of the passionate, irrational, mass-scale worship of stars. Since the youth movements of the 1950s and 60s, when burgeoning media industries gave the baby-boomers icons like James Dean and The Beatles, the charismatic figures who inspire entire subcultures and generations have mostly come from cinema and television screens, from sports leagues, music videos and fashion magazines. Cast your mind back to your own teenage years – the time when our need for role models is most pressing – and recall where you and your chums turned for your wardrobe choices, haircuts and values. To the worlds of politics and business, perhaps? Not likely. We may not be so easily star-struck as adults, but I’d wager most of your transformative encounters with charisma still come, if not from Hollywood and Vogue, then from figures projected into your imagination via the media apparatus of mass culture. It’s no coincidence that when a politician does gain a following through personality and image, we borrow clichés from the entertainment industry, whether hailing Barack Obama’s “movie star charisma” or dubbing Yanis Varoufakis “Greece’s rock-star finance minister.”

Celebrity charisma relies on a peculiar suspension of disbelief. We can take profound inspiration from characters in films, and on some level we know that the stars presented to us in the media (or now presenting themselves through social media) are barely less fictional. They are personae designed to harness the binding force of charismatic authority – to embody movements and cultural trends that people want to be part of. In the context of the media and entertainment business, their role is essentially to commodify the uncommodifiable, to turn our search for meaning and identity into a source of profit. Indeed, the celebrity culture of recent decades grew from the bosom of huge media conglomerates, who found that the saturation of culture by new media technologies allowed them to turn a small number of stars into prodigious brands.

In the 1980s performers like Michael Jackson and Madonna, along with sports icons like Michael Jordan, joined Hollywood actors in a class of mega celebrities. By the ’90s, such ubiquitous figures were flanked by stars catering to all kinds of specific audiences: in the UK, for instance, lad culture had premiership footballers, popular feminism had Sex and the City, Britpoppers had the Gallagher brothers and grungers had Kurt Cobain. For their corporate handlers, high-profile celebrities ensured revenues from merchandise, management rights and advertising deals, as well as reliable consumer audiences that offset the risks of more speculative ventures.

Long before social media, in other words, celebrity culture had become a thoroughly commercialised form of charismatic authority. It still relied on the ability of stars to issue their followers with a “calling” – to embody popular ideals and galvanise movements – but these roles and relationships were reflected in various economic transactions. Most obviously, where a celebrity became a figurehead for a particular subculture, people might express their membership of that subculture by buying stuff the celebrity advertised. But no less important, in hindsight, was the commodification of celebrities’ private lives, as audiences were bonded to their stars through an endless stream of “just like us” paparazzi shots, advertising campaigns, exclusive interviews and documentaries, and so on. As show-business sought to maximise the value of star power, the personae of celebrities were increasingly constructed in the mould of “real” people with human, all-too-human lives.

Which brings us back to our influencer friends. For all its claims to have opened up arts and entertainment to the masses, social media really represents another step towards a celebrity culture dominated by an elite cluster of stars. Digital tech, as we know, has annihilated older business models in media-related industries. This has concentrated even more success in the hands of the few who can command attention and drive cultural trends – who can be “influencers” – through the commodification of their personal lives. And that, of course, is exactly what platforms like Instagram are designed for. A Bloomberg report describes how the Kardashians took over and ramped up the trends of earlier decades:

Back in the 1990s, when the paparazzi were in their pomp, pictures of celebrities going about their daily lives… could fetch $15,000 a pop from tabloids and magazines… The publications would in turn sell advertising space alongside those images and rake in a hefty profit.

Thanks to social media, the Kardashians were able to cut out the middle man. Instagram let the family post images that they controlled and allowed them to essentially sell their own advertising space to brands… The upshot is that Kardashian West can make $1 million per sponsored post, while paparazzi now earn just $5 to $10 apiece for “Just Like Us” snaps.

Obviously, Instagram does not “let” the Kardashians do this out of the kindness of its heart: as platforms compete for users, it’s in their interests to accommodate the individuals who secure the largest audiences. In fact, through their efforts to identify and promote such celebrities, the social media companies are increasingly important in actually making them celebrities, effectively deciding who among the aspiring masses gets a shot at fame. Thus another report details how TikTok “assigned individual managers to thousands of stars to help with everything, whether tech support or college tuition,” while carefully coordinating with said stars to make their content go viral.

But recall, again, that the power of celebrities ultimately rests on their followers’ feeling that they’re part of something – that is the essence of their charisma. And it’s here that social media really has been revolutionary. It has allowed followers to become active communities, fused by constant communication with each other and with the stars themselves. Instagram posts revealing what some celeb had for breakfast fuel a vast web of interactions, through which their fans sustain a lively sense of group identity. Naturally, this being social media, the clearest sign of such bonding is the willingness of fans to group together like a swarm of hornets and attack anyone who criticises their idols. Hence the notorious aggression of the “Beliebers,” or fanatical Justin Bieber fans (apparently not even controllable by the pop star himself); and hence Instagram rewriting an algorithm to protect Taylor Swift from a wave of snake emojis launched by Kim Kardashian followers. This, surely, is the sinister meaning behind an e-commerce executive bragging to Forbes magazine about Kylie Jenner’s following, “No other influencer has ever gotten to the volume or had the rabid fans” that she does.

In other words, the celebrity/tech elite’s power is rooted in new forms of association and identification made possible by the internet. It’s worth taking a closer look at one act which has revealed this in an especially vivid way: the K-Pop boy band BTS (the name stands for Bangtan Sonyeondan, or Beyond the Scene in English). Preppy outfits and feline good looks notwithstanding, these guys are no lightweights. Never mind the chart-topping singles, the stadium concerts and the collaborations with Ed Sheeran; their success registers on a macroeconomic scale. According to 2018 estimates from the Hyundai Research Institute, BTS contributes $3.6 billion annually to the South Korean economy, and is responsible for around 7% of tourism to the country. No less impressive are the band’s figures for online consumption: it has racked up the most YouTube views in a 24-hour period, and an unprecedented 750,000 paying viewers for a live-streamed concert.

Those last stats are the most suggestive, because BTS’s popularity rests on a fanatical online community of followers, the “Adorable Representative M.C. for Youth” (ARMY), literally numbering in the tens of millions. In certain respects, the ARMY doesn’t resemble a fan club so much as an uncontacted tribe in the rainforest: it has its own aesthetics, norms and rituals centred around worship of BTS. All that’s missing, perhaps, is a cosmology, but the band’s management is working on that. It orchestrates something called the “Bangtan Universe”: an ongoing fictional metanarrative about BTS, unfolding across multiple forms of media, which essentially encourages the ARMY to inhabit its own alternate reality. 

Consequently, such is the ARMY’s commitment that its members take personal responsibility for BTS’s commercial success. They are obsessive about boosting the band’s chart performance, streaming new content as frequently and on as many devices as possible. The Wall Street Journal describes one fan’s devotion:  

When [the BTS song] “Dynamite” launched, Michelle Tack, 47, a cosmetics stores manager from Chicopee, Massachusetts, requested a day off work to stream the music video on YouTube. “I streamed all day,” Tack says. She made sure to watch other clips on the platform in between her streaming so that her views would count toward the grand total of views. […]

“It feels like I’m part of this family that wants BTS to succeed, and we want to do everything we can do to help them,” says Tack. She says BTS has made her life “more fulfilled” and brought her closer to her two daughters, 12 and 14. 

The pay-off came last October, when the band’s management company, Big Hit Entertainment, went public, making one of the most successful debuts in the history of the South Korean stock market. And so the sense of belonging which captivated that retail manager from Massachusetts now underpins the value of financial assets traded by banks, insurance companies and investment funds. Needless to say, members of the ARMY were clamouring to buy the band’s shares too.


It is this paradigm of charismatic authority – the virtual community bound by devotion to a celebrity figurehead – which has been echoed in politics in recent years. Most conspicuously, Donald Trump’s political project shared many features with the new celebrity culture. The parallels between Trump and a figure like Kylie Jenner are obvious, from building a personal brand off the back of reality TV fame to exaggerating his wealth and recognising the innovative potential of social media. Meanwhile, the immersive fiction of the Bangtan Universe looks like a striking precedent for the wacky world of Deep State conspiracy theories inhabited by diehard Trump supporters, which spilled dramatically into view with the invasion of the US Capitol on January 6th.

As I argued in my Tablet essay – and as the chaos and inefficacy of the Trump presidency demonstrate – this social media-based form of charismatic politics is not very well suited to wielding formal power. In part, this is because the model is better suited to the kinds of power sought by celebrities: financial enrichment and cultural influence. The immersive character of online communities, which tend to develop their own private languages and preoccupations, carries no real downside for the celebrity: it just means more strongly identified fans. It is, however, a major liability in politics. The leaders elevated by such movements aren’t necessarily effective politicians to begin with, and they struggle to broaden their appeal due to the uncompromising agendas their supporters foist on them. We saw these problems not just with the Trump movement but also with the Jeremy Corbyn phenomenon in the UK, and, to an extent, with the younger college-educated liberals who influenced Bernie Sanders after 2016.

But this doesn’t mean online celebrity culture has had no political impact. Even if virtual communities aren’t much good at practical politics, they are extremely good at producing new narratives and norms, whether rightwing conspiracy theories in the QAnon mould, or the progressive ideas about gender and identity which Angela Nagle has aptly dubbed “Tumblr liberalism.” Celebrities are key to the process whereby such innovations are exported into the wider discourse as politically charged memes. Thus Moya Lothian-McLean has described how influencers popularise feminist narratives – first taking ideas from academics and activists, then simplifying them for mass consumption and “regurgitat[ing] them via an aesthetically pleasing Instagram tile.” Once such memes reach a certain level of popularity, the really big celebrities will pick them up as part of their efforts to present a compelling personality to their followers (which is not to say, of course, that they don’t also believe in them). The line from Tumblr liberalism through Instagram feminism eventually arrives at the various celebrities who have revealed non-binary gender identities to their followers in recent years. Celebs also play an important role in legitimising grassroots political movements: last year BTS joined countless other famous figures in publicly giving money to Black Lives Matter, their $1 million donation being matched by their fans in little more than a day.

No celebrity can single-handedly move the needle of public opinion, but discourse is increasingly shaped by activists borrowing the tools of the influencer, and by influencers borrowing the language of the activist. Such charismatic figures are the most important nodes in the sprawling network of online communities that constitutes popular culture today; and through their attempts to foster an intimate connection with their followers, they provide a channel through which the political can be made to feel personal. This doesn’t quite amount to a “celebocracy,” but nor can we fully understand the nature of power today without acknowledging the authority of stars.

“Euro-English”: A thought experiment

There was an interesting story in Politico last weekend about “Euro-English,” and a Swedish academic who wants to make it an official language. Marko Modiano, a professor at the University of Gävle, says the European Union should stop using British English for its documents and communications, and replace it with the bastardised English which is actually spoken in Brussels and on the continent more generally.

Politico offers this example of how Euro-English might sound, as spoken by someone at the European Commission: “Hello, I am coming from the EU. Since 3 years I have competences for language policy and today I will eventually assist at a trilogue on comitology.”

Although the EU likes to maintain the pretence of linguistic equality, English is in practice the lingua franca of its bureaucrats, the language in which most laws are drafted, and increasingly the default language of translation for foreign missions. It is also the most common second language across the continent. But according to Modiano, this isn’t the same English used by native speakers, and it’s silly that the EU’s style guides try to make it conform to the latter. (Spare a thought for Ireland and Malta, who under Modiano’s plans would presumably have to conduct EU business in a slightly different form of English).

It’s a wonderful provocation, but could it also be a veiled political strategy? A distinctively continental English might be a way for the EU to cultivate a stronger pan-European identity, thus increasing its authority both in absolute terms and relative to national governments. The way Modiano presents his proposal certainly makes it sound like that: “Someone is going to have to step forward and say, ‘OK, let’s break our ties with the tyranny of British English and the tyranny of American English.’ And instead say… ‘This is our language.’” (My emphasis).

The EU has forever been struggling with the question of whether it can transcend the appeal of nation states and achieve a truly European consciousness. Adopting Euro-English as an official lingua franca might be a good start. After all, a similar process of linguistic standardisation was essential to the creation of the modern nation state itself.   

As Eric Hobsbawm writes in his classic survey of the late-19th and early-20th century, The Age of Empire, the invention of national languages was a deliberate ideological project, part of the effort to forge national identities out of culturally heterogeneous regions. Hobsbawm explains:

Linguistic nationalism was the creation of people who wrote and read, not of people who spoke. And the ‘national languages’ in which they discovered the essential character of their nations were, more often than not, artefacts, since they had to be compiled, standardized, homogenized and modernized for contemporary and literary use, out of the jigsaw puzzle of local or regional dialects which constituted non-literary languages as actually spoken. 

Perhaps the most remarkable example was the Zionist movement’s promotion of Hebrew, “a language which no Jews had used for ordinary purposes since the days of the Babylonian captivity, if then.”

Where this linguistic engineering succeeded, it was thanks to the expansion of state education and the white-collar professions. A codified national language, used in schools, the civil service and public communications like street signs, was an ideal tool for governments to instil a measure of unity and loyalty in their diverse and fragmented populations. This in turn created incentives for the emerging middle class to prefer an official language to their own vernaculars, since it gave access to careers and social status. 

Could the EU not pursue a similar strategy with Euro-English? There could be a special department in Brussels tracking the way English is used by EU citizens on social media, and each year issuing an updated compendium on Euro-English. This emergent language, growing ever more distinctly European, could be mandated in schools, promoted through culture and in the media, and of course used for official EU business. Eventually the language would be different enough to be rebranded simply as “European.”

You’ll notice I’m being facetious now; obviously this would never work. Privileging one language over others would instantly galvanise the patriotism of EU member states, and give politicians a new terrain on which to defend national identity against Brussels. This is pretty much how things played out in multinational 19th-century states such as Austria-Hungary, where linguistic hierarchies inflamed the nationalism of minority cultures. One can already see something like this in the longstanding French resentment against the informal dominance of English on the continent.

Conversely, Euro-English wouldn’t work because for Europe’s middle-classes and elites, the English language is a gateway not to Europe, but to the world. English is the language of global business and of American cultural output, and so is a prerequisite for membership of any affluent cosmopolitan milieu. 

And this, I think, is the valuable insight to be gained from thought experiments like the one suggested by Modiano. Whenever we try to imagine what the path to a truly European demos might look like, we always encounter these two quite different, almost contradictory obstacles. On the one hand, the structure of the EU seems to have frozen in place the role of the nation state as the rightful locus of imagined community and symbolic attachment. At the same time, among those who identify most strongly with the European project, many are ultimately universalist in their outlook, and unlikely to warm to anything that implies a distinctively European identity. 

Gambling on technocrats

The likely appointment of Mario Draghi as Italy’s prime minister has been widely, if nervously, greeted as a necessary step. Draghi, an esteemed economist and central banker, will be the fourth unelected technocrat to fill the post in Italy in the last 30 years. As the Guardian concedes by way of welcoming Draghi’s appointment, a ready embrace of unelected leaders is “not a good look for any self-respecting democracy.” 

Italy’s resort to temporary “technical governments” reflects the fact that its fractious political system, with its multitude of parties and short-lived coalitions, is vulnerable to paralysis at moments of crisis. Such has been the price for a constitution designed to prevent the rise of another Mussolini. Ironically though, the convention of installing technocrats recalls the constitutional role of Dictator in the ancient Roman Republic: a trusted leader who, by consensus among the political class, takes charge for a limited term during emergencies.

During the 1990s, it was the crisis of the European Exchange Rate Mechanism, the vast Mani pulite corruption scandal, and Silvio Berlusconi’s first chaotic administration which formed the backdrop for the technocratic governments of Carlo Ciampi and Lamberto Dini. Now in the midst of a pandemic and a gathering economic storm, the immediate pretext comes from the collapse of a government led by Giuseppe Conte of the Five Star Movement, amid machinations by Conte’s rivals and accusations of EU emergency funds being deployed for political patronage.

Yet despite its distinctively Italian flavour, this tradition of the technocratic dictator has a much wider European resonance. It reflects the economic and political strains of European integration. And ultimately, the Italian case merely offers a pronounced example of the precarious interplay between depoliticised technocratic governance and democracy which haunts the European Union at large.

The agendas of the Ciampi and Dini cabinets included politically sensitive reforms to state benefits and the public sector, with the purpose of rendering Italy fit for a European economy where Germany called the tune. This pattern was repeated much more emphatically when the next technocratic prime minister, the economist Mario Monti, served from 2011 to 2013. Monti’s mission on behalf of Berlin and Brussels was to temper Italy’s sovereign debt crisis by overseeing harsh austerity measures.

The legacy of that strategy was the rise of Italian populism in the form of the Five Star Movement and, on the right, Matteo Salvini’s Lega Nord. Which brings us to another crucial piece of background for Draghi’s appointment this week. With Italian Euroscepticism making further advances during the disastrous first phase of the pandemic, it seems likely that, were an election called now, a rightwing coalition led by Salvini would take power.

For Italy’s financial and administrative class, that prospect is especially scary given how much the country’s stability now depends on support from the EU. The hope is that Draghi will calm the nerves of Italy’s northern creditors, Germany especially, and pave the way for a much-needed second instalment of the coronavirus relief fund. But while all the talk now is of spending and investment, Italy has a public debt worth 160% of GDP and rising, which is only sustainable thanks to the European Central Bank (ECB) continuing to buy its government bonds. It is surely a matter of time before further “structural reforms” are demanded of Italy.

In other words, when the political parties aren’t up to it, technical governments do the dirty work of squeezing Italy into the ever-tightening corset of the EU’s economic model. So this is not simply a pathology of Italian politics, but nor can it be characterised as an imposition. Figures like Monti and Draghi have long been invested in this arrangement: they cut their teeth during the 1990s hammering Italian finances into shape for entry to the Euro, and subsequently held important posts in EU institutions.

Indeed, the basic logic at work here, whereby tasks considered too difficult for democratic politics are handed over to the realm of technocratic expertise, has become a deeply European one. We see it most clearly in the EU’s increasing reliance on the monetary instruments of the ECB as the only acceptable tool with which to respond to economic crises. This goes back to the original political failure of not achieving fiscal integration in the Eurozone, which would have allowed wealth transfers to ailing economies no longer able to negotiate debt reductions or devalue their currencies. But during the Eurozone crisis and its aftermath, politicians avoided confronting their electorates with the need to provide funds for the stricken Club Med states. Instead, they relied on the ECB to keep national governments solvent through sovereign bond purchases.

And lest we forget, it was these same bond purchases that made the name of Italy’s incoming prime minister, Mario Draghi. In 2012, when Draghi was ECB president, he appeared to almost magically calm the debt markets by announcing he would do “whatever it takes” to keep the Eurozone afloat. This statement, revealing that Draghi had been empowered to step outside the bounds of rule and precedent, is again suggestive of a kind of constitutionally-mandated technocratic dictator, but at a Europe-wide level. 

Of course to focus on monetary policy is also to highlight that these tensions between technocracy and democracy go far beyond the EU. It is certainly not just in Europe that central bankers have accrued vast power through their ability to provide back-door stimulus and keep huge debt burdens sustainable. The growing importance of central banks points back to an earlier moment of depoliticisation at the dawn of neoliberalism in the early 1980s, when control of interest rates was removed from the realm of democratic politics. More fundamentally, it points to the limitations imposed on democracy by the power of financial markets. 

Still, it is no accident that this tension has appeared in such acute form in the EU. As with Italy’s ready supply of emergency prime ministers, the EU’s dense canopy of technocratic institutions provides an irresistible way for politicians to pass the buck on issues they would otherwise have to subject to democratic conflict. This is all well and good if the technocrats succeed, but as we have seen recently with the EU’s vaccine program, it also raises the stakes of failure. Handing difficult and sensitive matters over to unaccountable administrators means that blame and resentment will be directed against the system as a whole.

Why accusations of vaccine nationalism miss the mark

This article was first published by The Critic magazine on 2nd February 2021.

In the wake of Friday’s decision by the European Union to introduce controls on vaccine exports, there has once again been much alarm about “vaccine nationalism.” This term is meant to pour scorn on governments that prioritise their own citizens’ access to vaccines over that of other countries. It points to the danger that richer parts of the world will squabble for first dibs on limited vaccine supplies – “fighting over the cake,” as a World Health Organisation official aptly described it – while leaving poorer countries trailing far behind in their vaccination efforts.

Certainly, there’s a real danger that the EU’s export controls will end up hampering overall vaccine production by sparking a trade war over raw materials. This is somewhat ironic, given that few have been as outspoken about countries “unduly restricting access to vaccines” as the EU itself. As for global inequalities in vaccine access, make no mistake – they are shaping up to be very ugly indeed. It looks likely that poorer countries, having already faced an economic, social, and public health catastrophe, will struggle to vaccinate their most vulnerable citizens even as richer states give jabs to the majority of their populations.

Wealthy nations undoubtedly have a moral obligation to minimize the impact of these disparities. Nonetheless, wielding vaccine nationalism as a pejorative term is an unhelpful way to diagnose or even to address this problem. Given how the world is structured politically, the best way to ensure that vaccines reach poorer countries is for richer ones to vaccinate a critical mass of their own citizens as quickly as possible.

To condemn vaccine nationalism is to imply that, in the early summer of 2020 when governments began bidding for Advance Purchase Agreements with pharmaceutical companies, a more cooperative global approach would have been feasible. In reality, the political, bureaucratic and logistical structures to meet such a challenge did not exist. Some are still pointing to Covax, the consortium of institutions trying to facilitate global vaccine equality, as a path not taken. But Covax’s proposed strategy was neither realistic nor effective.

The bottom line here is that for governments around the world, whether democratic or not, legitimacy and political stability depend on protecting the welfare of their citizens – a basic principle that even critics of vaccine nationalism struggle to deny. Only slightly less important are the social unrest and geopolitical setbacks that states anticipate if they fall behind in the race to get economies back up and running.

In light of these pressures, Covax never stood a chance. Its task of forging agreement between an array of national, international and commercial players was bound to be difficult, and no state which had the industrial capacity or market access to secure its own vaccines could have afforded to wait and see if it would work. To meet Covax’s aim of vaccinating 20 per cent of the population in every country at the same speed, nations with the infrastructure to deliver vaccines would have had to wait for those that lacked it. They would have surrendered responsibility for the sensitive task of selecting and securing the best vaccines from among the multitude of candidates. (As late as November last year Covax had just nine vaccines in its putative global portfolio; it did not reach a deal with the first successful candidate, Pfizer-BioNTech, until mid-January).

But even if a more equitable approach to global vaccine distribution had been plausible, it wouldn’t necessarily have been more desirable. Watching some states surge ahead in the vaccine race is unsettling, but at least countries with the capacity to roll out vaccines are using it, and just as important, we are getting crucial information about how to organise vaccination campaigns from a range of different models. The peculiarity of the vaccine challenge means that, in the long run, having a few nations serve as laboratories will probably prove more useful to everyone than a more monolithic approach that prioritises equality above all.

The EU’s experience is instructive here. Given its fraught internal politics, it really had no choice but to adopt a collective approach for its 27 member states. To do otherwise would have left less fortunate member states open to offers from Russia and China. Still, the many obstacles and delays it has faced – ultimately driving it to impose its export controls – are illustrative of the costs imposed by coordination. Nor should we overlook the fact that its newfound urgency has come from the example of more successful strategies in Israel, the United States and United Kingdom.

Obviously, richer states should be helping Covax build up its financial and logistical resources as well as ensuring their own populations are vaccinated. Many are doing so already. What is still lacking are the vaccines themselves. Since wealthy states acting alone have been able to order in advance from multiple sources, they have gained access to an estimated 800 million surplus vaccine doses, or more than two billion when options are taken into account.

There’s no denying that if such hoarding continues in the medium-term, it will constitute an enormous moral failing. But rather than condemning governments for having favoured their own citizens in this way, we should focus on how that surplus can reach poorer parts of the world as quickly as possible.

This means, first, scaling up manufacturing to ease the supply bottlenecks which are making governments unsure of their vaccine supply. Most importantly though, it means concentrating on how nations that do have access to vaccines can most efficiently get them into people’s arms. The sooner they can see an end to the pandemic in sight, the sooner they can begin seriously diverting vaccines elsewhere. Obviously this will also require resolving the disputes sparked by the EU’s export controls, if necessary by other nations donating vaccines to the EU.

But we also need to have an urgent discussion about when exactly nations should stop prioritising their citizens. Governments should be pressured to state under what conditions they will deem their vaccine supply sufficient to focus on global redistribution. Personally, not being in a high-risk category, I would like to see a vaccine reach vulnerable people in other countries before it reaches me. Admittedly the parameters of this decision are not yet fully in view, with new strains emerging and the nature of herd immunity still unclear. But it would be a more productive problem to focus our attention on than the issue of vaccine nationalism as such.

What’s really at stake in the fascism debate

This essay was originally published by Arc magazine on January 27th 2021.

Many themes of the Trump presidency reached a crescendo on January 6th, when the now-former president’s supporters rampaged through the Capitol building. Among those themes is the controversy over whether we should label the Trump movement “fascist.”

This argument has flared up at various points since Trump began his run for the Republican nomination in 2015. After the Capitol attack, commentators who warned of a fascist turn in American politics have been rushed back into interview slots and op-ed columns. Doesn’t this attempt by a violent, propaganda-driven mob to overturn last November’s presidential election vindicate their claims?

If Trumpism continues after Trump, then so will this debate. But whether the fascist label is descriptively accurate has always struck me as the least rewarding part. Different people mean different things by the word, and have different aims in using it. Here’s a more interesting question: What is at stake if we choose to identify contemporary politics as fascist?

Many on the activist left branded Trump’s project fascist from the outset. This is not just because they are LARPers trying to re-enact the original anti-fascist struggles of the 1920s and 30s — even if Antifa, the most publicized radicals on the left, derive their name and flag from the communist Antifaschistische Aktion movement of early 1930s Germany. More concretely, the left’s readiness to invoke fascism reflects a longstanding, originally Marxist convention of using “fascist” to describe authoritarian and racist tendencies deemed inherent to capitalism.

From this perspective, the global shift in politics often labeled “populist” — including not just Trump, but also Brexit, the illiberal regimes of Eastern Europe, Narendra Modi’s India, and Jair Bolsonaro’s Brazil — is another upsurge of the structural forces that gave rise to fascism in the interwar period, and therefore deserves the same name.

In mainstream liberal discourse, by contrast, the debates about Trumpism and fascism have a strangely indecisive, unending quality. Journalists and social media pundits often defer to experts, so arguments devolve into bickering about who really counts as an expert and what they’ve actually said. After the Capitol attack, much of the discussion pivoted on brief comments by historians Robert Paxton and Ruth Ben-Ghiat. Paxton claimed in private correspondence that the Capitol attack “crosses the red line” beyond which the “F word” is appropriate, while on Twitter Ben-Ghiat drew a parallel with Mussolini’s 1922 March on Rome.

Meanwhile, even experts who have consistently equated Trumpism and fascism continue adding caveats and qualifications. Historian Timothy Snyder, who sounded the alarm in 2017 with his book On Tyranny, recently described Trump’s politics as “pre-fascist” and his lies about election fraud as “structurally fascist,” leaving for the future the possibility that Trump’s Republican enablers could “become the fascist faction.” Philosopher Jason Stanley, who makes a version of the left’s fascism-as-persistent-feature argument, does not claim that the label is definitive so much as a necessary framing, highlighting important aspects of Trump’s politics.

The hesitancy of the fascism debate reflects the difficulty of assigning a banner to movements that don’t claim it. A broad theory of fascism unavoidably relies on the few major examples of avowedly fascist regimes – especially interwar Italy and Germany – even if, as Stanley has detailed in his book How Fascism Works, such regimes drew inspiration from the United States, and inspired Hindu nationalists in India. This creates an awkward relationship between fascism as empirical phenomenon and fascism as theoretical construct, and means there will always be historians stepping in, as Richard Evans recently did, to point out all the ways that the fascism of the 1920s and 30s was fundamentally different from the 21st-century movements compared to it.

But there’s another reason the term “fascism” remains shrouded in perpetual controversy, one so obvious it’s rarely explored: The concept has maintained an aura of seriousness, of genuine evil, such that acknowledging its existence seems to represent a moral and political crisis. The role of fascism in mainstream discourse is like the hammer that sits in the box marked “in case of emergency break glass” — we might point to it and talk about breaking the glass one day, but actually doing so would signify a kind of rupture in the fabric of politics, opening up a world where extreme measures would surely be justified.

We see this in the impulse to ask “do we really want to call everyone who voted for Trump a fascist?”, “aren’t we being alarmist?”, and “if we use that word now, what will we use when things get much worse?” Stanley has acknowledged this trepidation, suggesting it shows we’ve become accustomed to things that should be considered a crisis. I would argue otherwise. It reflects the crucial place of fascism in the grand narrative of liberal democracy, especially after the Cold War — a narrative that relies on the idea of fascism as a historical singularity.

This first occurred to me when I visited Holocaust memorials in Berlin, and realized, to my surprise, that they had all been erected quite recently. The first were the Jewish Museum and the Memorial to the Murdered Jews of Europe, both disturbingly beautiful, evocative structures, conceived during the 1990s, after the collapse of communist East Germany, and opened between 2000 and 2005. Over the next decade, these were followed by smaller memorials to various other groups the Nazis persecuted: homosexuals, the Sinti and Roma, the disabled.

There were obvious reasons for these monuments to appear at this time and place. Post-reunification, Germany was reflecting on its national identity, and Berlin had been the capital of the Third Reich. But they still strike me as an excellent representation of liberal democracies’ need to identify memories and values that bind them together, especially once they could no longer define themselves against the USSR.

Vanquishing fascist power in the Second World War was and remains a foundational moment. Even as they recede into a distant, mythic past, the horrors overcome at that moment still grip the popular imagination. We saw this during the Brexit debate, when the most emotionally appealing argument for European integration referred back to its original, post-WWII purpose: constraining nationalism. And as the proliferation of memorials in Berlin suggests, fascism can retroactively be defined as the ultimate antithesis to what has, from the 1960s onwards, become liberalism’s main moral purpose: protection and empowerment of traditionally marginalized groups in society.

The United States plays a huge part in maintaining this narrative throughout the West and the English-speaking world, producing an endless stream of books, movies, and documentaries about the Second World War. The American public’s appetite for it seems boundless. That war is infused with a sense of heroism and tragedy unlike any other. But all of this stems from the unique certainty regarding the evil nature of 20th century European fascism.

This is why those who want to identify fascism in the present will always encounter skepticism and reluctance. Fascism is a moral singularity, a point of convergence in otherwise divided societies, because it is a historical singularity, the fixed source from which our history flows. To remove fascism from this foundational position – and worse, to implicate us in tolerating it – is morally disorientating. It raises the suspicion that, while claiming to separate fascism from the European historical example, those who invoke the term are actually trading off the emotional impact of that very example.

I don’t think commentators like Snyder and Stanley have such cynical intentions, and nor do I believe it’s a writer’s job to respect the version of history held dear by the public. Nonetheless, those who try to be both theorists and passionate opponents of fascism must recognize that they are walking a tightrope.

By making fascism a broader, more abstract signifier, and thereby bringing the term into the grey areas of semantic and historiographical bickering, they risk diminishing the aura of singular evil that surrounds fascism in the popular consciousness. But this is an aura which, surely, opponents of fascism should want to maintain.

After the Capitol, the battle for the dream machine

Sovereign is he who decides on the exception. In a statement on Wednesday afternoon, Facebook’s VP of integrity Guy Rosen declared: “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video.” This came as Trump’s supporters, like a horde of pantomime barbarians, were carrying out their surreal sacking of the Capitol in Washington, and the US president attempted to publish a video which, in Rosen’s words, “contributes to rather than diminishes the risk of ongoing violence.” In the video, Trump had told the mob to go home, but continued to insist that the election of November 2020 had been fraudulent.

The following day Mark Zuckerberg announced that the sitting president would be barred from Facebook and Instagram indefinitely, and at least “until the peaceful transition of power is complete.” Zuckerberg reflected that “we have allowed President Trump to use our platform consistent with our own rules,” so as to give the public “the broadest possible access to political speech,” but that “the current context is now fundamentally different.”

Yesterday Trump’s main communication platform, Twitter, went a step further and suspended the US president permanently (it had initially suspended Trump’s account for 12 hours during the Capitol riot). Giving its rationale for the decision, Twitter also insisted its policy was to “enable the public to hear from elected officials” on the basis that “the people have a right to hold power to account in the open.” It stated, however, that “In the context of horrific events this week,” it had decided “recent Tweets from the @realDonaldTrump account and the context around them – specifically how they are being received and interpreted” (my emphasis) amounted to a violation of its rules against incitement to violence.

These emergency measures by the big tech companies were the most significant development in the United States this week, not the attack on the Capitol itself. In the language used to justify them, we hear the unmistakable echoes of a constitutional sovereign claiming its authority to decide how the rules should be applied – for between the rules and their application there is always judgment and discretion – and more importantly, to decide that a crisis demands an exceptional interpretation of the rules. With that assertion of authority, Silicon Valley has reminded us – even if it would have preferred not to – where ultimate power lies in a new era of American politics. It does not lie in the ability to raise a movement of brainwashed followers, but in the ability to decide who is allowed the means to do so.

The absurd assault on the Capitol was an event perfectly calibrated to demonstrate this configuration of power. First, the seriousness of the event – a violent attack against an elected government, however spontaneous – forced the social media companies to reveal their authority by taking decisive action. In doing so, of course, they also showed the limits of their authority (no sovereignty is absolute, after all). The tech giants are eager to avoid being implicated in a situation that would justify greater regulation, or perhaps even dismemberment by a Democrat government. Hence their increasing willingness over the last six months, as a Democratic victory in the November elections loomed, to actively regulate the circulation of pro-Trump propaganda with misinformation warnings, content restrictions and occasional bans on outlets such as the New York Post, following its Hunter Biden splash on the eve of the election.

It should be remembered that the motivations of companies like Facebook and Twitter are primarily commercial rather than political. They must keep their monopolistic hold on the public sphere intact to safeguard their data harvesting and advertising mechanisms. This means they need to show lawmakers that they will wield authority over their digital fiefdoms in an appropriate fashion.

Trump’s removal from these platforms was therefore overdetermined, especially after Wednesday’s debacle in Washington. Yes, the tech companies want to signal their political allegiance to the Democrats, but they also need to show that their virtual domains will not destabilize the United States to the extent that it is no longer an inviting place to do business – for that too would end in greater regulation. They were surely looking for an excuse to get rid of Trump, but from their perspective, the Capitol invasion merited action by itself. It was never going to lead to the overturning of November’s election, still less the toppling of the regime; but it could hardly fail to impress America’s allies, not to mention the global financial elite, as an obvious watershed in the disintegration of the country’s political system.

But it was also the unseriousness of Wednesday’s events that revealed why control of the media apparatus is so important. A popular take on the Capitol invasion itself – and, given the many surreal images of the buffoonish rioters, a persuasive one – is that it was the ultimate demonstration of the United States’ descent into a politics of fantasy; what the theorist Bruno Maçães calls “Dreampolitik.” Submerged in the alternative realities of partisan media and infused with the spirit of Hollywood, Americans have come to treat political action as a kind of role-play, a stage where the iconic motifs of history are unwittingly reenacted as parody. Who could be surprised that an era when a significant part of America has convinced itself that it is fighting fascism, and another that it is ruled by a conspiracy of pedophiles, has ended with men in horned helmets, bird-watching camouflage and MAGA merchandise storming the seat of government with chants of “U-S-A”?

At the very least, it is clear that Trump’s success as an insurgent owes a great deal to his embrace of followers whose view of politics is heavily colored by conspiracy theories, if not downright deranged. The Capitol attack was the most remarkable evidence to date of how such fantasy politics can be leveraged for projects with profound “real world” implications. It was led, after all, by members of the QAnon conspiracy theory movement, and motivated by elaborate myths of a stolen election. Barack Obama was quite right to call it the product of a “fantasy narrative [which] has spiraled further and further from reality… [building] upon years of sown resentments.”

But while there is justifiably much fascination with this new form of political power, it must be remembered that such fantasy narratives are a superstructure. They can only operate through the available technological channels – that is, through the media, all of which is today centred around the major social media platforms. The triumph of Dreampolitik at the Capitol therefore only emphasises the significance of Facebook and Twitter’s decisive action against Trump. For whatever power is made available through the postmodern tools of partisan narrative and alternative reality, an even greater power necessarily belongs to those who can grant or deny access to these tools.

And this week’s events are, of course, just the beginning. The motley insurrection of the Trumpists will serve as a justification, if one were needed, for an increasingly strict regime of surveillance and censorship by major social media platforms, answering to their investors and to the political class in Washington. Already the incoming president Joe Biden has stated his intention to introduce new legislation against “domestic terrorism,” which will no doubt involve the tech giants maintaining their commercial dominance in return for carrying out the required surveillance and reporting of those deemed subversive. Meanwhile, Google and Apple yesterday issued an ultimatum to the platform Parler, which offers the same basic model as Twitter but with laxer content rules, threatening to banish it from their app stores if it did not police conversation more strictly.

But however disturbing the implications of this crackdown, we should welcome the clarity we got this week. For too long, the tech giants have been able to pose as neutral arbiters of discussion, cloaking their authority in corporate euphemisms about public interest. Consequently, they have been able to set the terms of communication over much of the world according to their own interests and political calculations. Whether or not they were right to banish Trump, the key fact is that it was they who had the authority to do so, for their own reasons. The increasing regulation of social media – which was always inevitable, in one form or another, given its incendiary potential – will now proceed according to the same logic. Hopefully the dramatic nature of their decisions this week will make us question whether this is really a tolerable situation.

Poland and Hungary are exposing the EU’s flaws

The European Union veered into another crisis on Monday, as the governments of Hungary and Poland announced they would veto the bloc’s next seven-year budget. This comes after the European Parliament and Council tried to introduce “rule of law” measures for punishing member states that breach democratic standards — measures that Budapest and Warsaw, the obvious target of such sanctions, have declared unacceptable.

As I wrote last week, it is unlikely that the disciplinary mechanism would actually have posed a major threat to either the Fidesz regime in Hungary or the Law and Justice one in Poland. These stubborn antagonists of European liberalism have long threatened to block the entire budget if it came with meaningful conditions attached. That they have used their veto anyway suggests the Hungarian and Polish governments — or at least the hardline factions within them — feel they can extract further concessions.

There’s likely to be a tense video conference on Thursday as EU leaders attempt to salvage the budget. It’s tempting to assume a compromise will be found that allows everyone to save face (that is the European way), but the ongoing impasse has angered both sides. At least one commentator has stated that further concessions to Hungary and Poland would amount to “appeasement of dictators.”

In fact, compromises with illiberal forces are far from unprecedented in the history of modern democracy. The very constitutional constraints that limit the power of the EU’s federal institutions are what allow actors like Orbán to misbehave – something the Hungarian Prime Minister has exploited to great effect.

And yet, it doesn’t help that the constitutional procedures in question — the treaties of the European Union — were so poorly designed in the first place. Allowing single states an effective veto over key policy areas is a recipe for dysfunction, as the EU already found out in September when Cyprus blocked sanctions against Belarus.

More to the point, the current deadlock with Hungary and Poland has come about because the existing Article 7 mechanism for disciplining member states is virtually unenforceable (both nations have been subject to Article 7 probes for several years, to no effect).

But this practical shortcoming also points to an ideological one. As European politicians have admitted, the failure to design a workable disciplinary mechanism shows the project’s architects did not take seriously the possibility that, once countries had made the democratic reforms necessary to gain access to the EU, they might, at a later date, move back in the opposite direction. Theirs was a naïve faith in the onwards march of liberal democracy.

In this sense, the crisis now surrounding the EU budget is another product of that ill-fated optimism which gripped western elites around the turn of the 21st century. Like the governing class in the United States who felt sure China would reform itself once invited into the comity of nations, the founders of the European Union had too rosy a view of liberalism’s future — and their successors are paying the price.

Europe’s deplorables have outwitted Brussels

This essay was originally published by Unherd on November 10th 2020.

Throughout the autumn, the European Union has been engaged in a standoff with its two most antagonistic members, Hungary and Poland. At stake was whether the EU would finally take meaningful action against these pioneers of “illiberal democracy”, to use the infamous phrase of Hungarian Prime Minister Viktor Orbán. As of last week — and despite appearances to the contrary — it seems the Hungarian and Polish regimes have postponed the reckoning once more.

Last week, representatives of the European Parliament triumphantly announced a new disciplinary mechanism which, they claimed, would enable Brussels to withhold funds from states that violate liberal democratic standards. According to MEP Petri Sarvamaa, it meant the end of “a painful phase [in] the recent history of the European Union”, in which “the basic values of democracy” had been “threatened and undermined”.

No names were named, of course, but they did not need to be. Tensions between the EU and the recalcitrant regimes on its eastern periphery, Hungary under Orbán’s Fidesz and Poland under the Law and Justice Party, have been mounting for years. Those governments’ erosion of judicial independence and media freedom, as well as concerns over corruption, education, and minority rights, have resulted in a series of formal investigations and legal actions. And that is not to mention the constant rhetorical fusillades between EU officials and Budapest and Warsaw.

The new disciplinary mechanism is being presented as the means to finally bring Hungary and Poland to heel, but it is no such thing. Though not exactly toothless, it is unlikely to pose a serious threat to the illiberal pretenders in the east. Breaches of “rule of law” standards will only be sanctioned if they affect EU funds — so the measures are effectively limited to budget oversight. Moreover, enforcing the sanctions will require a weighted majority of member states in the European Council, giving Hungary or Poland ample room to assemble a blocking coalition.

In fact, what we have here is another sticking plaster so characteristic of the complex and unwieldy structures of European supranational democracy. The political dynamics of this system, heavily reliant on horse-trading and compromise, have allowed Hungary and Poland to outmanoeuvre their opponents.

The real purpose of the disciplinary measures is to ensure the timely passage of the next EU budget, and in particular, a €750 billion coronavirus relief fund. That package will, for the first time, see member states issuing collective debt backed by their taxpayers, and therefore has totemic significance for the future of the Union. It is a real indication that fiscal integration might be possible in the EU — a step long regarded as crucial to the survival of Europe’s federal ambitions, and one that shows its ability to respond effectively to a major crisis.

But this achievement has almost been derailed by a showdown with Hungary and Poland. Liberal northern states such as Finland, Sweden and the Netherlands, together with the European Parliament, insisted that financial support should be conditional on upholding EU values and transparency standards. But since the relief fund requires unanimous approval, Hungary or Poland can simply veto the whole initiative, which is exactly what they have been threatening to do.

In other words, the EU landed itself with a choice between upholding its liberal commitments and securing its future as a viable political and economic project. The relatively weak disciplinary mechanism shows that European leaders are opting for the latter, as they inevitably would. It is a compromise that allows the defenders of democratic values to save face, while essentially letting Hungary and Poland off the hook. (Of course this doesn’t rule out the possibility that the Hungarian and Polish governments will continue making a fuss anyway.)

Liberals who place their hopes in the European project may despair at this, but these dilemmas are part and parcel of binding different regions and cultures in a democratic system. Such undertakings need strict constitutional procedures to hold them together, but those same procedures create opportunities to game the system, especially as demands in one area can be tied to cooperation in another.

As he announced the new rule of law agreement, Sarvamaa pointed to Donald Trump’s threat to win the presidential election via the Supreme Court as evidence of the need to uphold democratic standards. In truth, what is happening in Europe bears a closer resemblance to America in the 1930s, when F.D. Roosevelt was forced to make concessions to the Southern states to deliver his New Deal agenda.

That too was a high-stakes attempt at federal consolidation and economic repair, with the Great Depression at its height and democracy floundering around the world. As the political historian Ira Katznelson has noted, Roosevelt only succeeded by making “necessary but often costly illiberal alliances” — in particular, alliances with Southern Democratic legislators who held an effective veto in Congress. The result was that New Deal programs either avoided or actively upheld white supremacy in the Jim Crow South. (Key welfare programs, for instance, were designed to exclude some two-thirds of African American employees in the Southern states.)

According to civil rights campaigner Walter White, Roosevelt himself explained his silence on a 1934 bill to combat the lynching of African Americans as follows: “I’ve got to get legislation passed by Congress to save America… If I come out for the anti-lynching bill, they [the Southern Democrats] will block every bill I ask Congress to pass to keep America from collapsing. I just can’t take that risk.”

This is not to suggest any moral equivalence between Europe’s “illiberal democracies” and the Deep South of the 1930s. But the Hungarian and Polish governments do resemble the experienced Southern politicians of the New Deal era in their ability to manoeuvre within a federal framework, achieving an autonomy that belies their economic dependency. They have learned to play by the letter of the rules as well as to subvert them.

Orbán, for instance, has frequently insisted that his critics make a formal legal case against him, whereupon he has managed to reduce sanctions to mere technicalities. He has skilfully leveraged the arithmetic of the European Parliament to keep Fidesz within the orbit of the mainstream European People’s Party group. In September, the Hungarian and Polish governments even announced plans to establish their own institute of comparative legal studies, aiming to expose the EU’s “double standards.”

And now, with their votes required to pass the crucial relief fund, the regimes in Budapest and Warsaw are taking advantage of exceptionally high stakes much as their Southern analogues in the 1930s did. They have, in recent months, become increasingly defiant in their rejection of European liberalism. In September, Orbán published a searing essay in which he hailed a growing “rebellion against liberal intellectual oppression” in the western world. The recent anti-abortion ruling by the Polish high court is likewise a sign of that state’s determination to uphold Catholic values and a robust national identity.

Looking forward, however, it seems clear this situation cannot continue forever. Much has been made of Joe Biden’s hostility to the Hungarian and Polish regimes, and with his election victory, we may see the US attaching its own conditions to investment in Eastern Europe. But Biden cannot question the EU’s standards too much, since he has made the latter out to be America’s key liberal partner. The real issue is that if richer EU states are really going to accept the financial burdens of further integration, they will not tolerate deviant nations wielding outsized influence on key policy areas.

Of course, curbing that influence would require an overhaul of the voting system, which means treaty change. This raises a potential irony: could the intransigence of Hungary and Poland ultimately spur on Europe’s next big constitutional step — one that will see their leverage taken away? Maybe. For the time being, the EU is unlikely to rein in the illiberal experiments within its borders.

Biden versus Beijing