Why accusations of vaccine nationalism miss the mark

This article was first published by The Critic magazine on 2nd February 2021.

In the wake of Friday’s decision by the European Union to introduce controls on vaccine exports, there has once again been much alarm about “vaccine nationalism.” This term is meant to pour scorn on governments that prioritise their own citizens’ access to vaccines over that of other countries. It points to the danger that richer parts of the world will squabble for first dibs on limited vaccine supplies – “fighting over the cake,” as a World Health Organisation official aptly described it – while leaving poorer countries trailing far behind in their vaccination efforts.

Certainly, there’s a real danger that the EU’s export controls will end up hampering overall vaccine production by sparking a trade war over raw materials. This is somewhat ironic, given that few have been as outspoken about countries “unduly restricting access to vaccines” as the EU itself. As for global inequalities in vaccine access, make no mistake – they are shaping up to be very ugly indeed. It looks likely that poorer countries, having already faced an economic, social, and public health catastrophe, will struggle to vaccinate their most vulnerable citizens even as richer states give jabs to the majority of their populations.

Wealthy nations undoubtedly have a moral obligation to minimize the impact of these disparities. Nonetheless, wielding vaccine nationalism as a pejorative term is an unhelpful way to diagnose or even to address this problem. Given how the world is structured politically, the best way to ensure that vaccines reach poorer countries is for richer ones to vaccinate a critical mass of their own citizens as quickly as possible.

To condemn vaccine nationalism is to imply that, in the early summer of 2020 when governments began bidding for Advance Purchase Agreements with pharmaceutical companies, a more cooperative global approach would have been feasible. In reality, the political, bureaucratic and logistical structures to meet such a challenge did not exist. Some are still pointing to Covax, the consortium of institutions trying to facilitate global vaccine equality, as a path not taken. But Covax’s proposed strategy was neither realistic nor effective.

The bottom line here is that for governments around the world, whether democratic or not, legitimacy and political stability depend on protecting the welfare of their citizens – a basic principle that even critics of vaccine nationalism struggle to deny. Only slightly less important are the social unrest and geopolitical setbacks that states anticipate if they fall behind in the race to get economies back up and running.

In light of these pressures, Covax never stood a chance. Its task of forging agreement between an array of national, international and commercial players was bound to be difficult, and no state which had the industrial capacity or market access to secure its own vaccines could have afforded to wait and see if it would work. To meet Covax’s aim of vaccinating 20 per cent of the population in every country at the same speed, nations with the infrastructure to deliver vaccines would have had to wait for those that lacked it. They would have surrendered responsibility for the sensitive task of selecting and securing the best vaccines from among the multitude of candidates. (As late as November last year Covax had just nine vaccines in its putative global portfolio; it did not reach a deal with the first successful candidate, Pfizer-BioNTech, until mid-January).

But even if a more equitable approach to global vaccine distribution had been plausible, it wouldn’t necessarily have been more desirable. Seeing some states pull ahead in the vaccine race is unsettling, but at least countries with the capacity to roll out vaccines are using it, and just as important, we are getting crucial information about how to organise vaccination campaigns from a range of different models. The peculiarity of the vaccine challenge means that, in the long run, having a few nations serve as laboratories will probably prove more useful to everyone than a more monolithic approach that prioritises equality above all.

The EU’s experience is instructive here. Given its fraught internal politics, it really had no choice but to adopt a collective approach for its 27 member states. To do otherwise would have left less fortunate member states open to offers from Russia and China. Still, the many obstacles and delays it has faced – ultimately driving it to impose its export controls – are illustrative of the costs imposed by coordination. Nor should we overlook the fact that its newfound urgency has come from the example of more successful strategies in Israel, the United States and United Kingdom.

Obviously, richer states should be helping Covax build up its financial and logistical resources as well as ensuring their own populations are vaccinated. Many are doing so already. What is still lacking are the vaccines themselves. Since wealthy states acting alone have been able to order in advance from multiple sources, they have gained access to an estimated 800 million surplus vaccine doses, or more than two billion when options are taken into account.

There’s no denying that if such hoarding continues in the medium-term, it will constitute an enormous moral failing. But rather than condemning governments for having favoured their own citizens in this way, we should focus on how that surplus can reach poorer parts of the world as quickly as possible.

This means, first, scaling up manufacturing to ease the supply bottlenecks which are making governments unsure of their vaccine supply. Most importantly though, it means concentrating on how nations that do have access to vaccines can most efficiently get them into people’s arms. The sooner an end to the pandemic is in sight, the sooner they can begin seriously diverting vaccines elsewhere. Obviously this will also require resolving the disputes sparked by the EU’s export controls, if necessary by other nations donating vaccines to the EU.

But we also need to have an urgent discussion about when exactly nations should stop prioritising their citizens. Governments should be pressured to state under what conditions they will deem their vaccine supply sufficient to focus on global redistribution. Personally, not being in a high-risk category, I would like to see a vaccine reach vulnerable people in other countries before it reaches me. Admittedly the parameters of this decision are not yet fully in view, with new strains emerging and the nature of herd immunity still unclear. But it would be a more productive problem to focus our attention on than the issue of vaccine nationalism as such.

What’s really at stake in the fascism debate

This essay was originally published by Arc magazine on January 27th 2021.

Many themes of the Trump presidency reached a crescendo on January 6th, when the now-former president’s supporters rampaged through the Capitol building. Among those themes is the controversy over whether we should label the Trump movement “fascist.”

This argument has flared up at various points since Trump entered the race for the Republican nomination in 2015. After the Capitol attack, commentators who warned of a fascist turn in American politics have been rushed back into interview slots and op-ed columns. Doesn’t this attempt by a violent, propaganda-driven mob to overturn last November’s presidential election vindicate their claims?

If Trumpism continues after Trump, then so will this debate. But whether the fascist label is descriptively accurate has always struck me as the least rewarding part. Different people mean different things by the word, and have different aims in using it. Here’s a more interesting question: What is at stake if we choose to identify contemporary politics as fascist?

Many on the activist left branded Trump’s project fascist from the outset. This is not just because they are LARPers trying to re-enact the original anti-fascist struggles of the 1920s and 30s — even if Antifa, the most publicized radicals on the left, derive their name and flag from the communist Antifaschistische Aktion movement of early 1930s Germany. More concretely, the left’s readiness to invoke fascism reflects a longstanding, originally Marxist convention of using “fascist” to describe authoritarian and racist tendencies deemed inherent to capitalism.

From this perspective, the global shift in politics often labeled “populist” — including not just Trump, but also Brexit, the illiberal regimes of Eastern Europe, Narendra Modi’s India, and Jair Bolsonaro’s Brazil — is another upsurge of the structural forces that gave rise to fascism in the interwar period, and therefore deserves the same name.

In mainstream liberal discourse, by contrast, the debates about Trumpism and fascism have a strangely indecisive, unending quality. Journalists and social media pundits often defer to experts, so arguments devolve into bickering about who really counts as an expert and what they’ve actually said. After the Capitol attack, much of the discussion pivoted on brief comments by historians Robert Paxton and Ruth Ben-Ghiat. Paxton claimed in private correspondence that the Capitol attack “crosses the red line” beyond which the “F word” is appropriate, while on Twitter Ben-Ghiat drew a parallel with Mussolini’s 1922 March on Rome.

Meanwhile, even experts who have consistently equated Trumpism and fascism continue adding caveats and qualifications. Historian Timothy Snyder, who sounded the alarm in 2017 with his book On Tyranny, recently described Trump’s politics as “pre-fascist” and his lies about election fraud as “structurally fascist,” leaving for the future the possibility that Trump’s Republican enablers could “become the fascist faction.” Philosopher Jason Stanley, who makes a version of the left’s fascism-as-persistent-feature argument, does not claim that the label is definitive so much as a necessary framing, highlighting important aspects of Trump’s politics.

The hesitancy of the fascism debate reflects the difficulty of assigning a banner to movements that don’t claim it. A broad theory of fascism unavoidably relies on the few major examples of avowedly fascist regimes — especially interwar Italy and Germany — even if, as Stanley has detailed in his book How Fascism Works, such regimes drew inspiration from the United States, and inspired Hindu nationalists in India. This creates an awkward relationship between fascism as empirical phenomenon and fascism as theoretical construct, and means there will always be historians stepping in, as Richard Evans recently did, to point out all the ways that 1920s-30s fascism was fundamentally different from the 21st century movements which are compared to it.

But there’s another reason the term “fascism” remains shrouded in perpetual controversy, one so obvious it’s rarely explored: The concept has maintained an aura of seriousness, of genuine evil, such that acknowledging its existence seems to represent a moral and political crisis. The role of fascism in mainstream discourse is like the hammer that sits in the box marked “in case of emergency break glass” — we might point to it and talk about breaking the glass one day, but actually doing so would signify a kind of rupture in the fabric of politics, opening up a world where extreme measures would surely be justified.

We see this in the impulse to ask “do we really want to call everyone who voted for Trump a fascist?” “Aren’t we being alarmist?” And “if we use that word now, what will we use when things get much worse?” Stanley has acknowledged this trepidation, suggesting it shows we’ve become accustomed to things that should be considered a crisis. I would argue otherwise. It reflects the crucial place of fascism in the grand narrative of liberal democracy, especially after the Cold War — a narrative that relies on the idea of fascism as a historical singularity.

This first occurred to me when I visited Holocaust memorials in Berlin, and realized, to my surprise, that they had all been erected quite recently. The first were the Jewish Museum and the Memorial to the Murdered Jews of Europe, both disturbingly beautiful, evocative structures, conceived during the 1990s, after the collapse of communist East Germany, and opened between 2000 and 2005. Over the next decade, these were followed by smaller memorials to various other groups the Nazis persecuted: homosexuals, the Sinti and Roma, the disabled.

There were obvious reasons for these monuments to appear at this time and place. Post-reunification, Germany was reflecting on its national identity, and Berlin had been the capital of the Third Reich. But they still strike me as an excellent representation of liberal democracies’ need to identify memories and values that bind them together, especially when they could no longer contrast themselves to the USSR.

Vanquishing fascist power in the Second World War was and remains a foundational moment. Even as they recede into a distant, mythic past, the horrors overcome at that moment still grip the popular imagination. We saw this during the Brexit debate, when the most emotionally appealing argument for European integration referred back to its original, post-WWII purpose: constraining nationalism. And as the proliferation of memorials in Berlin suggests, fascism can retroactively be defined as the ultimate antithesis to what has, from the 1960s onwards, become liberalism’s main moral purpose: protection and empowerment of traditionally marginalized groups in society.

The United States plays a huge part in maintaining this narrative throughout the West and the English-speaking world, producing an endless stream of books, movies, and documentaries about the Second World War. The American public’s appetite for it seems boundless. That war is infused with a sense of heroism and tragedy unlike any other. But all of this stems from the unique certainty regarding the evil nature of 20th century European fascism.

This is why those who want to identify fascism in the present will always encounter skepticism and reluctance. Fascism is a moral singularity, a point of convergence in otherwise divided societies, because it is a historical singularity, the fixed source from which our history flows. To remove fascism from this foundational position – and worse, to implicate us in tolerating it – is morally disorientating. It raises the suspicion that, while claiming to separate fascism from the European historical example, those who invoke the term are actually trading off the emotional impact of that very example.

I don’t think commentators like Snyder and Stanley have such cynical intentions, nor do I believe it’s a writer’s job to respect the version of history held dear by the public. Nonetheless, those who try to be both theorists and passionate opponents of fascism must recognize that they are walking a tightrope.

By making fascism a broader, more abstract signifier, and thereby bringing the term into the grey areas of semantic and historiographical bickering, they risk diminishing the aura of singular evil that surrounds fascism in the popular consciousness. But this is an aura which, surely, opponents of fascism should want to maintain.

After the Capitol, the battle for the dream machine

Sovereign is he who decides on the exception. In a statement on Wednesday afternoon, Facebook’s VP of integrity Guy Rosen declared: “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video.” This came as Trump’s supporters, like a horde of pantomime barbarians, were carrying out their surreal sacking of the Washington Capitol, and the US president attempted to publish a video which, in Rosen’s words, “contributes to rather than diminishes the risk of ongoing violence.” In the video, Trump had told the mob to go home, but continued to insist that the election of November 2020 had been fraudulent.

The following day Mark Zuckerberg announced that the sitting president would be barred from Facebook and Instagram indefinitely, and at least “until the peaceful transition of power is complete.” Zuckerberg reflected that “we have allowed President Trump to use our platform consistent with our own rules,” so as to give the public “the broadest possible access to political speech,” but that “the current context is now fundamentally different.”

Yesterday Trump’s main communication platform, Twitter, went a step further and suspended the US president permanently (it had initially suspended Trump’s account for 12 hours during the Capitol riot). Giving its rationale for the decision, Twitter also insisted its policy was to “enable the public to hear from elected officials” on the basis that “the people have a right to hold power to account in the open.” It stated, however, that “In the context of horrific events this week,” it had decided “recent Tweets from the @realDonaldTrump account and the context around them – specifically how they are being received and interpreted” (my emphasis) amounted to a violation of its rules against incitement to violence.

These emergency measures by the big tech companies were the most significant development in the United States this week, not the attack on the Capitol itself. In the language used to justify them, we hear the unmistakable echoes of a constitutional sovereign claiming its authority to decide how the rules should be applied – for between the rules and their application there is always judgment and discretion – and more importantly, to decide that a crisis demands an exceptional interpretation of the rules. With that assertion of authority, Silicon Valley has reminded us – even if it would have preferred not to – where ultimate power lies in a new era of American politics. It does not lie in the ability to raise a movement of brainwashed followers, but in the ability to decide who is allowed the means to do so.

The absurd assault on the Capitol was an event perfectly calibrated to demonstrate this configuration of power. First, the seriousness of the event – a violent attack against an elected government, however spontaneous – forced the social media companies to reveal their authority by taking decisive action. In doing so, of course, they also showed the limits of their authority (no sovereignty is absolute, after all). The tech giants are eager to avoid being implicated in a situation that would justify greater regulation, or perhaps even dismemberment by a Democrat government. Hence their increasing willingness over the last six months, as a Democratic victory in the November elections loomed, to actively regulate the circulation of pro-Trump propaganda with misinformation warnings, content restrictions and occasional bans on outlets such as the New York Post, following its Hunter Biden splash on the eve of the election.

It should be remembered that the motivations of companies like Facebook and Twitter are primarily commercial rather than political. They must keep their monopolistic hold on the public sphere intact to safeguard their data harvesting and advertising mechanisms. This means they need to show lawmakers that they will wield authority over their digital fiefdoms in an appropriate fashion.

Trump’s removal from these platforms was therefore overdetermined, especially after Wednesday’s debacle in Washington. Yes, the tech companies want to signal their political allegiance to the Democrats, but they also need to show that their virtual domains will not destabilize the United States to the extent that it is no longer an inviting place to do business – for that too would end in greater regulation. They were surely looking for an excuse to get rid of Trump, but from their perspective, the Capitol invasion merited action by itself. It was never going to lead to the overturning of November’s election, still less the toppling of the regime; but it could hardly fail to impress America’s allies, not to mention the global financial elite, as an obvious watershed in the disintegration of the country’s political system.

But it was also the unseriousness of Wednesday’s events that revealed why control of the media apparatus is so important. A popular take on the Capitol invasion itself – and, given the many surreal images of the buffoonish rioters, a persuasive one – is that it was the ultimate demonstration of the United States’ descent into a politics of fantasy; what the theorist Bruno Maçães calls “Dreampolitik.” Submerged in the alternative realities of partisan media and infused with the spirit of Hollywood, Americans have come to treat political action as a kind of role-play, a stage where the iconic motifs of history are unwittingly reenacted as parody. Who could be surprised that an era when a significant part of America has convinced itself that it is fighting fascism, and another that it is ruled by a conspiracy of pedophiles, has ended with men in horned helmets, bird-watching camouflage and MAGA merchandise storming the seat of government with chants of “U-S-A”?

At the very least, it is clear that Trump’s success as an insurgent owes a great deal to his embrace of followers whose view of politics is heavily colored by conspiracy theories, if not downright deranged. The Capitol attack was the most remarkable evidence to date of how such fantasy politics can be leveraged for projects with profound “real world” implications. It was led, after all, by members of the QAnon conspiracy theory movement, and motivated by elaborate myths of a stolen election. Barack Obama was quite right to call it the product of a “fantasy narrative [which] has spiraled further and further from reality… [building] upon years of sown resentments.”

But while there is justifiably much fascination with this new form of political power, it must be remembered that such fantasy narratives are a superstructure. They can only operate through the available technological channels – that is, through the media, all of which is today centred around the major social media platforms. The triumph of Dreampolitik at the Capitol therefore only emphasises the significance of Facebook and Twitter’s decisive action against Trump. For whatever power is made available through the postmodern tools of partisan narrative and alternative reality, an even greater power necessarily belongs to those who can grant or deny access to these tools.

And this week’s events are, of course, just the beginning. The motley insurrection of the Trumpists will serve as a justification, if one were needed, for an increasingly strict regime of surveillance and censorship by major social media platforms, answering to their investors and to the political class in Washington. Already the incoming president Joe Biden has stated his intention to introduce new legislation against “domestic terrorism,” which will no doubt involve the tech giants maintaining their commercial dominance in return for carrying out the required surveillance and reporting of those deemed subversive. Meanwhile, Google and Apple yesterday issued an ultimatum to the platform Parler, which offers the same basic model as Twitter but with laxer content rules, threatening to banish it from their app stores if it did not police conversation more strictly.

But however disturbing the implications of this crackdown, we should welcome the clarity we got this week. For too long, the tech giants have been able to pose as neutral arbiters of discussion, cloaking their authority in corporate euphemisms about public interest. Consequently, they have been able to set the terms of communication over much of the world according to their own interests and political calculations. Whether or not they were right to banish Trump, the key fact is that it was they who had the authority to do so, for their own reasons. The increasing regulation of social media – which was always inevitable, in one form or another, given its incendiary potential – will now proceed according to the same logic. Hopefully the dramatic nature of their decisions this week will make us question if this is really a tolerable situation.

Poland and Hungary are exposing the EU’s flaws

The European Union veered into another crisis on Monday, as the governments of Hungary and Poland announced they would veto the bloc’s next seven-year budget. This comes after the European Parliament and Council tried to introduce “rule of law” measures for punishing member states that breach democratic standards — measures that Budapest and Warsaw, the obvious target of such sanctions, have declared unacceptable.

As I wrote last week, it is unlikely that the disciplinary mechanism would actually have posed a major threat to either the Fidesz regime in Hungary or the Law and Justice one in Poland. These stubborn antagonists of European liberalism have long threatened to block the entire budget if it came with meaningful conditions attached. That they have used their veto anyway suggests the Hungarian and Polish governments — or at least the hardline factions within them — feel they can extract further concessions.

There’s likely to be a tense video conference on Thursday as EU leaders attempt to salvage the budget. It’s tempting to assume a compromise will be found that allows everyone to save face (that is the European way), but the ongoing impasse has angered both sides. At least one commentator has stated that further concessions to Hungary and Poland would amount to “appeasement of dictators.”

In fact, compromises with illiberal forces are far from unprecedented in the history of modern democracy. The EU’s constitutional framework, by limiting the power of federal institutions, is precisely what allows actors like Orbán to misbehave — something the Hungarian Prime Minister has exploited to great effect.

And yet, it doesn’t help that the constitutional procedures in question — the treaties of the European Union — were so poorly designed in the first place. Allowing single states an effective veto over key policy areas is a recipe for dysfunction, as the EU already found out in September when Cyprus blocked sanctions against Belarus.

More to the point, the current deadlock with Hungary and Poland has come about because the existing Article 7 mechanism for disciplining member states is virtually unenforceable (both nations have been subject to Article 7 probes for several years, to no effect).

But this practical shortcoming also points to an ideological one. As European politicians have admitted, the failure to design a workable disciplinary mechanism shows the project’s architects did not take seriously the possibility that, once countries had made the democratic reforms necessary to gain access to the EU, they might, at a later date, move back in the opposite direction. Theirs was a naïve faith in the onwards march of liberal democracy.

In this sense, the crisis now surrounding the EU budget is another product of that ill-fated optimism which gripped western elites around the turn of the 21st century. Like the governing class in the United States who felt sure China would reform itself once invited into the comity of nations, the founders of the European Union had too rosy a view of liberalism’s future — and their successors are paying the price.

Europe’s deplorables have outwitted Brussels

This essay was originally published by Unherd on November 10th 2020.

Throughout the autumn, the European Union has been engaged in a standoff with its two most antagonistic members, Hungary and Poland. At stake was whether the EU would finally take meaningful action against these pioneers of “illiberal democracy”, to use the infamous phrase of Hungarian Prime Minister Viktor Orbán. As of last week — and despite appearances to the contrary — it seems the Hungarian and Polish regimes have postponed the reckoning once more.

Last week, representatives of the European Parliament triumphantly announced a new disciplinary mechanism which, they claimed, would enable Brussels to withhold funds from states that violate liberal democratic standards. According to MEP Petri Sarvamaa, it meant the end of “a painful phase [in] the recent history of the European Union”, in which “the basic values of democracy” had been “threatened and undermined”.

No names were named, of course, but they did not need to be. Tensions between the EU and the recalcitrant regimes on its eastern periphery, Hungary under Orbán’s Fidesz and Poland under the Law and Justice Party, have been mounting for years. Those governments’ erosion of judicial independence and media freedom, as well as concerns over corruption, education, and minority rights, have resulted in a series of formal investigations and legal actions. And that is not to mention the constant rhetorical fusillades between EU officials and Budapest and Warsaw.

The new disciplinary mechanism is being presented as the means to finally bring Hungary and Poland to heel, but it is no such thing. Though not exactly toothless, it is unlikely to pose a serious threat to the illiberal pretenders in the east. Breaches of “rule of law” standards will only be sanctioned if they affect EU funds — so the measures are effectively limited to budget oversight. Moreover, enforcing the sanctions will require a weighted majority of member states in the European Council, giving Hungary or Poland ample room to assemble a blocking coalition.

In fact, what we have here is another sticking plaster so characteristic of the complex and unwieldy structures of European supranational democracy. The political dynamics of this system, heavily reliant on horse-trading and compromise, have allowed Hungary and Poland to outmanoeuvre their opponents.

The real purpose of the disciplinary measures is to ensure the timely passage of the next EU budget, and in particular, a €750 billion coronavirus relief fund. That package will, for the first time, see member states issuing collective debt backed by their taxpayers, and therefore has totemic significance for the future of the Union. It is a real indication that fiscal integration might be possible in the EU — a step long regarded as crucial to the survival of Europe’s federal ambitions, and one that shows its ability to respond effectively to a major crisis.

But this achievement has almost been derailed by a showdown with Hungary and Poland. Liberal northern states such as Finland, Sweden and the Netherlands, together with the European Parliament, insisted that financial support should be conditional on upholding EU values and transparency standards. But since the relief fund requires unanimous approval, Hungary or Poland can simply veto the whole initiative, which is exactly what they have been threatening to do.

In other words, the EU landed itself with a choice between upholding its liberal commitments and securing its future as a viable political and economic project. The relatively weak disciplinary mechanism shows that European leaders are opting for the latter, as they inevitably would. It is a compromise that allows the defenders of democratic values to save face, while essentially letting Hungary and Poland off the hook. (Of course this doesn’t rule out the possibility that the Hungarian and Polish governments will continue making a fuss anyway.)

Liberals who place their hopes in the European project may despair at this, but these dilemmas are part and parcel of binding different regions and cultures in a democratic system. Such undertakings need strict constitutional procedures to hold them together, but those same procedures create opportunities to game the system, especially as demands in one area can be tied to cooperation in another.

As he announced the new rule of law agreement, Sarvamaa pointed to Donald Trump’s threat to win the presidential election via the Supreme Court as evidence of the need to uphold democratic standards. In truth, what is happening in Europe bears a closer resemblance to America in the 1930s, when F.D. Roosevelt was forced to make concessions to the Southern states to deliver his New Deal agenda.

That too was a high-stakes attempt at federal consolidation and economic repair, with the Great Depression at its height and democracy floundering around the world. As the political historian Ira Katznelson has noted, Roosevelt only succeeded by making “necessary but often costly illiberal alliances” — in particular, alliances with Southern Democratic legislators who held an effective veto in Congress. The result was that New Deal programs either avoided or actively upheld white supremacy in the Jim Crow South. (Key welfare programs, for instance, were designed to exclude some two-thirds of African American employees in the Southern states.)

According to civil rights campaigner Walter White, Roosevelt himself explained his silence on a 1934 bill to combat the lynching of African Americans as follows: “I’ve got to get legislation passed by Congress to save America… If I come out for the anti-lynching bill, they [the Southern Democrats] will block every bill I ask Congress to pass to keep America from collapsing. I just can’t take that risk.”

This is not to suggest any moral equivalence between Europe’s “illiberal democracies” and the Deep South of the 1930s. But the Hungarian and Polish governments do resemble the experienced Southern politicians of the New Deal era in their ability to manoeuvre within a federal framework, achieving an autonomy that belies their economic dependency. They have learned to play by the letter of the rules as well as to subvert them.

Orbán, for instance, has frequently insisted that his critics make a formal legal case against him, whereupon he has managed to reduce sanctions to mere technicalities. He has skilfully leveraged the arithmetic of the European Parliament to keep Fidesz within the orbit of the mainstream European People’s Party group. In September, the Hungarian and Polish governments even announced plans to establish their own institute of comparative legal studies, aiming to expose the EU’s “double standards.”

And now, with their votes required to pass the crucial relief fund, the regimes in Budapest and Warsaw are taking advantage of exceptionally high stakes much as their Southern analogues in the 1930s did. They have, in recent months, become increasingly defiant in their rejection of European liberalism. In September, Orbán published a searing essay in which he hailed a growing “rebellion against liberal intellectual oppression” in the western world. The recent anti-abortion ruling by the Polish high court is likewise a sign of that state’s determination to uphold Catholic values and a robust national identity.

Looking forward, however, it seems clear this situation cannot continue forever. Much has been made of Joe Biden’s hostility to the Hungarian and Polish regimes, and with his election victory, we may see the US attaching its own conditions to investment in Eastern Europe. But Biden cannot question the EU’s standards too loudly, since he has cast the Union as America’s key liberal partner. The real issue is that if richer EU states are really going to accept the financial burdens of further integration, they will not tolerate deviant nations wielding outsized influence over key policy areas.

Of course, stripping away that influence would require an overhaul of the voting system, which means treaty change. This raises a potential irony: could the intransigence of Hungary and Poland ultimately spur on Europe’s next big constitutional step — one that will see their leverage taken away? Maybe. For the time being, the EU is unlikely to rein in the illiberal experiments within its borders.

The Last of the Libertarians

This book review was originally published by Arc Digital on August 31st 2020.

As the world reels from the chaos of COVID-19, it is banking on the power of innovation. We need a vaccine, and before even that, we need new technologies and practices to help us protect the vulnerable, salvage our pulverized economies, and go on with our lives. If we manage to weather this storm, it will be because our institutions prove capable of converting human ingenuity into practical, scalable fixes.

And yet, even if we did not realize it, this was already the position we found ourselves in prior to the pandemic. From global warming to food and energy security to aging populations, the challenges faced by humanity in the 21st century will require new ways of doing things, and new tools to do them with.

So how can our societies foster such innovation? What are the institutions, or more broadly the economic and political conditions, from which new solutions can emerge? Some would argue we need state-funded initiatives to direct our best minds towards specific goals, like the 1940s Manhattan Project that cracked the puzzle of nuclear technology. Others would have us place our faith in the miracles of the free market, with its incentives for creativity, efficiency, and experimentation.

Matt Ridley, the British businessman, author, and science journalist, is firmly in the latter camp. His recent book, How Innovation Works, is a work of two halves. On the one hand it is an entertaining, informative, and deftly written account of the innovations which have shaped the modern world, delivering vast improvements in living standards and opportunity along the way. On the other hand, it is the grumpy expostulation of a beleaguered libertarian, whose reflexive hostility to government makes for a vague and contradictory theory of innovation in general.

Innovation, we should clarify, does not simply mean inventing new things, nor is it synonymous with scientific or technological progress. There are plenty of inventions that do not become innovations — or at least not for some time — because we have neither the means nor the demand to develop them further. Thus, the key concepts behind the internal combustion engine and general-purpose computer long preceded their fruition. Likewise, there are plenty of important innovations which are neither scientific nor technological — double-entry bookkeeping, for instance, or the U-bend in toilet plumbing — and plenty of scientific or technological advances which have little impact beyond the laboratory or drawing board.

Innovation, as Ridley explains, is the process by which new products, practices, and ideas catch on, so that they are widely adopted within an industry or society at large. This, he rightly emphasizes, is rarely down to a brilliant individual or blinding moment of insight. It is almost never the result of an immaculate process of design. It is, rather, “a collective, incremental, and messy network phenomenon.”

Many innovations make use of old, failed ideas whose time has come at last. At the moment of realization, we often find multiple innovators racing to be first over the line — as was the case with the steam engine, light bulb, and telegraph. Sometimes successful innovation hinges on a moment of luck, like the penicillin spore which drifted into Alexander Fleming’s petri dish while he was away on holiday. And sometimes a revolutionary innovation, such as the search engine, is strangely anticipated by no one, including its innovators, almost up until the moment it is born.

But in virtually every instance, the emergence of an innovation requires numerous people with different talents, often far apart in space and time. As Ridley describes the archetypal case: “One person may make a technological breakthrough, another work out how to manufacture it, and a third how to make it cheap enough to catch on. All are part of the innovation process and none of them knows how to achieve the whole innovation.”

These observations certainly lend some credence to Ridley’s arguments that innovation is best served by a dynamic, competitive market economy responding to the choices of consumers. After all, we are not very good at guessing from which direction the solution to a problem will come — we often do not even know there was a problem until a solution comes along — and so it makes sense to encourage a multitude of private actors to tinker, experiment, and take risks in the hope of discovering something that catches on.

Moreover, Ridley’s griping about misguided government regulation — best illustrated by Europe’s almost superstitious aversion to genetically modified crops — and about the stultifying influence of monopolistic, subsidy-farming corporations, is not without merit.

But not so fast. Is it not true that many innovations in Ridley’s book drew, at some point in their complex gestation, from state-funded research? This was the case with jet engines, nuclear energy, and computing (not to mention GPS, various products using plastic polymers, and touch-screen displays). Ridley’s habit of shrugging off such contributions with counterfactuals — had the state not done it, someone else would have — misses the point, because the state has basic interests that inevitably bring it into the innovation business.

It has always been the case that certain technologies, however they emerge, will continue their development in a limbo between public and private sectors, since they are important to economic productivity, military capability, or energy security. So it is today with the numerous innovative technologies caught up in the rivalry between the United States and China, including 5G, artificial intelligence, biotechnology, semiconductors, quantum computing, and Ridley’s beloved fracking for shale gas.

As for regulation, the idea that every innovation which succeeds in a market context is in humanity’s best interests is clearly absurd. One thinks of such profitable 19th-century innovations by Western businessmen as exporting Indian opium to the Far East. Ridley tries to forestall such objections with the claim that “To contribute to human welfare … an innovation must meet two tests: it must be useful to individuals, and it must save time, energy, or money in the accomplishment of some task.” Yet there are plenty of innovations which meet this standard and are still destructive. Consider the opium-like qualities of social media, or the subprime mortgage-backed securities which triggered the financial crisis of 2007–8 (an example Ridley ought to know about, seeing as he was chairman of Britain’s ill-fated Northern Rock bank at the time).

Ridley’s weakness in these matters is amplified by his conceptual framework, a dubious fusion of evolutionary theory and dogmatic libertarianism. Fundamentally, he holds that innovation is an extension of evolution by natural selection, “a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.” (Ridley even has a section on “The ultimate innovation: life itself.”) That same cosmic process, he claims, is embodied in the spontaneous order of the free market, which, through trade and specialization, allows useful innovations to emerge and spread.

This explains why How Innovation Works contains no suggestion about how we should weigh the risks and benefits of different kinds of innovation. Insofar as Ridley makes an ethical case at all, it amounts to a giant exercise in naturalistic fallacy. Though he occasionally notes innovation can be destructive, he more often moves seamlessly from claiming that it is an “inexorable” natural process, something which simply happens, to hailing it as “the child of freedom and the parent of prosperity,” a golden goose in perpetual danger of suffocation.

But the most savage contradictions in Ridley’s theory appear, once again, in his pronouncements on the role of the state. He insists that by definition, government cannot be central to innovation, because it has predetermined goals whereas evolutionary processes do not. “Trying to pretend that government is the main actor in this process,” he says, “is an essentially creationist approach to an essentially evolutionary phenomenon.”

Never mind that many of Ridley’s own examples involve innovators aiming for predetermined goals, or that in his (suspiciously brief) section on the Chinese innovation boom, he concedes in passing that shrewd state investment played a key role. The more pressing question is, what about those crucial innovations for which there is no market demand, and which therefore do not evolve?

Astonishingly, in his afterword on the challenges posed by COVID-19, Ridley has the gall to admonish governments for not taking the lead in innovation. “Vaccine development,” he writes, has been “insufficiently encouraged by governments and the World Health Organisation,” and “ignored, too, by the private sector because new vaccines are not profitable things to make.” He goes on: “Politicians should go further and rethink their incentives for innovation more generally so that we are never again caught out with too little innovation having happened in a crucial field of human endeavour.”

In these lines, we should read not just the collapse of Ridley’s central thesis, but more broadly, the demise of a certain naïve market libertarianism — a worldview that flourished during the 1980s and ’90s, and which, like most dominant intellectual paradigms, came to see its beliefs as reflecting the very order of nature itself. For what we should have learned in 2007–8, and what we have certainly learned this year, is that for all its undoubted wonders the market is always tacitly relying on the state to step in should the need arise.

This does not mean, of course, that the market has no role to play in developing the key innovations of the 21st century. I believe it has a crucial role, for it remains unmatched in its ability to harness the latent power of widely dispersed ideas and skills. But if the market’s potential is not to be snuffed out in a post-COVID era of corporatism and monopoly, then it will need more credible defenders than Ridley. It will need defenders who are aware of its limitations and of its interdependence with the state.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today”? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism so as to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still places a disproportionate emphasis on those it calls “white” to adopt a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering reeducation. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

The left’s obsession with symbols has gone too far

This article was originally published by Arc Digital on June 20th 2020.

Protest is a symbolic form of politics. It is about sending messages. It turns public space — both physical and now virtual — into an arena where frustrations not satisfied by the formal political system are expressed with slogans, banners, and bodies.

But protest can only be a force for good if its aims point away from the symbolic and back towards formal politics — and, beyond that, towards material reality.

Yes, filling the streets with demonstrators and the internet with hashtags can be effective in raising awareness of issues. It can be effective in bringing new movements to life. But if the issues that spur protest are real problems in society, then the sending of messages must be accompanied by practical plans to address those problems.

In this last respect, the spectacular wave of protests sparked by the killing of George Floyd three weeks ago, which has carried not just across the United States but to Europe and beyond, presents a mixed picture. At its core is the grief and righteous anger of African-Americans regarding police brutality. This is a cause that, so far as I know, has been questioned by no one anywhere near the mainstream of public life. And it is eminently capable of achieving concrete reforms.

But around that core has gathered a much more nebulous phenomenon — a culture of protest that does not just employ symbolic means, but pursues largely symbolic goals. In the U.S., African-American grievances have been overshadowed by white progressives expressing their prodigious guilt and shunting it onto one another. Witness the rituals of purification where crowds have gathered to kneel or lie prostrate on the ground.

The iconography of Black Lives Matter — especially the resonant statements about injustice presented in white text on black background — has been widely co-opted by mega-corporations seeking to endear themselves to consumers. Meanwhile, the demonstrations have coalesced around a vacuous slogan, “defund the police,” which is clearly a rallying cry for further protest rather than a serious policy proposal.

In the United Kingdom (where we are always copying our cousins across the Atlantic), the protests have become similarly ingrown. Beginning as a statement of empathy for African-Americans and a warning that we have our own issues of racial inequality to address, they quickly descended into an argument about historical figures represented in public statues and the names of buildings and streets. The focus on iconoclasm has now blown back into the United States, as seen in this week’s wave of statue-toppling and defacing.

To the extent that all this symbolic activity makes racial minorities feel they have solidarity with society at large, this is good. But there is a point at which the politics of gesture becomes so dominant that it distracts from practical efforts at reform, or even hinders them. When manifestly crazy ideas like defunding the police become attached to the protests, it undermines legislators seeking genuine solutions by giving their opponents a brush to tar them with. Making an issue of public monuments (London Mayor Sadiq Khan has announced a new “Diversity Commission” for this purpose) diverts attention from a hundred more consequential issues we could be discussing.

Of course to view these events in isolation would be to miss the forest for the trees. The symbolic turn of progressive activism is part of an ongoing culture war over the norms that govern our language, manners, and institutions.

Part of the progressive strategy in that war has been to establish a kind of semantic hegemony, or effective control over the meaning of symbols. This involves emphasizing the symbolic dimension of all kinds of things — words, gestures, intellectual practices, works of art and entertainment — and then insisting on what they signify. Thus a statue is considered not a historical artifact but an expression of racism, or particular words and actions deemed manifestations of white privilege rather than of the intentions that motivated them.

The performative character of the recent protests — the emphasis on gesture for its own sake, the fixation with symbols of oppression — certainly fits into this wider picture. This is why commentators, both supportive and skeptical, are now talking less about policing and concrete forms of racial inequality than about a “cultural revolution.” It is also why many conservatives, old school liberals, and social democrats are freaking out. They are imagining a society in which, simply to have a career, people will have to accept the meanings assigned to things by the progressive worldview.

But there is still a question as to whether this cultural revolution will actually help the people it purports to serve. This is the question, or it ought to be. Can a politics so heavily focused on language, meanings, and manners create the conditions for minority individuals and communities to lead more secure and fulfilling lives?

To some degree it can. Dignity — or the entitlement to claim one’s right to full participation in civic life — is a necessary condition for any individual or group to flourish. And dignity does have a lot to do with that amorphous realm of social norms and meanings. It is ultimately manifest at the level of subjective experience, as self-assurance and an inner sense of belonging, but can only be guaranteed by the respect of others. There is at present, among younger generations especially, a genuine desire to ensure that the way we talk and act does not prevent minorities from claiming the dignity that is their due.

On the other hand, meaning is an uncertain, fuzzy thing, such that a politics which focuses too heavily on it can easily become Sisyphean — fighting endless battles over symbolic territory without achieving any real forward progress.

Indeed, in recent years progress towards social justice has become both a closed circle and an infinitely receding frontier. As the progressive mind has become preoccupied with attaching meanings to things — with telling us how we should interpret social phenomena, statues, words, pictures, and so on — it has granted itself the power to endlessly create new symbolic obstacles to be overcome.

This project seems especially futile in light of the fact that so many issues of racial inequality, both in the U.S. and in Europe, are also issues of social class. The cultural revolution is largely the preserve of the highly educated — people in academia, media, advertising, and the managerial bureaucracies, whose daily lives revolve around the interpretation and manipulation of symbols.

Far from expanding the circle of dignity, their arcane theories represent yet another barrier that excludes poorer people of all ethnicities from the conversation. Meanwhile, luminaries of the symbolic struggle can justify their endeavors by pointing to the very inequality which they subtly reinforce.

So the protest style we are seeing of late, which prioritizes grand gestures over concrete achievements, is indicative of a wider problem. A fixation with symbols is of little use if it comes at the expense of practical engagement with other, equally important dimensions of social life: economic opportunity, public services, community formation, and the justice system.

That the symbolic mode of activism is so good at stoking passion, and at extracting equally symbolic gestures from cowed institutions, is just another indication that more substantive issues are being crowded out.

The politics of crisis is not going away any time soon

This essay was originally published by Palladium magazine on June 10th 2020.

A pattern emerges when surveying the vast commentary on the COVID-19 pandemic. At its center is a distinctive image of crisis: the image of a cruel but instructive spotlight laying bare the flaws of contemporary society. Crisis, we read, has “revealed,” “illuminated,” “clarified,” and above all, “exposed” our collective failures and weaknesses. It has unveiled the corruption of institutions, the decadence of culture, and the fragility of a material way of life. It has sounded the death-knell for countless projects and ideals.

“The pernicious coronavirus tore off an American scab and revealed suppurating wounds beneath,” announces one commentator, after noting “these calamities can be tragically instructional… Fundamental but forgotten truths, easily masked in times of calm, reemerge.”

Says another: “Invasion and occupation expose a society’s fault lines, exaggerating what goes unnoticed or accepted in peacetime, clarifying essential truths, raising the smell of buried rot.”

You may not be surprised to learn that these two near-identical comments come from very different interpretations of the crisis. The first, from Trump-supporting historian Victor Davis Hanson of the Hoover Institution, claims that the “suppurating wounds” of American society are an effete liberal elite compromised by their reliance on a malignant China and determined to undermine the president at any cost. According to the second, by The Atlantic’s George Packer, the “smell of buried rot” comes from the Trump administration itself, the product of an oligarchic ascendancy whose power stems from the division of society and hollowing-out of the state.

Nothing, it seems, has evaded the extraordinary powers of diagnosis made available by crisis: merciless globalism, backwards nationalism, the ignorance of populists, the naivety of liberals, the feral market, the authoritarian state. We are awash in diagnoses, but diagnosis is only the first step. It is customary to sharpen the reality exposed by the virus into a binary, existential decision: address the weakness identified, or succumb to it. “We’re faced with a choice that the crisis makes inescapably clear,” writes Packer, “the alternative to solidarity is death.” No less ominous is Hanson’s invocation of Pearl Harbor: “Whether China has woken a sleeping giant in the manner of the earlier Japanese, or just a purring kitten, remains to be seen.”

The crisis mindset is not just limited to journalistic sensationalism. Politicians, too, have appealed to a now-or-never, sink-or-swim framing of the COVID-19 emergency. French President Emmanuel Macron has been among those using such terms to pressure Eurozone leaders into finally establishing a collective means of financing debt. “If we can’t do this today, I tell you the populists will win,” Macron told The Financial Times. Across the Atlantic, U.S. Congresswoman Alexandria Ocasio-Cortez has claimed that the pandemic “has just exposed us, the fragility of our system,” and has adopted the language of “life or death” in her efforts to bring together the progressive and centrist wings of the Democratic Party before the presidential election in November.

And yet, in surveying this rhetoric of diagnosis and decision, what is most surprising is how familiar it sounds. Apart from the pathogen itself, there are few narratives of crisis now being aired which were not already well-established during the last decade. Much as the coronavirus outbreak has felt like a sudden rupture from the past, we have already been long accustomed to the politics of crisis.

It was under the mantra of “tough decisions,” with the shadow of the financial crisis still looming, that sharp reductions in public spending were justified across much of the Western world after 2010. Since then, the European Union has been crippled by conflicts over sovereign debt and migration. It was the rhetoric of the Chinese menace and of terminal decline—of “rusted-out factories scattered like tombstones across the landscape of our nation,” to quote the 2017 inaugural address—that brought President Trump to power. Meanwhile, progressives had already mobilized themselves around the language of emergency with respect to inequality and climate change.

There is something deeply paradoxical about all of this. The concept of crisis is supposed to denote a need for exceptional attention and decisive focus. In its original Greek, the term krisis often referred to a decision between two possible futures, but the ubiquity of “crisis” in our politics today has produced only deepening chaos. The sense of emergency is stoked continuously, but the accompanying promises of clarity, agency, and action are never delivered. Far from a revealing spotlight, the crises of the past decade have left us with a lingering fog which now threatens to obscure our vision at a moment when we really do need judicious action.


Crises are a perennial feature of modern history. For half a millennium, human life has been shaped by impersonal forces of increasing complexity and abstraction, from global trade and finance to technological development and geopolitical competition. These forces are inherently unstable and frequently produce moments of crisis, not least when an exogenous shock like a deadly plague strikes. Though rarely openly acknowledged, the legitimacy of modern regimes has largely depended on a perceived ability to keep that instability at bay.

This is the case even at times of apparent calm, such as the period of U.S. global hegemony immediately following the Cold War. The market revolution of the 1980s and globalization of the 1990s were predicated on a conception of capitalism as an unpredictable, dynamic system which could nonetheless be harnessed and governed by technocratic expertise. Such were the hopes of “the great moderation.” A series of emerging market financial crises—in Mexico, Korea, Thailand, Indonesia, Russia, and Argentina—provided opportunities for the IMF and World Bank to demand compliance with the Washington Consensus in economic policy. Meanwhile, there were frequent occasions for the U.S. to coordinate global police actions in war-torn states.

Despite the façade of independent institutions and international bodies, it was in no small part through such crisis-fighting economic and military interventions that a generation of U.S. leaders projected power abroad and secured legitimacy at home. This model of competence and progress, which seems so distant now, was not based on a sense of inevitability so much as confidence in the capacity to manage one crisis after another: to “stabilize” the most recent eruption of chaos and instability.

A still more striking example comes from the European Union, another product of the post-Cold War era. The project’s main purpose was to maintain stability in a trading bloc soon to be dominated by a reunified Germany. Nonetheless, many of its proponents envisaged that the development of a fully federal Europe would occur through a series of crises, with the supra-national structures of the EU achieving more power and legitimacy at each step. When the Euro currency was launched in 1999, Romano Prodi, then president of the European Commission, spoke of how the EU would extend its control over economic policy: “It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created.”

It is not difficult to see why Prodi took this stance. Since the rise of the rationalized state two centuries ago, managerial competence has been central to notions of successful governance. In the late 19th century, French sociologist Emile Durkheim compared the modern statesman to a physician: “he prevents the outbreak of illnesses by good hygiene, and seeks to cure them when they have appeared.” Indeed, the bureaucratic structures which govern modern societies have been forged in the furnaces of crisis. Social security programs, income tax, business regulation, and a host of other state functions now taken for granted are a product of upheavals of the 19th and early 20th centuries: total war, breakneck industrialization, famine, and financial panic. If necessity is the mother of invention, crisis is the midwife of administrative capacity.

By the same token, the major political ideologies of the modern era have always claimed to offer some mastery over uncertainty. The locus of agency has variously been situated in the state, the nation, individuals, businesses, or some particular class or group; the stated objectives have been progress, emancipation, greatness, or simply order and stability. But in every instance, the message has been that the chaos endemic to modern history must be tamed or overcome by some paradigmatic form of human action. The curious development of Western modernity, where the management of complex, crisis-prone systems has come to be legitimated through secular mass politics, appears amenable to no other template.

It is against this backdrop that we can understand the period of crisis we have endured since 2008. The narratives of diagnosis and decision which have overtaken politics during this time are variations on a much older theme—one that is present even in what are retrospectively called “times of calm.” The difference is that, where established regimes have failed to protect citizens from instability, the logic of crisis management has burst its technocratic and ideological bounds and entered the wider political sphere. The greatest of these ruptures was captured by a famous statement attributed to Federal Reserve Chairman Ben Bernanke in September 2008. Pleading with Congress to pass a $700 billion bailout, Bernanke claimed: “If we don’t do this now, we won’t have an economy on Monday.”

This remark set the tone for the either/or, act-or-perish politics of the last decade. It points to a loss of control which, in the United States and beyond, opened the way for competing accounts not just of how order could be restored, but also what that order should look like. Danger and disruption have become a kind of opportunity, as political insurgents across the West have captured established parties, upended traditional power-sharing arrangements, and produced the electoral shocks suggested by the ubiquitous phrase “the age of Trump and Brexit.” These campaigns sought to give the mood of crisis a definite shape, directing it towards the need for urgent decision or transformative action, thereby giving supporters a compelling sense of their own agency.


Typically though, such movements do not merely offer a choice between existing chaos and redemption to come. In diagnoses of crisis, there is always an opposing agent who is responsible for and threatening to deepen the problem. We saw this already in Hanson’s and Packer’s association of the COVID-19 crisis with their political opponents. But it was there, too, among Trump’s original supporters, for whom the agents of crisis were not just immigrants and elites but, more potently, the threat posed by the progressive vision for America. This was most vividly laid out in Michael Anton’s infamous “Flight 93 Election” essay, an archetypal crisis narrative which urged fellow conservatives that only Trump could stem the tide of “wholesale cultural and political change,” claiming “if you don’t try, death is certain.”

Yet Trump’s victory only galvanized the radical elements of the left, as it gave them a villain to point to as a way of further raising the consciousness of crisis among their own supporters. The reviled figure of Trump has done more for progressive stances on immigration, healthcare, and climate action than anyone else, for he is the ever-present foil in these narratives of emergency. Then again, such progressive ambitions, relayed on Fox News and social media, have also proved invaluable in further stoking conservatives’ fears.

To simply call this polarization is to miss the point. The dynamic taking shape here is rooted in a shared understanding of crisis, one that treats the present as a time in which the future of society is being decided. There is no middle path, no going back: each party claims that if they do not take this opportunity to reshape society, their opponents will. In this way, narratives of crisis feed off one another, and become the basis for a highly ideological politics—a politics that de-emphasizes compromise with opponents and with the practical constraints of the situation at hand, prioritizing instead the fulfillment of a goal or vision for the future.

Liberal politics is ill-equipped to deal with, or even to properly recognize, such degeneration of discourse. In the liberal imagination, the danger of crisis is typically that the insecurity of the masses will be exploited by a demagogue, who will then transfigure the system into an illiberal one. In many cases, though, it is the system which loses legitimacy first, as the frustrating business of deliberative, transactional politics cannot meet the expectations of transformative change which are raised in the public sphere.

Consider the most iconic and, in recent years, most frequently analogized period of crisis in modern history: Germany’s Weimar Republic of 1918-33. These were the tempestuous years between World War I and Hitler’s dictatorship, during which a fledgling democracy was rocked by armed insurrection, hyperinflation, foreign occupation, and the onset of the Great Depression, all against a backdrop of rapid social, economic, and technological upheaval.

Over the past decade or so, there has been no end of suggestions that ours is a “Weimar moment.” Though echoes have been found in all sorts of social and cultural trends, the overriding tendency has been to view the crises of the Weimar period backwards through their end result, the establishment of the Nazi dictatorship in 1933. In various liberal democracies, the most assertive Weimar parallels have referred to the rise of populist and nationalist politics, and in particular, the erosion of constitutional norms by leaders of this stripe. The implication is that history has warned us how the path of crisis can lead towards an authoritarian ending.

What this overlooks, however, is that Weimar society was not just a victim of crisis that stumbled blindly towards authoritarianism, but was active in interpreting what crises revealed and how they should be addressed. In particular, the notion of crisis served the ideological narratives of the day as evidence of the need to refashion the social settlement. Long before the National Socialists began their rise in the early 1930s, these conflicting visions, pointing to one another as evidence of the stakes, sapped the republic’s legitimacy by making it appear impermanent and fungible.

The First World War had left German thought with a pronounced sense of the importance of human agency in shaping history. On the one hand, the scale and brutality of the conflict left survivors adrift in a world of unprecedented chaos, seeming to confirm a suspicion of some 19th century German intellectuals that history had no inherent meaning. But at the same time, the war had shown the extraordinary feats of organization and ingenuity that an industrialized society, unified and mobilized around a single purpose, was capable of. Consequently, the prevailing mood of Weimar was best captured by the popular term Zeitenwende, the turning of the times. Its implication was that the past was irretrievably lost, the present was chaotic and dangerous, but the future was there to be claimed by those with the conviction and technical skill to do so.

Throughout the 1920s, this historical self-consciousness was expressed in the concept of Krisis or Krise, crisis. Intellectual buzzwords referred to a crisis of learning, a crisis of European culture, a crisis of historicism, crisis theology, and numerous crises of science and mathematics. The implication was that these fields were in a state of flux which called for resolution. A similar dynamic could be seen in the political polemics which filled the Weimar press, where discussions of crisis tended to portray the present as a moment of decision or opportunity. According to Rüdiger Graf’s study of more than 370 Weimar-era books and still more journal articles with the term “crisis” in their titles, the concept generally functioned as “a call to action” by “narrow[ing] the complex political world to two exclusive alternatives.”

Although the republic was most popular among workers and social democrats, the Weimar left contained an influential strain of utopian thought which saw itself as working beyond the bounds of formal politics. Here, too, crisis was considered a source of potential. Consider the sentiments expressed by Walter Gropius, founder of the Bauhaus school of architecture and design, in 1919:

Capitalism and power politics have made our generation creatively sluggish, and our vital art is mired in a broad bourgeois philistinism. The intellectual bourgeois of the old Empire…has proven his incapacity to be the bearer of German culture. The benumbed world is now toppled, its spirit is overthrown, and is in the midst of being recast in a new mold.

Gropius was among those intellectuals, artists, and administrators who, often taking inspiration from an idealized image of the Soviet Union, subscribed to the idea of the “new man”—a post-capitalist individual whose self-fulfillment would come from social duty. Urban planning, social policy, and the arts were all seen as means to create the environment in which this new man could emerge.

The “bourgeois of the old Empire,” as Gropius called them, had indeed been overthrown; but in their place came a reactionary modernist movement, often referred to as the “conservative revolution,” whose own ideas of political transformation used socialism both as inspiration and as ideological counterpoint. In the works of Ernst Jünger, technology and militarist willpower were romanticized as dynamic forces which could pull society out of decadence. Meanwhile, the political theorist Carl Schmitt emphasized the need for a democratic polity to achieve a shared identity in opposition to a common enemy, a need sometimes better accomplished by the decisive judgments of a sovereign dictator than by a fractious parliamentary system.

Even some steadfast supporters of the republic, like the novelist Heinrich Mann, seized on the theme of crisis as a call to transformative action. In a 1923 speech, against a backdrop of hyperinflation and the occupation of the Ruhr by French forces, Mann insisted that the republic should resist the temptation of nationalism, and instead fulfill its promise as a “free people’s state” by dethroning the “blood-gorging” capitalists who still controlled society in their own interests.

These trends were not confined to rhetoric and intellectual discussion. They were reflected in practical politics by the tendency of even trivial issues to be treated as crises that raised fundamental conflicts of worldview. So it was that, in 1926, a government was toppled by a dispute over the regulations for the display of the republican flag. Meanwhile, representatives were harangued by voters who expected them to embody the uncompromising ideological clashes taking place in the wider political sphere. In towns and cities across the country, rival marches and processions signaled the antagonism of socialists and their conservative counterparts—the burghers, professionals and petite bourgeoisie who would later form the National Socialist coalition, and who by mid-decade had already coalesced around President Paul von Hindenburg.


We are not Weimar. The ideologies of that era, and the politics that flowed from them, were products of their time, and there were numerous contingent reasons why the republic faced an uphill battle for acceptance. Still, there are lessons. The conflict between opposing visions of society may seem integral to the spirit of democratic politics, but at times of crisis, it can be corrosive to democratic institutions. The either/or mindset can add a whole new dimension to whatever emergency is at hand, forcing what is already a time of disorientating change into a zero-sum competition between grand projects and convictions that leave ordinary, procedural politics looking at best insignificant, and at worst an obstacle.

But sometimes this kind of escalation is simply unavoidable. Crisis ideologies amplify, but do not create, a desire for change. The always-evolving material realities of capitalist societies frequently create circumstances that are untenable, and which cannot be sufficiently addressed by political systems prone to inertia and capture by vested interests. When such a situation erupts into crisis, the window for incremental change and a moderate tone may already have closed. If your political opponent is electrifying voters with the rhetoric of emergency, the only option might be to fight fire with fire.

There is also a hypocrisy innate to democratic politics which makes the reality of how severe crises are managed something of a dirty secret. Politicians like to invite comparisons with past leaders who acted decisively during crises, whether it be French President Macron’s idolization of Charles de Gaulle, the progressive movement in the U.S. and elsewhere taking Franklin D. Roosevelt as their inspiration, or virtually every British leader’s wish to be likened to Winston Churchill. What is not acknowledged is the shameful compromises that accompanied these leaders’ triumphs. De Gaulle’s opportunity to found the French Fifth Republic came amid threats of a military coup. Roosevelt’s New Deal could only be enacted with the backing of Southern Democratic politicians, and as such, effectively excluded African Americans from its most important programs. Allied victory in the Second World War, the final fruit of Churchill’s resistance, came at the price of ceding Eastern and Central Europe to Soviet tyranny.

Such realities are especially difficult to bear because the crises of the past are a uniquely unifying force in liberal democracies. It was often through crises, after all, that rights were won, new institutions forged, and loyalty and sacrifice demonstrated. We tend to imagine those achievements as acts of principled agency which can be attributed to society as a whole, whereas they were just as often the result of improvisation, reluctant concession, and tragic compromise.

Obviously, we cannot expect a willingness to bend principles to be treated as a virtue, and nor, perhaps, should we want it to be. But we can acknowledge the basic degree of pragmatism which crises demand. This is the most worrying aspect of the narratives of decision surrounding the current COVID-19 crisis: still rooted in the projects and preoccupations of the past, they threaten to render us inflexible at a moment when we are entering uncharted territory.

Away from the discussions about what the emergency has revealed and the action it demands, a new era is being forged by governments and other institutions acting on a more pressing set of motives—in particular, maintaining legitimacy in the face of sweeping political pressures and staving off the risk of financial and public health catastrophes. It is also being shaped from the ground up, as countless individuals have changed their behavior in response to an endless stream of graphs, tables, and reports in the media.

Political narratives simply fail to grasp the contingency of this situation. Commentators talk about the need to reduce global interdependence, even as the architecture of global finance has been further built up by the decision of the Federal Reserve, in March, to support it with unprecedented amounts of dollar liquidity. They continue to argue within a binary of free market and big government, even as staunchly neoliberal parties endorse state intervention in their economies on a previously unimaginable scale. Likewise with discussions about climate policy or Western relations with China: the parameters within which these strategies will have to operate are simply unknown.

To reduce such complex circumstances to simple, momentous decisions is to offer us more clarity and agency than we actually possess. Nonetheless, that is how this crisis will continue to be framed, as political actors strive to capture the mood of emergency. It will only make matters worse, though, if our judgment remains colored by ambitions and resentments which were formed in earlier crises. If we continue those old struggles on this new terrain, we will swiftly lose our purchase on reality. We will be incapable of a realistic appraisal of the constraints now facing us, and without such realistic appraisal, no solution can be effectively pursued.