“Euro-English”: A thought experiment

There was an interesting story in Politico last weekend about “Euro-English,” and a Swedish academic who wants to make it an official language. Marko Modiano, a professor at the University of Gävle, says the European Union should stop using British English for its documents and communications, and replace it with the bastardised English which is actually spoken in Brussels and on the continent more generally.

Politico offers this example of how Euro-English might sound, as spoken by someone at the European Commission: “Hello, I am coming from the EU. Since 3 years I have competences for language policy and today I will eventually assist at a trilogue on comitology.”

Although the EU likes to maintain the pretence of linguistic equality, English is in practice the lingua franca of its bureaucrats, the language in which most laws are drafted, and increasingly the default language of translation for foreign missions. It is also the most common second language across the continent. But according to Modiano, this isn’t the same English used by native speakers, and it’s silly that the EU’s style guides try to make it conform to the latter. (Spare a thought for Ireland and Malta, who under Modiano’s plans would presumably have to conduct EU business in a slightly different form of English).

It’s a wonderful provocation, but could it also be a veiled political strategy? A distinctively continental English might be a way for the EU to cultivate a stronger pan-European identity, thus increasing its authority both in absolute terms and relative to national governments. The way Modiano presents his proposal certainly makes it sound like that: “Someone is going to have to step forward and say, ‘OK, let’s break our ties with the tyranny of British English and the tyranny of American English.’ And instead say… ‘This is our language.’” (My emphasis).

The EU has long struggled with the question of whether it can transcend the appeal of nation states and achieve a truly European consciousness. Adopting Euro-English as an official lingua franca might be a good start. After all, a similar process of linguistic standardisation was essential to the creation of the modern nation state itself.

As Eric Hobsbawm writes in his classic survey of the late 19th and early 20th centuries, The Age of Empire, the invention of national languages was a deliberate ideological project, part of the effort to forge national identities out of culturally heterogeneous regions. Hobsbawm explains:

Linguistic nationalism was the creation of people who wrote and read, not of people who spoke. And the ‘national languages’ in which they discovered the essential character of their nations were, more often than not, artefacts, since they had to be compiled, standardized, homogenized and modernized for contemporary and literary use, out of the jigsaw puzzle of local or regional dialects which constituted non-literary languages as actually spoken. 

Perhaps the most remarkable example was the Zionist movement’s promotion of Hebrew, “a language which no Jews had used for ordinary purposes since the days of the Babylonian captivity, if then.”

Where this linguistic engineering succeeded, it was thanks to the expansion of state education and the white-collar professions. A codified national language, used in schools, the civil service and public communications like street signs, was an ideal tool for governments to instil a measure of unity and loyalty in their diverse and fragmented populations. This in turn created incentives for the emerging middle class to prefer an official language to their own vernaculars, since it gave access to careers and social status. 

Could the EU not pursue a similar strategy with Euro-English? There could be a special department in Brussels tracking the way English is used by EU citizens on social media, and each year issuing an updated compendium on Euro-English. This emergent language, growing ever more distinctly European, could be mandated in schools, promoted through culture and in the media, and of course used for official EU business. Eventually the language would be different enough to be rebranded simply as “European.”

You’ll notice I’m being facetious now; obviously this would never work. Privileging one language over others would instantly galvanise the patriotism of EU member states, and give politicians a new terrain on which to defend national identity against Brussels. This is pretty much how things played out in multinational 19th-century states such as Austria-Hungary, where linguistic hierarchies inflamed the nationalism of minority cultures. One can already see something like this in the longstanding French resentment against the informal dominance of English on the continent.

Conversely, Euro-English wouldn’t work because, for Europe’s middle classes and elites, the English language is a gateway not to Europe but to the world. English is the language of global business and of American cultural output, and so is a prerequisite for membership of any affluent cosmopolitan milieu.

And this, I think, is the valuable insight to be gained from thought experiments like the one suggested by Modiano. Whenever we try to imagine what the path to a truly European demos might look like, we always encounter these two quite different, almost contradictory obstacles. On the one hand, the structure of the EU seems to have frozen in place the role of the nation state as the rightful locus of imagined community and symbolic attachment. At the same time, among those who identify most strongly with the European project, many are ultimately universalist in their outlook, and unlikely to warm to anything that implies a distinctively European identity. 

Gambling on technocrats

The likely appointment of Mario Draghi as Italy’s prime minister has been widely, if nervously, greeted as a necessary step. Draghi, an esteemed economist and central banker, will be the fourth unelected technocrat to fill the post in Italy in the last 30 years. As the Guardian concedes by way of welcoming Draghi’s appointment, a ready embrace of unelected leaders is “not a good look for any self-respecting democracy.” 

Italy’s resort to temporary “technical governments” reflects the fact that its fractious political system, with its multitude of parties and short-lived coalitions, is vulnerable to paralysis at moments of crisis. Such has been the price for a constitution designed to prevent the rise of another Mussolini. Ironically though, the convention of installing technocrats recalls the constitutional role of Dictator in the ancient Roman Republic: a trusted leader who, by consensus among the political class, takes charge for a limited term during emergencies.

During the 1990s, it was the crisis of the European Exchange Rate Mechanism, the vast Mani pulite corruption scandal, and Silvio Berlusconi’s first chaotic administration which formed the backdrop for the technocratic governments of Carlo Ciampi and Lamberto Dini. Now, in the midst of a pandemic and a gathering economic storm, the immediate pretext comes from the collapse of a government led by Giuseppe Conte of the Five Star Movement, amid machinations by Conte’s rivals and accusations of EU emergency funds being deployed for political patronage.

Yet despite its distinctively Italian flavour, this tradition of the technocratic dictator has a much wider European resonance. It reflects the economic and political strains of European integration. And ultimately, the Italian case merely offers a pronounced example of the precarious interplay between depoliticised technocratic governance and democracy which haunts the European Union at large.

The agendas of the Ciampi and Dini cabinets included politically sensitive reforms to state benefits and the public sector, with the purpose of rendering Italy fit for a European economy in which Germany called the tune. This pattern was repeated much more emphatically when the next technocratic prime minister, the economist Mario Monti, served from 2011 to 2013. Monti’s mission on behalf of Berlin and Brussels was to temper Italy’s sovereign debt crisis by overseeing harsh austerity measures.

The legacy of that strategy was the rise of Italian populism in the form of the Five Star Movement and, on the right, Matteo Salvini’s Lega Nord. Which brings us to another crucial piece of background for Draghi’s appointment this week. With Italian Euroscepticism making further advances during the disastrous first phase of the pandemic, it seems likely that, were an election called now, a rightwing coalition led by Salvini would take power.

For Italy’s financial and administrative class, that prospect is especially scary given how much the country’s stability now depends on support from the EU. The hope is that Draghi will calm the nerves of Italy’s northern creditors, Germany especially, and pave the way for a much-needed second instalment of the coronavirus relief fund. But while all the talk now is of spending and investment, Italy has a public debt worth 160% of GDP and rising, which is only sustainable thanks to the European Central Bank (ECB) continuing to buy its government bonds. It is surely a matter of time before further “structural reforms” are demanded of Italy.

In other words, when the political parties aren’t up to it, technical governments do the dirty work of squeezing Italy into the ever-tightening corset of the EU’s economic model. So this is not simply a pathology of Italian politics, but nor can it be characterised as an imposition. Figures like Monti and Draghi have long been invested in this arrangement: they cut their teeth during the 1990s hammering Italian finances into shape for entry to the Euro, and subsequently held important posts in EU institutions.

Indeed, the basic logic at work here, whereby tasks considered too difficult for democratic politics are handed over to the realm of technocratic expertise, has become a deeply European one. We see it most clearly in the EU’s increasing reliance on the monetary instruments of the ECB as the only acceptable tool with which to respond to economic crises. This goes back to the original political failure to achieve fiscal integration in the Eurozone, which would have allowed wealth transfers to ailing economies no longer able to negotiate debt reductions or devalue their currencies. But during the Eurozone crisis and its aftermath, politicians avoided confronting their electorates with the need to provide funds for the stricken Club Med states. Instead, they relied on the ECB to keep national governments solvent through sovereign bond purchases.

And lest we forget, it was these same bond purchases that made the name of Italy’s incoming prime minister, Mario Draghi. In 2012, when Draghi was ECB president, he appeared to almost magically calm the debt markets by announcing he would do “whatever it takes” to keep the Eurozone afloat. This statement, revealing that Draghi had been empowered to step outside the bounds of rule and precedent, is again suggestive of a kind of constitutionally-mandated technocratic dictator, but at a Europe-wide level. 

Of course to focus on monetary policy is also to highlight that these tensions between technocracy and democracy go far beyond the EU. It is certainly not just in Europe that central bankers have accrued vast power through their ability to provide back-door stimulus and keep huge debt burdens sustainable. The growing importance of central banks points back to an earlier moment of depoliticisation at the dawn of neoliberalism in the early 1980s, when control of interest rates was removed from the realm of democratic politics. More fundamentally, it points to the limitations imposed on democracy by the power of financial markets. 

Still, it is no accident that this tension has appeared in such acute form in the EU. As with Italy’s ready supply of emergency prime ministers, the EU’s dense canopy of technocratic institutions provides an irresistible way for politicians to pass the buck on issues they would otherwise have to subject to democratic conflict. This is all well and good if the technocrats succeed, but as we have seen recently with the EU’s vaccine programme, it also raises the stakes of failure. Handing difficult and sensitive matters over to unaccountable administrators means that blame and resentment will be directed against the system as a whole.

Why accusations of vaccine nationalism miss the mark

This article was first published by The Critic magazine on 2nd February 2021.

In the wake of Friday’s decision by the European Union to introduce controls on vaccine exports, there has once again been much alarm about “vaccine nationalism.” This term is meant to pour scorn on governments that prioritise their own citizens’ access to vaccines over that of other countries. It points to the danger that richer parts of the world will squabble for first dibs on limited vaccine supplies – “fighting over the cake,” as a World Health Organisation official aptly described it – while leaving poorer countries trailing far behind in their vaccination efforts.

Certainly, there’s a real danger that the EU’s export controls will end up hampering overall vaccine production by sparking a trade war over raw materials. This is somewhat ironic, given that few have been as outspoken about countries “unduly restricting access to vaccines” as the EU itself. As for global inequalities in vaccine access, make no mistake – they are shaping up to be very ugly indeed. It looks likely that poorer countries, having already faced an economic, social, and public health catastrophe, will struggle to vaccinate their most vulnerable citizens even as richer states give jabs to the majority of their populations.

Wealthy nations undoubtedly have a moral obligation to minimize the impact of these disparities. Nonetheless, wielding vaccine nationalism as a pejorative term is an unhelpful way to diagnose or even to address this problem. Given how the world is structured politically, the best way to ensure that vaccines reach poorer countries is for richer ones to vaccinate a critical mass of their own citizens as quickly as possible.

To condemn vaccine nationalism is to imply that, in the early summer of 2020 when governments began bidding for Advance Purchase Agreements with pharmaceutical companies, a more cooperative global approach would have been feasible. In reality, the political, bureaucratic and logistical structures to meet such a challenge did not exist. Some are still pointing to Covax, the consortium of institutions trying to facilitate global vaccine equality, as a path not taken. But Covax’s proposed strategy was neither realistic nor effective.

The bottom line here is that for governments around the world, whether democratic or not, legitimacy and political stability depend on protecting the welfare of their citizens – a basic principle that even critics of vaccine nationalism struggle to deny. Only slightly less important are the social unrest and geopolitical setbacks that states anticipate if they fall behind in the race to get economies back up and running.

In light of these pressures, Covax never stood a chance. Its task of forging agreement between an array of national, international and commercial players was bound to be difficult, and no state which had the industrial capacity or market access to secure its own vaccines could have afforded to wait and see if it would work. To meet Covax’s aim of vaccinating 20 per cent of the population in every country at the same speed, nations with the infrastructure to deliver vaccines would have had to wait for those that lacked it. They would have surrendered responsibility for the sensitive task of selecting and securing the best vaccines from among the multitude of candidates. (As late as November last year Covax had just nine vaccines in its putative global portfolio; it did not reach a deal with the first successful candidate, Pfizer-BioNTech, until mid-January).

But even if a more equitable approach to global vaccine distribution had been plausible, it wouldn’t necessarily have been more desirable. Watching some states surge ahead in the vaccine race is unsettling, but at least countries with the capacity to roll out vaccines are using it, and just as important, we are getting crucial information about how to organise vaccination campaigns from a range of different models. The peculiarity of the vaccine challenge means that, in the long run, having a few nations serve as laboratories will probably prove more useful to everyone than a more monolithic approach that prioritises equality above all.

The EU’s experience is instructive here. Given its fraught internal politics, it really had no choice but to adopt a collective approach for its 27 member states. To do otherwise would have left less fortunate member states open to offers from Russia and China. Still, the many obstacles and delays it has faced – ultimately driving it to impose its export controls – are illustrative of the costs imposed by coordination. Nor should we overlook the fact that its newfound urgency has come from the example of more successful strategies in Israel, the United States and United Kingdom.

Obviously, richer states should be helping Covax build up its financial and logistical resources as well as ensuring their own populations are vaccinated. Many are doing so already. What is still lacking are the vaccines themselves. Since wealthy states acting alone have been able to order in advance from multiple sources, they have gained access to an estimated 800 million surplus vaccine doses, or more than two billion when options are taken into account.

There’s no denying that if such hoarding continues in the medium-term, it will constitute an enormous moral failing. But rather than condemning governments for having favoured their own citizens in this way, we should focus on how that surplus can reach poorer parts of the world as quickly as possible.

This means, first, scaling up manufacturing to ease the supply bottlenecks which are making governments unsure of their vaccine supply. Most importantly though, it means concentrating on how nations that do have access to vaccines can most efficiently get them into people’s arms. The sooner they can see an end to the pandemic in sight, the sooner they can begin seriously diverting vaccines elsewhere. Obviously this will also require resolving the disputes sparked by the EU’s export controls, if necessary by other nations donating vaccines to the EU.

But we also need to have an urgent discussion about when exactly nations should stop prioritising their citizens. Governments should be pressured to state under what conditions they will deem their vaccine supply sufficient to focus on global redistribution. Personally, not being in a high-risk category, I would like to see a vaccine reach vulnerable people in other countries before it reaches me. Admittedly the parameters of this decision are not yet fully in view, with new strains emerging and the nature of herd immunity still unclear. But it would be a more productive problem to focus our attention on than the issue of vaccine nationalism as such.

What’s really at stake in the fascism debate

This essay was originally published by Arc magazine on January 27th 2021.

Many themes of the Trump presidency reached a crescendo on January 6th, when the now-former president’s supporters rampaged through the Capitol building. Among those themes is the controversy over whether we should label the Trump movement “fascist.”

This argument has flared up at various points since Trump launched his bid for the Republican nomination in 2015. After the Capitol attack, commentators who warned of a fascist turn in American politics have been rushed back into interview slots and op-ed columns. Doesn’t this attempt by a violent, propaganda-driven mob to overturn last November’s presidential election vindicate their claims?

If Trumpism continues after Trump, then so will this debate. But whether the fascist label is descriptively accurate has always struck me as the least rewarding part. Different people mean different things by the word, and have different aims in using it. Here’s a more interesting question: What is at stake if we choose to identify contemporary politics as fascist?

Many on the activist left branded Trump’s project fascist from the outset. This is not just because they are LARPers trying to re-enact the original anti-fascist struggles of the 1920s and 30s — even if Antifa, the most publicized radicals on the left, derive their name and flag from the communist Antifaschistische Aktion movement of early 1930s Germany. More concretely, the left’s readiness to invoke fascism reflects a longstanding, originally Marxist convention of using “fascist” to describe authoritarian and racist tendencies deemed inherent to capitalism.

From this perspective, the global shift in politics often labeled “populist” — including not just Trump, but also Brexit, the illiberal regimes of Eastern Europe, Narendra Modi’s India, and Jair Bolsonaro’s Brazil — is another upsurge of the structural forces that gave rise to fascism in the interwar period, and therefore deserves the same name.

In mainstream liberal discourse, by contrast, the debates about Trumpism and fascism have a strangely indecisive, unending quality. Journalists and social media pundits often defer to experts, so arguments devolve into bickering about who really counts as an expert and what they’ve actually said. After the Capitol attack, much of the discussion pivoted on brief comments by historians Robert Paxton and Ruth Ben-Ghiat. Paxton claimed in private correspondence that the Capitol attack “crosses the red line” beyond which the “F word” is appropriate, while on Twitter Ben-Ghiat drew a parallel with Mussolini’s 1922 March on Rome.

Meanwhile, even experts who have consistently equated Trumpism and fascism continue adding caveats and qualifications. Historian Timothy Snyder, who sounded the alarm in 2017 with his book On Tyranny, recently described Trump’s politics as “pre-fascist” and his lies about election fraud as “structurally fascist,” leaving for the future the possibility Trump’s Republican enablers could “become the fascist faction.” Philosopher Jason Stanley, who makes a version of the left’s fascism-as-persistent-feature argument, does not claim that the label is definitive so much as a necessary framing, highlighting important aspects of Trump’s politics.

The hesitancy of the fascism debate reflects the difficulty of assigning a banner to movements that don’t claim it. A broad theory of fascism unavoidably relies on the few major examples of avowedly fascist regimes — especially interwar Italy and Germany — even if, as Stanley has detailed in his book How Fascism Works, such regimes drew inspiration from the United States, and inspired Hindu nationalists in India. This creates an awkward relationship between fascism as empirical phenomenon and fascism as theoretical construct, and means there will always be historians stepping in, as Richard Evans recently did, to point out all the ways that 1920s-30s fascism was fundamentally different from the 21st century movements which are compared to it.

But there’s another reason the term “fascism” remains shrouded in perpetual controversy, one so obvious it’s rarely explored: The concept has maintained an aura of seriousness, of genuine evil, such that acknowledging its existence seems to represent a moral and political crisis. The role of fascism in mainstream discourse is like the hammer that sits in the box marked “in case of emergency break glass” — we might point to it and talk about breaking the glass one day, but actually doing so would signify a kind of rupture in the fabric of politics, opening up a world where extreme measures would surely be justified.

We see this in the impulse to ask “do we really want to call everyone who voted for him a fascist?”, “aren’t we being alarmist?” and “if we use that word now, what will we use when things get much worse?” Stanley has acknowledged this trepidation, suggesting it shows we’ve become accustomed to things that should be considered a crisis. I would argue otherwise. It reflects the crucial place of fascism in the grand narrative of liberal democracy, especially after the Cold War — a narrative that relies on the idea of fascism as a historical singularity.

This first occurred to me when I visited Holocaust memorials in Berlin, and realized, to my surprise, that they had all been erected quite recently. The first were the Jewish Museum and the Memorial to the Murdered Jews of Europe, both disturbingly beautiful, evocative structures, conceived during the 1990s, after the collapse of communist East Germany, and opened between 2000 and 2005. Over the next decade, these were followed by smaller memorials to various other groups the Nazis persecuted: homosexuals, the Sinti and Roma, the disabled.

There were obvious reasons for these monuments to appear at this time and place. Post-reunification, Germany was reflecting on its national identity, and Berlin had been the capital of the Third Reich. But they still strike me as an excellent representation of liberal democracies’ need to identify memories and values that bind them together, especially when they could no longer contrast themselves to the USSR.

Vanquishing fascist power in the Second World War was and remains a foundational moment. Even as they recede into a distant, mythic past, the horrors overcome at that moment still grip the popular imagination. We saw this during the Brexit debate, when the most emotionally appealing argument for European integration referred back to its original, post-WWII purpose: constraining nationalism. And as the proliferation of memorials in Berlin suggests, fascism can retroactively be defined as the ultimate antithesis to what has, from the 1960s onwards, become liberalism’s main moral purpose: protection and empowerment of traditionally marginalized groups in society.

The United States plays a huge part in maintaining this narrative throughout the West and the English-speaking world, producing an endless stream of books, movies, and documentaries about the Second World War. The American public’s appetite for it seems boundless. That war is infused with a sense of heroism and tragedy unlike any other. But all of this stems from the unique certainty regarding the evil nature of 20th century European fascism.

This is why those who want to identify fascism in the present will always encounter skepticism and reluctance. Fascism is a moral singularity, a point of convergence in otherwise divided societies, because it is a historical singularity, the fixed source from which our history flows. To remove fascism from this foundational position – and worse, to implicate us in tolerating it – is morally disorientating. It raises the suspicion that, while claiming to separate fascism from the European historical example, those who invoke the term are actually trading off the emotional impact of that very example.

I don’t think commentators like Snyder and Stanley have such cynical intentions, and nor do I believe it’s a writer’s job to respect the version of history held dear by the public. Nonetheless, those who try to be both theorists and passionate opponents of fascism must recognize that they are walking a tightrope.

By making fascism a broader, more abstract signifier, and thereby bringing the term into the grey areas of semantic and historiographical bickering, they risk diminishing the aura of singular evil that surrounds fascism in the popular consciousness. But this is an aura which, surely, opponents of fascism should want to maintain.

After the Capitol, the battle for the dream machine

Sovereign is he who decides on the exception. In a statement on Wednesday afternoon, Facebook’s VP of integrity Guy Rosen declared: “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video.” This came as Trump’s supporters, like a horde of pantomime barbarians, were carrying out their surreal sacking of the Capitol in Washington, and the US president attempted to publish a video which, in Rosen’s words, “contributes to rather than diminishes the risk of ongoing violence.” In the video, Trump had told the mob to go home, but continued to insist that the election of November 2020 had been fraudulent.

The following day Mark Zuckerberg announced that the sitting president would be barred from Facebook and Instagram indefinitely, and at least “until the peaceful transition of power is complete.” Zuckerberg reflected that “we have allowed President Trump to use our platform consistent with our own rules,” so as to give the public “the broadest possible access to political speech,” but that “the current context is now fundamentally different.”

Yesterday Trump’s main communication platform, Twitter, went a step further and suspended the US president permanently (it had initially suspended Trump’s account for 12 hours during the Capitol riot). Giving its rationale for the decision, Twitter also insisted its policy was to “enable the public to hear from elected officials” on the basis that “the people have a right to hold power to account in the open.” It stated, however, that “In the context of horrific events this week,” it had decided “recent Tweets from the @realDonaldTrump account and the context around them – specifically how they are being received and interpreted” (my emphasis) amounted to a violation of its rules against incitement to violence.

These emergency measures by the big tech companies were the most significant development in the United States this week, not the attack on the Capitol itself. In the language used to justify them, we hear the unmistakable echoes of a constitutional sovereign claiming its authority to decide how the rules should be applied – for between the rules and their application there is always judgment and discretion – and more importantly, to decide that a crisis demands an exceptional interpretation of the rules. With that assertion of authority, Silicon Valley has reminded us – even if it would have preferred not to – where ultimate power lies in a new era of American politics. It does not lie in the ability to raise a movement of brainwashed followers, but in the ability to decide who is allowed the means to do so.

The absurd assault on the Capitol was an event perfectly calibrated to demonstrate this configuration of power. First, the seriousness of the event – a violent attack against an elected government, however spontaneous – forced the social media companies to reveal their authority by taking decisive action. In doing so, of course, they also showed the limits of their authority (no sovereignty is absolute, after all). The tech giants are eager to avoid being implicated in a situation that would justify greater regulation, or perhaps even dismemberment by a Democratic government. Hence their increasing willingness over the last six months, as a Democratic victory in the November elections loomed, to actively regulate the circulation of pro-Trump propaganda with misinformation warnings, content restrictions and occasional bans on outlets such as the New York Post, following its Hunter Biden splash on the eve of the election.

It should be remembered that the motivations of companies like Facebook and Twitter are primarily commercial rather than political. They must keep their monopolistic hold on the public sphere intact to safeguard their data harvesting and advertising mechanisms. This means they need to show lawmakers that they will wield authority over their digital fiefdoms in an appropriate fashion.

Trump’s removal from these platforms was therefore overdetermined, especially after Wednesday’s debacle in Washington. Yes, the tech companies want to signal their political allegiance to the Democrats, but they also need to show that their virtual domains will not destabilize the United States to the extent that it is no longer an inviting place to do business – for that too would end in greater regulation. They were surely looking for an excuse to get rid of Trump, but from their perspective, the Capitol invasion merited action by itself. It was never going to lead to the overturning of November’s election, still less the toppling of the regime; but it could hardly fail to impress America’s allies, not to mention the global financial elite, as an obvious watershed in the disintegration of the country’s political system.

But it was also the unseriousness of Wednesday’s events that revealed why control of the media apparatus is so important. A popular take on the Capitol invasion itself – and, given the many surreal images of the buffoonish rioters, a persuasive one – is that it was the ultimate demonstration of the United States’ descent into a politics of fantasy; what the theorist Bruno Maçães calls “Dreampolitik.” Submerged in the alternative realities of partisan media and infused with the spirit of Hollywood, Americans have come to treat political action as a kind of role-play, a stage where the iconic motifs of history are unwittingly reenacted as parody. Who could be surprised that an era when a significant part of America has convinced itself that it is fighting fascism, and another that it is ruled by a conspiracy of pedophiles, has ended with men in horned helmets, bird-watching camouflage and MAGA merchandise storming the seat of government with chants of “U-S-A”?

At the very least, it is clear that Trump’s success as an insurgent owes a great deal to his embrace of followers whose view of politics is heavily colored by conspiracy theories, if not downright deranged. The Capitol attack was the most remarkable evidence to date of how such fantasy politics can be leveraged for projects with profound “real world” implications. It was led, after all, by members of the QAnon conspiracy theory movement, and motivated by elaborate myths of a stolen election. Barack Obama was quite right to call it the product of a “fantasy narrative [which] has spiraled further and further from reality… [building] upon years of sown resentments.”

But while there is justifiably much fascination with this new form of political power, it must be remembered that such fantasy narratives are a superstructure. They can only operate through the available technological channels – that is, through the media, all of which is today centred around the major social media platforms. The triumph of Dreampolitik at the Capitol therefore only emphasises the significance of Facebook and Twitter’s decisive action against Trump. For whatever power is made available through the postmodern tools of partisan narrative and alternative reality, an even greater power necessarily belongs to those who can grant or deny access to these tools.

And this week’s events are, of course, just the beginning. The motley insurrection of the Trumpists will serve as a justification, if one were needed, for an increasingly strict regime of surveillance and censorship by major social media platforms, answering to their investors and to the political class in Washington. Already the incoming president Joe Biden has stated his intention to introduce new legislation against “domestic terrorism,” which will no doubt involve the tech giants maintaining their commercial dominance in return for carrying out the required surveillance and reporting of those deemed subversive. Meanwhile, Google and Apple yesterday issued an ultimatum to the platform Parler, which offers the same basic model as Twitter but with laxer content rules, threatening to banish it from their app stores if it did not police conversation more strictly.

But however disturbing the implications of this crackdown, we should welcome the clarity we got this week. For too long, the tech giants have been able to pose as neutral arbiters of discussion, cloaking their authority in corporate euphemisms about public interest. Consequently, they have been able to set the terms of communication over much of the world according to their own interests and political calculations. Whether or not they were right to banish Trump, the key fact is that it was they who had the authority to do so, for their own reasons. The increasing regulation of social media – which was always inevitable, in one form or another, given its incendiary potential – will now proceed according to the same logic. Hopefully the dramatic nature of their decisions this week will make us question if this is really a tolerable situation.

Poland and Hungary are exposing the EU’s flaws

The European Union veered into another crisis on Monday, as the governments of Hungary and Poland announced they would veto the bloc’s next seven-year budget. This comes after the European Parliament and Council tried to introduce “rule of law” measures for punishing member states that breach democratic standards — measures that Budapest and Warsaw, the obvious target of such sanctions, have declared unacceptable.

As I wrote last week, it is unlikely that the disciplinary mechanism would actually have posed a major threat to either the Fidesz regime in Hungary or the Law and Justice one in Poland. These stubborn antagonists of European liberalism have long threatened to block the entire budget if it came with meaningful conditions attached. That they have used their veto anyway suggests the Hungarian and Polish governments — or at least the hardline factions within them — feel they can extract further concessions.

There’s likely to be a tense video conference on Thursday as EU leaders attempt to salvage the budget. It’s tempting to assume a compromise will be found that allows everyone to save face (that is the European way), but the ongoing impasse has angered both sides. At least one commentator has stated that further concessions to Hungary and Poland would amount to “appeasement of dictators.”

In fact, compromises with illiberal forces are far from unprecedented in the history of modern democracy. The EU’s constitutional framework, which limits the power of its federal institutions, is what allows actors like Orbán to misbehave — something the Hungarian Prime Minister has exploited to great effect.

And yet, it doesn’t help that the constitutional procedures in question — the treaties of the European Union — were so poorly designed in the first place. Allowing single states an effective veto over key policy areas is a recipe for dysfunction, as the EU already found out in September when Cyprus blocked sanctions against Belarus.

More to the point, the current deadlock with Hungary and Poland has come about because the existing Article 7 mechanism for disciplining member states is virtually unenforceable (both nations have been subject to Article 7 probes for several years, to no effect).

But this practical shortcoming also points to an ideological one. As European politicians have admitted, the failure to design a workable disciplinary mechanism shows the project’s architects did not take seriously the possibility that, once countries had made the democratic reforms necessary to gain access to the EU, they might, at a later date, move back in the opposite direction. Theirs was a naïve faith in the onwards march of liberal democracy.

In this sense, the crisis now surrounding the EU budget is another product of that ill-fated optimism which gripped western elites around the turn of the 21st century. Like the governing class in the United States who felt sure China would reform itself once invited into the comity of nations, the founders of the European Union had too rosy a view of liberalism’s future — and their successors are paying the price.

Europe’s deplorables have outwitted Brussels

This essay was originally published by Unherd on November 10th 2020.

Throughout the autumn, the European Union has been engaged in a standoff with its two most antagonistic members, Hungary and Poland. At stake was whether the EU would finally take meaningful action against these pioneers of “illiberal democracy”, to use the infamous phrase of Hungarian Prime Minister Viktor Orbán. As of last week — and despite appearances to the contrary — it seems the Hungarian and Polish regimes have postponed the reckoning once more.

Last week, representatives of the European Parliament triumphantly announced a new disciplinary mechanism which, they claimed, would enable Brussels to withhold funds from states that violate liberal democratic standards. According to MEP Petri Sarvamaa, it meant the end of “a painful phase [in] the recent history of the European Union”, in which “the basic values of democracy” had been “threatened and undermined”.

No names were named, of course, but they did not need to be. Tensions between the EU and the recalcitrant regimes on its eastern periphery, Hungary under Orbán’s Fidesz and Poland under the Law and Justice Party, have been mounting for years. Those governments’ erosion of judicial independence and media freedom, as well as concerns over corruption, education, and minority rights, have resulted in a series of formal investigations and legal actions. And that is not to mention the constant rhetorical fusillades between EU officials and Budapest and Warsaw.

The new disciplinary mechanism is being presented as the means to finally bring Hungary and Poland to heel, but it is no such thing. Though not exactly toothless, it is unlikely to pose a serious threat to the illiberal pretenders in the east. Breaches of “rule of law” standards will only be sanctioned if they affect EU funds — so the measures are effectively limited to budget oversight. Moreover, enforcing the sanctions will require a weighted majority of member states in the European Council, giving Hungary or Poland ample room to assemble a blocking coalition.

In fact, what we have here is another sticking plaster so characteristic of the complex and unwieldy structures of European supranational democracy. The political dynamics of this system, heavily reliant on horse-trading and compromise, have allowed Hungary and Poland to outmanoeuvre their opponents.

The real purpose of the disciplinary measures is to ensure the timely passage of the next EU budget, and in particular, a €750 billion coronavirus relief fund. That package will, for the first time, see member states issuing collective debt backed by their taxpayers, and therefore has totemic significance for the future of the Union. It is a real indication that fiscal integration might be possible in the EU — a step long regarded as crucial to the survival of Europe’s federal ambitions, and one that shows its ability to respond effectively to a major crisis.

But this achievement has almost been derailed by a showdown with Hungary and Poland. Liberal northern states such as Finland, Sweden and the Netherlands, together with the European Parliament, insisted that financial support should be conditional on upholding EU values and transparency standards. But since the relief fund requires unanimous approval, Hungary or Poland can simply veto the whole initiative, which is exactly what they have been threatening to do.

In other words, the EU landed itself with a choice between upholding its liberal commitments and securing its future as a viable political and economic project. The relatively weak disciplinary mechanism shows that European leaders are opting for the latter, as they inevitably would. It is a compromise that allows the defenders of democratic values to save face, while essentially letting Hungary and Poland off the hook. (Of course this doesn’t rule out the possibility that the Hungarian and Polish governments will continue making a fuss anyway.)

Liberals who place their hopes in the European project may despair at this, but these dilemmas are part and parcel of binding different regions and cultures in a democratic system. Such undertakings need strict constitutional procedures to hold them together, but those same procedures create opportunities to game the system, especially as demands in one area can be tied to cooperation in another.

As he announced the new rule of law agreement, Sarvamaa pointed to Donald Trump’s threat to win the presidential election via the Supreme Court as evidence of the need to uphold democratic standards. In truth, what is happening in Europe bears a closer resemblance to America in the 1930s, when F.D. Roosevelt was forced to make concessions to the Southern states to deliver his New Deal agenda.

That too was a high-stakes attempt at federal consolidation and economic repair, with the Great Depression at its height and democracy floundering around the world. As the political historian Ira Katznelson has noted, Roosevelt only succeeded by making “necessary but often costly illiberal alliances” — in particular, alliances with Southern Democratic legislators who held an effective veto in Congress. The result was that New Deal programs either avoided or actively upheld white supremacy in the Jim Crow South. (Key welfare programs, for instance, were designed to exclude some two-thirds of African American employees in the Southern states).

According to civil rights campaigner Walter White, Roosevelt himself explained his silence on a 1934 bill to combat the lynching of African Americans as follows: “I’ve got to get legislation passed by Congress to save America… If I come out for the anti-lynching bill, they [the Southern Democrats] will block every bill I ask Congress to pass to keep America from collapsing. I just can’t take that risk.”

This is not to suggest any moral equivalence between Europe’s “illiberal democracies” and the Deep South of the 1930s. But the Hungarian and Polish governments do resemble the experienced Southern politicians of the New Deal era in their ability to manoeuvre within a federal framework, achieving an autonomy that belies their economic dependency. They have learned to play by the letter of the rules as well as to subvert them.

Orbán, for instance, has frequently insisted that his critics make a formal legal case against him, whereupon he has managed to reduce sanctions to mere technicalities. He has skilfully leveraged the arithmetic of the European Parliament to keep Fidesz within the orbit of the mainstream European People’s Party group. In September, the Hungarian and Polish governments even announced plans to establish their own institute of comparative legal studies, aiming to expose the EU’s “double standards.”

And now, with their votes required to pass the crucial relief fund, the regimes in Budapest and Warsaw are taking advantage of exceptionally high stakes much as their Southern analogues in the 1930s did. They have, in recent months, become increasingly defiant in their rejection of European liberalism. In September, Orbán published a searing essay in which he hailed a growing “rebellion against liberal intellectual oppression” in the western world. The recent anti-abortion ruling by the Polish high court is likewise a sign of that state’s determination to uphold Catholic values and a robust national identity.

Looking forward, however, it seems clear this situation cannot continue forever. Much has been made of Joe Biden’s hostility to the Hungarian and Polish regimes, and with his election victory, we may see the US attaching its own conditions to investment in Eastern Europe. But Biden cannot question the EU’s standards too much, since he has made the latter out to be America’s key liberal partner. The real issue is that if richer EU states are really going to accept the financial burdens of further integration, they will not tolerate deviant nations wielding outsized influence on key policy areas.

Of course such reforms would require an overhaul of the voting system, which means treaty change. This raises a potential irony: could the intransigence of Hungary and Poland ultimately spur on Europe’s next big constitutional step — one that will see their leverage taken away? Maybe. For the time being, the EU is unlikely to rein in the illiberal experiments within its borders.

Biden versus Beijing

The Last of the Libertarians

This book review was originally published by Arc Digital on August 31st 2020.

As the world reels from the chaos of COVID-19, it is banking on the power of innovation. We need a vaccine, and before even that, we need new technologies and practices to help us protect the vulnerable, salvage our pulverized economies, and go on with our lives. If we manage to weather this storm, it will be because our institutions prove capable of converting human ingenuity into practical, scalable fixes.

And yet, even if we did not realize it, this was already the position we found ourselves in prior to the pandemic. From global warming to food and energy security to aging populations, the challenges faced by humanity in the 21st century will require new ways of doing things, and new tools to do them with.

So how can our societies foster such innovation? What are the institutions, or more broadly the economic and political conditions, from which new solutions can emerge? Some would argue we need state-funded initiatives to direct our best minds towards specific goals, like the 1940s Manhattan Project that cracked the puzzle of nuclear technology. Others would have us place our faith in the miracles of the free market, with its incentives for creativity, efficiency, and experimentation.

Matt Ridley, the British businessman, author, and science journalist, is firmly in the latter camp. His recent book, How Innovation Works, is a work of two halves. On the one hand it is an entertaining, informative, and deftly written account of the innovations which have shaped the modern world, delivering vast improvements in living standards and opportunity along the way. On the other hand, it is the grumpy expostulation of a beleaguered libertarian, whose reflexive hostility to government makes for a vague and contradictory theory of innovation in general.

Innovation, we should clarify, does not simply mean inventing new things, nor is it synonymous with scientific or technological progress. There are plenty of inventions that do not become innovations — or at least not for some time — because we have neither the means nor the demand to develop them further. Thus, the key concepts behind the internal combustion engine and general-purpose computer long preceded their fruition. Likewise, there are plenty of important innovations which are neither scientific nor technological — double-entry bookkeeping, for instance, or the U-bend in toilet plumbing — and plenty of scientific or technological advances which have little impact beyond the laboratory or drawing board.

Innovation, as Ridley explains, is the process by which new products, practices, and ideas catch on, so that they are widely adopted within an industry or society at large. This, he rightly emphasizes, is rarely down to a brilliant individual or blinding moment of insight. It is almost never the result of an immaculate process of design. It is, rather, “a collective, incremental, and messy network phenomenon.”

Many innovations make use of old, failed ideas whose time has come at last. At the moment of realization, we often find multiple innovators racing to be first over the line — as was the case with the steam engine, light bulb, and telegraph. Sometimes successful innovation hinges on a moment of luck, like the penicillin spore which drifted into Alexander Fleming’s petri dish while he was away on holiday. And sometimes a revolutionary innovation, such as the search engine, is strangely anticipated by no one, including its innovators, almost up until the moment it is born.

But in virtually every instance, the emergence of an innovation requires numerous people with different talents, often far apart in space and time. As Ridley describes the archetypal case: “One person may make a technological breakthrough, another work out how to manufacture it, and a third how to make it cheap enough to catch on. All are part of the innovation process and none of them knows how to achieve the whole innovation.”

These observations certainly lend some credence to Ridley’s arguments that innovation is best served by a dynamic, competitive market economy responding to the choices of consumers. After all, we are not very good at guessing from which direction the solution to a problem will come — we often do not even know there was a problem until a solution comes along — and so it makes sense to encourage a multitude of private actors to tinker, experiment, and take risks in the hope of discovering something that catches on.

Moreover, Ridley’s griping about misguided government regulation — best illustrated by Europe’s almost superstitious aversion to genetically modified crops — and about the stultifying influence of monopolistic, subsidy-farming corporations, is not without merit.

But not so fast. Is it not true that many innovations in Ridley’s book drew, at some point in their complex gestation, from state-funded research? This was the case with jet engines, nuclear energy, and computing (not to mention GPS, various products using plastic polymers, and touch-screen displays). Ridley’s habit of shrugging off such contributions with counterfactuals — had not the state done it, someone else would have — misses the point, because the state has basic interests that inevitably bring it into the innovation business.

It has always been the case that certain technologies, however they emerge, will continue their development in a limbo between public and private sectors, since they are important to economic productivity, military capability, or energy security. So it is today with the numerous innovative technologies caught up in the rivalry between the United States and China, including 5G, artificial intelligence, biotechnology, semiconductors, quantum computing, and Ridley’s beloved fracking for shale gas.

As for regulation, the idea that every innovation which succeeds in a market context is in humanity’s best interests is clearly absurd. One thinks of such profitable 19th-century innovations by Western businessmen as exporting Indian opium to the Far East. Ridley tries to forestall such objections with the claim that “To contribute to human welfare … an innovation must meet two tests: it must be useful to individuals, and it must save time, energy, or money in the accomplishment of some task.” Yet there are plenty of innovations which meet this standard and are still destructive. Consider the opium-like qualities of social media, or the subprime mortgage-backed securities which triggered the financial crisis of 2007–8 (an example Ridley ought to know about, seeing as he was chairman of Britain’s ill-fated Northern Rock bank at the time).

Ridley’s weakness in these matters is amplified by his conceptual framework, a dubious fusion of evolutionary theory and dogmatic libertarianism. Fundamentally, he holds that innovation is an extension of evolution by natural selection, “a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.” (Ridley even has a section on “The ultimate innovation: life itself.”) That same cosmic process, he claims, is embodied in the spontaneous order of the free market, which, through trade and specialization, allows useful innovations to emerge and spread.

This explains why How Innovation Works contains no suggestion about how we should weigh the risks and benefits of different kinds of innovation. Insofar as Ridley makes an ethical case at all, it amounts to a giant exercise in naturalistic fallacy. Though he occasionally notes innovation can be destructive, he more often moves seamlessly from claiming that it is an “inexorable” natural process, something which simply happens, to hailing it as “the child of freedom and the parent of prosperity,” a golden goose in perpetual danger of suffocation.

But the most savage contradictions in Ridley’s theory appear, once again, in his pronouncements on the role of the state. He insists that by definition, government cannot be central to innovation, because it has predetermined goals whereas evolutionary processes do not. “Trying to pretend that government is the main actor in this process,” he says, “is an essentially creationist approach to an essentially evolutionary phenomenon.”

Never mind that many of Ridley’s own examples involve innovators aiming for predetermined goals, or that in his (suspiciously brief) section on the Chinese innovation boom, he concedes in passing that shrewd state investment played a key role. The more pressing question is, what about those crucial innovations for which there is no market demand, and which therefore do not evolve?

Astonishingly, in his afterword on the challenges posed by COVID-19, Ridley has the gall to admonish governments for not taking the lead in innovation. “Vaccine development,” he writes, has been “insufficiently encouraged by governments and the World Health Organisation,” and “ignored, too, by the private sector because new vaccines are not profitable things to make.” He goes on: “Politicians should go further and rethink their incentives for innovation more generally so that we are never again caught out with too little innovation having happened in a crucial field of human endeavour.”

In these lines, we should read not just the collapse of Ridley’s central thesis, but more broadly, the demise of a certain naïve market libertarianism — a worldview that flourished during the 1980s and ’90s, and which, like most dominant intellectual paradigms, came to see its beliefs as reflecting the very order of nature itself. For what we should have learned in 2007–8, and what we have certainly learned this year, is that for all its undoubted wonders, the market always tacitly relies on the state to step in should the need arise.

This does not mean, of course, that the market has no role to play in developing the key innovations of the 21st century. I believe it has a crucial role, for it remains unmatched in its ability to harness the latent power of widely dispersed ideas and skills. But if the market’s potential is not to be snuffed out in a post-COVID era of corporatism and monopoly, then it will need more credible defenders than Ridley. It will need defenders who are aware of its limitations and of its interdependence with the state.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd, 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today”? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, focusing instead on helping white Americans understand their own identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still places a disproportionate emphasis on those it calls “white”, expecting them to adopt a noble burden of self-transformation while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering re-education. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.