What’s really at stake in the fascism debate

This essay was originally published by Arc magazine on January 27th 2021.

Many themes of the Trump presidency reached a crescendo on January 6th, when the now-former president’s supporters rampaged through the Capitol building. Among those themes is the controversy over whether we should label the Trump movement “fascist.”

This argument has flared up at various points since Trump won the Republican nomination in 2016. After the Capitol attack, commentators who warned of a fascist turn in American politics have been rushed back into interview slots and op-ed columns. Doesn’t this attempt by a violent, propaganda-driven mob to overturn last November’s presidential election vindicate their claims?

If Trumpism continues after Trump, then so will this debate. But whether the fascist label is descriptively accurate has always struck me as the least rewarding part. Different people mean different things by the word, and have different aims in using it. Here’s a more interesting question: What is at stake if we choose to identify contemporary politics as fascist?

Many on the activist left branded Trump’s project fascist from the outset. This is not just because they are LARPers trying to re-enact the original anti-fascist struggles of the 1920s and 30s — even if Antifa, the most publicized radicals on the left, derive their name and flag from the communist Antifaschistische Aktion movement of early 1930s Germany. More concretely, the left’s readiness to invoke fascism reflects a longstanding, originally Marxist convention of using “fascist” to describe authoritarian and racist tendencies deemed inherent to capitalism.

From this perspective, the global shift in politics often labeled “populist” — including not just Trump, but also Brexit, the illiberal regimes of Eastern Europe, Narendra Modi’s India, and Jair Bolsonaro’s Brazil — is another upsurge of the structural forces that gave rise to fascism in the interwar period, and therefore deserves the same name.

In mainstream liberal discourse, by contrast, the debates about Trumpism and fascism have a strangely indecisive, unending quality. Journalists and social media pundits often defer to experts, so arguments devolve into bickering about who really counts as an expert and what they’ve actually said. After the Capitol attack, much of the discussion pivoted on brief comments by historians Robert Paxton and Ruth Ben-Ghiat. Paxton claimed in private correspondence that the Capitol attack “crosses the red line” beyond which the “F word” is appropriate, while on Twitter Ben-Ghiat drew a parallel with Mussolini’s 1922 March on Rome.

Meanwhile, even experts who have consistently equated Trumpism and fascism continue adding caveats and qualifications. Historian Timothy Snyder, who sounded the alarm in 2017 with his book On Tyranny, recently described Trump’s politics as “pre-fascist” and his lies about election fraud as “structurally fascist,” leaving for the future the possibility that Trump’s Republican enablers could “become the fascist faction.” Philosopher Jason Stanley, who makes a version of the left’s fascism-as-persistent-feature argument, does not claim the label is definitive so much as a necessary framing, one that highlights important aspects of Trump’s politics.

The hesitancy of the fascism debate reflects the difficulty of assigning a banner to movements that don’t claim it. A broad theory of fascism unavoidably relies on the few major examples of avowedly fascist regimes — especially interwar Italy and Germany — even if, as Stanley has detailed in his book How Fascism Works, such regimes drew inspiration from the United States, and inspired Hindu nationalists in India. This creates an awkward relationship between fascism as empirical phenomenon and fascism as theoretical construct, and means there will always be historians stepping in, as Richard Evans recently did, to point out all the ways that 1920s-30s fascism was fundamentally different from the 21st century movements which are compared to it.

But there’s another reason the term “fascism” remains shrouded in perpetual controversy, one so obvious it’s rarely explored: The concept has maintained an aura of seriousness, of genuine evil, such that acknowledging its existence seems to represent a moral and political crisis. The role of fascism in mainstream discourse is like the hammer that sits in the box marked “in case of emergency break glass” — we might point to it and talk about breaking the glass one day, but actually doing so would signify a kind of rupture in the fabric of politics, opening up a world where extreme measures would surely be justified.

We see this in the impulse to ask: “Do we really want to call everyone who voted for Trump a fascist?” “Aren’t we being alarmist?” “If we use that word now, what will we use when things get much worse?” Stanley has acknowledged this trepidation, suggesting it shows we’ve become accustomed to things that should be considered a crisis. I would argue otherwise. It reflects the crucial place of fascism in the grand narrative of liberal democracy, especially after the Cold War — a narrative that relies on the idea of fascism as a historical singularity.

This first occurred to me when I visited Holocaust memorials in Berlin, and realized, to my surprise, that they had all been erected quite recently. The first were the Jewish Museum and the Memorial to the Murdered Jews of Europe, both disturbingly beautiful, evocative structures, conceived during the 1990s, after the collapse of communist East Germany, and opened between 2000 and 2005. Over the next decade, these were followed by smaller memorials to various other groups the Nazis persecuted: homosexuals, the Sinti and Roma, the disabled.

There were obvious reasons for these monuments to appear at this time and place. Post-reunification, Germany was reflecting on its national identity, and Berlin had been the capital of the Third Reich. But they still strike me as an excellent representation of liberal democracies’ need to identify memories and values that bind them together, especially when they could no longer contrast themselves to the USSR.

Vanquishing fascist power in the Second World War was and remains a foundational moment. Even as they recede into a distant, mythic past, the horrors overcome at that moment still grip the popular imagination. We saw this during the Brexit debate, when the most emotionally appealing argument for European integration referred back to its original, post-WWII purpose: constraining nationalism. And as the proliferation of memorials in Berlin suggests, fascism can retroactively be defined as the ultimate antithesis to what has, from the 1960s onwards, become liberalism’s main moral purpose: protection and empowerment of traditionally marginalized groups in society.

The United States plays a huge part in maintaining this narrative throughout the West and the English-speaking world, producing an endless stream of books, movies, and documentaries about the Second World War. The American public’s appetite for it seems boundless. That war is infused with a sense of heroism and tragedy unlike any other. But all of this stems from the unique certainty regarding the evil nature of 20th century European fascism.

This is why those who want to identify fascism in the present will always encounter skepticism and reluctance. Fascism is a moral singularity, a point of convergence in otherwise divided societies, because it is a historical singularity, the fixed source from which our history flows. To remove fascism from this foundational position — and worse, to implicate us in tolerating it — is morally disorientating. It raises the suspicion that, while claiming to separate fascism from the European historical example, those who invoke the term are actually trading on the emotional impact of that very example.

I don’t think commentators like Snyder and Stanley have such cynical intentions, and nor do I believe it’s a writer’s job to respect the version of history held dear by the public. Nonetheless, those who try to be both theorists and passionate opponents of fascism must recognize that they are walking a tightrope.

By making fascism a broader, more abstract signifier, and thereby bringing the term into the grey areas of semantic and historiographical bickering, they risk diminishing the aura of singular evil that surrounds fascism in the popular consciousness. But this is an aura which, surely, opponents of fascism should want to maintain.

After the Capitol, the battle for the dream machine

Sovereign is he who decides on the exception. In a statement on Wednesday afternoon, Facebook’s VP of integrity Guy Rosen declared: “This is an emergency situation and we are taking appropriate emergency measures, including removing President Trump’s video.” This came as Trump’s supporters, like a horde of pantomime barbarians, were carrying out their surreal sacking of the Washington Capitol, and the US president attempted to publish a video which, in Rosen’s words, “contributes to rather than diminishes the risk of ongoing violence.” In the video, Trump had told the mob to go home, but continued to insist that the election of November 2020 had been fraudulent.

The following day Mark Zuckerberg announced that the sitting president would be barred from Facebook and Instagram indefinitely, and at least “until the peaceful transition of power is complete.” Zuckerberg reflected that “we have allowed President Trump to use our platform consistent with our own rules,” so as to give the public “the broadest possible access to political speech,” but that “the current context is now fundamentally different.”

Yesterday Trump’s main communication platform, Twitter, went a step further and suspended the US president permanently (it had initially suspended Trump’s account for 12 hours during the Capitol riot). Giving its rationale for the decision, Twitter also insisted its policy was to “enable the public to hear from elected officials” on the basis that “the people have a right to hold power to account in the open.” It stated, however, that “In the context of horrific events this week,” it had decided “recent Tweets from the @realDonaldTrump account and the context around them – specifically how they are being received and interpreted” (my emphasis) amounted to a violation of its rules against incitement to violence.

These emergency measures by the big tech companies were the most significant development in the United States this week, not the attack on the Capitol itself. In the language used to justify them, we hear the unmistakable echoes of a constitutional sovereign claiming its authority to decide how the rules should be applied – for between the rules and their application there is always judgment and discretion – and more importantly, to decide that a crisis demands an exceptional interpretation of the rules. With that assertion of authority, Silicon Valley has reminded us – even if it would have preferred not to – where ultimate power lies in a new era of American politics. It does not lie in the ability to raise a movement of brainwashed followers, but in the ability to decide who is allowed the means to do so.

The absurd assault on the Capitol was an event perfectly calibrated to demonstrate this configuration of power. First, the seriousness of the event – a violent attack against an elected government, however spontaneous – forced the social media companies to reveal their authority by taking decisive action. In doing so, of course, they also showed the limits of their authority (no sovereignty is absolute, after all). The tech giants are eager to avoid being implicated in a situation that would justify greater regulation, or perhaps even dismemberment by a Democrat government. Hence their increasing willingness over the last six months, as a Democratic victory in the November elections loomed, to actively regulate the circulation of pro-Trump propaganda with misinformation warnings, content restrictions and occasional bans on outlets such as the New York Post, following its Hunter Biden splash on the eve of the election.

It should be remembered that the motivations of companies like Facebook and Twitter are primarily commercial rather than political. They must keep their monopolistic hold on the public sphere intact to safeguard their data harvesting and advertising mechanisms. This means they need to show lawmakers that they will wield authority over their digital fiefdoms in an appropriate fashion.

Trump’s removal from these platforms was therefore overdetermined, especially after Wednesday’s debacle in Washington. Yes, the tech companies want to signal their political allegiance to the Democrats, but they also need to show that their virtual domains will not destabilize the United States to the extent that it is no longer an inviting place to do business – for that too would end in greater regulation. They were surely looking for an excuse to get rid of Trump, but from their perspective, the Capitol invasion merited action by itself. It was never going to lead to the overturning of November’s election, still less the toppling of the regime; but it could hardly fail to impress America’s allies, not to mention the global financial elite, as an obvious watershed in the disintegration of the country’s political system.

But it was also the unseriousness of Wednesday’s events that revealed why control of the media apparatus is so important. A popular take on the Capitol invasion itself – and, given the many surreal images of the buffoonish rioters, a persuasive one – is that it was the ultimate demonstration of the United States’ descent into a politics of fantasy; what the theorist Bruno Maçães calls “Dreampolitik.” Submerged in the alternative realities of partisan media and infused with the spirit of Hollywood, Americans have come to treat political action as a kind of role-play, a stage where the iconic motifs of history are unwittingly reenacted as parody. Who could be surprised that an era when a significant part of America has convinced itself that it is fighting fascism, and another that it is ruled by a conspiracy of pedophiles, has ended with men in horned helmets, bird-watching camouflage and MAGA merchandise storming the seat of government with chants of “U-S-A”?

At the very least, it is clear that Trump’s success as an insurgent owes a great deal to his embrace of followers whose view of politics is heavily colored by conspiracy theories, if not downright deranged. The Capitol attack was the most remarkable evidence to date of how such fantasy politics can be leveraged for projects with profound “real world” implications. It was led, after all, by members of the QAnon conspiracy theory movement, and motivated by elaborate myths of a stolen election. Barack Obama was quite right to call it the product of a “fantasy narrative [which] has spiraled further and further from reality… [building] upon years of sown resentments.”

But while there is justifiably much fascination with this new form of political power, it must be remembered that such fantasy narratives are a superstructure. They can only operate through the available technological channels – that is, through the media, all of which is today centred around the major social media platforms. The triumph of Dreampolitik at the Capitol therefore only emphasises the significance of Facebook and Twitter’s decisive action against Trump. For whatever power is made available through the postmodern tools of partisan narrative and alternative reality, an even greater power necessarily belongs to those who can grant or deny access to these tools.

And this week’s events are, of course, just the beginning. The motley insurrection of the Trumpists will serve as a justification, if one was needed, for an increasingly strict regime of surveillance and censorship by major social media platforms, answering to their investors and to the political class in Washington. Already the incoming president Joe Biden has stated his intention to introduce new legislation against “domestic terrorism,” which will no doubt involve the tech giants maintaining their commercial dominance in return for carrying out the required surveillance and reporting of those deemed subversive. Meanwhile, Google and Apple yesterday issued an ultimatum to the platform Parler, which offers the same basic model as Twitter but with laxer content rules, threatening to banish it from their app stores if it did not police conversation more strictly.

But however disturbing the implications of this crackdown, we should welcome the clarity we got this week. For too long, the tech giants have been able to pose as neutral arbiters of discussion, cloaking their authority in corporate euphemisms about public interest. Consequently, they have been able to set the terms of communication over much of the world according to their own interests and political calculations. Whether or not they were right to banish Trump, the key fact is that it was they who had the authority to do so, for their own reasons. The increasing regulation of social media – which was always inevitable, in one form or another, given its incendiary potential – will now proceed according to the same logic. Hopefully the dramatic nature of their decisions this week will make us question if this is really a tolerable situation.

Poland and Hungary are exposing the EU’s flaws

The European Union veered into another crisis on Monday, as the governments of Hungary and Poland announced they would veto the bloc’s next seven-year budget. This comes after the European Parliament and Council tried to introduce “rule of law” measures for punishing member states that breach democratic standards — measures that Budapest and Warsaw, the obvious target of such sanctions, have declared unacceptable.

As I wrote last week, it is unlikely that the disciplinary mechanism would actually have posed a major threat to either the Fidesz regime in Hungary or the Law and Justice one in Poland. These stubborn antagonists of European liberalism have long threatened to block the entire budget if it came with meaningful conditions attached. That they have used their veto anyway suggests the Hungarian and Polish governments — or at least the hardline factions within them — feel they can extract further concessions.

There’s likely to be a tense video conference on Thursday as EU leaders attempt to salvage the budget. It’s tempting to assume a compromise will be found that allows everyone to save face (that is the European way), but the ongoing impasse has angered both sides. At least one commentator has stated that further concessions to Hungary and Poland would amount to “appeasement of dictators.”

In fact, compromises with illiberal forces are far from unprecedented in the history of modern democracy. The EU’s constitutional order, by limiting the power of federal institutions, is what allows actors like Orbán to misbehave — something the Hungarian Prime Minister has exploited to great effect.

And yet, it doesn’t help that the constitutional procedures in question — the treaties of the European Union — were so poorly designed in the first place. Allowing single states an effective veto over key policy areas is a recipe for dysfunction, as the EU already found out in September when Cyprus blocked sanctions against Belarus.

More to the point, the current deadlock with Hungary and Poland has come about because the existing Article 7 mechanism for disciplining member states is virtually unenforceable (both nations have been subject to Article 7 probes for several years, to no effect).

But this practical shortcoming also points to an ideological one. As European politicians have admitted, the failure to design a workable disciplinary mechanism shows the project’s architects did not take seriously the possibility that, once countries had made the democratic reforms necessary to gain access to the EU, they might, at a later date, move in the opposite direction. Theirs was a naïve faith in the onward march of liberal democracy.

In this sense, the crisis now surrounding the EU budget is another product of that ill-fated optimism which gripped western elites around the turn of the 21st century. Like the governing class in the United States who felt sure China would reform itself once invited into the comity of nations, the founders of the European Union had too rosy a view of liberalism’s future — and their successors are paying the price.

Europe’s deplorables have outwitted Brussels

This essay was originally published by Unherd on November 10th 2020.

Throughout the autumn, the European Union has been engaged in a standoff with its two most antagonistic members, Hungary and Poland. At stake was whether the EU would finally take meaningful action against these pioneers of “illiberal democracy”, to use the infamous phrase of Hungarian Prime Minister Viktor Orbán. As of last week — and despite appearances to the contrary — it seems the Hungarian and Polish regimes have postponed the reckoning once more.

Last week, representatives of the European Parliament triumphantly announced a new disciplinary mechanism which, they claimed, would enable Brussels to withhold funds from states that violate liberal democratic standards. According to MEP Petri Sarvamaa, it meant the end of “a painful phase [in] the recent history of the European Union”, in which “the basic values of democracy” had been “threatened and undermined”.

No names were named, of course, but they did not need to be. Tensions between the EU and the recalcitrant regimes on its eastern periphery, Hungary under Orbán’s Fidesz and Poland under the Law and Justice Party, have been mounting for years. Those governments’ erosion of judicial independence and media freedom, as well as concerns over corruption, education, and minority rights, have resulted in a series of formal investigations and legal actions. And that is not to mention the constant rhetorical fusillades between EU officials and Budapest and Warsaw.

The new disciplinary mechanism is being presented as the means to finally bring Hungary and Poland to heel, but it is no such thing. Though not exactly toothless, it is unlikely to pose a serious threat to the illiberal pretenders in the east. Breaches of “rule of law” standards will only be sanctioned if they affect EU funds — so the measures are effectively limited to budget oversight. Moreover, enforcing the sanctions will require a weighted majority of member states in the European Council, giving Hungary or Poland ample room to assemble a blocking coalition.

In fact, what we have here is another sticking plaster so characteristic of the complex and unwieldy structures of European supranational democracy. The political dynamics of this system, heavily reliant on horse-trading and compromise, have allowed Hungary and Poland to outmanoeuvre their opponents.

The real purpose of the disciplinary measures is to ensure the timely passage of the next EU budget, and in particular, a €750 billion coronavirus relief fund. That package will, for the first time, see member states issuing collective debt backed by their taxpayers, and therefore has totemic significance for the future of the Union. It is a real indication that fiscal integration might be possible in the EU — a step long regarded as crucial to the survival of Europe’s federal ambitions, and one that shows its ability to respond effectively to a major crisis.

But this achievement has almost been derailed by a showdown with Hungary and Poland. Liberal northern states such as Finland, Sweden and the Netherlands, together with the European Parliament, insisted that financial support should be conditional on upholding EU values and transparency standards. But since the relief fund requires unanimous approval, Hungary or Poland can simply veto the whole initiative, which is exactly what they have been threatening to do.

In other words, the EU landed itself with a choice between upholding its liberal commitments and securing its future as a viable political and economic project. The relatively weak disciplinary mechanism shows that European leaders are opting for the latter, as they inevitably would. It is a compromise that allows the defenders of democratic values to save face, while essentially letting Hungary and Poland off the hook. (Of course this doesn’t rule out the possibility that the Hungarian and Polish governments will continue making a fuss anyway.)

Liberals who place their hopes in the European project may despair at this, but these dilemmas are part and parcel of binding different regions and cultures in a democratic system. Such undertakings need strict constitutional procedures to hold them together, but those same procedures create opportunities to game the system, especially as demands in one area can be tied to cooperation in another.

As he announced the new rule of law agreement, Sarvamaa pointed to Donald Trump’s threat to win the presidential election via the Supreme Court as evidence of the need to uphold democratic standards. In truth, what is happening in Europe bears a closer resemblance to America in the 1930s, when F.D. Roosevelt was forced to make concessions to the Southern states to deliver his New Deal agenda.

That too was a high-stakes attempt at federal consolidation and economic repair, with the Great Depression at its height and democracy floundering around the world. As the political historian Ira Katznelson has noted, Roosevelt only succeeded by making “necessary but often costly illiberal alliances” — in particular, alliances with Southern Democratic legislators who held an effective veto in Congress. The result was that New Deal programs either avoided or actively upheld white supremacy in the Jim Crow South. (Key welfare programs, for instance, were designed to exclude some two-thirds of African American employees in the Southern states.)

According to civil rights campaigner Walter White, Roosevelt himself explained his silence on a 1934 bill to combat the lynching of African Americans as follows: “I’ve got to get legislation passed by Congress to save America… If I come out for the anti-lynching bill, they [the Southern Democrats] will block every bill I ask Congress to pass to keep America from collapsing. I just can’t take that risk.”

This is not to suggest any moral equivalence between Europe’s “illiberal democracies” and the Deep South of the 1930s. But the Hungarian and Polish governments do resemble the experienced Southern politicians of the New Deal era in their ability to manoeuvre within a federal framework, achieving an autonomy that belies their economic dependency. They have learned to play by the letter of the rules as well as to subvert them.

Orbán, for instance, has frequently insisted that his critics make a formal legal case against him, whereupon he has managed to reduce sanctions to mere technicalities. He has skilfully leveraged the arithmetic of the European Parliament to keep Fidesz within the orbit of the mainstream European People’s Party group. In September, the Hungarian and Polish governments even announced plans to establish their own institute of comparative legal studies, aiming to expose the EU’s “double standards.”

And now, with their votes required to pass the crucial relief fund, the regimes in Budapest and Warsaw are taking advantage of exceptionally high stakes much as their Southern analogues in the 1930s did. They have, in recent months, become increasingly defiant in their rejection of European liberalism. In September, Orbán published a searing essay in which he hailed a growing “rebellion against liberal intellectual oppression” in the western world. The recent anti-abortion ruling by the Polish high court is likewise a sign of that state’s determination to uphold Catholic values and a robust national identity.

Looking forward, however, it seems clear this situation cannot continue forever. Much has been made of Joe Biden’s hostility to the Hungarian and Polish regimes, and with his election victory, we may see the US attaching its own conditions to investment in Eastern Europe. But Biden cannot question the EU’s standards too much, since he has cast the EU as America’s key liberal partner. The real issue is that if richer EU states are really going to accept the financial burdens of further integration, they will not tolerate deviant nations wielding outsized influence over key policy areas.

Of course such reforms would require an overhaul of the voting system, which means treaty change. This raises a potential irony: could the intransigence of Hungary and Poland ultimately spur on Europe’s next big constitutional step — one that will see their leverage taken away? Maybe. For the time being, the EU is unlikely to rein in the illiberal experiments within its borders.

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter both styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1917 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly science was not just producing strange new ways of seeing the world, but through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.

 

Biden versus Beijing

The Last of the Libertarians

This book review was originally published by Arc Digital on August 31st 2020.

As the world reels from the chaos of COVID-19, it is banking on the power of innovation. We need a vaccine, and before even that, we need new technologies and practices to help us protect the vulnerable, salvage our pulverized economies, and go on with our lives. If we manage to weather this storm, it will be because our institutions prove capable of converting human ingenuity into practical, scalable fixes.

And yet, even if we did not realize it, this was already the position we found ourselves in prior to the pandemic. From global warming to food and energy security to aging populations, the challenges faced by humanity in the 21st century will require new ways of doing things, and new tools to do them with.

So how can our societies foster such innovation? What are the institutions, or more broadly the economic and political conditions, from which new solutions can emerge? Some would argue we need state-funded initiatives to direct our best minds towards specific goals, like the 1940s Manhattan Project that cracked the puzzle of nuclear technology. Others would have us place our faith in the miracles of the free market, with its incentives for creativity, efficiency, and experimentation.

Matt Ridley, the British businessman, author, and science journalist, is firmly in the latter camp. His recent book, How Innovation Works, is a work of two halves. On the one hand it is an entertaining, informative, and deftly written account of the innovations which have shaped the modern world, delivering vast improvements in living standards and opportunity along the way. On the other hand, it is the grumpy expostulation of a beleaguered libertarian, whose reflexive hostility to government makes for a vague and contradictory theory of innovation in general.

Innovation, we should clarify, does not simply mean inventing new things, nor is it synonymous with scientific or technological progress. There are plenty of inventions that do not become innovations — or at least not for some time — because we have neither the means nor the demand to develop them further. Thus, the key concepts behind the internal combustion engine and general-purpose computer long preceded their fruition. Likewise, there are plenty of important innovations which are neither scientific nor technological — double-entry bookkeeping, for instance, or the U-bend in toilet plumbing — and plenty of scientific or technological advances which have little impact beyond the laboratory or drawing board.

Innovation, as Ridley explains, is the process by which new products, practices, and ideas catch on, so that they are widely adopted within an industry or society at large. This, he rightly emphasizes, is rarely down to a brilliant individual or blinding moment of insight. It is almost never the result of an immaculate process of design. It is, rather, “a collective, incremental, and messy network phenomenon.”

Many innovations make use of old, failed ideas whose time has come at last. At the moment of realization, we often find multiple innovators racing to be first over the line — as was the case with the steam engine, light bulb, and telegraph. Sometimes successful innovation hinges on a moment of luck, like the penicillin spore which drifted into Alexander Fleming’s petri dish while he was away on holiday. And sometimes a revolutionary innovation, such as the search engine, is strangely anticipated by no one, including its innovators, almost up until the moment it is born.

But in virtually every instance, the emergence of an innovation requires numerous people with different talents, often far apart in space and time. As Ridley describes the archetypal case: “One person may make a technological breakthrough, another work out how to manufacture it, and a third how to make it cheap enough to catch on. All are part of the innovation process and none of them knows how to achieve the whole innovation.”

These observations certainly lend some credence to Ridley’s arguments that innovation is best served by a dynamic, competitive market economy responding to the choices of consumers. After all, we are not very good at guessing from which direction the solution to a problem will come — we often do not even know there was a problem until a solution comes along — and so it makes sense to encourage a multitude of private actors to tinker, experiment, and take risks in the hope of discovering something that catches on.

Moreover, Ridley’s griping about misguided government regulation — best illustrated by Europe’s almost superstitious aversion to genetically modified crops — and about the stultifying influence of monopolistic, subsidy-farming corporations, is not without merit.

But not so fast. Is it not true that many innovations in Ridley’s book drew, at some point in their complex gestation, from state-funded research? This was the case with jet engines, nuclear energy, and computing (not to mention GPS, various products using plastic polymers, and touch-screen displays). Ridley’s habit of shrugging off such contributions with counterfactuals — had not the state done it, someone else would have — misses the point, because the state has basic interests that inevitably bring it into the innovation business.

It has always been the case that certain technologies, however they emerge, will continue their development in a limbo between public and private sectors, since they are important to economic productivity, military capability, or energy security. So it is today with the numerous innovative technologies caught up in the rivalry between the United States and China, including 5G, artificial intelligence, biotechnology, semiconductors, quantum computing, and Ridley’s beloved fracking for shale gas.

As for regulation, the idea that every innovation which succeeds in a market context is in humanity’s best interests is clearly absurd. One thinks of such profitable 19th-century innovations by Western businessmen as exporting Indian opium to the Far East. Ridley tries to forestall such objections with the claim that “To contribute to human welfare … an innovation must meet two tests: it must be useful to individuals, and it must save time, energy, or money in the accomplishment of some task.” Yet there are plenty of innovations which meet this standard and are still destructive. Consider the opium-like qualities of social media, or the subprime mortgage-backed securities which triggered the financial crisis of 2007–8 (an example Ridley ought to know about, seeing as he was chairman of Britain’s ill-fated Northern Rock bank at the time).

Ridley’s weakness in these matters is amplified by his conceptual framework, a dubious fusion of evolutionary theory and dogmatic libertarianism. Fundamentally, he holds that innovation is an extension of evolution by natural selection, “a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.” (Ridley even has a section on “The ultimate innovation: life itself.”) That same cosmic process, he claims, is embodied in the spontaneous order of the free market, which, through trade and specialization, allows useful innovations to emerge and spread.

This explains why How Innovation Works contains no suggestion about how we should weigh the risks and benefits of different kinds of innovation. Insofar as Ridley makes an ethical case at all, it amounts to a giant exercise in naturalistic fallacy. Though he occasionally notes innovation can be destructive, he more often moves seamlessly from claiming that it is an “inexorable” natural process, something which simply happens, to hailing it as “the child of freedom and the parent of prosperity,” a golden goose in perpetual danger of suffocation.

But the most savage contradictions in Ridley’s theory appear, once again, in his pronouncements on the role of the state. He insists that by definition, government cannot be central to innovation, because it has predetermined goals whereas evolutionary processes do not. “Trying to pretend that government is the main actor in this process,” he says, “is an essentially creationist approach to an essentially evolutionary phenomenon.”

Never mind that many of Ridley’s own examples involve innovators aiming for predetermined goals, or that in his (suspiciously brief) section on the Chinese innovation boom, he concedes in passing that shrewd state investment played a key role. The more pressing question is, what about those crucial innovations for which there is no market demand, and which therefore do not evolve?

Astonishingly, in his afterword on the challenges posed by COVID-19, Ridley has the gall to admonish governments for not taking the lead in innovation. “Vaccine development,” he writes, has been “insufficiently encouraged by governments and the World Health Organisation,” and “ignored, too, by the private sector because new vaccines are not profitable things to make.” He goes on: “Politicians should go further and rethink their incentives for innovation more generally so that we are never again caught out with too little innovation having happened in a crucial field of human endeavour.”

In these lines, we should read not just the collapse of Ridley’s central thesis, but more broadly, the demise of a certain naïve market libertarianism — a worldview that flourished during the 1980s and ’90s, and which, like most dominant intellectual paradigms, came to see its beliefs as reflecting the very order of nature itself. For what we should have learned in 2007–8, and what we have certainly learned this year, is that for all its undoubted wonders the market is always tacitly relying on the state to step in should the need arise.

This does not mean, of course, that the market has no role to play in developing the key innovations of the 21st century. I believe it has a crucial role, for it remains unmatched in its ability to harness the latent power of widely dispersed ideas and skills. But if the market’s potential is not to be snuffed out in a post-COVID era of corporatism and monopoly, then it will need more credible defenders than Ridley. It will need defenders who are aware of its limitations and of its interdependence with the state.

Train-splaining a new world order

This article was originally published by The Critic on August 4th 2020.

“We have great ambitions for night trains in France,” said transport minister Jean-Baptiste Djebbari in June. It was a curious statement. When it comes to infrastructure, the language of ambition is usually reserved for projects that convey scale, speed and technological prowess. Europe’s dwindling network of sleeper trains, by contrast, has long been considered a charming relic in an age of ever cheaper, faster and more atomised travel.

Not any longer. On Bastille Day, president Emmanuel Macron confirmed that sleeper trains would be returning to French rails, and in so doing, he was merely joining a continental trend. In January, the first sleeper service since 2003 departed Vienna’s Westbahnhof for Brussels. Its provider, the Austrian ÖBB network, had already resurrected routes to Germany, Italy and Switzerland. A new night train linking states on the European Union’s eastern periphery commenced in June, and is already increasing services to meet a growing demand – as are sleeper routes connecting the Nordic countries to Germany. The Swedish government last month committed to fund new services linking Stockholm and Malmö with Hamburg and Brussels.

This piqued my interest, because I’ve long felt that railways offer vivid windows into the states across which they roam. They reveal a state’s attitudes to public service provision and capital-intensive infrastructure, but they also say a great deal about the nature and extent of a society’s interrelatedness, its pace of life, and indeed its ambition.

On its face, the return of sleeper trains signals the rise of flygskam – a popular Swedish coinage meaning “flight shame,” part of the growing environmental conscience of European governments and consumers. In recent months, Covid-19 has also been boosting demand. And it remains true that continental Europe’s investment in all forms of rail leaves the UK’s patchy, overcrowded and overpriced networks in the shade (let’s not even mention HS2).

But just as Britain’s rail headaches say a great deal about us as a country – our uncertainty over the proper roles of the public and private sector, our incorrigible NIMBYism and our longstanding neglect of the nation beyond London – so it would only be a little facetious to say that sleeper trains capture something deeper about the European Geist today.

At the height of its 19th-century confidence, Europe took the steam locomotive as the ultimate symbol of its headlong rush into modernity. Its near-manic desire to control the globe was likewise measured in yards and metres of railway track. Now, as Bruno Maçães eloquently argues, Europe has reached a different inflection point: it is coming to realize that the values it once took to be universal are merely those of its own “civilization state.” Relinquishing any sense of global mission, liberal-minded Europeans now seek to cultivate, in Maçães’ words, “a specific way of life: uncommitted, free, detached, aesthetic.”

Surely there’s no better metaphor for this inward turn than the tranquilising comforts of a slow-moving sleeper train. With the world around it growing increasingly chaotic and nasty, I picture Europe seated in the dining car with a Kindle edition of Proust, ordering the vegetarian option, and finally gazing half-drunk into the sunset. Would you not, dear reader, prefer that to the unseemly crush of your 6am Ryanair flight? Would you not prefer it to arriving anywhere at all?

Certainly, writers who step on board a night train cannot help but mention its “nostalgic” or “romantic” appeal – that is, if they don’t simply wallow in kitsch sentimentality. Consider one such account in The Guardian:

“I wake in the pre-dawn light – still inky blue in the compartment. I lie there, feeling the train rock beneath me and then push up the window blind with a foot. I’m rolling through misty flatlands. The landscape spooling past. Austria.”

But perhaps we don’t need to be figurative about this. After all, a quasi-national European consciousness, based around a common purpose like environmentalism, is undoubtedly something the EU would like to foster. And railways, which are to nations what skeletons are to bodies, have always been a choice tool for such unification. So it should not surprise us that the return of sleeper trains comes partly under the auspices of the European Commission’s Green Deal, with 2021 slated as “the European Year of Rail.”

The distinctiveness of train culture in Europe comes into sharper focus when we consider its troubled cousin across the Atlantic, the United States. There too the westwards expansion of the railway was once a crucial component, both practically and symbolically, in the creation of a unified nation. Yet today the railway can be seen, like almost everything in American life, as an emblem of estrangement.

The so-called “flyover states,” those swathes of the continental heartland not visited by coastal elites, are in many cases states crossed by the long-distance Amtrak service. But taking the Amtrak, especially overnight, is viewed as a profound eccentricity. Last year a not entirely ironic New York Times Magazine feature reported the experience as though it belonged to another planet. “Train people,” writes our correspondent, “are content to stare out the window for hours, like indoor cats … Train people are also individuals for whom small talk is as invigorating as a rail of cocaine.”

It is largely within Blue America – the coastal strips and the urbanised Midwest around Chicago – that high-speed links after the European fashion are being planned. Meanwhile, Elon Musk and others are racing to complete the first “hyperloop” service: a flashy, futuristic transport project of the kind loved by celebrity entrepreneurs, which will use vacuum technology to send passenger pods through tubes at over 750 mph (destinations San Francisco, Las Vegas, Orlando).

Of course, no discussion of modern rail systems would be complete without China, where the staggering proliferation of high-speed networks in recent decades (think two-thirds of the world’s total) illustrates a scale and dynamism of which the west can only dream. These are a typical product of the Chinese economic model, which suppresses consumer spending in favor of state-managed export and investment as an engine of growth. That being said, China’s semi-private developers have still borrowed prodigiously, so that a number of rail projects have recently ground to a halt under a crushing debt burden.

Such vaulting ambition seems a world away from European decadence, but in one sense it is not. Railways also comprise a crucial element of the New Silk Road initiative, whereby China’s power is projected across the Eurasian landmass through infrastructure projects and trade. With over thirty Chinese cities already connected with Europe by rail, it may not be long before Chinese freight carriages and European sleeper carriages routinely share the same tracks.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today”? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism so as to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still expects those it calls “white” to adopt a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering reeducation. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017.) I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no “deep further fact” or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn’t refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit’s agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity. In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transfer of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood has become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: “There is still a difference between my life and the lives of other people. But the difference is less.” I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don’t get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and like I say, I don’t think that commitment could be anything more than theoretical). In fact, it’s possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit’s claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.