Biden versus Beijing

The Last of the Libertarians

This book review was originally published by Arc Digital on August 31st 2020.

As the world reels from the chaos of COVID-19, it is banking on the power of innovation. We need a vaccine, and before even that, we need new technologies and practices to help us protect the vulnerable, salvage our pulverized economies, and go on with our lives. If we manage to weather this storm, it will be because our institutions prove capable of converting human ingenuity into practical, scalable fixes.

And yet, even if we did not realize it, this was already the position we found ourselves in prior to the pandemic. From global warming to food and energy security to aging populations, the challenges faced by humanity in the 21st century will require new ways of doing things, and new tools to do them with.

So how can our societies foster such innovation? What are the institutions, or more broadly the economic and political conditions, from which new solutions can emerge? Some would argue we need state-funded initiatives to direct our best minds towards specific goals, like the 1940s Manhattan Project that cracked the puzzle of nuclear technology. Others would have us place our faith in the miracles of the free market, with its incentives for creativity, efficiency, and experimentation.

Matt Ridley, the British businessman, author, and science journalist, is firmly in the latter camp. His recent book, How Innovation Works, is a work of two halves. On the one hand it is an entertaining, informative, and deftly written account of the innovations which have shaped the modern world, delivering vast improvements in living standards and opportunity along the way. On the other hand, it is the grumpy expostulation of a beleaguered libertarian, whose reflexive hostility to government makes for a vague and contradictory theory of innovation in general.

Innovation, we should clarify, does not simply mean inventing new things, nor is it synonymous with scientific or technological progress. There are plenty of inventions that do not become innovations — or at least not for some time — because we have neither the means nor the demand to develop them further. Thus, the key concepts behind the internal combustion engine and general-purpose computer long preceded their fruition. Likewise, there are plenty of important innovations which are neither scientific nor technological — double-entry bookkeeping, for instance, or the U-bend in toilet plumbing — and plenty of scientific or technological advances which have little impact beyond the laboratory or drawing board.

Innovation, as Ridley explains, is the process by which new products, practices, and ideas catch on, so that they are widely adopted within an industry or society at large. This, he rightly emphasizes, is rarely down to a brilliant individual or blinding moment of insight. It is almost never the result of an immaculate process of design. It is, rather, “a collective, incremental, and messy network phenomenon.”

Many innovations make use of old, failed ideas whose time has come at last. At the moment of realization, we often find multiple innovators racing to be first over the line — as was the case with the steam engine, light bulb, and telegraph. Sometimes successful innovation hinges on a moment of luck, like the penicillin spore which drifted into Alexander Fleming’s petri dish while he was away on holiday. And sometimes a revolutionary innovation, such as the search engine, is strangely anticipated by no one, including its innovators, almost up until the moment it is born.

But in virtually every instance, the emergence of an innovation requires numerous people with different talents, often far apart in space and time. As Ridley describes the archetypal case: “One person may make a technological breakthrough, another work out how to manufacture it, and a third how to make it cheap enough to catch on. All are part of the innovation process and none of them knows how to achieve the whole innovation.”

These observations certainly lend some credence to Ridley’s arguments that innovation is best served by a dynamic, competitive market economy responding to the choices of consumers. After all, we are not very good at guessing from which direction the solution to a problem will come — we often do not even know there was a problem until a solution comes along — and so it makes sense to encourage a multitude of private actors to tinker, experiment, and take risks in the hope of discovering something that catches on.

Moreover, Ridley’s griping about misguided government regulation — best illustrated by Europe’s almost superstitious aversion to genetically modified crops — and about the stultifying influence of monopolistic, subsidy-farming corporations, is not without merit.

But not so fast. Is it not true that many innovations in Ridley’s book drew, at some point in their complex gestation, from state-funded research? This was the case with jet engines, nuclear energy, and computing (not to mention GPS, various products using plastic polymers, and touch-screen displays). Ridley’s habit of shrugging off such contributions with counterfactuals — had the state not done it, someone else would have — misses the point, because the state has basic interests that inevitably bring it into the innovation business.

It has always been the case that certain technologies, however they emerge, will continue their development in a limbo between public and private sectors, since they are important to economic productivity, military capability, or energy security. So it is today with the numerous innovative technologies caught up in the rivalry between the United States and China, including 5G, artificial intelligence, biotechnology, semiconductors, quantum computing, and Ridley’s beloved fracking for shale gas.

As for regulation, the idea that every innovation which succeeds in a market context is in humanity’s best interests is clearly absurd. One thinks of such profitable 19th-century innovations by Western businessmen as exporting Indian opium to the Far East. Ridley tries to forestall such objections with the claim that “To contribute to human welfare … an innovation must meet two tests: it must be useful to individuals, and it must save time, energy, or money in the accomplishment of some task.” Yet there are plenty of innovations which meet this standard and are still destructive. Consider the opium-like qualities of social media, or the subprime mortgage-backed securities which triggered the financial crisis of 2007–8 (an example Ridley ought to know about, seeing as he was chairman of Britain’s ill-fated Northern Rock bank at the time).

Ridley’s weakness in these matters is amplified by his conceptual framework, a dubious fusion of evolutionary theory and dogmatic libertarianism. Fundamentally, he holds that innovation is an extension of evolution by natural selection, “a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.” (Ridley even has a section on “The ultimate innovation: life itself.”) That same cosmic process, he claims, is embodied in the spontaneous order of the free market, which, through trade and specialization, allows useful innovations to emerge and spread.

This explains why How Innovation Works contains no suggestion about how we should weigh the risks and benefits of different kinds of innovation. Insofar as Ridley makes an ethical case at all, it amounts to a giant exercise in naturalistic fallacy. Though he occasionally notes innovation can be destructive, he more often moves seamlessly from claiming that it is an “inexorable” natural process, something which simply happens, to hailing it as “the child of freedom and the parent of prosperity,” a golden goose in perpetual danger of suffocation.

But the most savage contradictions in Ridley’s theory appear, once again, in his pronouncements on the role of the state. He insists that by definition, government cannot be central to innovation, because it has predetermined goals whereas evolutionary processes do not. “Trying to pretend that government is the main actor in this process,” he says, “is an essentially creationist approach to an essentially evolutionary phenomenon.”

Never mind that many of Ridley’s own examples involve innovators aiming for predetermined goals, or that in his (suspiciously brief) section on the Chinese innovation boom, he concedes in passing that shrewd state investment played a key role. The more pressing question is, what about those crucial innovations for which there is no market demand, and which therefore do not evolve?

Astonishingly, in his afterword on the challenges posed by COVID-19, Ridley has the gall to admonish governments for not taking the lead in innovation. “Vaccine development,” he writes, has been “insufficiently encouraged by governments and the World Health Organisation,” and “ignored, too, by the private sector because new vaccines are not profitable things to make.” He goes on: “Politicians should go further and rethink their incentives for innovation more generally so that we are never again caught out with too little innovation having happened in a crucial field of human endeavour.”

In these lines, we should read not just the collapse of Ridley’s central thesis, but more broadly, the demise of a certain naïve market libertarianism — a worldview that flourished during the 1980s and ’90s, and which, like most dominant intellectual paradigms, came to see its beliefs as reflecting the very order of nature itself. For what we should have learned in 2007–8, and what we have certainly learned this year, is that for all its undoubted wonders the market is always tacitly relying on the state to step in should the need arise.

This does not mean, of course, that the market has no role to play in developing the key innovations of the 21st century. I believe it has a crucial role, for it remains unmatched in its ability to harness the latent power of widely dispersed ideas and skills. But if the market’s potential is not to be snuffed out in a post-COVID era of corporatism and monopoly, then it will need more credible defenders than Ridley. It will need defenders who are aware of its limitations and of its interdependence with the state.

Train-splaining a new world order

This article was originally published by The Critic on August 4th 2020.

“We have great ambitions for night trains in France,” said transport minister Jean-Baptiste Djebbari in June. It was a curious statement. When it comes to infrastructure, the language of ambition is usually reserved for projects that convey scale, speed and technological prowess. Europe’s dwindling network of sleeper trains, by contrast, has long been considered a charming relic in an age of ever cheaper, faster and more atomised travel.

Not any longer. On Bastille Day, President Emmanuel Macron confirmed that sleeper trains would be returning to French rails, and in so doing, he was merely joining a continental trend. In January, the first sleeper service since 2003 departed Vienna’s Westbahnhof for Brussels. Its provider, the Austrian ÖBB network, had already resurrected routes to Germany, Italy and Switzerland. A new night train linking states on the European Union’s eastern periphery commenced in June, and is already increasing services to meet a growing demand – as are sleeper routes connecting the Nordic countries to Germany. The Swedish government last month committed to fund new services linking Stockholm and Malmö with Hamburg and Brussels.

This piqued my interest, because I’ve long felt that railways offer vivid windows into the states across which they roam. They reflect a country’s attitudes to public service provision and capital-intensive infrastructure, but they also say a great deal about the nature and extent of a society’s interrelatedness, its pace of life, and indeed its ambition.

On its face, the return of sleeper trains signals the rise of flygskam – a popular Swedish coinage meaning “flight shame,” part of the growing environmental conscience of European governments and consumers. In recent months, Covid-19 has also been boosting demand. And it remains true that continental Europe’s investment in all forms of rail leaves the UK’s patchy, overcrowded and overpriced networks in the shade (let’s not even mention HS2).

But just as Britain’s rail headaches say a great deal about us as a country – our uncertainty over the proper roles of the public and private sector, our incorrigible NIMBYism and our longstanding neglect of the nation beyond London – so it would only be a little facetious to say that sleeper trains capture something deeper about the European Geist today.

At the height of its 19th century confidence, the steam locomotive was the ultimate symbol of Europe’s headlong rush into modernity. Europe’s near-manic desire to control the globe was likewise measured in yards and metres of railway track. Now, as Bruno Maçães eloquently argues, Europe has reached a different inflection point: it is coming to realize that the values it once took to be universal are merely those of its own “civilization state.” Relinquishing any sense of global mission, liberal-minded Europeans now seek to cultivate, in Maçães’ words, “a specific way of life: uncommitted, free, detached, aesthetic.”

Surely there’s no better metaphor for this inward turn than the tranquilising comforts of a slow-moving sleeper train. With the world around it growing increasingly chaotic and nasty, I picture Europe seated in the dining car with a Kindle edition of Proust, ordering the vegetarian option, and finally gazing half-drunk into the sunset. Would you not, dear reader, prefer that to the unseemly crush of your 6am Ryanair flight? Would you not prefer it to arriving anywhere at all?

Certainly, writers who step on board a night train cannot help but mention their “nostalgic” or “romantic” appeal – that is, if they don’t simply wallow in kitsch sentimentality. Consider one such account in The Guardian:

“I wake in the pre-dawn light – still inky blue in the compartment. I lie there, feeling the train rock beneath me and then push up the window blind with a foot. I’m rolling through misty flatlands. The landscape spooling past. Austria.”

But perhaps we don’t need to be figurative about this. After all, a quasi-national European consciousness, based around a common purpose like environmentalism, is undoubtedly something the EU would like to foster. And railways, which are to nations what skeletons are to bodies, have always been a choice tool for such unification. So it should not surprise us that the return of sleeper trains comes partly under the auspices of the European Commission’s Green Deal, with 2021 slated as “the European Year of Rail.”

The distinctiveness of train culture in Europe comes into sharper focus when we consider its troubled cousin across the Atlantic, the United States. There too the westwards expansion of the railway was once a crucial component, both practically and symbolically, in the creation of a unified nation. Yet today the railway can be seen, like almost everything in American life, as an emblem of estrangement.

The so-called “flyover states,” those swathes of the continental heartland not visited by coastal elites, are in many cases states crossed by the long-distance Amtrak service. But taking the Amtrak, especially overnight, is viewed as a profound eccentricity. Last year a not entirely ironic New York Times Magazine feature reported the experience as though it belonged to another planet. ‘Train people,’ writes our correspondent, ‘are content to stare out the window for hours, like indoor cats … Train people are also individuals for whom small talk is as invigorating as a rail of cocaine.’

It is largely within Blue America – the coastal strips and the urbanised Midwest around Chicago – that high-speed links after the European fashion are being planned. Meanwhile, Elon Musk and others are racing to complete the first “hyperloop” service: a flashy, futuristic transport project of the kind loved by celebrity entrepreneurs, which will use vacuum technology to send passenger pods through tubes at over 750 mph (destinations San Francisco, Las Vegas, Orlando).

Of course, no discussion of modern rail systems would be complete without China, where the staggering proliferation of high-speed networks in recent decades (think two-thirds of the world’s total) illustrates a scale and dynamism of which the West can only dream. These are a typical product of the Chinese economic model, which suppresses consumer spending in favor of state-managed export and investment as an engine of growth. That being said, China’s semi-private developers have still borrowed prodigiously, so that a number of rail projects have recently ground to a halt under a crushing debt burden.

Such vaulting ambition seems a world away from European decadence, but in one sense it is not. Railways also comprise a crucial element of the New Silk Road initiative, whereby China’s power is projected across the Eurasian landmass through infrastructure projects and trade. With over thirty Chinese cities already connected with Europe by rail, it may not be long before Chinese freight carriages and European sleeper carriages routinely share the same tracks.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today”? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism so as to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still places a disproportionate emphasis on those it calls “white” to adopt a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering reeducation. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017.) I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no “deep further fact” or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn’t refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit’s agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity. In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transfer of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood have become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: “There is still a difference between my life and the lives of other people. But the difference is less.” I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don’t get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and like I say, I don’t think that commitment could be anything more than theoretical). In fact, it’s possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit’s claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.

The left’s obsession with symbols has gone too far

The article was originally published by Arc Digital on June 20th 2020.

Protest is a symbolic form of politics. It is about sending messages. It turns public space — both physical and now virtual — into an arena where frustrations not satisfied by the formal political system are expressed with slogans, banners, and bodies.

But protest can only be a force for good if its aims point away from the symbolic and back towards formal politics — and, beyond that, towards material reality.

Yes, filling the streets with demonstrators and the internet with hashtags can be effective in raising awareness of issues. It can be effective in bringing new movements to life. But if the issues that spur protest are real problems in society, then the sending of messages must be accompanied by practical plans to address those problems.

In this last respect, the spectacular wave of protests sparked by the killing of George Floyd three weeks ago, which has carried not just across the United States but to Europe and beyond, presents a mixed picture. At its core is the grief and righteous anger of African-Americans regarding police brutality. This is a cause that, so far as I know, has been questioned by no one anywhere near the mainstream of public life. And it is eminently capable of achieving concrete reforms.

But around that core has gathered a much more nebulous phenomenon — a culture of protest that does not just employ symbolic means, but pursues largely symbolic goals. In the U.S., African-American grievances have been overshadowed by white progressives expressing their prodigious guilt and shunting it onto one another. Witness the rituals of purification where crowds have gathered to kneel or lie prostrate on the ground.

The iconography of Black Lives Matter — especially the resonant statements about injustice presented in white text on black background — has been widely co-opted by mega-corporations seeking to endear themselves to consumers. Meanwhile, the demonstrations have coalesced around a vacuous slogan, “defund the police,” which is clearly a rallying cry for further protest rather than a serious policy proposal.

In the United Kingdom (where we are always copying our cousins across the Atlantic), the protests have become similarly ingrown. Beginning as a statement of empathy for African-Americans and a warning that we have our own issues of racial inequality to address, they quickly descended into an argument about historical figures represented in public statues and the names of buildings and streets. The focus on iconoclasm has now blown back into the United States, as seen in this week’s wave of statue-toppling and defacing.

To the extent that all this symbolic activity makes racial minorities feel they have solidarity with society at large, this is good. But there is a point at which the politics of gesture becomes so dominant that it distracts from practical efforts at reform, or even hinders them. When manifestly crazy ideas like defunding the police become attached to the protests, it undermines legislators seeking genuine solutions by giving their opponents a brush to tar them with. Making an issue of public monuments (London Mayor Sadiq Khan has announced a new “Diversity Commission” for this purpose) diverts attention from a hundred more consequential issues we could be discussing.

Of course to view these events in isolation would be to miss the forest for the trees. The symbolic turn of progressive activism is part of an ongoing culture war over the norms that govern our language, manners, and institutions.

Part of the progressive strategy in that war has been to establish a kind of semantic hegemony, or effective control over the meaning of symbols. This involves emphasizing the symbolic dimension of all kinds of things — words, gestures, intellectual practices, works of art and entertainment — and then insisting on what they signify. Thus a statue is considered not a historical artifact but an expression of racism, or particular words and actions deemed manifestations of white privilege rather than of the intentions that motivated them.

The performative character of the recent protests — the emphasis on gesture for its own sake, the fixation with symbols of oppression — certainly fits into this wider picture. This is why commentators, both supportive and skeptical, are now talking less about policing and concrete forms of racial inequality than about a “cultural revolution.” It is also why many conservatives, old school liberals, and social democrats are freaking out. They are imagining a society in which, simply to have a career, people will have to accept the meanings assigned to things by the progressive worldview.

But there is still a question as to whether this cultural revolution will actually help the people it purports to help. This is the question, or it ought to be. Can a politics so heavily focused on language, meanings, and manners create the conditions for minority individuals and communities to lead more secure and fulfilling lives?

To some degree it can. Dignity — or the entitlement to claim one’s right to full participation in civic life — is a necessary condition for any individual or group to flourish. And dignity does have a lot to do with that amorphous realm of social norms and meanings. It is ultimately manifest at the level of subjective experience, as self-assurance and an inner sense of belonging, but can only be guaranteed by the respect of others. There is at present, among younger generations especially, a genuine desire to ensure that the way we talk and act does not prevent minorities from claiming the dignity that is their due.

On the other hand, meaning is an uncertain, fuzzy thing, such that a politics which focuses too heavily on it can easily become Sisyphean — fighting endless battles over symbolic territory without achieving any real forward progress.

Indeed, in recent years progress towards social justice has become both a closed circle and an infinitely receding frontier. As the progressive mind has become preoccupied with attaching meanings to things — with telling us how we should interpret social phenomena, statues, words, pictures, and so on — it has granted itself the power to endlessly create new symbolic obstacles to be overcome.

This project seems especially futile in light of the fact that so many issues of racial inequality, both in the U.S. and in Europe, are also issues of social class. The cultural revolution is largely the preserve of the highly educated — people in academia, media, advertising, and managerial bureaucracies whose daily lives revolve around the interpretation and manipulation of symbols.

Far from expanding the circle of dignity, their arcane theories represent yet another barrier that excludes poorer people of all ethnicities from the conversation. Meanwhile, luminaries of the symbolic struggle can justify their endeavors by pointing to the very inequality which they subtly reinforce.

So the protest style we are seeing of late, which prioritizes grand gestures over concrete achievements, is indicative of a wider problem. A fixation with symbols is of little use if it comes at the expense of practical engagement with other, equally important dimensions of social life: economic opportunity and public services and community formation and the justice system.

That the symbolic mode of activism is so good at stoking passion, and at extracting equally symbolic gestures from cowed institutions, is just another indication that more substantive issues are being crowded out.

The politics of crisis is not going away any time soon

This essay was originally published by Palladium magazine on June 10th 2020.

A pattern emerges when surveying the vast commentary on the COVID-19 pandemic. At its center is a distinctive image of crisis: the image of a cruel but instructive spotlight laying bare the flaws of contemporary society. Crisis, we read, has “revealed,” “illuminated,” “clarified,” and above all, “exposed” our collective failures and weaknesses. It has unveiled the corruption of institutions, the decadence of culture, and the fragility of a material way of life. It has sounded the death-knell for countless projects and ideals.

“The pernicious coronavirus tore off an American scab and revealed suppurating wounds beneath,” announces one commentator, after noting “these calamities can be tragically instructional…Fundamental but forgotten truths, easily masked in times of calm, reemerge.”

Says another: “Invasion and occupation expose a society’s fault lines, exaggerating what goes unnoticed or accepted in peacetime, clarifying essential truths, raising the smell of buried rot.”

You may not be surprised to learn that these two near-identical comments come from very different interpretations of the crisis. The first, from Trump-supporting historian Victor Davis Hanson of the Hoover Institution, claims that the “suppurating wounds” of American society are an effete liberal elite compromised by their reliance on a malignant China and determined to undermine the president at any cost. According to the second, by The Atlantic’s George Packer, the “smell of buried rot” comes from the Trump administration itself, the product of an oligarchic ascendancy whose power stems from the division of society and hollowing-out of the state.

Nothing, it seems, has evaded the extraordinary powers of diagnosis made available by crisis: merciless globalism, backwards nationalism, the ignorance of populists, the naivety of liberals, the feral market, the authoritarian state. We are awash in diagnoses, but diagnosis is only the first step. It is customary to sharpen the reality exposed by the virus into a binary, existential decision: address the weakness identified, or succumb to it. “We’re faced with a choice that the crisis makes inescapably clear,” writes Packer, “the alternative to solidarity is death.” No less ominous is Hanson’s invocation of Pearl Harbor: “Whether China has woken a sleeping giant in the manner of the earlier Japanese, or just a purring kitten, remains to be seen.”

The crisis mindset is not just limited to journalistic sensationalism. Politicians, too, have appealed to a now-or-never, sink-or-swim framing of the COVID-19 emergency. French President Emmanuel Macron has been among those using such terms to pressure Eurozone leaders into finally establishing a collective means of financing debt. “If we can’t do this today, I tell you the populists will win,” Macron told The Financial Times. Across the Atlantic, U.S. Congresswoman Alexandria Ocasio-Cortez has claimed that the pandemic “has just exposed us, the fragility of our system,” and has adopted the language of “life or death” in her efforts to bring together the progressive and centrist wings of the Democratic Party before the presidential election in November.

And yet, in surveying this rhetoric of diagnosis and decision, what is most surprising is how familiar it sounds. Apart from the pathogen itself, there are few narratives of crisis now being aired which were not already well-established during the last decade. Much as the coronavirus outbreak has felt like a sudden rupture from the past, we have already been long accustomed to the politics of crisis.

It was under the mantra of “tough decisions,” with the shadow of the financial crisis still looming, that sharp reductions in public spending were justified across much of the Western world after 2010. Since then, the European Union has been crippled by conflicts over sovereign debt and migration. It was the rhetoric of the Chinese menace and of terminal decline—of “rusted-out factories scattered like tombstones across the landscape of our nation,” to quote the 2017 inaugural address—that brought President Trump to power. Meanwhile, progressives had already mobilized themselves around the language of emergency with respect to inequality and climate change.

There is something deeply paradoxical about all of this. The concept of crisis is supposed to denote a need for exceptional attention and decisive focus. In its original Greek, the term krisis often referred to a decision between two possible futures, but the ubiquity of “crisis” in our politics today has produced only deepening chaos. The sense of emergency is stoked continuously, but the accompanying promises of clarity, agency, and action are never delivered. Far from a revealing spotlight, the crises of the past decade have left us with a lingering fog which now threatens to obscure our view at a moment when we really do need judicious action.

***

Crises are a perennial feature of modern history. For half a millennium, human life has been shaped by impersonal forces of increasing complexity and abstraction, from global trade and finance to technological development and geopolitical competition. These forces are inherently unstable and frequently produce moments of crisis, not least due to an exogenous shock like a deadly plague. Though rarely openly acknowledged, the legitimacy of modern regimes has largely depended on a perceived ability to keep that instability at bay.

This is the case even at times of apparent calm, such as the period of U.S. global hegemony immediately following the Cold War. The market revolution of the 1980s and globalization of the 1990s were predicated on a conception of capitalism as an unpredictable, dynamic system which could nonetheless be harnessed and governed by technocratic expertise. Such were the hopes of “the great moderation.” A series of emerging market financial crises—in Mexico, Korea, Thailand, Indonesia, Russia, and Argentina—provided opportunities for the IMF and World Bank to demand compliance with the Washington Consensus in economic policy. Meanwhile, there were frequent occasions for the U.S. to coordinate global police actions in war-torn states.

Despite the façade of independent institutions and international bodies, it was in no small part through such crisis-fighting economic and military interventions that a generation of U.S. leaders projected power abroad and secured legitimacy at home. This model of competence and progress, which seems so distant now, was not based on a sense of inevitability so much as confidence in the capacity to manage one crisis after another: to “stabilize” the most recent eruption of chaos and instability.

A still more striking example comes from the European Union, another product of the post-Cold War era. The project’s main purpose was to maintain stability in a trading bloc soon to be dominated by a reunified Germany. Nonetheless, many of its proponents envisaged that the development of a fully federal Europe would occur through a series of crises, with the supra-national structures of the EU achieving more power and legitimacy at each step. When the Euro currency was launched in 1999, Romano Prodi, then president of the European Commission, spoke of how the EU would extend its control over economic policy: “It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created.”

It is not difficult to see why Prodi took this stance. Since the rise of the rationalized state two centuries ago, managerial competence has been central to notions of successful governance. In the late 19th century, French sociologist Emile Durkheim compared the modern statesman to a physician: “he prevents the outbreak of illnesses by good hygiene, and seeks to cure them when they have appeared.” Indeed, the bureaucratic structures which govern modern societies have been forged in the furnaces of crisis. Social security programs, income tax, business regulation, and a host of other state functions now taken for granted are a product of upheavals of the 19th and early 20th centuries: total war, breakneck industrialization, famine, and financial panic. If necessity is the mother of invention, crisis is the midwife of administrative capacity.

By the same token, the major political ideologies of the modern era have always claimed to offer some mastery over uncertainty. The locus of agency has variously been situated in the state, the nation, individuals, businesses, or some particular class or group; the stated objectives have been progress, emancipation, greatness, or simply order and stability. But in every instance, the message has been that the chaos endemic to modern history must be tamed or overcome by some paradigmatic form of human action. The curious development of Western modernity, where the management of complex, crisis-prone systems has come to be legitimated through secular mass politics, appears amenable to no other template.

It is against this backdrop that we can understand the period of crisis we have endured since 2008. The narratives of diagnosis and decision which have overtaken politics during this time are variations on a much older theme—one that is present even in what are retrospectively called “times of calm.” The difference is that, where established regimes have failed to protect citizens from instability, the logic of crisis management has burst its technocratic and ideological bounds and entered the wider political sphere. The greatest of these ruptures was captured by a famous statement attributed to Federal Reserve Chairman Ben Bernanke in September 2008. Pleading with Congress to pass a $700 billion bailout, Bernanke claimed: “If we don’t do this now, we won’t have an economy on Monday.”

This remark set the tone for the either/or, act-or-perish politics of the last decade. It points to a loss of control which, in the United States and beyond, opened the way for competing accounts not just of how order could be restored, but also what that order should look like. Danger and disruption have become a kind of opportunity, as political insurgents across the West have captured established parties, upended traditional power-sharing arrangements, and produced the electoral shocks suggested by the ubiquitous phrase “the age of Trump and Brexit.” These campaigns sought to give the mood of crisis a definite shape, directing it towards the need for urgent decision or transformative action, thereby giving supporters a compelling sense of their own agency.

***

Typically though, such movements do not merely offer a choice between existing chaos and redemption to come. In diagnoses of crisis, there is always an opposing agent who is responsible for and threatening to deepen the problem. We saw this already in Hanson’s and Packer’s association of the COVID-19 crisis with their political opponents. But it was there, too, among Trump’s original supporters, for whom the agents of crisis were not just immigrants and elites but, more potently, the threat posed by the progressive vision for America. This was most vividly laid out in Michael Anton’s infamous “Flight 93 Election” essay, an archetypal crisis narrative which urged fellow conservatives that only Trump could stem the tide of “wholesale cultural and political change,” claiming “if you don’t try, death is certain.”

Yet Trump’s victory only galvanized the radical elements of the left, as it gave them a villain to point to as a way of further raising the consciousness of crisis among their own supporters. The reviled figure of Trump has done more for progressive stances on immigration, healthcare, and climate action than anyone else, for he is the ever-present foil in these narratives of emergency. Then again, such progressive ambitions, relayed on Fox News and social media, have also proved invaluable in further stoking conservatives’ fears.

To simply call this polarization is to miss the point. The dynamic taking shape here is rooted in a shared understanding of crisis, one that treats the present as a time in which the future of society is being decided. There is no middle path, no going back: each party claims that if they do not take this opportunity to reshape society, their opponents will. In this way, narratives of crisis feed off one another, and become the basis for a highly ideological politics—a politics that de-emphasizes compromise with opponents and with the practical constraints of the situation at hand, prioritizing instead the fulfillment of a goal or vision for the future.

Liberal politics is ill-equipped to deal with, or even to properly recognize, such degeneration of discourse. In the liberal imagination, the danger of crisis is typically that the insecurity of the masses will be exploited by a demagogue, who will then transfigure the system into an illiberal one. In many cases, though, it is the system which loses legitimacy first, as the frustrating business of deliberative, transactional politics cannot meet the expectations of transformative change which are raised in the public sphere.

Consider the most iconic and, in recent years, most frequently analogized period of crisis in modern history: Germany’s Weimar Republic of 1918-33. These were the tempestuous years between World War I and Hitler’s dictatorship, during which a fledgling democracy was rocked by armed insurrection, hyperinflation, foreign occupation, and the onset of the Great Depression, all against a backdrop of rapid social, economic, and technological upheaval.

Over the past decade or so, there has been no end of suggestions that ours is a “Weimar moment.” Though echoes have been found in all sorts of social and cultural trends, the overriding tendency has been to view the crises of the Weimar period backwards through their end result, the establishment of Nazi dictatorship in 1933. In various liberal democracies, the most assertive Weimar parallels have referred to the rise of populist and nationalist politics, and in particular, the erosion of constitutional norms by leaders of this stripe. The implication is that history has warned us how the path of crisis can lead towards an authoritarian ending.

What this overlooks, however, is that Weimar society was not just a victim of crisis that stumbled blindly towards authoritarianism, but was active in interpreting what crises revealed and how they should be addressed. In particular, the notion of crisis served the ideological narratives of the day as evidence of the need to refashion the social settlement. Long before the National Socialists began their rise in the early 1930s, these conflicting visions, pointing to one another as evidence of the stakes, sapped the republic’s legitimacy by making it appear impermanent and fungible.

The First World War had left German thought with a pronounced sense of the importance of human agency in shaping history. On the one hand, the scale and brutality of the conflict left survivors adrift in a world of unprecedented chaos, seeming to confirm a suspicion of some 19th century German intellectuals that history had no inherent meaning. But at the same time, the war had shown the extraordinary feats of organization and ingenuity that an industrialized society, unified and mobilized around a single purpose, was capable of. Consequently, the prevailing mood of Weimar was best captured by the popular term Zeitenwende, the turning of the times. Its implication was that the past was irretrievably lost, the present was chaotic and dangerous, but the future was there to be claimed by those with the conviction and technical skill to do so.

Throughout the 1920s, this historical self-consciousness was expressed in the concept of Krisis or Krise, crisis. Intellectual buzzwords referred to a crisis of learning, a crisis of European culture, a crisis of historicism, crisis theology, and numerous crises of science and mathematics. The implication was that these fields were in a state of flux which called for resolution. A similar dynamic could be seen in the political polemics which filled the Weimar press, where discussions of crisis tended to portray the present as a moment of decision or opportunity. According to Rüdiger Graf’s study of more than 370 Weimar-era books and still more journal articles with the term “crisis” in their titles, the concept generally functioned as “a call to action” by “narrow[ing] the complex political world to two exclusive alternatives.”

Although the republic was most popular among workers and social democrats, the Weimar left contained an influential strain of utopian thought which saw itself as working beyond the bounds of formal politics. Here, too, crisis was considered a source of potential. Consider the sentiments expressed by Walter Gropius, founder of the Bauhaus school of architecture and design, in 1919:

Capitalism and power politics have made our generation creatively sluggish, and our vital art is mired in a broad bourgeois philistinism. The intellectual bourgeois of the old Empire…has proven his incapacity to be the bearer of German culture. The benumbed world is now toppled, its spirit is overthrown, and is in the midst of being recast in a new mold.

Gropius was among those intellectuals, artists, and administrators who, often taking inspiration from an idealized image of the Soviet Union, subscribed to the idea of the “new man”—a post-capitalist individual whose self-fulfillment would come from social duty. Urban planning, social policy, and the arts were all seen as means to create the environment in which this new man could emerge.

The “bourgeois of the old Empire,” as Gropius called them, had indeed been overthrown; but in their place came a reactionary modernist movement, often referred to as the “conservative revolution,” whose own ideas of political transformation used socialism both as inspiration and as ideological counterpoint. In the works of Ernst Jünger, technology and militarist willpower were romanticized as dynamic forces which could pull society out of decadence. Meanwhile, the political theorist Carl Schmitt emphasized the need for a democratic polity to achieve a shared identity in opposition to a common enemy, a need sometimes better accomplished by the decisive judgments of a sovereign dictator than by a fractious parliamentary system.

Even some steadfast supporters of the republic, like the novelist Heinrich Mann, seized on the theme of crisis as a call to transformative action. In a 1923 speech, against a backdrop of hyperinflation and the occupation of the Ruhr by French forces, Mann insisted that the republic should resist the temptation of nationalism, and instead fulfill its promise as a “free people’s state” by dethroning the “blood-gorging” capitalists who still controlled society in their own interests.

These trends were not confined to rhetoric and intellectual discussion. They were reflected in practical politics by the tendency of even trivial issues to be treated as crises that raised fundamental conflicts of worldview. So it was that, in 1926, a government was toppled by a dispute over the regulations for the display of the republican flag. Meanwhile, representatives were harangued by voters who expected them to embody the uncompromising ideological clashes taking place in the wider political sphere. In towns and cities across the country, rival marches and processions signaled the antagonism of socialists and their conservative counterparts—the burghers, professionals and petite bourgeoisie who would later form the National Socialist coalition, and who by mid-decade had already coalesced around President Paul von Hindenburg.

***

We are not Weimar. The ideologies of that era, and the politics that flowed from them, were products of their time, and there were numerous contingent reasons why the republic faced an uphill battle for acceptance. Still, there are lessons. The conflict between opposing visions of society may seem integral to the spirit of democratic politics, but at times of crisis, it can be corrosive to democratic institutions. The either/or mindset can add a whole new dimension to whatever emergency is at hand, forcing what is already a time of disorientating change into a zero-sum competition between grand projects and convictions that leave ordinary, procedural politics looking at best insignificant, and at worst an obstacle.

But sometimes this kind of escalation is simply unavoidable. Crisis ideologies amplify, but do not create, a desire for change. The always-evolving material realities of capitalist societies frequently create circumstances that are untenable, and which cannot be sufficiently addressed by political systems prone to inertia and capture by vested interests. When such a situation erupts into crisis, incremental change and a moderate tone may already be off the table. If your political opponent is electrifying voters with the rhetoric of emergency, the only option might be to fight fire with fire.

There is also a hypocrisy innate to democratic politics which makes the reality of how severe crises are managed something of a dirty secret. Politicians like to invite comparisons with past leaders who acted decisively during crises, whether it be French president Macron’s idolization of Charles de Gaulle, the progressive movement in the U.S. and elsewhere taking Franklin D. Roosevelt as their inspiration, or virtually every British leader’s wish to be likened to Winston Churchill. What is not acknowledged is the shameful compromises that accompanied these leaders’ triumphs. De Gaulle’s opportunity to found the French Fifth Republic came amid threats of a military coup. Roosevelt’s New Deal could only be enacted with the backing of Southern Democratic politicians, and as such, effectively excluded African Americans from its most important programs. Allied victory in the Second World War, the final fruit of Churchill’s resistance, came at the price of ceding Eastern and Central Europe to Soviet tyranny.

Such realities are especially difficult to bear because the crises of the past are a uniquely unifying force in liberal democracies. It was often through crises, after all, that rights were won, new institutions forged, and loyalty and sacrifice demonstrated. We tend to imagine those achievements as acts of principled agency which can be attributed to society as a whole, whereas they were just as often the result of improvisation, reluctant concession, and tragic compromise.

Obviously, we cannot expect a willingness to bend principles to be treated as a virtue, nor, perhaps, should we want it to be. But we can acknowledge the basic degree of pragmatism which crises demand. This is the most worrying aspect of the narratives of decision surrounding the current COVID-19 crisis: still rooted in the projects and preoccupations of the past, they threaten to render us inflexible at a moment when we are entering uncharted territory.

Away from the discussions about what the emergency has revealed and the action it demands, a new era is being forged by governments and other institutions acting on a more pressing set of motives—in particular, maintaining legitimacy in the face of sweeping political pressures and staving off the risk of financial and public health catastrophes. It is also being shaped from the ground up, as countless individuals have changed their behavior in response to an endless stream of graphs, tables, and reports in the media.

Political narratives simply fail to grasp the contingency of this situation. Commentators talk about the need to reduce global interdependence, even as the architecture of global finance has been further built up by the decision of the Federal Reserve, in March, to support it with unprecedented amounts of dollar liquidity. They continue to argue within a binary of free market and big government, even as staunchly neoliberal parties endorse state intervention in their economies on a previously unimaginable scale. Likewise with discussions about climate policy or western relations with China: the parameters within which these strategies will have to operate are simply unknown.

To reduce such complex circumstances to simple, momentous decisions is to offer us more clarity and agency than we actually possess. Nonetheless, that is how this crisis will continue to be framed, as political actors strive to capture the mood of emergency. It will only make matters worse, though, if our judgment remains colored by ambitions and resentments which were formed in earlier crises. If we continue those old struggles on this new terrain, we will swiftly lose our purchase on reality. We will be incapable of a realistic appraisal of the constraints now facing us, and without such realistic appraisal, no solution can be effectively pursued.

Protest and the pressures of lockdown

Was the lockdown the catalyst for the riots sweeping the United States during the past few days? The question will never be definitively answered, but it is difficult to believe that the psychological tension and economic hardships of shutting down society have not contributed to the unrest. Race relations in the US have long been a tinderbox, but the fears and frustrations of the last few months have surely made the situation a good deal more combustible.

Politics in the United Kingdom tend to offer a polite, sotto voce echo of those in the US. We too have seen the effects of lockdown fatigue, not in the form of burning cities but of indignation at the thought that the Prime Minister’s advisor, Dominic Cummings, may have breached lockdown regulations. Again, the furor which greeted that scandal – a tempest in a teacup by American standards – suggested a nation whose nerves had started to fray. Over the weekend, social distancing measures were being widely flouted in London, not least by crowds of demonstrators showing solidarity with their counterparts in the United States.

On both sides of the Atlantic, it is now difficult to imagine another lockdown being successfully imposed. Should there be a second spike in Covid cases, one can already see how the blame-game will be played: one side will condemn the government’s incompetence and lack of moral authority, the other will deflect by pointing to the protestors’ irresponsibility.

A broader problem for democratic societies is revealing itself here. At moments of crisis, the use of public space has traditionally been a crucial part of the political process. But if gathering in public spaces increases the danger posed by infectious disease, then there may be a serious conflict between the demands of public health and the health of the political system.

The scenes of egregious violence coming from American cities should not make us overlook the importance of demonstrations and protests. The descent of civil unrest into lawlessness and brutal destruction poses its own set of questions: chiefly, what kinds and degrees of illegality are morally justifiable in a given set of circumstances. But however we reply to this – wherever we draw the line between acceptable protest and unacceptable violence – it remains the case that some forms of protest can and must be accepted.

Protests are not just a way of expressing an opinion or trying to bring about change; they also have a cathartic value. They are pressure valves allowing pent-up tensions to be released in a way that, while potentially bringing about changes in policy or in the political system itself, can in the long run prevent the system from collapsing into chaos. Again, the fact that such a collapse can begin with protest is neither here nor there. The kinds of seething resentments which can make protests a catalyst for wider chaos must be addressed elsewhere in the political system; suppressing protest on these grounds would only make those resentments worse.

Even if you disagree with protesters – even if you think their protests are unjustified, irresponsible or downright dangerous – shutting down protest in a political culture where it is seen as a legitimate form of expression tends to be a self-defeating strategy.

This puts us in quite a bind when it comes to the ongoing Covid-19 threat. It has been said time and again that national or statewide lockdowns are an unprecedented social experiment whose effects cannot be predicted. These policies, justified though they may be in terms of public health, have amounted to forcing citizens into total passivity as their lives are reshaped by their governments’ frantic attempts to stay on top of the situation. Recent events in the United States, for all their other causes, suggest an early result of the lockdown experiment. I don’t expect other democracies to see unrest on a remotely similar scale, but in nations like Britain and France, which have their own traditions of protest, we should not be surprised if some people feel a desire to make their voices heard together and in public.

If that desire continues to manifest itself, and the pandemic does return in a second wave, it will present politicians with yet another excruciating judgment. If they try to prevent protests, social distancing will be dismissed as a pretext for silencing opposition. That could cause anger to grow, or lead to widespread rule breaking which would leave the relevant government’s authority in tatters. But if protests are allowed, it could hardly be an exception: restrictions would have to go altogether.

The management of civil unrest is a perilous business at the best of times. Setting limits to protest is necessary for a regime to maintain credibility, but knowing where to set those limits requires a deft reading of the mood. This has just been made considerably more difficult by a threat which both ramps up political tensions and constrains the use of public space. To the medical and economic challenges posed by Covid-19 we can now add another dilemma: that of judging when it is necessary to sacrifice the rules for the wider stability of the system.

 

Ancient liberties, novel dangers

Until very recently, the British political landscape was drearily familiar: each new argument about Brexit, the dangers of populism, or the excesses of cultural liberalism seemed identical to the last. It has taken an act of nature to force us out of that rut, but here we are. Thanks to the Covid-19 outbreak, the nation not only faces a public health emergency, but also an unprecedented suspension of civil liberties, as parliament this week granted the government powers to disperse public gatherings and confine people to their homes. 

Now we are seeing politics in a new light. On the left, many who were in the habit of portraying Boris Johnson as a budding authoritarian dictator found themselves pleading for the state-enforced lockdown which has now arrived. It is on the right that opinion has been divided. Though some have relished the state flexing its muscles during a crisis, it has equally been some of the nation’s most conservative voices that have expressed reservations about the infringement of civil liberties.

“End of freedom,” bellowed the front page of The Daily Telegraph on Tuesday morning. Thatcher biographer Charles Moore conceded that “it would be bold” to say the lockdown was wrong, but warned of a herd-like population becoming “blindly dependent on rigid orders.” Meanwhile, Mail on Sunday columnist Peter Hitchens, who has been loudly insisting that the government response is disproportionate to the threat posed by the virus, declared the emergency powers “a frightening series of restrictions on ancient liberties.”

This is a useful reminder that, on the subject of personal freedom, there are important crosscurrents between liberals and conservatives. Liberals are more inclined to say you should be able to do as you like, but they are also more comfortable with the state protecting its citizens from harm. Even Daniel Hannan, the closest thing modern Britain has to a 19th-century Whig, has supported the right of government to restrict liberties on the grounds that risk of infection is, in the language of neoclassical economics, an externality we impose on one another.

British conservatism, on the other hand, though traditionally keen on law and order, also contains a deep strain of suspicion of the state meddling in civil society. There are various uncharitable explanations for this instinct. Conservatism has historically been concerned with protecting the wealth and status of certain elites. Since the 1980s, it has additionally been susceptible to libertarian dogma about free markets. More simply, the conservative worldview tends to attract a certain kind of grumpy individualist who resents the bureaucracy of modern society (even when it is trying to protect him from a plague).

In its purest Burkean form, however, the conservative case for liberty rests on the much richer philosophical grounds of the trans-generational contract. Given what we know about the fallibility of human judgment, and about the difficulty of clawing back rights once they have been lost, we should conclude that the freedoms which previous generations have struggled for are not ours to give away at a moment’s notice, but to guard jealously for those who come after us. Hence the emphasis on “ancient liberties,” and on pausing for thought especially during an emergency.

I take this argument seriously, regardless of whether it is actually what motivates conservatives today. I take it far more seriously than the libertarian case against an overbearing state, which rests on a dubious view of human beings as autonomous contract-making individuals, and on unrealistic injunctions against coercion. The Burkean paradigm emphatically does not value freedom in and of itself. Rather, it posits that the cumulative experience of generations has established the value of particular freedoms within the context of a particular society.

Even if, like myself, you think it was correct for the government to enforce the lockdown, I think we should still adopt the spirit of mild paranoia which animates the “ancient liberties” outlook. We should be alert to the possibility that certain emergency measures might outlive the emergency in one form or another. We should push back against authorities who seem to be enjoying their new powers too much. And we should think about how this experience of trading freedom for safety might influence expectations in the long term.

Yet these same considerations also point to a major weakness of thinking about civil liberties in primarily historical terms. Namely, it can lead us to fixate on traditional rights and customs, and consequently, to overlook new kinds of threat – a problem aptly illustrated by those who seem to think the worst part of the lockdown is that British people can’t go to the pub.

I don’t think the real danger of our present situation has much to do with the forced closure of businesses, or with physical confinement to our households. The damage these measures are inflicting on our economy, and the immense financial burden the state is assuming as a consequence, make it irrational for even the most power-crazed despot to maintain them longer than necessary. In any case, I get the impression the public is fully aware that these are emergency precautions, and won’t take kindly to prolonged interference in such matters.

Rather, it seems to me the threat is most acute with respect to the state’s technological capacity. As I mentioned in a recent post, there is a good chance that the Covid-19 crisis will prompt various industries to develop technologies which allow them to do more remotely. We should expect a similar trajectory in terms of state power. The administrative challenge of responding to the epidemic, and of facilitating economic and bureaucratic activity during the lockdown, will surely incentivise the state to strengthen and centralise its digital resources. It would, in the process, become more adept at collecting, managing, and utilising information about its citizens, while learning new ways of enacting its most intrusive powers.

Admittedly the British government, which does not even have an emergency messaging system for contacting citizens on their mobiles, does not yet seem very threatening in this respect. But elsewhere there has been plenty of evidence that new techniques of surveillance and control are being forged in response to the crisis (I recommend reading this piece by Jeremy Cliffe in The New Statesman), and we could yet see similar developments here, especially if expanding digital infrastructures turns out to be a matter of economic competitiveness.

It may well be, of course, that we want our government to take some of these steps if it helps us weather the current storm. But that is precisely where the risk lies. If we think it necessary to empower the state in new ways, we need to devise new forms of oversight and accountability. To that end, thinking about our freedoms as keepsakes from the past is of limited use; we also need to think imaginatively about how they can be extended into the future.

The politics of this crisis will be grim. We should prepare now.

Last weekend, which now feels like a lifetime ago, I nervously attended what will probably be my last social gathering for several months. Despite a general mood of uneasiness, at least one of my friends was hoping that there would be a silver lining to the looming Covid-19 epidemic. Did I not think, he asked, that confronting this challenge together might finally instil some solidarity in our society?

I heard similar sentiments being expressed throughout last week. In a BBC Newsnight interview, Rabbi Jonathan Sacks suggested that “We are going to come through this… with a much stronger commitment to helping others,” adding that it was “probably the lesson we needed as a country.” Some of those rushing to join community aid groups have expressed similar optimism. Even on social media, the shared experience of confinement has given rise to something of an upbeat communal spirit.

Solidarity is obviously welcome, and action to help the vulnerable is more welcome still. I am as hopeful as anyone else that little platoons will play their part in this emergency. But we should not fool ourselves about what lies ahead. Though many commentators have been drawing parallels to the Second World War, the emerging consensus among economists is that the shock now underway will dwarf that of the early 1940s. The blow to demand dealt by social distancing measures points towards a spiral of business contraction and redundancies simply unprecedented in modern history. The forecasts flying around in recent days vary considerably, and are of limited use given how quickly the situation is developing. But I have yet to see any evidence that the swiftly approaching economic crisis will not be brutal – and that is before we consider the effects of the financial crisis unfolding alongside it.

This means that our efforts as individuals and communities ultimately pale by comparison to the responsibility which now rests on the state. Only the state can manage the gargantuan tasks of coordinating healthcare, propping up collapsing industries, and mitigating the financial damage in the population at large. As the multi-hundred billion pound measures announced by Chancellor Rishi Sunak last week attest, we are undergoing a transformation of the government’s role in the economy on a scale not seen in living memory. And we are only at the beginning.

What is more, it’s becoming apparent that the flag around which many of us have been rallying in recent weeks – the necessity of aggressive containment measures to ease the stress on our healthcare system – will only take us so far. At the moment, our priority is to slow the virus’ spread by reducing interpersonal contact as much as possible. But if, as is widely suspected, any attempt to return to normality will only cause infections to rise again, then there will be truly horrendous trade-offs between ongoing economic damage and the likely deaths resulting from interaction. (The dimensions of that dilemma may become clearer in the coming days, as the Chinese authorities begin to relax their brutal lockdown of Wuhan and the surrounding Hubei province).

All of this points to inevitable and legitimate political conflict in the coming months and years. The fissures which have threatened to emerge following each of Sunak’s announcements last week – between homeowners and renters, between businesses and workers, between employees and the self-employed – are just a glimpse of what lies ahead.

As the state rapidly expands into a Leviathan, acting as insurer of last resort for much of the population, it will assume responsibility for the survival prospects not only of thousands of individuals at risk of illness, but of entire sectors of the economy. There may be hopes of a swift “bounce-back” recovery, if the government’s attempts to flood the economy with borrowed and printed cash manage to shore up demand, but we should not delude ourselves that we can somehow just resume where we left off. Countless businesses and careers that entered this crisis as perfectly viable will need ongoing targeted support to survive, and the state will need to decide which are most worthy of that support.

In other words, whatever the settlement that emerges from a prolonged period of extraordinary state intervention, there are bound to be winners and losers. As the aftermath of the 2007-08 financial crisis taught us, a perception that bailouts have been distributed unfairly will lead to toxic resentments. The coming recession has every likelihood of bringing such tensions back to the surface. As a recent report by the Resolution Foundation pointed out, the sectors being hardest hit by the downturn are disproportionately staffed by those with low incomes, with little or no savings, and without the option to work from home. One can already imagine a scenario in which handouts to firms deemed too big or strategically important to fail coincide with a sense of powerlessness among a burgeoning population of underemployed workers and debt-laden small businesses.

There is no doubt that in the short term, our efforts must be directed toward mitigating a public health emergency which, sadly, has yet to reach its peak. I accept that this will entail seeking political conciliation wherever possible, so as to focus on the challenge at hand.

In the medium-term, however, we need to think about what solidarity really means in these circumstances. It should, surely, involve an acknowledgement that the careful mediation of political disputes will be essential to riding this crisis out. That will require, above all, a framework in which competing interests can make their claims without the resulting conflicts becoming too incendiary.

Such a framework is precisely what our political culture has already, in recent years, shown itself to be lacking. In a strange throwback to the “grand bargains” that characterised mid-20th century politics, the government has promised to consult with representatives of business and the unions going forward. But trade unions today represent barely a fifth of the workforce, with memberships skewed towards older, well-paid public sector workers. Like many other advanced economies, modern Britain is a patchwork of groups whose economic interests appear to align, but which lack the social cohesion necessary to realise and articulate those interests. They exist only as statistical entities.

It is crucial, therefore, that we think about the role of institutions in channeling some of the solidarity that is generated by this crisis towards conflict resolution. This should be an opportunity for the Labour Party to address the problem of who in modern Britain is most in need of its representation, and to provide constructive opposition to the government on that basis. It should be an opportunity for the media to break out of last decade’s culture wars and identify on whose behalf the government should be held to account.

We will also need new institutions to represent those socially dispersed interests who will struggle to be heard in the halls of power during a new era of corporatism. Perhaps this is where the little platoons will make a difference after all. Could community aid groups, or the professional networks which are already springing up among the unemployed, gradually morph into such bodies?

Admittedly it seems perverse to talk about the necessity of conflict at a time like this. Yet if we suppress the political fallout from this crisis, we will only be storing up demons for later.