The politics of crisis is not going away any time soon

This essay was originally published by Palladium magazine on June 10th 2020

A pattern emerges when surveying the vast commentary on the COVID-19 pandemic. At its center is a distinctive image of crisis: the image of a cruel but instructive spotlight laying bare the flaws of contemporary society. Crisis, we read, has “revealed,” “illuminated,” “clarified,” and above all, “exposed” our collective failures and weaknesses. It has unveiled the corruption of institutions, the decadence of culture, and the fragility of a material way of life. It has sounded the death-knell for countless projects and ideals.

“The pernicious coronavirus tore off an American scab and revealed suppurating wounds beneath,” announces one commentator, after noting “these calamities can be tragically instructional…Fundamental but forgotten truths, easily masked in times of calm, reemerge.”

Says another: “Invasion and occupation expose a society’s fault lines, exaggerating what goes unnoticed or accepted in peacetime, clarifying essential truths, raising the smell of buried rot.”

You may not be surprised to learn that these two near-identical comments come from very different interpretations of the crisis. The first, from Trump-supporting historian Victor Davis Hanson of the Hoover Institution, claims that the “suppurating wounds” of American society are an effete liberal elite compromised by their reliance on a malignant China and determined to undermine the president at any cost. According to the second, by The Atlantic’s George Packer, the “smell of buried rot” comes from the Trump administration itself, the product of an oligarchic ascendency whose power stems from the division of society and hollowing-out of the state.

Nothing, it seems, has evaded the extraordinary powers of diagnosis made available by crisis: merciless globalism, backwards nationalism, the ignorance of populists, the naivety of liberals, the feral market, the authoritarian state. We are awash in diagnoses, but diagnosis is only the first step. It is customary to sharpen the reality exposed by the virus into a binary, existential decision: address the weakness identified, or succumb to it. “We’re faced with a choice that the crisis makes inescapably clear,” writes Packer, “the alternative to solidarity is death.” No less ominous is Hanson’s invocation of Pearl Harbor: “Whether China has woken a sleeping giant in the manner of the earlier Japanese, or just a purring kitten, remains to be seen.”

The crisis mindset is not just limited to journalistic sensationalism. Politicians, too, have appealed to a now-or-never, sink-or-swim framing of the COVID-19 emergency. French President Emmanuel Macron has been among those using such terms to pressure Eurozone leaders into finally establishing a collective means of financing debt. “If we can’t do this today, I tell you the populists will win,” Macron told The Financial Times. Across the Atlantic, U.S. Congresswoman Alexandria Ocasio-Cortez has claimed that the pandemic “has just exposed us, the fragility of our system,” and has adopted the language of “life or death” in her efforts to bring together the progressive and centrist wings of the Democratic Party before the presidential election in November.

And yet, in surveying this rhetoric of diagnosis and decision, what is most surprising is how familiar it sounds. Apart from the pathogen itself, there are few narratives of crisis now being aired which were not already well-established during the last decade. Much as the coronavirus outbreak has felt like a sudden rupture from the past, we have already been long accustomed to the politics of crisis.

It was under the mantra of “tough decisions,” with the shadow of the financial crisis still looming, that sharp reductions in public spending were justified across much of the Western world after 2010. Since then, the European Union has been crippled by conflicts over sovereign debt and migration. It was the rhetoric of the Chinese menace and of terminal decline—of “rusted-out factories scattered like tombstones across the landscape of our nation,” to quote the 2017 inaugural address—that brought President Trump to power. Meanwhile, progressives had already mobilized themselves around the language of emergency with respect to inequality and climate change.

There is something deeply paradoxical about all of this. The concept of crisis is supposed to denote a need for exceptional attention and decisive focus. In its original Greek, the term krisis often referred to a decision between two possible futures, but the ubiquity of “crisis” in our politics today has produced only deepening chaos. The sense of emergency is stoked continuously, but the accompanying promises of clarity, agency, and action are never delivered. Far from a revealing spotlight, the crises of the past decade have left us in a lingering fog which threatens to blind us at a moment when we really do need judicious action.

***

Crises are a perennial feature of modern history. For half a millennium, human life has been shaped by impersonal forces of increasing complexity and abstraction, from global trade and finance to technological development and geopolitical competition. These forces are inherently unstable and frequently produce moments of crisis, not least due to an exogenous shock like a deadly plague. Though rarely openly acknowledged, the legitimacy of modern regimes has largely depended on a perceived ability to keep that instability at bay.

This is the case even at times of apparent calm, such as the period of U.S. global hegemony immediately following the Cold War. The market revolution of the 1980s and globalization of the 1990s were predicated on a conception of capitalism as an unpredictable, dynamic system which could nonetheless be harnessed and governed by technocratic expertise. Such were the hopes of “the great moderation.” A series of emerging market financial crises—in Mexico, Korea, Thailand, Indonesia, Russia, and Argentina—provided opportunities for the IMF and World Bank to demand compliance with the Washington Consensus in economic policy. Meanwhile, there were frequent occasions for the U.S. to coordinate global police actions in war-torn states.

Despite the façade of independent institutions and international bodies, it was in no small part through such crisis-fighting economic and military interventions that a generation of U.S. leaders projected power abroad and secured legitimacy at home. This model of competence and progress, which seems so distant now, was not based on a sense of inevitability so much as confidence in the capacity to manage one crisis after another: to “stabilize” the most recent eruption of chaos and instability.

A still more striking example comes from the European Union, another product of the post-Cold War era. The project’s main purpose was to maintain stability in a trading bloc soon to be dominated by a reunified Germany. Nonetheless, many of its proponents envisaged that the development of a fully federal Europe would occur through a series of crises, with the supra-national structures of the EU achieving more power and legitimacy at each step. When the Euro currency was launched in 1999, Romano Prodi, then president of the European Commission, spoke of how the EU would extend its control over economic policy: “It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created.”

It is not difficult to see why Prodi took this stance. Since the rise of the rationalized state two centuries ago, managerial competence has been central to notions of successful governance. In the late 19th century, French sociologist Emile Durkheim compared the modern statesman to a physician: “he prevents the outbreak of illnesses by good hygiene, and seeks to cure them when they have appeared.” Indeed, the bureaucratic structures which govern modern societies have been forged in the furnaces of crisis. Social security programs, income tax, business regulation, and a host of other state functions now taken for granted are a product of upheavals of the 19th and early 20th centuries: total war, breakneck industrialization, famine, and financial panic. If necessity is the mother of invention, crisis is the midwife of administrative capacity.

By the same token, the major political ideologies of the modern era have always claimed to offer some mastery over uncertainty. The locus of agency has variously been situated in the state, the nation, individuals, businesses, or some particular class or group; the stated objectives have been progress, emancipation, greatness, or simply order and stability. But in every instance, the message has been that the chaos endemic to modern history must be tamed or overcome by some paradigmatic form of human action. The curious development of Western modernity, where the management of complex, crisis-prone systems has come to be legitimated through secular mass politics, appears amenable to no other template.

It is against this backdrop that we can understand the period of crisis we have endured since 2008. The narratives of diagnosis and decision which have overtaken politics during this time are variations on a much older theme—one that is present even in what are retrospectively called “times of calm.” The difference is that, where established regimes have failed to protect citizens from instability, the logic of crisis management has burst its technocratic and ideological bounds and entered the wider political sphere. The greatest of these ruptures was captured by a famous statement attributed to Federal Reserve Chairman Ben Bernanke in September 2008. Pleading with Congress to pass a $700 billion bailout, Bernanke claimed: “If we don’t do this now, we won’t have an economy on Monday.”

This remark set the tone for the either/or, act-or-perish politics of the last decade. It points to a loss of control which, in the United States and beyond, opened the way for competing accounts not just of how order could be restored, but also what that order should look like. Danger and disruption have become a kind of opportunity, as political insurgents across the West have captured established parties, upended traditional power-sharing arrangements, and produced the electoral shocks suggested by the ubiquitous phrase “the age of Trump and Brexit.” These campaigns sought to give the mood of crisis a definite shape, directing it towards the need for urgent decision or transformative action, thereby giving supporters a compelling sense of their own agency.

***

Typically though, such movements do not merely offer a choice between existing chaos and redemption to come. In diagnoses of crisis, there is always an opposing agent who is responsible for and threatening to deepen the problem. We saw this already in Hanson’s and Packer’s association of the COVID-19 crisis with their political opponents. But it was there, too, among Trump’s original supporters, for whom the agents of crisis were not just immigrants and elites but, more potently, the threat posed by the progressive vision for America. This was most vividly laid out in Michael Anton’s infamous “Flight 93 Election” essay, an archetypal crisis narrative which urged fellow conservatives that only Trump could stem the tide of “wholesale cultural and political change,” claiming “if you don’t try, death is certain.”

Yet Trump’s victory only galvanized the radical elements of the left, as it gave them a villain to point to as a way of further raising the consciousness of crisis among their own supporters. The reviled figure of Trump has done more for progressive stances on immigration, healthcare, and climate action than anyone else, for he is the ever-present foil in these narratives of emergency. Then again, such progressive ambitions, relayed on Fox News and social media, have also proved invaluable in further stoking conservatives’ fears.

To simply call this polarization is to miss the point. The dynamic taking shape here is rooted in a shared understanding of crisis, one that treats the present as a time in which the future of society is being decided. There is no middle path, no going back: each party claims that if they do not take this opportunity to reshape society, their opponents will. In this way, narratives of crisis feed off one another, and become the basis for a highly ideological politics—a politics that de-emphasizes compromise with opponents and with the practical constraints of the situation at hand, prioritizing instead the fulfillment of a goal or vision for the future.

Liberal politics is ill-equipped to deal with, or even to properly recognize, such degeneration of discourse. In the liberal imagination, the danger of crisis is typically that the insecurity of the masses will be exploited by a demagogue, who will then transfigure the system into an illiberal one. In many cases, though, it is the system which loses legitimacy first, as the frustrating business of deliberative, transactional politics cannot meet the expectations of transformative change which are raised in the public sphere.

Consider the most iconic and, in recent years, most frequently analogized period of crisis in modern history: Germany’s Weimar Republic of 1918-33. These were the tempestuous years between World War I and Hitler’s dictatorship, during which a fledgling democracy was rocked by armed insurrection, hyperinflation, foreign occupation, and the onset of the Great Depression, all against a backdrop of rapid social, economic, and technological upheaval.

Over the past decade or so, there has been no end of suggestions that ours is a “Weimar moment.” Though echoes have been found in all sorts of social and cultural trends, the overriding tendency has been to view the crises of the Weimar period backwards through their end result, the establishment of Nazi dictatorship in 1933. In various liberal democracies, the most assertive Weimar parallels have referred to the rise of populist and nationalist politics, and in particular, the erosion of constitutional norms by leaders of this stripe. The implication is that history has warned us how the path of crisis can lead towards an authoritarian ending.

What this overlooks, however, is that Weimar society was not just a victim of crisis that stumbled blindly towards authoritarianism, but was active in interpreting what crises revealed and how they should be addressed. In particular, the notion of crisis served the ideological narratives of the day as evidence of the need to refashion the social settlement. Long before the National Socialists began their rise in the early 1930s, these conflicting visions, pointing to one another as evidence of the stakes, sapped the republic’s legitimacy by making it appear impermanent and fungible.

The First World War had left German thought with a pronounced sense of the importance of human agency in shaping history. On the one hand, the scale and brutality of the conflict left survivors adrift in a world of unprecedented chaos, seeming to confirm a suspicion of some 19th century German intellectuals that history had no inherent meaning. But at the same time, the war had shown the extraordinary feats of organization and ingenuity that an industrialized society, unified and mobilized around a single purpose, was capable of. Consequently, the prevailing mood of Weimar was best captured by the popular term Zeitenwende, the turning of the times. Its implication was that the past was irretrievably lost, the present was chaotic and dangerous, but the future was there to be claimed by those with the conviction and technical skill to do so.

Throughout the 1920s, this historical self-consciousness was expressed in the concept of Krisis or Krise, crisis. Intellectual buzzwords referred to a crisis of learning, a crisis of European culture, a crisis of historicism, crisis theology, and numerous crises of science and mathematics. The implication was that these fields were in a state of flux which called for resolution. A similar dynamic could be seen in the political polemics which filled the Weimar press, where discussions of crisis tended to portray the present as a moment of decision or opportunity. According to Rüdiger Graf’s study of more than 370 Weimar-era books and still more journal articles with the term “crisis” in their titles, the concept generally functioned as “a call to action” by “narrow[ing] the complex political world to two exclusive alternatives.”

Although the republic was most popular among workers and social democrats, the Weimar left contained an influential strain of utopian thought which saw itself as working beyond the bounds of formal politics. Here, too, crisis was considered a source of potential. Consider the sentiments expressed by Walter Gropius, founder of the Bauhaus school of architecture and design, in 1919:

Capitalism and power politics have made our generation creatively sluggish, and our vital art is mired in a broad bourgeois philistinism. The intellectual bourgeois of the old Empire…has proven his incapacity to be the bearer of German culture. The benumbed world is now toppled, its spirit is overthrown, and is in the midst of being recast in a new mold.

Gropius was among those intellectuals, artists, and administrators who, often taking inspiration from an idealized image of the Soviet Union, subscribed to the idea of the “new man”—a post-capitalist individual whose self-fulfillment would come from social duty. Urban planning, social policy, and the arts were all seen as means to create the environment in which this new man could emerge.

The “bourgeois of the old Empire,” as Gropius called them, had indeed been overthrown; but in their place came a reactionary modernist movement, often referred to as the “conservative revolution,” whose own ideas of political transformation used socialism both as inspiration and as ideological counterpoint. In the works of Ernst Jünger, technology and militarist willpower were romanticized as dynamic forces which could pull society out of decadence. Meanwhile, the political theorist Carl Schmitt emphasized the need for a democratic polity to achieve a shared identity in opposition to a common enemy, a need sometimes better accomplished by the decisive judgments of a sovereign dictator than by a fractious parliamentary system.

Even some steadfast supporters of the republic, like the novelist Heinrich Mann, seized on the theme of crisis as a call to transformative action. In a 1923 speech, against a backdrop of hyperinflation and the occupation of the Ruhr by French forces, Mann insisted that the republic should resist the temptation of nationalism, and instead fulfill its promise as a “free people’s state” by dethroning the “blood-gorging” capitalists who still controlled society in their own interests.

These trends were not confined to rhetoric and intellectual discussion. They were reflected in practical politics by the tendency of even trivial issues to be treated as crises that raised fundamental conflicts of worldview. So it was that, in 1926, a government was toppled by a dispute over the regulations for the display of the republican flag. Meanwhile, representatives were harangued by voters who expected them to embody the uncompromising ideological clashes taking place in the wider political sphere. In towns and cities across the country, rival marches and processions signaled the antagonism of socialists and their conservative counterparts—the burghers, professionals and petite bourgeoisie who would later form the National Socialist coalition, and who by mid-decade had already coalesced around President Paul von Hindenburg.

***

We are not Weimar. The ideologies of that era, and the politics that flowed from them, were products of their time, and there were numerous contingent reasons why the republic faced an uphill battle for acceptance. Still, there are lessons. The conflict between opposing visions of society may seem integral to the spirit of democratic politics, but at times of crisis, it can be corrosive to democratic institutions. The either/or mindset can add a whole new dimension to whatever emergency is at hand, forcing what is already a time of disorientating change into a zero-sum competition between grand projects and convictions that leave ordinary, procedural politics looking at best insignificant, and at worst an obstacle.

But sometimes this kind of escalation is simply unavoidable. Crisis ideologies amplify, but do not create, a desire for change. The always-evolving material realities of capitalist societies frequently create circumstances that are untenable, and which cannot be sufficiently addressed by political systems prone to inertia and capture by vested interests. When such a situation erupts into crisis, incremental change and a moderate tone may no longer be viable options. If your political opponent is electrifying voters with the rhetoric of emergency, the only option might be to fight fire with fire.

There is also a hypocrisy innate to democratic politics which makes the reality of how severe crises are managed something of a dirty secret. Politicians like to invite comparisons with past leaders who acted decisively during crises, whether it be French president Macron’s idolization of Charles de Gaulle, the progressive movement in the U.S. and elsewhere taking Franklin D. Roosevelt as their inspiration, or virtually every British leader’s wish to be likened to Winston Churchill. What is not acknowledged is the shameful compromises that accompanied these leaders’ triumphs. De Gaulle’s opportunity to found the French Fifth Republic came amid threats of a military coup. Roosevelt’s New Deal could only be enacted with the backing of Southern Democratic politicians, and as such, effectively excluded African Americans from its most important programs. Allied victory in the Second World War, the final fruit of Churchill’s resistance, came at the price of ceding Eastern and Central Europe to Soviet tyranny.

Such realities are especially difficult to bear because the crises of the past are a uniquely unifying force in liberal democracies. It was often through crises, after all, that rights were won, new institutions forged, and loyalty and sacrifice demonstrated. We tend to imagine those achievements as acts of principled agency which can be attributed to society as a whole, whereas they were just as often the result of improvisation, reluctant concession, and tragic compromise.

Obviously, we cannot expect a willingness to bend principles to be treated as a virtue, nor, perhaps, should we want it to be. But we can acknowledge the basic degree of pragmatism which crises demand. This is the most worrying aspect of the narratives of decision surrounding the current COVID-19 crisis: still rooted in the projects and preoccupations of the past, they threaten to render us inflexible at a moment when we are entering uncharted territory.

Away from the discussions about what the emergency has revealed and the action it demands, a new era is being forged by governments and other institutions acting on a more pressing set of motives—in particular, maintaining legitimacy in the face of sweeping political pressures and staving off the risk of financial and public health catastrophes. It is also being shaped from the ground up, as countless individuals have changed their behavior in response to an endless stream of graphs, tables, and reports in the media.

Political narratives simply fail to grasp the contingency of this situation. Commentators talk about the need to reduce global interdependence, even as the architecture of global finance has been further built up by the decision of the Federal Reserve, in March, to support it with unprecedented amounts of dollar liquidity. They continue to argue within a binary of free market and big government, even as staunchly neoliberal parties endorse state intervention in their economies on a previously unimaginable scale. Likewise with discussions about climate policy or Western relations with China: the parameters within which these strategies will have to operate are simply unknown.

To reduce such complex circumstances to simple, momentous decisions is to offer us more clarity and agency than we actually possess. Nonetheless, that is how this crisis will continue to be framed, as political actors strive to capture the mood of emergency. It will only make matters worse, though, if our judgment remains colored by ambitions and resentments which were formed in earlier crises. If we continue those old struggles on this new terrain, we will swiftly lose our purchase on reality. We will be incapable of a realistic appraisal of the constraints now facing us, and without such realistic appraisal, no solution can be effectively pursued.

Protest and the pressures of lockdown

Was the lockdown the catalyst for the riots sweeping the United States during the past few days? The question will never be definitively answered, but it is difficult to believe that the psychological tension and economic hardships of shutting down society have not contributed to the unrest. Race relations in the US have long been a tinderbox, but the fears and frustrations of the last few months have surely made the situation a good deal more combustible.

Politics in the United Kingdom tend to offer a polite, sotto voce echo of those in the US. We too have seen the effects of lockdown fatigue, not in the form of burning cities but of indignation at the thought that the Prime Minister’s advisor, Dominic Cummings, may have breached lockdown regulations. Again, the furor which greeted that scandal – a tempest in a teacup by American standards – suggested a nation whose nerves had started to fray. Over the weekend, social distancing measures were being widely flouted in London, not least by crowds of demonstrators showing solidarity with their counterparts in the United States.

On both sides of the Atlantic, it is now difficult to imagine another lockdown being successfully imposed. Should there be a second spike in Covid cases, one can already see how the blame-game will be played: one side will condemn the government’s incompetence and lack of moral authority, the other will deflect by pointing to the protestors’ irresponsibility.

A broader problem for democratic societies is revealing itself here. At moments of crisis, the use of public space has traditionally been a crucial part of the political process. But if gathering in public spaces increases the danger posed by infectious disease, then there may be a serious conflict between the demands of public health and the health of the political system.

The scenes of egregious violence coming from American cities should not make us overlook the importance of demonstrations and protests. The descent of civil unrest into lawlessness and brutal destruction poses its own set of questions: chiefly, what kinds and degrees of illegality are morally justifiable in a given set of circumstances. But however we reply to this – wherever we draw the line between acceptable protest and unacceptable violence – it remains the case that some forms of protest can and must be accepted.

Protests are not just a way of expressing an opinion or trying to bring about change; they also have a cathartic value. They are pressure valves allowing pent-up tensions to be released in a way that, while potentially bringing about changes in policy or in the political system itself, can in the long run prevent the system from collapsing into chaos. Again, the fact that such a collapse can begin with protest is neither here nor there. The kinds of seething resentments which can make protests a catalyst for wider chaos must be addressed elsewhere in the political system; suppressing protest on these grounds would only make those resentments worse.

Even if you disagree with protesters – even if you think their protests are unjustified, irresponsible or downright dangerous – shutting down protest in a political culture where it is seen as a legitimate form of expression tends to be a self-defeating strategy.

This puts us in quite a bind when it comes to the ongoing Covid-19 threat. It has been said time and again that national or statewide lockdowns are an unprecedented social experiment whose effects cannot be predicted. These policies, justified though they may be in terms of public health, have amounted to forcing citizens into total passivity as their lives are reshaped by their governments’ frantic attempts to stay on top of the situation. Recent events in the United States, for all their other causes, suggest an early result of the lockdown experiment. I don’t expect other democracies to see unrest on a remotely similar scale, but in nations like Britain and France, which have their own traditions of protest, we should not be surprised if some people feel a desire to make their voices heard together and in public.

If that desire continues to manifest itself, and the pandemic does return in a second wave, it will present politicians with yet another excruciating judgment. If they try to prevent protests, social distancing will be dismissed as a pretext for silencing opposition. That could cause anger to grow, or lead to widespread rule breaking which would leave the relevant government’s authority in tatters. But if protests are allowed, they could hardly be an exception: restrictions would have to go altogether.

The management of civil unrest is a perilous business at the best of times. Setting limits to protest is necessary for a regime to maintain credibility, but knowing where to set those limits requires a deft reading of the mood. This has just been made considerably more difficult by a threat which both ramps up political tensions and constrains the use of public space. To the medical and economic challenges posed by Covid-19 we can now add another dilemma: that of judging when it is necessary to sacrifice the rules for the wider stability of the system.


Ancient liberties, novel dangers

Until very recently, the British political landscape was drearily familiar: each new argument about Brexit, the dangers of populism, or the excesses of cultural liberalism seemed identical to the last. It has taken an act of nature to force us out of that rut, but here we are. Thanks to the Covid-19 outbreak, the nation not only faces a public health emergency, but also an unprecedented suspension of civil liberties, as parliament this week granted the government powers to disperse public gatherings and confine people to their homes.

Now we are seeing politics in a new light. On the left, many who were in the habit of portraying Boris Johnson as a budding authoritarian dictator found themselves pleading for the state-enforced lockdown which has now arrived. It is on the right that opinion has been divided. Though some have relished the state flexing its muscles during a crisis, it has equally been some of the nation’s most conservative voices that have expressed reservations about the infringement of civil liberties.

“End of freedom,” bellowed the front page of The Daily Telegraph on Tuesday morning. Thatcher biographer Charles Moore conceded that “it would be bold” to say the lockdown was wrong, but warned of a herd-like population becoming “blindly dependent on rigid orders.” Meanwhile, Mail on Sunday columnist Peter Hitchens, who has been loudly insisting that the government response is disproportionate to the threat posed by the virus, declared the emergency powers “a frightening series of restrictions on ancient liberties.”

This is a useful reminder that, on the subject of personal freedom, there are important crosscurrents between liberals and conservatives. While liberals are more inclined to say you should be able to do as you like, they are also more comfortable with the state protecting its citizens from harm. Even Daniel Hannan, the closest thing modern Britain has to a 19th-century Whig, has supported the right of government to restrict liberties on the grounds that risk of infection is, in the language of neoclassical economics, an externality we impose on one another.

British conservatism, on the other hand, though traditionally keen on law and order, also contains a deep strain of suspicion of state meddling in civil society. There are various uncharitable explanations for this instinct. Conservatism has historically been concerned with protecting the wealth and status of certain elites. Since the 1980s, it has additionally been susceptible to libertarian dogma about free markets. More simply, the conservative worldview tends to attract a certain kind of grumpy individualist who resents the bureaucracy of modern society (even when it is trying to protect him from a plague).

In its purest Burkean form, however, the conservative case for liberty rests on the much richer philosophical grounds of the trans-generational contract. Given what we know about the fallibility of human judgment, and about the difficulty of clawing back rights once they have been lost, we should conclude that the freedoms which previous generations have struggled for are not ours to give away at a moment’s notice, but to guard jealously for those who come after us. Hence the emphasis on “ancient liberties,” and on pausing for thought especially during an emergency.

I take this argument seriously, regardless of whether it is actually what motivates conservatives today. I take it far more seriously than the libertarian case against an overbearing state, which rests on a dubious view of human beings as autonomous contract-making individuals, and on unrealistic injunctions against coercion. The Burkean paradigm emphatically does not value freedom in and of itself. Rather, it posits that the cumulative experience of generations has established the value of particular freedoms within the context of a particular society.

Even if, like myself, you think it was correct for the government to enforce the lockdown, I think we should still adopt the spirit of mild paranoia which animates the “ancient liberties” outlook. We should be alert to the possibility that certain emergency measures might outlive the emergency in one form or another. We should push back against authorities who seem to be enjoying their new powers too much. And we should think about how this experience of trading freedom for safety might influence expectations in the long term.

Yet these same considerations also point to a major weakness of thinking about civil liberties in primarily historical terms. Namely, it can lead us to fixate on traditional rights and customs, and consequently, to overlook new kinds of threat – a problem aptly illustrated by those who seem to think the worst part of the lockdown is that British people can’t go to the pub.

I don’t think the real danger of our present situation has much to do with the forced closure of businesses, or with physical confinement to our households. The damage these measures are inflicting on our economy, and the immense financial burden the state is assuming as a consequence, make it irrational for even the most power-crazed despot to maintain them longer than necessary. In any case, I get the impression the public is fully aware that these are emergency precautions, and won’t take kindly to prolonged interference in such matters.

Rather, it seems to me the threat is most acute with respect to the state’s technological capacity. As I mentioned in a recent post, there is a good chance that the Covid-19 crisis will prompt various industries to develop technologies which allow them to do more remotely. We should expect a similar trajectory in terms of state power. The administrative challenge of responding to the epidemic, and of facilitating economic and bureaucratic activity during the lockdown, will surely incentivise the state to strengthen and centralise its digital resources. It would, in the process, become more adept at collecting, managing, and utilising information about its citizens, while learning new ways of enacting its most intrusive powers.

Admittedly the British government, which does not even have an emergency messaging system for contacting citizens on their mobiles, does not yet seem very threatening in this respect. But elsewhere there has been plenty of evidence that new techniques of surveillance and control are being forged in response to the crisis (I recommend reading this piece by Jeremy Cliffe in The New Statesman), and we could yet see similar developments here, especially if expanding digital infrastructures turns out to be a matter of economic competitiveness.

It may well be, of course, that we want our government to take some of these steps if it helps us weather the current storm. But that is precisely where the risk lies. If we think it necessary to empower the state in new ways, we need to devise new forms of oversight and accountability. To that end, thinking about our freedoms as keepsakes from the past is of limited use; we also need to think imaginatively about how they can be extended into the future.

The politics of this crisis will be grim. We should prepare now.

Last weekend, which now feels like a lifetime ago, I nervously attended what will probably be my last social gathering for several months. Despite a general mood of uneasiness, at least one of my friends was hoping that there would be a silver lining to the looming Covid-19 epidemic. Did I not think, he asked, that confronting this challenge together might finally instil some solidarity in our society?

I heard similar sentiments being expressed throughout last week. In a BBC Newsnight interview, Rabbi Jonathan Sacks suggested that “We are going to come through this… with a much stronger commitment to helping others,” adding that it was “probably the lesson we needed as a country.” Some of those rushing to join community aid groups have expressed similar optimism. Even on social media, the shared experience of confinement has given rise to something of an upbeat communal spirit.

Solidarity is obviously welcome, and action to help the vulnerable is more welcome still. I am as hopeful as anyone else that little platoons will play their part in this emergency. But we should not fool ourselves about what lies ahead. Though many commentators have been drawing parallels to the Second World War, the emerging consensus among economists is that the shock now underway will dwarf that of the early 1940s. The blow to demand dealt by social distancing measures points towards a spiral of business contraction and redundancies simply unprecedented in modern history. The forecasts flying around in recent days vary considerably, and are of limited use given how quickly the situation is developing. But I have yet to see any evidence that the swiftly approaching economic crisis will not be brutal – and that is before we consider the effects of the financial crisis unfolding alongside it.

This means that our efforts as individuals and communities ultimately pale by comparison to the responsibility which now rests on the state. Only the state can manage the gargantuan tasks of coordinating healthcare, propping up collapsing industries, and mitigating the financial damage in the population at large. As the multi-hundred billion pound measures announced by Chancellor Rishi Sunak last week attest, we are undergoing a transformation of the government’s role in the economy on a scale not seen in living memory. And we are only at the beginning.

What is more, it’s becoming apparent that the flag around which many of us have been rallying in recent weeks – the necessity of aggressive containment measures to ease the stress on our healthcare system – will only take us so far. At the moment, our priority is to slow the virus’ spread by reducing interpersonal contact as much as possible. But if, as is widely suspected, any attempt to return to normality will only cause infections to rise again, then there will be truly horrendous trade-offs between ongoing economic damage and the likely deaths resulting from interaction. (The dimensions of that dilemma may become clearer in the coming days, as the Chinese authorities begin to relax their brutal lockdown of Wuhan and the surrounding Hubei province).

All of this points to inevitable and legitimate political conflict in the coming months and years. The fissures which have threatened to emerge following each of Sunak’s announcements last week – between homeowners and renters, between businesses and workers, between employees and the self-employed – are just a glimpse of what lies ahead.

As the state rapidly expands into a Leviathan, acting as insurer of last resort for much of the population, it will assume responsibility for the survival prospects not only of thousands of individuals at risk of illness, but of entire sectors of the economy. There may be hopes of a swift “bounce-back” recovery, if the government’s attempts to flood the economy with borrowed and printed cash manage to shore up demand, but we should not delude ourselves that we can somehow just resume where we left off. Countless businesses and careers that entered this crisis as perfectly viable will need ongoing targeted support to survive, and the state will need to decide which are most worthy of that support.

In other words, whatever the settlement that emerges from a prolonged period of extraordinary state intervention, there are bound to be winners and losers. As the aftermath of the 2007-08 financial crisis taught us, a perception that bailouts have been distributed unfairly will lead to toxic resentments. The coming recession has every likelihood of bringing such tensions back to the surface. As a recent report by the Resolution Foundation pointed out, the sectors being hardest hit by the downturn are disproportionately staffed by those with low incomes, with little or no savings, and without the option to work from home. One can already imagine a scenario in which handouts to firms deemed too big or strategically important to fail coincide with a sense of powerlessness among a burgeoning population of underemployed workers and debt-laden small businesses.

There is no doubt that in the short term, our efforts must be directed toward mitigating a public health emergency which, sadly, has yet to reach its peak. I accept that this will entail seeking political conciliation wherever possible, so as to focus on the challenge at hand.

In the medium-term, however, we need to think about what solidarity really means in these circumstances. It should, surely, involve an acknowledgement that the careful mediation of political disputes will be essential to riding this crisis out. That will require, above all, a framework in which competing interests can make their claims without the resulting conflicts becoming too incendiary.

Such a framework is precisely what our political culture has already, in recent years, shown itself to be lacking. In a strange throwback to the “grand bargains” that characterised mid-20th century politics, the government has promised to consult with representatives of business and the unions going forward. But trade unions today represent barely a fifth of the workforce, with memberships skewed towards older, well-paid public sector workers. Like many other advanced economies, modern Britain is a patchwork of groups whose economic interests appear to align, but which lack the social cohesion necessary to realise and articulate those interests. They exist only as statistical entities.

It is crucial, therefore, that we think about the role of institutions in channeling some of the solidarity that is generated by this crisis towards conflict resolution. This should be an opportunity for the Labour Party to address the problem of who in modern Britain is most in need of its representation, and to provide constructive opposition to the government on that basis. It should be an opportunity for the media to break out of last decade’s culture wars and identify on whose behalf the government should be held to account.

We will also need new institutions to represent those socially dispersed interests that will struggle to be heard in the halls of power during a new era of corporatism. Perhaps this is where the little platoons will make a difference after all. Could community aid groups, or the professional networks which are already springing up among the unemployed, gradually morph into such bodies?

Admittedly it seems perverse to talk about the necessity of conflict at a time like this. Yet if we suppress the political fallout from this crisis, we will only be storing up demons for later.

The end of extraordinary politics?

I’ve been overseas for a few weeks, so I missed the election results coming in on December 12th, and most of the ensuing media frenzy. Based on the odd headline I did see, it seemed as though the British political system had just been administered an enormous quantity of laxative; though whether Boris Johnson’s breaking of three years of parliamentary deadlock was a moment of profound relief or terrifying incontinence was, naturally, a contentious issue.

When I got back to the UK a couple of days ago, sleep-deprived after my journey and struggling to work, I decided to watch some of the election night coverage. Amidst all the praise and recriminations in response to Johnson’s victory, one interview in particular stood out to me. It was with Nicholas Soames, a former minister and one of the MPs who had been kicked out of the Conservative party in September for obstructing Johnson’s theatrical drive for Brexit. The interviewer, Andrew Neil, put it to Soames that Conservative gains in the north and midlands would entail a fundamental transformation of the party. It was, Neil suggested, “the end of your kind of Tory party – a party that was pro-EU, was more southern than northern, was quite posh.”

This was an apt point to raise with Soames. Besides being, quite literally, an embodiment of the Tory heritage – Soames is Winston Churchill’s grandson, as the hangdog expression and comb-over make clear – he had just recently railed against Johnson for turning the party into “a Brexit sect.” But Soames was in a conciliatory mood. All these contradictions, he insisted, would now be dissolved in the aura of Johnson’s “One Nation” Conservatism. He even conceded Johnson had been right to eject him from the party, and was quick to point out he had “very generously” been reinstated. In any case, his opposition had merely been “a point of principle.”

This obsequious performance resonated with many of the responses I’m seeing from long-term Conservative supporters. I don’t doubt they are genuinely thrilled by the prospect of embracing their blue-collar compatriots under a Disraelian banner of queen and country. But it is notable that this “One Nation” fervour has made them forget their longstanding reservations about what Johnson is now shaping up to do. Most obviously, a majority of Tory stalwarts were for the longest time grimly opposed to high levels of government borrowing and spending (not to mention taxing) – the very thing that all tacitly concede will be a condition of cementing their new constituency. Then there is the fact that many of them were, like Soames, less than keen on Johnson himself. Nor is this surprising, given that nobody knew what he actually wanted to do with the power he so nakedly craved, only that he would do anything to get it.

But he did get it. And that, I would suggest, is the main reason that “points of principle” are receding so sharply into the background. In hindsight, it has to be said that Johnson’s outmanoeuvring of parliament and an inept Labour party was skilfully done. Taking seats like Blyth Valley and Redcar is no small achievement; after a decade with very few notable politicians on the British scene, it looks positively Bismarckian. It is intoxicating, all this talk of realignments, watersheds, historic breakthroughs, new eras. And somehow, Johnson’s mercurial (or if you prefer, unprincipled) character makes it all the more intoxicating. That shapeless quality behind the cartoonish façade has, for the time being, revealed itself as the spirit of pure power.

It should come as no surprise that politicians prove flexible in the presence of a winner. The recent kowtowing to king Boris echoes a ritual which has played out in countless courts and privy chambers over the centuries, as erstwhile enemies and fence-sitters bend the knee to the new authority in the land. More surprising, however, is that the rancour we’ve all been through in the past few years should be settled in such a time-honoured fashion. The British constitution, with all its ceremonies, conventions and medieval fripperies, is unmatched in its insistence on cloaking the ugly business of power competition in the sacred garb of custom. One can scarcely imagine, when one sees the Prime Minister’s car gliding along to Buckingham Palace for an audience with the Queen, that just a few months ago we were witnessing a constitutional bonfire, as the executive, legislature and judiciary wrestled for control of the Brexit proceedings. Yet the question remains how much exactly has been settled by this election. After all, those wranglings within the political system were only part of a wider turbulence that shows no signs of stopping.

One of the most intriguing books I came across this year was a study called Democracy and the Politics of the Extraordinary by Andreas Kalyvas, professor at New York’s New School for Social Research. Borrowing from the ideas of Max Weber, Carl Schmitt and Hannah Arendt, the book considers what happens when democratic politics are subject to exceptional strain or rupture, overflowing their constitutional limits and entering the domains of culture and everyday life. Needless to say, many of its themes resonated with the experience of western countries in recent years. Institutions that had seemed to operate with the assurance of natural laws are revealed as arbitrary customs. Formerly trivial issues become symbolic of wide-ranging and fundamental questions of worldview. The primacy of identity leads to a resurrection of the primitive friend/enemy distinction. In a climate of endless possibility, charisma emerges as an almost magical force, and people flock to all manner of saints and charlatans.

All of this signals a diversification of power. As politics enters new arenas, so too does an awareness of how new forms of authority might be leveraged, new constituencies mobilised. This is what much of the commentary on the contemporary left, in particular, overlooks. From the perspective of ordinary politics, the UK Labour party and perhaps also the US Democrats appear determined to make themselves unelectable. But the left faces a genuine dilemma on account of the possibilities that seem to be opened up by extraordinary politics. The emotive potential of online discourse, hegemony in cultural institutions, the emergence of leaders who exude genuine conviction – all of these forms of power rely on an adversarial, or at least selective relationship with traditional forms of authority. It is easy to portray such tendencies as delusional when the right wins elections, and this could turn out to be the correct verdict. It could turn out that the right has capitalised on the potential of extraordinary politics to effect a reorientation of the electorate, ushering in a new sense of the ordinary. Then again, it could not. Only time will tell who is backing the right horse.

It might seem cynical to speak in these terms. After all, it is often said that we are currently seeing the return of a politics based on values rather than interests. Notions like equality, justice, patriotism and solidarity are now back on the table. But if this period of extraordinary politics has taught us anything, it is surely that values and power are not as distinct as we would like to imagine. The recalibration of principles after a decisive election victory is nothing compared to what happens when political conflicts spill into culture at large and become supercharged by tribalism. There the language of values, rights and integrity quickly becomes a tool for different purposes: signalling strength, claiming territory and cultivating solidarity. Power is no longer a means to an end, but an end in itself – one which perpetually creates other ends to serve as its means. And eventually, it is difficult to tell where values stop and the desire for power begins.

Notes on “Why Liberalism Failed”

Patrick Deneen’s Why Liberalism Failed was one of the most widely discussed political books last year. In a crowded field of authors addressing the future of liberalism, Deneen stood out like a lightning rod for his withering, full-frontal attack on the core principles and assumptions of liberal philosophy. And yet, when I recently went back and read the many reviews of Why Liberalism Failed, I came out feeling slightly dissatisfied. Critics of the book seemed all too able to shrug off its most interesting claims, and to argue instead on grounds more comfortable to them.

Part of the problem, perhaps, is that Deneen’s book is not all that well written. His argument is more often a barrage of polemical statements than a carefully constructed analysis. Still, the objective is clear enough. He is taking aim at the liberal doctrine of individual freedom, which prioritises the individual’s right to do, be, and choose as he or she wishes. This “voluntarist” notion of freedom, Deneen argues, has shown itself to be not just destructive, but in certain respects illusory. On that basis he claims we would be better off embracing the constraints of small-scale community life.

Most provocatively, Deneen claims that liberal societies, while claiming merely to create conditions in which individuals can exercise their freedom, in fact mould people to see themselves and to act in a particular way. Liberalism, he argues, grew out of a particular idea of human nature, which posited, above all, that people want to pursue their own ends. It imagined our natural and ideal condition as that of freely choosing individual actors without connection to any particular time, place, or social context. For Deneen, this is a dangerous distortion – human flourishing also requires things at odds with personal freedom, such as self-restraint, committed relationships, and membership of a stable and continuous community. But once our political, economic, and cultural institutions are dedicated to individual choice as the highest good, we ourselves are encouraged to value that freedom above all else. As Deneen writes:

Liberalism began with the explicit assertion that it merely describes our political, social, and private decision making. Yet… what it presented as a description of human voluntarism in fact had to displace a very different form of human self-understanding and experience. In effect, liberal theory sought to educate people to think differently about themselves and their relationships.

Liberal society, in other words, shapes us to behave more like the human beings imagined by its political and economic theories.

It’s worth reflecting for a moment on what is being argued here. Deneen is saying our awareness of ourselves as freely choosing agents is, in fact, a reflection of how we have been shaped by the society we inhabit. It is every bit as much of a social construct as, say, a view of the self that is defined by religious duties, or by membership of a particular community. Moreover, valuing choice is itself a kind of constraint: it makes us less likely to adopt decisions and patterns of life which might limit our ability to choose in the future – even if we are less happy as a result. Liberalism makes us unfree, in a sense, to do anything apart from maximise our freedom.

*   *   *

 

Reviewers of Why Liberalism Failed did offer some strong arguments in defence of liberalism, and against Deneen’s communitarian alternative. These tended to focus on material wealth, and on the various forms of suffering and oppression inherent to non-liberal ways of life. But they barely engaged with his claims that our reverence for individual choice amounts to a socially determined and self-defeating idea of freedom. Rather, they tended to take the freely choosing individual as a given, which often meant they failed to distinguish between the kind of freedom Deneen is criticizing – that which seeks to actively maximise choice – and simply being free from coercion.

Thus, writing in the New York Times, Jennifer Szalai didn’t see what Deneen was griping about. She pointed out that

nobody is truly stopping Deneen from doing what he prescribes: finding a community of like-minded folk, taking to the land, growing his own food, pulling his children out of public school. His problem is that he apparently wants everyone to do these things

Meanwhile, at National Review, David French argued that liberalism in the United States actually incentivises individuals to “embrace the most basic virtues of self-governance – complete your education, get married, and wait until after marriage to have children.” And how so? With the promise of greater “opportunities and autonomy.” Similarly, Deirdre McCloskey, in a nonetheless fascinating rebuttal of Why Liberalism Failed, jumped between condemnation of social hierarchy and celebration of the “spontaneous order” of the liberal market, without acknowledging that she seemed to be describing two systems which shape individuals to behave in certain ways.

So why does this matter? Because it matters, ultimately, what kind of creatures we are – which desires we can think of as authentic and intrinsic to our flourishing, and which ones stem largely from our environment. The desire, for instance, to be able to choose new leaders, new clothes, new identities, new sexual partners – do these reflect the unfolding of some innate longing for self-expression, or could we in another setting do just as well without them?

There is no hard and fast distinction here, of course; the desire for a sports car is no less real and, at bottom, no less natural than the desire for friendship. Yet there is a moral distinction between the two, and a system which places a high value on the freedom to fulfil one’s desires has to remain conscious of such distinctions. The reason is, first, that many kinds of freedom are in conflict with other personal and social goods; and second, that there may come a time when a different system offers more by way of prosperity and security. In both cases, it is important to be able to say what amounts to an essential form of freedom, and what does not.

*   *   *

 

Another common theme among Deneen’s critics was to question his motivation. His Catholicism, in particular, was widely implicated, with many reviewers insinuating that his promotion of close-knit community was a cover for a reactionary social and moral order. Here’s Hugo Drochon writing in The Guardian:

it’s clear that what he wants… is a return to “updated Benedictine forms” of Catholic monastic communities. Like many who share his worldview, Deneen believes that if people returned to such communities they would get back on a moral path that includes the rejection of gay marriage and premarital sex, two of Deneen’s pet peeves.

Similarly, Deirdre McCloskey:

We’re to go back to preliberal societies… with the church triumphant, closed corporate communities of lovely peasants and lords, hierarchies laid out in all directions, gays back in the closet, women in the kitchen, and so forth.

Such insinuations strike me as unjustified – these views do not actually appear in Why Liberalism Failed – but they are also understandable. For Deneen does not clarify the grounds of his argument. His critique of liberalism is made in the language of political philosophy, and seems to be consequentialist: liberalism has failed, because it has destroyed the conditions necessary for human flourishing. And yet whenever Deneen is more specific about just what has been lost, one hears the incipient voice of religious conservatism. In sexual matters, Deneen looks back to “courtship norms” and “mannered interaction between the sexes”; in education, to “comportment” and “the revealed word of God.”

I don’t doubt that Deneen’s religious beliefs colour his views, but nor do I think his entire case springs from some dastardly deontological commitment to Catholic moral teaching. Rather, I would argue that these outbursts point to a much more interesting tension in his argument.

My sense is that the underpinnings of Why Liberalism Failed come from virtue ethics – a philosophy whose stock has fallen somewhat since the Enlightenment, but which reigned supreme in antiquity and medieval Christendom. In Deneen’s case, what is important to grasp is Aristotle’s linking of three concepts: virtue, happiness, and the polis or community. The highest end of human life, says Aristotle, is happiness (or flourishing). And the only way to attain that happiness is through consistent action in accordance with virtue – in particular, through moderation and honest dealing. But note, virtues are not rules governing action; they are principles that one must possess at the level of character and, especially, of motivation. Also, it is not that virtue produces happiness as a consequence; the two are coterminous – to be virtuous is to be happy. Finally, the pursuit of virtue/happiness can only be successful in a community whose laws and customs are directed towards this same goal. For according to Aristotle:

to obtain a right training for goodness from an early age is a hard thing, unless one has been brought up under right laws. For a temperate and hardy way of life is not a pleasant thing to most people, especially when they are young.

The problem comes, though, when one has to provide a more detailed account of what the correct virtues are. For Aristotle, and for later Christian thinkers, this was provided by a natural teleology – a belief that human beings, as part of a divinely ordained natural order, have a purpose which is intrinsic to them. But this crutch is not really available in a modern philosophical discussion. And so more recent virtue ethicists, notably Alasdair MacIntyre, have shifted the emphasis away from a particular set of virtues with a particular purpose, and towards virtue and purpose as such. What matters for human flourishing, MacIntyre argued, is that individuals be part of a community or tradition which offers a deeply felt sense of what it is to lead a good life. Living under a shared purpose, as manifest in the social roles and duties of the polis, is ultimately more important than the purpose itself.

This seems to me roughly the vision of human flourishing sketched out in Why Liberalism Failed. Yet I’m not sure Deneen has fully reconciled himself to the relativism that is entailed by abandoning the moral framework of a natural teleology. This is a very real problem – for why should we not accept, say, the Manson family as an example of virtuous community? – but one which is difficult to resolve without overtly metaphysical concepts. And in fact, Deneen’s handling of human nature does strain in that direction, as when he looks forward to

the only real form of diversity, a variety of cultures that is multiple yet grounded in human truths that are transcultural and hence capable of being celebrated by many peoples.

So I would say that Deneen’s talk of “courtship norms” and “comportment” is similar to his suggestion that the good life might involve “cooking, planting, preserving, and composting.” Such specifics are needed to refine what is otherwise a dangerously vague picture of the good life.


Addressing the crisis of work

This article was first published by Arc Digital on December 10th 2018.

There are few ideals as central to the life of liberal democracies as that of stable and rewarding work. Political parties of every stripe make promises and boasts about job creation; even Donald Trump is not so eccentric that he does not brag about falling rates of unemployment. Preparing individuals for the job market is seen as the main purpose of education, and a major responsibility of parents too.

But all of this is starting to ring hollow. Today it is an open secret that, whatever the headline employment figures say, the future of work is beset by uncertainty.

Since the 1980s, the share of national income going to wages has declined in almost every advanced economy (the social democratic Nordic countries are the exception). The decade since the financial crisis of 2007–8 has seen a stubborn rise in youth unemployment, and an increase in “alternative arrangements” characteristic of the gig economy: short-term contracts, freelancing and part-time work. Graduates struggle to find jobs to match their expectations. In many places the salaried middle-class is shrinking, leaving a workforce increasingly polarized between low- and high-earners.

Nor do we particularly enjoy our work. A 2013 Gallup survey found that in Western countries only a fifth of people say they are “engaged” at work, with the rest “not engaged” or “actively disengaged.”

The net result is an uptick of resentment, apathy, and despair. Various studies suggest that younger generations are less likely to identify with their career, or profess loyalty to their employer. In the United States, a worrying number of young men have dropped out of work altogether, with many apparently devoting their time to video games or taking prescription medication. And that’s without mentioning the ongoing automation revolution, which will exacerbate these trends. Robotics and artificial intelligence will likely wipe out whole echelons of the current employment structure.

So what to do? Given the complexity of these problems — social, cultural, and economic — we should not expect any single, perfect solution. Yet it would be reckless to hope that, as the economy changes, it will reinvent a model of employment resembling what we have known in the past.

We should be thinking in broad terms about two related questions: in the short term, how could we reduce the strains of precarious or unfulfilling employment? And in the long term, what will we do if work grows increasingly scarce?

One answer involves a limited intervention by the state, aimed at revitalizing the habits of a free-market society — encouraging individuals to be independent, mobile, and entrepreneurial. American entrepreneur Andrew Yang proposes a Universal Basic Income (UBI) paid to all citizens, a policy he dubs “the freedom dividend.” Alternatively, Harvard economist Lawrence Katz suggests improving labor rights for part-time and contracted workers, while encouraging a middle-class “artisan economy” of creative entrepreneurs, whose greatest asset is their “personal flair.”

There are valid intuitions here about what many of us desire from work — namely, autonomy, and useful productivity. We want some control over how our labor is employed, and ideally to derive some personal fulfillment from its results. These values are captured in what political scientist Ian Shapiro has termed “the workmanship ideal”: the tendency, remarkably persistent in Western thought since the Enlightenment, to recognize “the sense of subjective satisfaction that attaches to the idea of making something that one can subsequently call one’s own.”

But if technology becomes as disruptive as many foresee, then independence may come at a steep price in terms of unpredictability and stress. For your labor — or, for that matter, your artisan products — to be worth anything in a constantly evolving market, you will need to dedicate huge amounts of time and energy to retraining. According to some upbeat advice from the World Economic Forum, individuals should now be aiming to “skill, reskill, and reskill again,” perhaps as often as every 2–3 years.

Is it time, then, for more radical solutions? There is a strand of thinking on the left which sees the demise of stable employment very differently. It argues that by harnessing technological efficiency in an egalitarian way, we could all work much less and still have the means to lead more fulfilling lives.

This “post-work” vision, as it is now called, has been gaining traction in the United Kingdom especially. Its advocates — a motley group of Marx-inspired journalists and academics — found an unexpected political platform in Jeremy Corbyn’s Labour Party, which has recently proposed cutting the working week to four days. It has also established a presence in mainstream progressive publications such as The Guardian and New Statesman.

To be sure, there is no coherent, long-term program here. Rather, there is a great deal of blind faith in the prospects of automation, common ownership and cultural revolution. Many in the post-work camp see liberation from employment, usually accompanied by UBI, as the first step in an ill-defined plan to transcend capitalism. Typical in that respect are Alex Williams and Nick Srnicek, authors of Inventing the Future: Postcapitalism and a World Without Work. Their blueprint includes open borders and a pervasive propaganda network, and flirts with the possibility of “synthetic forms of biological reproduction” to enable “a newfound equality between the sexes.”

We don’t need to buy into any of this, though, to appreciate the appeal of enabling people to work less. Various thinkers, including Bertrand Russell and John Maynard Keynes, took this to be an obvious goal of technological development. And since employment does not provide many of us with the promised goods of autonomy, fulfillment, productive satisfaction and so on, why shouldn’t we make the time to pursue them elsewhere?

Now, one could say that even this proposition is based on an unrealistic view of human nature. Arguably the real value of work is not enjoyment or even wealth, but purpose: people need routine, structure, a reason to get up in the morning, otherwise they would be adrift in a sea of aimlessness. Or at least some of them would – for another thing employment currently provides is a relatively civilized way for ambitious individuals to compete for resources and social status. Nothing in human history suggests that, even in conditions of superabundance, that competition would stop.

According to this pessimistic view, freedom and fulfillment are secondary concerns. The real question is, in the absence of employment, what belief systems, political mechanisms, and social institutions would make work for all of those idle thumbs?

But the way things are headed, it looks like we are going to need to face that question anyway, in which case our work-centric culture is a profound obstacle to generating good solutions. With so much energy committed to long hours and career success (the former being increasingly necessary for the latter), there is no space for other sources of purpose, recognition, or indeed fulfillment to emerge in an organic way.

The same goes for the economic side of the problem. I am no supporter of UBI – a policy whose potential benefits are dwarfed by the implications of a society where every individual is a client of the state. But if we want to avoid that future, it would be better to explore other arrangements now than to cling to our current habits until we end up there by default. Thus, if for no other reason than to create room for such experiments, the idea of working less is worth rescuing from the margins of the debate.

More to the point, there needs to be a proper debate. Given how deeply rooted our current ideas about employment are, politicians will continue appealing to them. We shouldn’t accept such sedatives. Addressing this problem will likely be a messy and imperfect process however we go about it, and the sooner we acknowledge that the better.

Testing the limits of universalism in science

This essay was first published by Areo magazine on 23 November 2018. 

Science traditionally aspires to be universal in two respects. First, it seeks fundamental knowledge—facts which are universally true. Second, it aims to be impersonal in practice; identity should be irrelevant to the process by which a scientific claim is judged.

Since the end of the Second World War, a great deal has come to rest on these aspirations. For not only does universalism make science a reliable means of understanding the world; it also makes scientific institutions an obvious basis for cooperation in response to various grim and complex challenges facing humanity. Today, these challenges include environmental damage, infectious diseases, biotechnology and food and energy insecurity. Surely, if anyone can rise above conflicts of culture and interest—and maybe even help governments do the same—it is the people in the proverbial white coats.

And yet, lately we find the very principle of universalism being called into doubt. Armed with the tools of critical theory, scholars in the social sciences and humanities assert that science is just one knowledge system among many, relative to the western context in which it evolved. In this view, the universalism that enables science to inform other peoples and cultures is really a form of unjust hegemony.

So far, this trend has mostly been discussed in an educational setting, where there have been calls to decolonize scientific curricula and to address demographic imbalances among students. But how will it affect those institutions seeking to foster scientific collaboration on critical policy issues?

An argument erupted this year in the field of ecology, centered on a body called the IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services). I suspect few readers have heard of this organization, but then, such is the unglamorous business of saving the world. The IPBES is one of the few vehicles for drawing governments’ attention to the rapid global decline of biodiversity, and of animal and plant populations generally.

In January, leading members of the panel published an article in the journal Science, announcing a “paradigm shift” in how it would approach its mission. They claim the scientific model on which the IPBES was founded is “dominated by knowledge from the natural sciences and economics,” and prone to adopt the “generalizing perspective” of “western science.” Consequently, they argue, it does not provide space for the humanities and social sciences, nor does it recognize the knowledge and values of local and indigenous peoples.

The article, which sparked an acrimonious row within the research community, came after several years in which IPBES papers and reports had developed “a pluralistic approach to recognizing the diversity of values.” The panel has now officially adopted a new paradigm that “resist[s] the scientific goal of attaining a universally applicable schema,” while seeking to “overcome existing power asymmetries between western science and local and indigenous knowledge, and among different disciplines within western science.”


Science, Policy, Politics

It is easy to dismiss such terminology as mere jargon, and that is what some critics have done. They claim the “paradigm shift” amounts to “a political compromise, and not a new scientific concept.” In other words, labeling a universal outlook western science is a diplomatic gesture to placate skeptics. Recognizing “a diversity of values” does not alter the pertinent data, because, however you frame them, the data are the data.

But here is the problem. When it comes to organizations whose role is to inform policy, this neat separation between science and politics is misleading; they often have their own political goals that guide their scientific activity. For the IPBES, that goal is persuading policymakers to conserve the natural world. Consequently, the panel does not merely gather data about the health of ecosystems. It gathers data showing how humans benefit from healthy ecosystems, so as to emphasize the costs of not conserving them.

This strategy, however, forces the IPBES to make value judgments which are not straightforwardly amenable to scientific methods. To assess the benefits of nature, one must consider not just clean air and soil nutrients, but also nonmaterial factors such as religious inspiration and cultural identity that vary widely around the world. Can all of this really be incorporated into a universal, objective system of measurements?

The IPBES’ original paradigm tried to do so, but, inevitably, the result was a crude framework of utilitarian metrics. It sought to categorize and quantify all of nature’s benefits (including the religious and cultural) and convert them into monetary values—this being, after all, the language policymakers understand best. As the Science article states, drawing on a substantial literature, this reductive approach alienated a great many scientists, as well as local people, whose participation is crucial for conservation.

All of this illustrates some general problems with universalism as a basis for cooperation. Firstly, when a scientific institution directs its work towards certain policy outcomes, its claims to objectivity become more questionable. It might still produce knowledge that is universally true; but which knowledge it actually seeks, and how it translates that knowledge into policy tools, are more contentious questions.

This problem arises even in cases of solid scientific consensus, such as climate change. Rising temperatures are one thing, but which consequences should scientists investigate to grab the attention of policymakers or even voters? Which economic policies should they endorse? Such judgments will inevitably be political and ideological in nature.

Moreover, some subjects are simply more politically and culturally contentious than others. There are many areas where, even if a universalist approach can be devised, it will nonetheless be regarded as an unwelcome and foreign way of thinking. As we have seen, nature is one of these areas. Another obvious example is gene editing, which Japan has recently allowed in human embryos. Any attempts to regulate this technology will likely require a debate about religious and cultural mores as much as hard science.


The Limits of Pluralism

The question is, however, does the pluralism now advocated by IPBES offer a viable solution to these problems? It is highly doubtful. The influence of critical theory, as seen in a fixation with knowledge as a proxy for power, is itself antithetical to productive cooperation. Rather than merely identifying the practical limitations of the scientific worldview, it pits science in zero-sum competition with other perspectives.

The problem begins with a slide from cultural pluralism into epistemological relativism. In the literature that laid the groundwork for the IPBES “paradigm shift,” knowledge systems are treated as “context specific,” each containing “its own processes of validity.” As a result, the prospect of compromise recedes into the distance, the priority being to “equitably bridge different value systems, eventually allowing processes of social learning.”

As critics have warned, there is a danger here of losing clarity and focus, leading to less effective advocacy. IPBES papers and reports now bulge with extensive discussions of cultural particularism and equity, threatening at times to become an altogether parallel mission. Yet in 2016, when the panel delivered its most comprehensive assessment to date, the summary for policymakers included barely any information about the economic costs of ecological damage.

Indeed, despite its supposed skepticism, there is an air of fantasy surrounding this discourse. Even if there are areas where it is inappropriate to impose a purely scientific outlook, it is disingenuous to pretend that, with a particular goal in view, all perspectives are equally useful. Likewise, no amount of consultation and mediation can negate the reality that, with limited resources, different values and interests must be traded off against one another. If scientists absolve themselves of this responsibility, they simply pass it on to policymakers.

Universalism has practical limits of its own: it cannot dissolve cultural differences, or remove the need to make political decisions. But, provided such limitations are understood, it surely remains the most useful default principle for collaborative work. Even diverse institutions need common goals: to treat values as fully incommensurable is to invite paralysis. And to politicize knowledge itself is to risk unraveling the scientific enterprise altogether.

Yuval Noah Harari’s half-baked guide to the 21st century

This review was first published by Arc Digital on 25 October 2018.

There is something immensely comforting about Yuval Noah Harari. In an era when a writer’s success often depends on a willingness to provoke, Harari’s calling cards are politeness and equanimity. In the new class of so-called “rock star intellectuals,” he is analogous to Coldplay: accessible, inoffensive, and astoundingly popular. I find no other writer so frequently referenced by friends who don’t generally read. On YouTube he is a man for all seasons, discussing #MeToo with Natalie Portman, contemplating the nature of money with Christine Lagarde, and considering “Who Really Runs the World?” with Russell Brand.

Harari, a historian at the Hebrew University of Jerusalem, is by no means undeserving of this success. His first book, Sapiens: A Brief History of Humankind, displayed a rare talent for condensing vast epochs of history into simple narratives. In his second, Homo Deus, he showed all the imagination of a science fiction writer in presenting the dystopian possibilities of artificial intelligence and biotechnology.

But now Harari has abandoned the speculative realms of past and future, turning his attention to the thorny problems of the present. And here we find that his formula has its limits. 21 Lessons for the 21st Century is a collection of essays taking on everything from culture and politics to technology and spirituality. Undoubtedly, it offers plenty of thought-provoking questions and insights. By and large though, the very thing that made his previous works so engaging — an insistence on painting in broad, simple brushstrokes — makes this latest effort somewhat superficial.

Many of Harari’s essays are just not very illuminating. They circle their subjects ponderously, never quite making contact. Take his chapter on the immigration debate in Europe. Harari begins by identifying three areas of disagreement: borders, integration, and citizenship. Then he walks us through some generic and largely hypothetical pro- and anti-immigration stances, guided mainly by a desire not to offend anyone. Finally, after explaining that “culturism” is not the same as racism, he simply concludes: “If the European project fails…it would indicate that belief in the liberal values of freedom and tolerance is not enough to resolve the cultural conflicts of the world.”

Here we glimpse one of the book’s main questions: whether liberalism can unite the world and overcome the existential challenges facing humanity. But what is liberalism? According to Harari, all social systems, whether religious or political, are “stories.” By this he means that they are psychological software packages, allowing large-scale cooperation while providing individuals with identity and purpose. Thus, liberalism is a “global story” which boils down to the belief that “all authority ultimately stems from the free will of individual humans.” Harari gives us three handy axioms: “the voter knows best,” “the customer is always right,” and “follow your heart.”

This certainly makes matters crystal clear. But political systems are not just ideological dogmas to which entire populations blindly subscribe. They are institutional arrangements shaped by the clashes and compromises of differing values and interests. Historically, liberalism’s commitment to individualism was less important than its preference for democratic means to resolve such conflicts. Harari’s individualist, universalist liberalism has certainly been espoused in recent decades; but as a more perceptive critic such as John Gray or Shadi Hamid would point out, it is only for sections of Western society that this has offered a meaningful worldview.

Overlooking this basic degree of complexity leads Harari to some bizarre judgments. He claims that “most people who voted for Trump and Brexit didn’t reject the liberal package in its entirety — they lost faith mainly in its globalizing part.” Does he really think these voters were once enthusiastic about globalism? Likewise, to illustrate the irrational character of liberal customs, Harari states: “If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights.” Did he not consider that a key purpose of the ballot is to secure the legitimacy of government?

Harari is frequently half-sighted, struggling to acknowledge that phenomena can have more than one explanation. I confess I chuckled at his reading of Ex Machina, the 2015 sci-fi about a cyborg femme fatale. “This is not a movie about the human fear of intelligent robots,” he writes. It is about “the male fear…that female liberation might lead to female domination.” To support his interpretation, Harari poses a question: “For why on earth would an AI have a sexual or a gender identity?” This in a book which argues extensively that artificial intelligence will be used to exploit human desires.

Nor are such hiccups merely incidental. Rather, they stem from Harari’s failure to connect his various arguments into a coherent world-view. This is perhaps the most serious shortcoming of 21 Lessons. Reading this book is like watching a one-man kabuki play, whereby Harari puts on different masks as the situation demands. But these characters are not called on to complement each other so much as to prevent the stage from collapsing.

We have already encountered Harari’s first mask: postmodern cynicism. He is at pains to deconstruct the grand narratives of the past, whether religious, political, or national. He argues that the human subject, too, is a social construct — an amalgam of fictions, bound by context and largely incapable of rational thought.

However, this approach tends to invite relativism and apathy. And so, to provide some moral ballast, Harari picks up the mask of secularist polemic. Though never abandoning his light-hearted tone, he spends a great deal of time eye-poking and shin-kicking any tradition that indulges the human inclination for sanctity, ritual, and transcendence. But not to worry: you can keep your superstitions, “provided you adhere to the secular ethical code.” This consists of truth, compassion, equality, freedom, courage, and responsibility.

What, then, of our darker impulses? And what of our yearning to identify with something larger than ourselves? Enter Harari in his third mask: neo-Buddhist introspection. This is an especially useful guise, for whenever Harari encounters a difficult knot, he simply cuts it with a platitude. “If you really understand how an action causes unnecessary suffering to yourself and to others,” he writes, “you will naturally abstain from it.” Moreover: “If you really know the truth about yourself and the world, nothing can make you miserable.”

I am not saying these outlooks cannot be reconciled. My point is that Harari does not attempt to do so, leaving us instead with an array of loose ends. If the imperative is to deconstruct, why should secular shibboleths be left standing? Why should we worry about technology treating us as “little more than biochemical algorithms,” when Harari already thinks that “your core identity is a complex illusion created by neural networks”? And given that “both the ‘self’ and freedom are mythological chimeras,” what does Harari mean when he advises us to “work very hard…to know what you are, and what you want from life”?

You might object that I’m being ungenerous; that the most popular of popular intellectuals must necessarily deal in outlines, not details. But this is a slippery slope that leads to lazy assumptions about the incuriousness of a general audience. When it comes to current political and philosophical dilemmas, being a good popularizer does not consist in doling out reductive formulas. It consists in giving a flavor of the subtlety which makes these matters worth exploring. In that respect, 21 Lessons falls short of the mark.

The Price of Success: Britain’s Tumultuous 19th Century

In 1858, an exclusive Soho dining society known simply as “the Club” – attended by former and future Prime Ministers, prominent clergymen, poets and men of letters – debated the question of “the highest period of civilization” ever reached. It was, they decided, “in London at the present moment.” The following year, several books were published which might, at first glance, appear to support this grandiose conclusion. They included On Liberty by John Stuart Mill, now a cornerstone of political philosophy; Adam Bede, the first novel by the great George Eliot; and Charles Darwin’s On the Origin of Species, which presented the most comprehensive argument yet for the theory of evolution.

Certainly, all of these works were products of quintessentially Victorian seams of thought. Yet they also revealed the fragility of what most members of “the Club” considered the very pillars of their “highest period of civilization.” Mill’s liberalism was hostile to the widespread complacency which held the British constitution to be perfect. George Eliot, aka Marian Evans, was a formidably educated woman living out of wedlock with the writer George Henry Lewes; as such, she was an affront to various tenets of contemporary morality. And Darwin’s work, of course, would fatally undermine the Victorian assumption that theirs was a divinely ordained greatness.

These are just some of the insecurities, tensions, and contradictions which lie at the heart of Britain’s history in the 19th century, and which provide the central theme of David Cannadine’s sweeping (and somewhat ironically titled) new volume, Victorious Century: The United Kingdom 1800-1906. This was a period when Britain’s global hegemony in economic, financial, and imperial terms was rendered almost illusory by an atmosphere of entropy and flux at home. It was a period when the state became more proactive and informed than ever before, yet could never fully comprehend the challenges of its rapidly industrialising economy. And it was a period when Britain’s Empire continued incessantly to expand, even though no one in Westminster ever devised a coherent plan for how, or to what purpose, to govern it.

Cannadine’s interest in discomfort and dilemma also explains the dates which bookend his narrative. In 1800 William Pitt’s administration enacted the Union with Ireland, bringing into existence the “United Kingdom” of the book’s title. Throughout the ensuing century, the “Irish question” would periodically overwhelm British politics through religious tension, famine, and popular unrest (indeed, I refer mainly to Britain in this review because Ireland was never assimilated into its cultural or political life). The general election of 1906, meanwhile, was the last hurrah of the Liberal Party, a coalition of progressive aristocrats, free traders and radical reformers whose internal conflicts in many ways mirrored those of Victorian Britain at large.

Cannadine’s approach is not an analytical one, and so there is little discussion of the great, complex question which looms over Britain’s 19th century: namely, why that seismic shift in world history, the industrial revolution, happened here. He does make clear, however, the importance of victory in the Napoleonic Wars which engulfed Europe until 1815. Without this hard-won success, Britain could not have exploited its geographical and cultural position in between its two largest export markets, Europe and the United States. Moreover, entrepreneurial industrial activity was directly stimulated by the state’s demand for materiel, and the wheels of international finance greased by government borrowing for the war effort.

From the outset, the volatility of this new model of capitalism was painfully clear. Until mid-century, Britain’s population, industrial output, investment and trade expanded at a dizzying rate, only to stumble repeatedly into prolonged and wrenching economic crises. The accompanying urban deprivation was brutal – life expectancy for a working-class man in 1840s Liverpool was 22 – though arguably no worse than the rural deprivation which had preceded it. Nonetheless, these realities, together with the regular outbreaks of revolution on the continent, meant that from the 1830s onwards the British state assumed a radically new role of “legislative engagement with contemporary issues”: regulating industry, enhancing local government and public services, and gauging public opinion to judge whether political concessions, particularly electoral reform, were necessary.

The second half of the century, by contrast, hatched anxieties which were less dramatic but more insidious. Rising giants such as the United States and Germany, with their superior resources and higher standards of science, technology, and education, foretold the end of British preeminence long before it came to pass. Certainly, the price of global competition was paid largely by landlords, farmers, and manufacturers; working-class living standards steadily improved. But declinism permeated the culture as a whole, manifesting itself in a range of doubts which may sound familiar to us today: immigration and loss of national identity, intractable inequality, military unpreparedness, the spiritual and physical decrepitude of the masses, and the depravity of conspicuous consumption among the upper classes.

Cannadine recounts all of this with lucidity, verve, and a dazzling turn of phrase. He is, however, committed to a top-down view of history which places Westminster politics at the centre of events. This has its benefits: we gain an understanding not just of such fascinating figures as Robert Peel, Benjamin Disraeli and William Gladstone, but also a detailed grasp of the evolution of modern government. This perspective does, however, run counter to the real story of the 19th century, which is precisely the redistribution of historical agency through expanding wealth, literacy, technology and political participation. Cannadine might have reassessed his priorities in light of his own book’s epigraph, from Marx’s Eighteenth Brumaire: “Men make their own history, but they do not do so freely, not under conditions of their own choosing.”