The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter both styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, were the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1916 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly, science was not just producing strange new ways of seeing the world but, through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but as the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.

 

The politics of crisis is not going away any time soon

This essay was originally published by Palladium magazine on June 10th 2020.

A pattern emerges when surveying the vast commentary on the COVID-19 pandemic. At its center is a distinctive image of crisis: the image of a cruel but instructive spotlight laying bare the flaws of contemporary society. Crisis, we read, has “revealed,” “illuminated,” “clarified,” and above all, “exposed” our collective failures and weaknesses. It has unveiled the corruption of institutions, the decadence of culture, and the fragility of a material way of life. It has sounded the death-knell for countless projects and ideals.

“The pernicious coronavirus tore off an American scab and revealed suppurating wounds beneath,” announces one commentator, after noting “these calamities can be tragically instructional…Fundamental but forgotten truths, easily masked in times of calm, reemerge.”

Says another: “Invasion and occupation expose a society’s fault lines, exaggerating what goes unnoticed or accepted in peacetime, clarifying essential truths, raising the smell of buried rot.”

You may not be surprised to learn that these two near-identical comments come from very different interpretations of the crisis. The first, from Trump-supporting historian Victor Davis Hanson of the Hoover Institution, claims that the “suppurating wounds” of American society are an effete liberal elite compromised by their reliance on a malignant China and determined to undermine the president at any cost. According to the second, by The Atlantic’s George Packer, the “smell of buried rot” comes from the Trump administration itself, the product of an oligarchic ascendancy whose power stems from the division of society and hollowing-out of the state.

Nothing, it seems, has evaded the extraordinary powers of diagnosis made available by crisis: merciless globalism, backwards nationalism, the ignorance of populists, the naivety of liberals, the feral market, the authoritarian state. We are awash in diagnoses, but diagnosis is only the first step. It is customary to sharpen the reality exposed by the virus into a binary, existential decision: address the weakness identified, or succumb to it. “We’re faced with a choice that the crisis makes inescapably clear,” writes Packer, “the alternative to solidarity is death.” No less ominous is Hanson’s invocation of Pearl Harbor: “Whether China has woken a sleeping giant in the manner of the earlier Japanese, or just a purring kitten, remains to be seen.”

The crisis mindset is not limited to journalistic sensationalism. Politicians, too, have appealed to a now-or-never, sink-or-swim framing of the COVID-19 emergency. French President Emmanuel Macron has been among those using such terms to pressure Eurozone leaders into finally establishing a collective means of financing debt. “If we can’t do this today, I tell you the populists will win,” Macron told The Financial Times. Across the Atlantic, U.S. Congresswoman Alexandria Ocasio-Cortez has claimed that the pandemic “has just exposed us, the fragility of our system,” and has adopted the language of “life or death” in her efforts to bring together the progressive and centrist wings of the Democratic Party before the presidential election in November.

And yet, in surveying this rhetoric of diagnosis and decision, what is most surprising is how familiar it sounds. Apart from the pathogen itself, there are few narratives of crisis now being aired which were not already well-established during the last decade. Much as the coronavirus outbreak has felt like a sudden rupture from the past, we have already been long accustomed to the politics of crisis.

It was under the mantra of “tough decisions,” with the shadow of the financial crisis still looming, that sharp reductions in public spending were justified across much of the Western world after 2010. Since then, the European Union has been crippled by conflicts over sovereign debt and migration. It was the rhetoric of the Chinese menace and of terminal decline—of “rusted-out factories scattered like tombstones across the landscape of our nation,” to quote the 2017 inaugural address—that brought President Trump to power. Meanwhile, progressives had already mobilized themselves around the language of emergency with respect to inequality and climate change.

There is something deeply paradoxical about all of this. The concept of crisis is supposed to denote a need for exceptional attention and decisive focus. In its original Greek, the term krisis often referred to a decision between two possible futures, but the ubiquity of “crisis” in our politics today has produced only deepening chaos. The sense of emergency is stoked continuously, but the accompanying promises of clarity, agency, and action are never delivered. Far from a revealing spotlight, the crises of the past decade have left us with a lingering fog which now threatens to blind us at a moment when we really do need judicious action.

***

Crises are a perennial feature of modern history. For half a millennium, human life has been shaped by impersonal forces of increasing complexity and abstraction, from global trade and finance to technological development and geopolitical competition. These forces are inherently unstable and frequently produce moments of crisis, not least due to an exogenous shock like a deadly plague. Though rarely openly acknowledged, the legitimacy of modern regimes has largely depended on a perceived ability to keep that instability at bay.

This is the case even at times of apparent calm, such as the period of U.S. global hegemony immediately following the Cold War. The market revolution of the 1980s and globalization of the 1990s were predicated on a conception of capitalism as an unpredictable, dynamic system which could nonetheless be harnessed and governed by technocratic expertise. Such were the hopes of “the great moderation.” A series of emerging market financial crises—in Mexico, Korea, Thailand, Indonesia, Russia, and Argentina—provided opportunities for the IMF and World Bank to demand compliance with the Washington Consensus in economic policy. Meanwhile, there were frequent occasions for the U.S. to coordinate global police actions in war-torn states.

Despite the façade of independent institutions and international bodies, it was in no small part through such crisis-fighting economic and military interventions that a generation of U.S. leaders projected power abroad and secured legitimacy at home. This model of competence and progress, which seems so distant now, was not based on a sense of inevitability so much as confidence in the capacity to manage one crisis after another: to “stabilize” the most recent eruption of chaos and instability.

A still more striking example comes from the European Union, another product of the post-Cold War era. The project’s main purpose was to maintain stability in a trading bloc soon to be dominated by a reunified Germany. Nonetheless, many of its proponents envisaged that the development of a fully federal Europe would occur through a series of crises, with the supra-national structures of the EU achieving more power and legitimacy at each step. When the Euro currency was launched in 1999, Romano Prodi, then president of the European Commission, spoke of how the EU would extend its control over economic policy: “It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created.”

It is not difficult to see why Prodi took this stance. Since the rise of the rationalized state two centuries ago, managerial competence has been central to notions of successful governance. In the late 19th century, French sociologist Emile Durkheim compared the modern statesman to a physician: “he prevents the outbreak of illnesses by good hygiene, and seeks to cure them when they have appeared.” Indeed, the bureaucratic structures which govern modern societies have been forged in the furnaces of crisis. Social security programs, income tax, business regulation, and a host of other state functions now taken for granted are a product of upheavals of the 19th and early 20th centuries: total war, breakneck industrialization, famine, and financial panic. If necessity is the mother of invention, crisis is the midwife of administrative capacity.

By the same token, the major political ideologies of the modern era have always claimed to offer some mastery over uncertainty. The locus of agency has variously been situated in the state, the nation, individuals, businesses, or some particular class or group; the stated objectives have been progress, emancipation, greatness, or simply order and stability. But in every instance, the message has been that the chaos endemic to modern history must be tamed or overcome by some paradigmatic form of human action. The curious development of Western modernity, where the management of complex, crisis-prone systems has come to be legitimated through secular mass politics, appears amenable to no other template.

It is against this backdrop that we can understand the period of crisis we have endured since 2008. The narratives of diagnosis and decision which have overtaken politics during this time are variations on a much older theme—one that is present even in what are retrospectively called “times of calm.” The difference is that, where established regimes have failed to protect citizens from instability, the logic of crisis management has burst its technocratic and ideological bounds and entered the wider political sphere. The greatest of these ruptures was captured by a famous statement attributed to Federal Reserve Chairman Ben Bernanke in September 2008. Pleading with Congress to pass a $700 billion bailout, Bernanke claimed: “If we don’t do this now, we won’t have an economy on Monday.”

This remark set the tone for the either/or, act-or-perish politics of the last decade. It points to a loss of control which, in the United States and beyond, opened the way for competing accounts not just of how order could be restored, but also what that order should look like. Danger and disruption have become a kind of opportunity, as political insurgents across the West have captured established parties, upended traditional power-sharing arrangements, and produced the electoral shocks suggested by the ubiquitous phrase “the age of Trump and Brexit.” These campaigns sought to give the mood of crisis a definite shape, directing it towards the need for urgent decision or transformative action, thereby giving supporters a compelling sense of their own agency.

***

Typically, though, such movements do not merely offer a choice between existing chaos and redemption to come. In diagnoses of crisis, there is always an opposing agent who is responsible for the problem and threatens to deepen it. We saw this already in Hanson’s and Packer’s association of the COVID-19 crisis with their political opponents. But it was there, too, among Trump’s original supporters, for whom the agents of crisis were not just immigrants and elites but, more potently, the threat posed by the progressive vision for America. This was most vividly laid out in Michael Anton’s infamous “Flight 93 Election” essay, an archetypal crisis narrative which warned fellow conservatives that only Trump could stem the tide of “wholesale cultural and political change,” claiming “if you don’t try, death is certain.”

Yet Trump’s victory only galvanized the radical elements of the left, as it gave them a villain to point to as a way of further raising the consciousness of crisis among their own supporters. The reviled figure of Trump has done more for progressive stances on immigration, healthcare, and climate action than anyone else, for he is the ever-present foil in these narratives of emergency. Then again, such progressive ambitions, relayed on Fox News and social media, have also proved invaluable in further stoking conservatives’ fears.

To simply call this polarization is to miss the point. The dynamic taking shape here is rooted in a shared understanding of crisis, one that treats the present as a time in which the future of society is being decided. There is no middle path, no going back: each party claims that if they do not take this opportunity to reshape society, their opponents will. In this way, narratives of crisis feed off one another, and become the basis for a highly ideological politics—a politics that de-emphasizes compromise with opponents and with the practical constraints of the situation at hand, prioritizing instead the fulfillment of a goal or vision for the future.

Liberal politics is ill-equipped to deal with, or even to properly recognize, such degeneration of discourse. In the liberal imagination, the danger of crisis is typically that the insecurity of the masses will be exploited by a demagogue, who will then transfigure the system into an illiberal one. In many cases, though, it is the system which loses legitimacy first, as the frustrating business of deliberative, transactional politics cannot meet the expectations of transformative change which are raised in the public sphere.

Consider the most iconic and, in recent years, most frequently analogized period of crisis in modern history: Germany’s Weimar Republic of 1918-33. These were the tempestuous years between World War I and Hitler’s dictatorship, during which a fledgling democracy was rocked by armed insurrection, hyperinflation, foreign occupation, and the onset of the Great Depression, all against a backdrop of rapid social, economic, and technological upheaval.

Over the past decade or so, there have been no end of suggestions that ours is a “Weimar moment.” Though echoes have been found in all sorts of social and cultural trends, the overriding tendency has been to view the crises of the Weimar period backwards through their end result, the establishment of Nazi dictatorship in 1933. In various liberal democracies, the most assertive Weimar parallels have referred to the rise of populist and nationalist politics, and in particular, the erosion of constitutional norms by leaders of this stripe. The implication is that history has warned us how the path of crisis can lead towards an authoritarian ending.

What this overlooks, however, is that Weimar society was not just a victim of crisis that stumbled blindly towards authoritarianism, but was active in interpreting what crises revealed and how they should be addressed. In particular, the notion of crisis served the ideological narratives of the day as evidence of the need to refashion the social settlement. Long before the National Socialists began their rise in the early 1930s, these conflicting visions, pointing to one another as evidence of the stakes, sapped the republic’s legitimacy by making it appear impermanent and fungible.

The First World War had left German thought with a pronounced sense of the importance of human agency in shaping history. On the one hand, the scale and brutality of the conflict left survivors adrift in a world of unprecedented chaos, seeming to confirm a suspicion of some 19th century German intellectuals that history had no inherent meaning. But at the same time, the war had shown the extraordinary feats of organization and ingenuity that an industrialized society, unified and mobilized around a single purpose, was capable of. Consequently, the prevailing mood of Weimar was best captured by the popular term Zeitenwende, the turning of the times. Its implication was that the past was irretrievably lost, the present was chaotic and dangerous, but the future was there to be claimed by those with the conviction and technical skill to do so.

Throughout the 1920s, this historical self-consciousness was expressed in the concept of Krisis or Krise, crisis. Intellectual buzzwords referred to a crisis of learning, a crisis of European culture, a crisis of historicism, crisis theology, and numerous crises of science and mathematics. The implication was that these fields were in a state of flux which called for resolution. A similar dynamic could be seen in the political polemics which filled the Weimar press, where discussions of crisis tended to portray the present as a moment of decision or opportunity. According to Rüdiger Graf’s study of more than 370 Weimar-era books and still more journal articles with the term “crisis” in their titles, the concept generally functioned as “a call to action” by “narrow[ing] the complex political world to two exclusive alternatives.”

Although the republic was most popular among workers and social democrats, the Weimar left contained an influential strain of utopian thought which saw itself as working beyond the bounds of formal politics. Here, too, crisis was considered a source of potential. Consider the sentiments expressed by Walter Gropius, founder of the Bauhaus school of architecture and design, in 1919:

Capitalism and power politics have made our generation creatively sluggish, and our vital art is mired in a broad bourgeois philistinism. The intellectual bourgeois of the old Empire…has proven his incapacity to be the bearer of German culture. The benumbed world is now toppled, its spirit is overthrown, and is in the midst of being recast in a new mold.

Gropius was among those intellectuals, artists, and administrators who, often taking inspiration from an idealized image of the Soviet Union, subscribed to the idea of the “new man”—a post-capitalist individual whose self-fulfillment would come from social duty. Urban planning, social policy, and the arts were all seen as means to create the environment in which this new man could emerge.

The “bourgeois of the old Empire,” as Gropius called them, had indeed been overthrown; but in their place came a reactionary modernist movement, often referred to as the “conservative revolution,” whose own ideas of political transformation used socialism both as inspiration and as ideological counterpoint. In the works of Ernst Jünger, technology and militarist willpower were romanticized as dynamic forces which could pull society out of decadence. Meanwhile, the political theorist Carl Schmitt emphasized the need for a democratic polity to achieve a shared identity in opposition to a common enemy, a need sometimes better accomplished by the decisive judgments of a sovereign dictator than by a fractious parliamentary system.

Even some steadfast supporters of the republic, like the novelist Heinrich Mann, seized on the theme of crisis as a call to transformative action. In a 1923 speech, against a backdrop of hyperinflation and the occupation of the Ruhr by French forces, Mann insisted that the republic should resist the temptation of nationalism, and instead fulfill its promise as a “free people’s state” by dethroning the “blood-gorging” capitalists who still controlled society in their own interests.

These trends were not confined to rhetoric and intellectual discussion. They were reflected in practical politics by the tendency of even trivial issues to be treated as crises that raised fundamental conflicts of worldview. So it was that, in 1926, a government was toppled by a dispute over the regulations for the display of the republican flag. Meanwhile, representatives were harangued by voters who expected them to embody the uncompromising ideological clashes taking place in the wider political sphere. In towns and cities across the country, rival marches and processions signaled the antagonism of socialists and their conservative counterparts—the burghers, professionals and petite bourgeoisie who would later form the National Socialist coalition, and who by mid-decade had already coalesced around President Paul von Hindenburg.

***

We are not Weimar. The ideologies of that era, and the politics that flowed from them, were products of their time, and there were numerous contingent reasons why the republic faced an uphill battle for acceptance. Still, there are lessons. The conflict between opposing visions of society may seem integral to the spirit of democratic politics, but at times of crisis, it can be corrosive to democratic institutions. The either/or mindset can add a whole new dimension to whatever emergency is at hand, forcing what is already a time of disorientating change into a zero-sum competition between grand projects and convictions that leave ordinary, procedural politics looking at best insignificant, and at worst an obstacle.

But sometimes this kind of escalation is simply unavoidable. Crisis ideologies amplify, but do not create, a desire for change. The always-evolving material realities of capitalist societies frequently create circumstances that are untenable, and which cannot be sufficiently addressed by political systems prone to inertia and capture by vested interests. When such a situation erupts into crisis, incremental change and a moderate tone may already be a lost cause. If your political opponent is electrifying voters with the rhetoric of emergency, the only option might be to fight fire with fire.

There is also a hypocrisy innate to democratic politics which makes the reality of how severe crises are managed something of a dirty secret. Politicians like to invite comparisons with past leaders who acted decisively during crises, whether it be French president Macron’s idolization of Charles de Gaulle, the progressive movement in the U.S. and elsewhere taking Franklin D. Roosevelt as their inspiration, or virtually every British leader’s wish to be likened to Winston Churchill. What is not acknowledged are the shameful compromises that accompanied these leaders’ triumphs. De Gaulle’s opportunity to found the French Fifth Republic came amid threats of a military coup. Roosevelt’s New Deal could only be enacted with the backing of Southern Democratic politicians, and as such, effectively excluded African Americans from its most important programs. Allied victory in the Second World War, the final fruit of Churchill’s resistance, came at the price of ceding Eastern and Central Europe to Soviet tyranny.

Such realities are especially difficult to bear because the crises of the past are a uniquely unifying force in liberal democracies. It was often through crises, after all, that rights were won, new institutions forged, and loyalty and sacrifice demonstrated. We tend to imagine those achievements as acts of principled agency which can be attributed to society as a whole, whereas they were just as often the result of improvisation, reluctant concession, and tragic compromise.

Obviously, we cannot expect a willingness to bend principles to be treated as a virtue, nor, perhaps, should we want it to be. But we can acknowledge the basic degree of pragmatism which crises demand. This is the most worrying aspect of the narratives of decision surrounding the current COVID-19 crisis: still rooted in the projects and preoccupations of the past, they threaten to render us inflexible at a moment when we are entering uncharted territory.

Away from the discussions about what the emergency has revealed and the action it demands, a new era is being forged by governments and other institutions acting on a more pressing set of motives—in particular, maintaining legitimacy in the face of sweeping political pressures and staving off the risk of financial and public health catastrophes. It is also being shaped from the ground up, as countless individuals have changed their behavior in response to an endless stream of graphs, tables, and reports in the media.

Political narratives simply fail to grasp the contingency of this situation. Commentators talk about the need to reduce global interdependence, even as the architecture of global finance has been further built up by the decision of the Federal Reserve, in March, to support it with unprecedented amounts of dollar liquidity. They continue to argue within a binary of free market and big government, even as staunchly neoliberal parties endorse state intervention in their economies on a previously unimaginable scale. The same goes for discussions about climate policy and Western relations with China: the parameters within which these strategies will have to operate are simply unknown.

To reduce such complex circumstances to simple, momentous decisions is to offer us more clarity and agency than we actually possess. Nonetheless, that is how this crisis will continue to be framed, as political actors strive to capture the mood of emergency. It will only make matters worse, though, if our judgment remains colored by ambitions and resentments which were formed in earlier crises. If we continue those old struggles on this new terrain, we will swiftly lose our purchase on reality. We will be incapable of a realistic appraisal of the constraints now facing us, and without such realistic appraisal, no solution can be effectively pursued.

What was Romanticism? Putting the “counter-Enlightenment” in context

In his latest book Enlightenment Now: The Case for Reason, Science, Humanism and Progress, Steven Pinker heaps a fair amount of scorn on Romanticism, the movement in art and philosophy which spread across Europe during the late-18th and 19th centuries. In Pinker’s Manichean reading of history, Romanticism was the malign counterstroke to the Enlightenment: its goal was to quash those values listed in his subtitle. Thus, the movement’s immense diversity and ambiguity are reduced to a handful of ideas, which show that the Romantics favored “the heart over the head, the limbic system over the cortex.” This provides the basis for Pinker to label “Romantic” various irrational tendencies that are still with us, such as nationalism and reverence for nature.

In the debates following Enlightenment Now, many have continued to use Romanticism simply as a suitcase term for “counter-Enlightenment” modes of thought. Defending Pinker in Areo, Bo Winegard and Benjamin Winegard do produce a concise list of Romantic propositions. But again, their version of Romanticism is deliberately anachronistic, providing a historical lineage for the “modern romantics” who resist Enlightenment principles today.

As it happens, this dichotomy does not appeal only to defenders of the Enlightenment. In his book Age of Anger, published last year, Pankaj Mishra explains various 21st century phenomena — including right-wing populism and Islamism — as reactions to an acquisitive, competitive capitalism that he traces directly back to the 18th century Enlightenment. This, says Mishra, is when “the unlimited growth of production . . . steadily replaced all other ideas of the human good.” And who provided the template for resisting this development? The German Romantics, who rejected the Enlightenment’s “materialist, individualistic and imperialistic civilization in the name of local religious and cultural truth and spiritual virtue.”

Since the Second World War, it has suited liberals, Marxists, and postmodernists alike to portray Romanticism as the mortal enemy of Western rationalism. This can convey the impression that history has long consisted of the same struggle we are engaged in today, with the same teams fighting over the same ideas. But even a brief glance at the Romantic era suggests that such narratives are too tidy. These were chaotic times. Populations were rising, people were moving into cities, the industrial revolution was occurring, and the first mass culture emerging. Europe was wracked by war and revolution, nations won and lost their independence, and modern politics was being born.

So I’m going to try to explain Romanticism and its relationship with the Enlightenment in a bit more depth. And let me say this up front: Romanticism was not a coherent doctrine, much less a concerted attack on or rejection of anything. Put simply, the Romantics were a disparate constellation of individuals and groups who arrived at similar motifs and tendencies, partly by inspiration from one another, partly due to underlying trends in European culture. In many instances, their ideas were incompatible with, or indeed hostile towards, the Enlightenment and its legacy. On the other hand, there was also a good deal of mutual inspiration between the two.

 

Sour grapes

The narrative of Romanticism as a “counter-Enlightenment” often begins in the mid-18th century, when several forerunners of the movement appeared. The first was Jean-Jacques Rousseau, whose Social Contract famously asserts “Man is born free, but everywhere he is in chains.” Rousseau portrayed civilization as decadent and morally compromised, proposing instead a society of minimal interdependence where humanity would recover its natural virtue. Elsewhere in his work he also idealized childhood, and celebrated the outpouring of subjective emotion.

In fact various Enlightenment thinkers, Immanuel Kant in particular, admired Rousseau’s ideas; Rousseau was arguing, after all, that left to their own devices, ordinary people would use reason to discover virtue. Nonetheless, he was clearly attacking the principle of progress, and his apparent motivations for doing so were portentous. Rousseau had been associated with the French philosophes — men such as Thiry d’Holbach, Denis Diderot, Claude Helvétius and Jean d’Alembert — who were developing the most radical strands of Enlightenment thought, including materialist philosophy and atheism. But crucially, they were doing so within a rather glamorous, cosmopolitan milieu. Though they were monitored and harassed by the French ancien régime, many of the philosophes were nonetheless wealthy and well-connected figures, their Parisian salons frequented by intellectuals, ambassadors and aristocrats from across Europe.

Rousseau decided the Enlightenment belonged to a superficial, hedonistic elite, and essentially styled himself as a god-fearing voice of the people. This turned out to be an important precedent. In Prussia, where a prolific Romantic movement would emerge, such antipathy towards the effete culture of the French was widespread. For, much to the frustration of Prussian intellectuals and artists — many of whom were Pietist Christians from lowly backgrounds — their ruler Frederick the Great was an “Enlightened despot” and dedicated Francophile. He subscribed to Melchior Grimm’s Correspondence Littéraire, which brought the latest ideas from Paris; he hosted Voltaire at his court as an Enlightenment mascot; he conducted affairs in French, his first language.

This is the background against which we find Johann Gottfried Herder, whose ideas about language and culture were deeply influential to Romanticism. He argued that one can only understand the world via the linguistic concepts that one inherits, and that these reflect the contingent evolution of one’s culture. Hence in moral terms, different cultures occupy significantly different worlds, so their values should not be compared to one another. Nor should they be replaced with rational schemes dreamed up elsewhere, even if this means that societies are bound to come into conflict.

Rousseau and Herder anticipated an important cluster of Romantic themes. Among them are the sanctity of the inner life, of folkways and corporate social structures, of belonging, of independence, and of things that cannot be quantified. And given the apparent bitterness of Herder and some of his contemporaries, one can see why Isaiah Berlin declared that all this amounted to “a very grand form of sour grapes.” Berlin takes this line too far, but there is an important insight here. During the 19th century, with the rise of the bourgeoisie and of government by utilitarian principles, many Romantics would show a similar resentment towards “sophisters, economists, and calculators,” as Edmund Burke famously called them. Thus Romanticism must be seen in part as coming from people denied status in a changing society.

Then again, Romantic critiques of excessive uniformity and rationality were often made in the context of developments that were quite dramatic. During the 1790s, it was the French Revolution’s degeneration into tyranny that led first-generation Romantics in Germany and England to fear the so-called “machine state,” or government by rational blueprint. Similarly, the appalling conditions that marked the first phase of the industrial revolution lay behind some later Romantics’ revulsion at industrialism itself. John Ruskin celebrated medieval production methods because “men were not made to work with the accuracy of tools,” with “all the energy of their spirits . . . given to make cogs and compasses of themselves.”

And ultimately, it must be asked if opposition to such social and political changes was opposition to the Enlightenment itself. The answer, of course, depends on how you define the Enlightenment, but with regard to Romanticism we can only make the following generalization. Romantics believed that ideals such as reason, science, and progress had been elevated at the expense of values like beauty, expression, or belonging. In other words, they thought the Enlightenment paradigm established in the 18th century was limited. This is well captured by Percy Shelley’s comment in 1821 that although humanity owed enormous gratitude to philosophers such as John Locke and Voltaire, only Rousseau had been more than a “mere reasoner.”

And yet, in perhaps the majority of cases, this did not make Romantics hostile to science, reason, or progress as such. For it did not seem to them, as it can seem to us in hindsight, that these ideals must inevitably produce arrangements such as industrial capitalism or technocratic government. And for all their sour grapes, they often had reason to suspect those whose ascent to wealth and power rested on this particular vision of human improvement.

 

“The world must be romanticized”

One reason Romanticism is often characterized as against something — against the Enlightenment, against capitalism, against modernity as such — is that it seems like the only way to tie the movement together. In the florescence of 19th century art and thought, Romantic motifs were arrived at from a bewildering array of perspectives. In England during the 1810s, for instance, radical, progressive liberals such as Shelley and Lord Byron celebrated the crumbling of empires and of religion, and glamorized outcasts and oppressed peoples in their poetry. They were followed by arch-Tories like Thomas Carlyle and Ruskin, whose outlook is fundamentally paternalistic. Other Romantics migrated across the political spectrum during their lifetimes, bringing their themes with them.

All this is easier to understand if we note that a new sensibility appeared in European culture during this period, remarkable for its idealism and commitment to principle. Disparaged in England as “enthusiasm,” and in Germany as Schwärmerei or fanaticism, we get a flavor of it by looking at some of the era’s celebrities. There was Beethoven, celebrated as a model of the passionate and impoverished genius; there was Byron, the rebellious outsider who received locks of hair from female fans; and there was Napoleon, seen as an embodiment of untrammeled willpower.

Curiously, though, while this Romantic sensibility was a far cry from the formality and refinement which had characterized the preceding age of Enlightenment, it was inspired by many of the same ideals. To illustrate this, and to expand on some key Romantic concepts, I’m going to focus briefly on a group that came together in Prussia at the turn of the 19th century, known as the Jena Romantics.

The Jena circle — centred around Ludwig Tieck, Friedrich and August Schlegel, Friedrich Hölderlin, and the writer known as Novalis — have often been portrayed as scruffy bohemians, a conservative framing that seems to rest largely on their liberal attitudes to sex. But this does give us an indication of the group’s aims: they were interested in questioning convention, and pursuing social progress (their journal Das Athenäum was among the few to publish female writers). They were children of the Enlightenment in other respects, too. They accepted that rational skepticism had ruled out traditional religion and superstition, and that science was a tool for understanding reality. Their philosophy, however, shows an overriding desire to reconcile these capacities with an inspiring picture of culture, creativity, and individual fulfillment. And so they began by adapting the ideas of two major Enlightenment figures: Immanuel Kant and Benedict Spinoza.

Kant, who spent his entire life in Prussia, had impressed on the Romantics the importance of one dilemma in particular: how was human freedom possible given that nature was determined? But rather than follow Kant down the route of transcendental freedom, the Jena school tried to update the universe Spinoza had described a century earlier, which was a single deterministic entity governed by a mechanical sequence of cause and effect. Conveniently, this mechanistic model had been called into doubt by contemporary physics. So they kept the integrated, holistic quality of Spinoza’s nature, but now suggested that it was suffused with another Kantian idea — that of organic force or purpose.

Consequently, the Jena Romantics arrived at an organic conception of the universe, in which nature expressed the same omnipresent purpose in all its manifestations, up to and including human consciousness. Thus there was no discrepancy between mental activity and matter, and the Romantic notion of freedom as a channelling of some greater will was born. After all, nature must be free because, as Spinoza had argued, there is nothing outside nature. Therefore, in Friedrich Schlegel’s words, “Man is free because he is the highest expression of nature.”

Various concepts flowed from this, the most consequential being a revolutionary theory of art. Whereas the existing neo-classical paradigm had assumed that art should hold a mirror up to nature, reflecting its perfection, the Romantics now stated that the artist should express nature, since he is part of its creative flow. What this entails, moreover, is something like a primitive notion of the unconscious. For this natural force comes to us through the profound depths of language and myth; it cannot be definitely articulated, only grasped at through symbolism and allegory.

Such longing for the inexpressible, the infinite, the unfathomable depth thought to lie beneath the surface of ordinary reality, is absolutely central to Romanticism. And via the Jena school, it produces an ideal which could almost serve as a Romantic program: being-through-art. The modern condition, August Schlegel says, is the sensation of being adrift between two idealized figments of our imagination: a lost past and an uncertain future. So ultimately, we must embrace our frustrated existence by making everything we do a kind of artistic expression, allowing us to move forward despite knowing that we will never reach what we are aiming for. This notion that you can turn just about anything into a mystery, and thus into a field for action, is what Novalis alludes to in his famous statement that “the world must be romanticized.”

It appears there’s been something of a detour here: we began with Spinoza and have ended with obscurantism and myth. But as Frederick Beiser has argued, this baroque enterprise was in many ways an attempt to radicalize the 18th century Enlightenment. Indeed, the central thesis that our grip on reality is not certain, but we must embrace things as they seem to us and continue towards our aims, was almost a parody of the skepticism advanced by David Hume and by Kant. Moreover, and more ominously, the Romantics amplified the Enlightenment principle of self-determination, producing the imperative that individuals and societies must pursue their own values.

 

The Romantic legacy

It is beyond doubt that some Romantic ideas had pernicious consequences, the most demonstrable being a contribution to German nationalism. By the end of the 19th century, when Prussia had become the dominant force in a unified Germany and Richard Wagner’s feverish operas were being performed, the Romantic fascination with national identity, myth, and the active will had evolved into something altogether menacing. Many have taken the additional step, which is not a very large one, of implicating Romanticism in the fascism of the 1930s.

A more tenuous claim is that Romanticism (and German Romanticism especially) contains the origins of the postmodern critique of the Enlightenment, and of Western civilization itself, which is so current among leftist intellectuals today. As we have seen, there was in Romanticism a strong strain of cultural relativism — which is to say, relativism about values. But postmodernism has at its core a relativism about facts, a denial of the possibility of reaching objective truth by reason or observation. This nihilistic stance is far from the skepticism of the Jena school, which was fundamentally a means for creative engagement with the world.

But whatever we make of these genealogies, remember that we are talking about developments, progressions over time. We are not saying that Romanticism was in any meaningful sense fascistic, postmodernist, or whichever other adjective appears downstream. I emphasize this because if we identify Romanticism with these contentious subjects, we will overlook its myriad more subtle contributions to the history of thought.

Many of these contributions come from what I described earlier as the Romantic sensibility: a variety of intuitions that seem to have taken root in Western culture during this era. For instance, that one should remain true to one’s own principles at any cost; that there is something tragic about the replacement of the old and unusual with the uniform and standardized; that different cultures should be appreciated on their own terms, not on a scale of development; that artistic production involves the expression of something within oneself. Whether these intuitions are desirable is open to debate, but the point is that the legacy of Romanticism cannot be compartmentalized, for it has colored many of our basic assumptions.

This is true even of ideas that we claim to have inherited from the Enlightenment. For some of these were modified, and arguably enriched, as they passed through the Romantic era. An explicit example comes from John Stuart Mill, the founding figure of classical liberalism. Mill inherited from his father and from Jeremy Bentham a very austere version of utilitarian ethics. This posited as its goal the greatest good for the greatest number of people; but its notion of the good did not account for the value of culture, spirituality, and a great many other things we now see as intrinsic to human flourishing. As Mill recounts in his autobiography, he came to recognize these shortcomings by reading England’s first-generation Romantics, William Wordsworth and Samuel Taylor Coleridge.

This is why, in 1840, Mill bemoaned the fact that his fellow progressives thought they had nothing to learn from Coleridge’s philosophy, warning them that “the besetting danger is not so much of embracing falsehood for truth, as of mistaking part of the truth for the whole.” We are committing a similar error today when we treat Romanticism simply as a “counter-Enlightenment.” Ultimately this limits our understanding not just of Romanticism but of the Enlightenment as well.

 

This essay was first published in Areo Magazine on June 10 2018.

Social media’s turn towards the grotesque

This essay was first published by Little Atoms on 9 August 2018.

Until recently it seemed safe to assume that what most people wanted on social media was to appear attractive. Over the last decade, the major concerns about self-presentation online have been focused on narcissism and, for women especially, unrealistic standards of beauty. But just as it is becoming apparent that some behaviours previously interpreted as narcissistic – selfies, for instance – are simply new forms of communication, it is also no longer obvious that the rules of this game will remain those of the beauty contest. In fact, as people derive an ever-larger proportion of their social interaction here, the aesthetics of social media are moving distinctly towards the grotesque.

When I use the term grotesque, I do so in a technical sense. I am referring to a manner of representing things – the human form especially – which is not just bizarre or unsettling, but which creates a sense of indeterminacy. Familiar features are distorted, and conventional boundaries dissolved.

Instagram, notably, has become the site of countless bizarre makeup trends among its large demographic of young women and girls. These transformations range from the merely dramatic to the carnivalesque, including enormous lips, nose-hair extensions, eyebrows sculpted into every shape imaginable, and glitter coated onto everything from scalps to breasts. Likewise, the popularity of Snapchat has led to a proliferation of face-changing apps which revel in cartoonish distortions of appearance. Eyes are expanded into enormous saucers, faces are ghoulishly elongated or squashed, and animal features are tacked onto heads. These images, interestingly, are also making their way onto dating app profiles.

Of course for many people such tools are simply a way, as one reviewer puts it, “to make your face more fun.” There is something singularly playful in embracing such plasticity: see for instance the creative craze “#slime”, which features videos of people playing with colourful gooey substances, and has over eight million entries on Instagram. But if you follow the threads of garishness and indeterminacy through the image-oriented realms of the internet, deeper resonances emerge.

The pop culture embraced by Millennials and the so-called Generation C (born after 2000) reflects a fascination with brightly adorned, shape-shifting and sexually ambiguous personae. If performers like Miley Cyrus and Lady Gaga were forerunners of this tendency, they are now joined, in a darker and more refined way, by figures such as Sophie and Arca from the dance music scene. Meanwhile fashion, photography and video abound with kitsch, quasi-surreal imagery of the kind popularised by Dazed magazine. Celebrated subcultures such as Japan’s “genderless Kei,” who are characterised by bright hairstyles and makeup, are also part of this picture.

But the most striking examples of this turn towards the grotesque come from art forms emerging within digital culture itself. It is especially well illustrated by Porpentine, a game designer working with the platform Twine, whose disturbing interactive poems have achieved something of a cult status. They typically place readers in the perspective of psychologically and socially insecure characters, leading them through violent urban futurescapes reminiscent of William Burroughs’s Naked Lunch. The New York Times aptly describes her games as “dystopian landscapes peopled by cyborgs, intersectional empresses and deadly angels,” teeming with “garbage, slime and sludge.”

These are all manifestations both of a particular sensibility which is emerging in parts of the internet, and more generally of a new way of projecting oneself into public space. To spend any significant time in the networks where such trends appear is to become aware of a certain model of identity being enacted, one that is mercurial, effervescent, and boldly expressive. And while the attitudes expressed vary from anxious subjectivity to humorous posturing – as well as, at times, both simultaneously – in most instances one senses that the online persona has become explicitly artificial, plastic, or even disposable.

*   *   *

Why, though, would a paradigm of identity such as this invite expression as the grotesque? Interpreting these developments is not easy given that digital culture is so diffuse and rapidly evolving. One approach that seems natural enough is to view them as social phenomena, arising from the nature of online interaction. Yet to take this approach is immediately to encounter a paradox of sorts. If “the fluid self” represents “identity as a vast and ever-changing range of ideas that should all be celebrated” (according to trend forecaster Brenda Milis), then why does it seem to conform to generic forms at all? This is a contradiction that might in fact prove enlightening.

One frame which has been widely applied to social media is sociologist Erving Goffman’s “dramaturgical model,” as outlined in his 1959 book The Presentation of Self in Everyday Life. According to Goffman, identity can be understood in terms of a basic dichotomy, which he explains through the metaphors of “Front Stage” and “Back Stage.” Our “Front Stage” identity, when we are interacting with others, is highly responsive to context. It is preoccupied with managing impressions and assessing expectations so as to present what we consider a positive view of ourselves. In other words, we are malleable in the degree to which we are willing to tailor our self-presentation.

The first thing to note about this model is that it allows for dramatic transformations. If you consider the degree of detachment enabled by projecting ourselves into different contexts through words and imagery, and empathising with others on the same basis, then the stage is set for more or less anything becoming normative within a given peer group. As for why people would want to take this expressive potential to unusual places, it seems reasonable to speculate that in many cases, the role we want to perform is precisely that of someone who doesn’t care what anyone thinks. But since most of us do in fact care, we might end up, ironically enough, expressing this within certain established parameters.

But focusing too much on social dynamics risks underplaying the undoubted sense of freedom associated with the detachment from self in online interaction. Yes, there is peer pressure here, but within these bounds there is also a palpable euphoria in escaping mundane reality. The neuroscientist Susan Greenfield has made this point while commenting on the “alternative identity” embraced by young social media users. The ability to depart from the confines of stable identity, whether by altering your appearance or enacting a performative ritual, essentially opens the door to a world of fantasy.

With this in mind, we could see the digital grotesque as part of a cultural tradition that offers us many precedents. Indeed, this year marks the 200th anniversary of perhaps the greatest precedent of all: Mary Shelley’s iconic novel Frankenstein. The great anti-hero of that story, the monster who is assembled and brought to life by the scientist Victor Frankenstein, was regarded by later generations as an embodiment of all the passions that society requires the individual to suppress – passions that the artist, in the act of creation, has special access to. The uncanny appearance and emotional crises of Frankenstein’s monster thus signify the potential for unknown depths of expression, strange, sentimental, and macabre.

That notion of the grotesque as something uniquely expressive and transformative was and has remained prominent in all of the genres with which Frankenstein is associated – romanticism, science fiction, and the gothic. It frequently aligns itself with the irrational and surreal landscapes of the unconscious, and with eroticism and sexual deviancy; the films of David Lynch are emblematic of this crossover. In modern pop culture a certain glamourised version of the grotesque, which subverts rigid identity with makeup and fashion, appeared in the likes of David Bowie and Marilyn Manson.

Are today’s online avatars potentially incarnations of Frankenstein’s monster, tempting us with unfettered creativity? The idea has been explored by numerous artists over the last decade. Ed Atkins is renowned for his humanoid characters, their bodies defaced by crude drawings, who deliver streams of consciousness fluctuating between the poetic and the absurd. Jon Rafman, meanwhile, uses video and animation to piece together entire composite worlds, mapping out what he calls “the anarchic psyche of the internet.” Reflecting on his years spent exploring cyberspace, Rafman concludes: “We’ve reached a point where we’re enjoying our own nightmares.”

*   *   *

It is possible that the changing aesthetics of the Internet reflect both the social pressures and the imaginative freedoms I’ve tried to describe, or perhaps even the tension between them. One thing that seems clear, though, is that the new notions of identity emerging here will have consequences beyond the digital world. Even if we accept in some sense Goffman’s idea of a “Backstage” self, which resumes its existence when we are not interacting with others, the distinction is ultimately illusory. The roles and contexts we occupy inevitably feed back into how we think of ourselves, as well as our views on a range of social questions. Some surveys already suggest a generational shift in attitudes to gender, for instance.

That paradigms of identity shift in relation to technological and social changes is scarcely surprising. The first half of the 20th century witnessed the rise of a conformist culture, enabled by mass production, communication, and ideology, and often directed by the state. This then gave way to the era of the unique individual promoted by consumerism. As for the balance of psychological benefits and problems that will arise as online interaction grows, that is a notoriously contentious question requiring more research.

There is, however, a bigger picture here that deserves attention. The willingness of people to assume different identities online is really part of a much broader current being borne along by technology and design – one whose general direction is to enable individuals to modify and customise themselves in a wide range of ways. Whereas throughout the 20th century designers and advertisers were instrumental in shaping how we interpreted and expressed our social identity – through clothing, consumer products, and so on – this function is now increasingly being assumed by individuals within social networks.

Indeed, designers and producers are surrendering control of both the practical and the prescriptive aspects of their trade. 3D printing is just one example of how, in the future, tools and not products will be marketed. In many areas, the traditional hierarchy of ideas has been reversed, as those who used to call the tune are now trying to keep up with and capitalise on trends that emerge from their audiences. One can see this loss of influence in an aesthetic trend that seems to run counter to those I’ve been observing here, but which ultimately reflects the same reality. From fashion to furniture, designers are making neutral products which can be customised by an increasingly identity-conscious, changeable audience.

Currently, the personal transformations taking place online rely for the most part on software; the body itself is not seriously altered. But with scientific fields such as bioengineering expanding in scope, this may not be the case for long. Alice Rawsthorn has considered the implications: “As our personal identities become subtler and more singular, we will wish to make increasingly complex and nuanced choices about the design of many aspects of our lives… We will also have more of the technological tools required to do so.” If this does turn out to be the case, we will face considerable ethical dilemmas regarding the uses and more generally the purpose of science and technology.

When did death become so personal?

 

I have a slightly gloomy but, I think, not unreasonable view of birthdays, which is that they are really all about death. It rests on two simple observations. First, much as they pretend otherwise, people do generally find birthdays to be poignant occasions. And second, a milestone can have no poignancy which does not ultimately come from the knowledge that the journey in question must end. (Would an eternal being find poignancy in ageing, nostalgia, or anything else associated with the passing of time? Surely not in the sense that we use the word). In any case, I suspect most of us are aware that at these moments when our life is quantified, we are in some sense facing our own finitude. What I find interesting, though, is that to acknowledge this is verboten. In fact, we seem to have designed a whole edifice of niceties and diversions – cards, parties, superstitions about this or that age – to avoid saying it plainly.

Well, it was my birthday recently, and it appears at least one of my friends got the memo. He gave me a copy of Hans Holbein’s Dance of Death, a sequence of woodcuts composed in 1523-5. They show various classes in society being escorted away by a Renaissance version of the grim reaper – a somewhat cheeky-looking skeleton who plays musical instruments and occasionally wears a hat. He stands behind The Emperor, hands poised to seize his crown; he sweeps away the coins from The Miser’s counting table; he finds The Astrologer lost in thought, and mocks him with a skull; he leads The Child away from his distraught parents.

Hans Holbein, “The Astrologer” and “The Child,” from “The Dance of Death” (1523-5)

It is striking for the modern viewer to see death out in the open like this. But the “dance of death” was a popular genre that, before the advent of the printing press, had adorned the walls of churches and graveyards. Needless to say, this reflects the fact that in Holbein’s time, death came frequently, often without warning, and was handled (both literally and psychologically) within the community. Historians speculate about what pre-modern societies really believed regarding death, but belief is a slippery concept when death is part of the warp and weft of culture, encountered daily through ritual and artistic representations. It would be a bit like asking the average person today what their “beliefs” are about sex – where to begin? Likewise in Holbein’s woodcuts, death is complex, simultaneously a bringer of humour, justice, grief, and consolation.

Now let me be clear, I am not trying to romanticise a world before antibiotics, germ theory, and basic sanitation. In such a world, with child mortality being what it was, you and I would most likely be dead already. Nonetheless, the contrast with our own time (or at least with certain cultures, and more about that later) is revealing. When death enters the public sphere today – which is to say, fictional and news media – it rarely signifies anything, for there is no framework in which it can do so. It is merely a dramatic device, injecting shock or tragedy into a particular set of circumstances. The best an artist can do now is to expose this vacuum, as the photographer Jo Spence did in her wonderful series The Final Project, turning her own death into a kitsch extravaganza of joke-shop masks and skeletons.

From Jo Spence, “The Final Project,” 1991-2, courtesy of The Jo Spence Memorial Archive and Richard Saltoun Gallery

And yet, to say that modern secular societies ignore or avoid death is, in my view, to miss the point. It is rather that we place the task of interpreting mortality squarely and exclusively upon the individual. In other words, if we lack a common means of understanding death – a language and a liturgy, if you like – it is first and foremost because we regard that as a private affair. This convention is hinted at by euphemisms like “life is short” and “you only live once,” which acknowledge that our mortality has a bearing on our decisions, but also imply that what we make of that is down to us. It is also apparent, I think, in our farcical approach to birthdays.

Could it be that, thanks to this arrangement, we have actually come to feel our mortality more keenly? I’m not sure. But it does seem to produce some distinctive experiences, such as the one described in Philip Larkin’s famous poem “Aubade” (first published in 1977):

Waking at four to soundless dark, I stare.
In time the curtain-edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.

Larkin’s sleepless narrator tries to persuade himself that humanity has always struggled with this “special way of being afraid.” He dismisses as futile the comforts of religion (“That vast moth-eaten musical brocade / Created to pretend we never die”), as well as the “specious stuff” peddled by philosophy over the centuries. Yet in the final stanza, as he turns to the outside world, he nonetheless acknowledges what does make his fear special:

telephones crouch, getting ready to ring
In locked-up offices, and all the uncaring
Intricate rented world begins to rouse.

Work has to be done.
Postmen like doctors go from house to house.

There is a dichotomy here, between a personal world of introspection, and a public world of routine and action. The modern negotiation with death is confined to the former: each in our own house.

 

*     *     *

 

When did this internalisation of death occur, and why? Many reasons spring to mind: the decline of religion, the rise of Freudian psychology in the 20th century, the discrediting of a socially meaningful death by the bloodletting of the two world wars, the rise of liberal consumer societies which assign death to the “personal beliefs” category, and would rather people focused on their desires in the here and now. No doubt all of these have had some part to play. But there is also another way of approaching this question, which is to ask if there isn’t some sense in which we actually savour this private relationship with our mortality that I’ve outlined, whatever the burden we incur as a result. Seen from this angle, there is perhaps an interesting story about how these attitudes evolved.

I direct you again to Holbein’s Dance of Death woodcuts. As I’ve said, what is notable from our perspective is that they picture death within a traditional social context. But as it turns out, these images also reflect profound changes that were taking place in Northern Europe during the early modern era. Most notably, Martin Luther’s Protestant Reformation had erupted less than a decade before Holbein composed them. And among the many factors which led to that Reformation was a tendency which had begun emerging within Christianity during the preceding century, and which would be enormously influential in the future. This tendency was piety, which stressed the importance of the individual’s emotional relationship to God.

As Ulinka Rublack notes in her commentary on The Dance of Death, one of the early contributions of piety was the convention of representing death as a grisly skeleton. This figure, writes Rublack, “tested its onlooker’s immunity to spiritual anxiety,” since those who were firm in their convictions “could laugh back at Death.” In other words, buried within Holbein’s rich and varied portrayal of mortality was already, in embryonic form, an emotionally charged, personal confrontation with death. And nor was piety the only sign of this development in early modern Europe.

Hans Holbein, The Ambassadors (1533)

In 1533, Holbein produced another, much more famous work dealing with death: his painting The Ambassadors. Here we see two young members of Europe’s courtly elite standing either side of a table, on which are arrayed various objects that symbolise a certain Renaissance ideal: a life of politics, art, and learning. There are globes, scientific instruments, a lute, and references to the ongoing feud within the church. The most striking feature of the painting, however, is the enormous skull which hovers inexplicably in the foreground, fully perceptible only from a sidelong angle. This remarkable and playful item signals the arrival of another way of confronting death, which I describe as decadent. It is not serving any moral or doctrinal message, but illuminating what is most precious to the individual: status, ambition, accomplishment.

The basis of this decadent stance is as follows: death renders meaningless our worldly pursuits, yet at the same time makes them seem all the more urgent and compelling. This will be expounded in a still more iconic Renaissance artwork: Shakespeare’s Hamlet (1599). It is no coincidence that the two most famous moments in this play are both direct confrontations with death. One is, of course, the “To be or not to be” soliloquy; the other is the graveside scene, in which Hamlet holds a jester’s skull and asks: “Where be your gibes now, your gambols, your songs, your flashes of merriment, that were wont to set the table on a roar?” These moments are indeed crucial, for they suggest why the tragic hero, famously, cannot commit to action. As he weighs up various decisions from the perspective of mortality, he becomes intoxicated by the nuances of meaning and meaninglessness. He dithers because ultimately, such contemplation itself is what makes him feel, as it were, most alive.

All of this is happening, of course, within the larger development that historians like to call “the birth of the modern individual.” But as the modern era progresses, I think there are grounds to say that these two approaches – the pious and the decadent – will be especially influential in shaping how certain cultures view the question of mortality. And although there is an important difference between them insofar as one addresses itself to God, they also share something significant: a mystification of the inner life, of the agony and ecstasy of the individual soul, at the expense of religious orthodoxy and other socially articulated ideas about life’s purpose and meaning.

During the 17th century, piety became the basis of Pietism, a Lutheran movement that enshrined an emotional connection with God as the most important aspect of faith. Just as pre-Reformation piety may have been a response, in part, to the ravages of the Black Death, Pietism emerged from the utter devastation wreaked in Germany by the Thirty Years’ War. Its worship was based on private study of the bible, alone or in small groups (sometimes called “churches within a church”), and on evangelism in the wider community. In Pietistic sermons, the problem of our finitude – of our time in this world – is often bound up with a sense of mystery regarding how we ought to lead our lives. Everything points towards introspection, a search for duty. We can judge how important these ideas were to the consciousness of Northern Europe and the United States simply by naming two individuals who came strongly under their influence: Immanuel Kant and John Wesley.

It was also from the Central German heartlands of Pietism that, in the late 18th century, Romanticism was born – a movement which took the decadent fascination with death far beyond what we find in Hamlet. Goethe’s novel The Sorrows of Young Werther, in which the eponymous artist shoots himself from lovesickness, led to a wave of copycat suicides by men dressed in dandyish clothing. As Romanticism spread across Europe and into the 19th century, flirting with death, using its proximity as a kind of emotional aphrodisiac, became a prominent theme in the arts. As Byron describes one of his typical heroes: “With pleasure drugged, he almost longed for woe, / And e’en for change of scene would seek the shades below.” Similarly, Keats: “Many a time / I have been half in love with easeful Death.”

 

*     *     *

 

This is a very cursory account, and I am certainly not claiming there is any direct or inevitable progression between these developments and our own attitudes to death. Indeed, with Pietism and Romanticism, we have now come to the brink of the Great Awakenings and Evangelism, of Wagner and mystic nationalism – of an age, in other words, where spirituality enters the public sphere in a dramatic and sometimes apocalyptic way. Nonetheless, I think all of this points to a crucial idea which has been passed on to some modern cultures, perhaps those with a northern European, Protestant heritage: the idea that mortality is an emotional and psychological burden which the individual should willingly assume.

And I think we can now discern a larger principle which is being cultivated here – one that has come to define our understanding of individualism perhaps more than any other. That is the principle of freedom. To take responsibility for one’s mortality – to face up to it and, in a manner of speaking, to own it – is to reflect on life itself and ask: for what purpose, for what meaning? Whether framed as a search for duty or, in the extreme decadent case, as the basis of an aesthetic experience, such questions seem to arise from a personal confrontation with death; and they are very central to our notions of freedom. This is partly, I think, what underlies our convention that what you make of death is your own business.

The philosophy that has explored these ideas most comprehensively is, of course, existentialism. In the 20th century, Martin Heidegger and Jean-Paul Sartre argued that the individual can only lead an authentic life – a life guided by the values they deem important – by accepting that they are free in the fullest, most terrifying sense. And this in turn requires that the individual honestly accept, or even embrace, their finitude. For the way we see ourselves, these thinkers claim, is future-oriented: it consists not so much in what we have already done, but in the possibility of assigning new meaning to those past actions through what we might do in the future. Thus, in order to discover what our most essential values really are – the values by which we wish to direct our choices as free beings – we should consider our lives from their real endpoint, which is death.

Sartre and Heidegger were eager to portray these dilemmas, and their solutions, as brute facts of existence which they had uncovered. But it is perhaps truer to say that they were signing off on a deal which had been much longer in the making – a deal whereby the individual accepts the burden of understanding their existence as a doomed being, with all the nausea that entails, in exchange for the very expansive sense of freedom we now consider so important. Indeed, there is very little that Sartre and Heidegger posited in this regard which cannot be found in the work of the 19th century Danish philosopher Søren Kierkegaard; and Kierkegaard, it so happens, can also be placed squarely within the traditions of both Pietism and Romanticism.

To grasp how deeply engrained these ideas have become, consider again Larkin’s poem “Aubade:”

Most things may never happen: this one will,
And realisation of it rages out
In furnace-fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no one off the grave.
Death is no different whined at than withstood.

Here is the private confrontation with death framed in the most neurotic and desperate way. Yet part and parcel with all the negative emotions, there is undoubtedly a certain lugubrious relish in that confrontation. There is, in particular, something titillating in the rejection of all illusions and consolations, clearing the way for chastisement by death’s uncertainty. This, in other words, is the embrace of freedom taken to its most masochistic limit. And if you find something strangely uplifting about this bleak poem, it may be that you share some of those intuitions.

 

 

 

How The Past Became A Battlefield

 

In recent years, a great deal has been written on the subject of group identity in politics, much of it aiming to understand how people in Western countries have become more likely to adopt a “tribal” or “us-versus-them” perspective. Naturally, the most scrutiny has fallen on the furthest ends of the spectrum: populist nationalism on one side, and certain forms of radical progressivism on the other. We are by now familiar with various economic, technological, and psychological accounts of these group-based belief systems, which are to some extent analogous throughout Europe and in North America. Something that remains little discussed, though, is the role of ideas and attitudes regarding the past.

When I refer to the past here, I am not talking about the study of history – though as a source of information and opinion, it is not irrelevant either. Rather, I’m talking about the past as a dimension of social identity; a locus of narratives and values that individuals and groups refer to as a means of understanding who they are, and with whom they belong. This strikes me as a vexed issue in Western societies generally, and one which has had a considerable bearing on politics of late. I can only provide a generic overview here, but I think it’s notable that movements and tendencies which emphasise group identity do so partly through a particular, emotionally salient conception of the past.

First consider populism, in particular the nationalist, culturally conservative kind associated with the Trump presidency and various anti-establishment movements in Europe. Common to this form of politics is a notion that Paul Taggart has termed “heartland” – an ill-defined earlier time in which “a virtuous and unified population resides.” It is through this temporal construct that individuals can identify with said virtuous population and, crucially, seek culprits for its loss: corrupt elites and, often, minorities. We see populist leaders invoking “heartland” by brandishing passports, or promising to make America great again; France’s Marine Le Pen has even sought comparison to Joan of Arc.

Meanwhile, parts of the left have embraced an outlook well expressed by Faulkner’s adage that the past is never dead – it isn’t even past. Historic episodes of oppression and liberating struggle are treated as continuous with, and sometimes identical to, the present. While there is often an element of truth in this view, its practical effect has been to spur on a new protest movement. A rhetorical fixation on slavery, colonialism, and patriarchy not only implies urgency, but adds moral force to certain forms of identification such as race, gender, or general antinomianism.

Nor are these tendencies entirely confined to the fringes. Being opposed to identity politics has itself become a basis for identification, albeit less distinct, and so we see purposeful conceptions of the past emerging among professed rationalists, humanists, centrists, classical liberals and so on. In their own ways, figures as disparate as Jordan Peterson and Steven Pinker define the terra firma of reasonable discourse by a cultural narrative of Western values or Enlightened liberal ideals, while everything outside these bounds invites comparison to one or another dark episode from history.

I am not implying any moral or intellectual equivalence between these different outlooks and belief systems, and nor am I saying their views are just figments of ideology. I am suggesting, though, that in all these instances, what could plausibly be seen as looking to history for understanding or guidance tends to shade into something more essential: the sense that a given conception of the past can underpin a collective identity, and serve as a basis for the demarcation of the political landscape into friends and foes.

 

*     *     *

 

These observations appear to be supported by recent findings in social psychology, where “collective nostalgia” is now being viewed as a catalyst for inter-group conflict. In various contexts, including populism and liberal activism, studies suggest that self-identifying groups can respond to perceived deprivation or threat by evoking a specific, value-laden conception of the past. This appears to bolster solidarity within the group and, ultimately, to motivate action against out-groups. We might think of the past here as becoming a kind of sacred territory to be defended; consequently, it serves as yet another mechanism whereby polarisation drives further polarisation.

This should not, I think, come as a surprise. After all, nation states, religious movements and even international socialism have always found narratives of provenance and tradition essential to extracting sacrifices from their members (sometimes against the grain of their professed beliefs). Likewise, as David Potter noted, separatist movements often succeed or fail on the basis of whether they can establish a more compelling claim to historical identity than that of the larger entity from which they are trying to secede.

In our present context, though, politicised conceptions of the past have emerged from cultures where this source of meaning or identity has largely disappeared from the public sphere. Generally speaking, modern Western societies allow much less of the institutional transmission of stories which has, throughout history, brought an element of continuity to religious, civic, and family life. People associate with one another on the basis of individual preference, and institutions which emerge in this way usually have no traditions to refer to. In popular culture, the lingering sense that the past withholds some profound quality is largely confined to historical epics on the screen, and to consumer fads recycling vintage or antiquated aesthetics. And most people, it should be said, seem perfectly happy with this state of affairs.

Nonetheless, if we want to understand how the past is involved with the politics of identity today, it is precisely this detachment that we should scrutinise more closely. For ironically enough, we tend to forget that our sense of temporality – or indeed lack thereof – is itself historically contingent. As Francis O’Gorman details in his recent book Forgetfulness: Making the Modern Culture of Amnesia, Western modernity is the product of centuries’ worth of philosophical, economic, and cultural paradigms that have fixated on the future, driving us towards “unknown material and ideological prosperities to come.” Indeed, from capitalism to Marxism, from the Christian doctrine of salvation to the liberal doctrine of progress, it is remarkable how many of the Western world’s apparently diverse strands of thought regard the future as the site of universal redemption.

But more to the point, and as the intellectual historian Isaiah Berlin never tired of pointing out, this impulse towards transcending the particulars of time and space has frequently provoked, or at times merged with, its opposite: ethnic, cultural, and national particularism. Berlin made several important observations by way of explaining this. One is that universal and future-oriented ideals tend to be imposed by political and cultural elites, and are thus resented as an attack on common customs. Another is that many people find something superficial and alienating about being cut off from the past; consequently, notions like heritage or historical destiny become especially potent, since they offer both belonging and a form of spiritual superiority.

I will hardly be the first to point out that the most recent apotheosis of progressive and universalist thought came in the era immediately following the Cold War (not for nothing has Francis Fukuyama’s The End of History become its most iconic text). In this moment, energetic voices in Western culture – including capitalists and Marxists, Christians and liberals – were preoccupied with cutting loose from existing norms. And so, from the post-national rhetoric of the EU to postmodern academia and the champions of the service economy and global trade, they all defined the past by outdated modes of thought, work, and indeed social identity.

I should say that I’m too young to remember this epoch before the war on terror and the financial crisis, but the more I’ve tried to learn about it, the more I am amazed by its teleological overreach. This modernising discourse, or so it appears to me, was not so much concerned with constructing a narrative of progress leading up to the present day as with portraying the past as inherently shameful and of no use whatsoever. To give just one example, consider that as late as 2005, Britain’s then Prime Minister Tony Blair did not even bother to clothe his vision of the future in the language of hope, simply stating: “Unless we ‘own’ the future, unless our values are matched by a completely honest understanding of the reality now upon us and the next about to hit us, we will fail.”

Did such ways of thinking store up the divisive attachments to the past that we see in politics today? Arguably, yes. The populist impulse towards heartland has doubtless been galvanised by the perception that elites have abandoned provenance as a source of common values. Moreover, as the narrative of progress has become increasingly unconvincing in the twenty-first century, its latent view of history as a site of backwardness and trauma has been seized upon by a new cult of guilt. What were intended as reasons to dissociate from the past have become reasons to identify with it as victims or remorseful oppressors.

 

*     *     *

 

Even if you accept all of this, there remains a daunting question: namely, what is the appropriate relationship between a society and its past? Is there something to be gained from cultivating some sense of a common background, or should we simply refrain from undermining that which already exists? It’s important to state, firstly, that there is no perfect myth which every group in a polity can identify with equally. History is full of conflict and tension, as well as genuine injustice, and to suppress this fact is inevitably to sow the seeds of resentment. Such was the case, for instance, with the Confederate monuments which were the focus of last year’s protests in the United States: many of these were erected as part of a campaign for national unity in the early 20th century, one that denied the legacy of African American slavery.

Moreover, a strong sense of tradition is easily co-opted by rulers to sacralise their own authority and stifle dissent. The commemoration of heroes and the vilification of old enemies are today common motifs of state propaganda in Russia, India, China, Turkey, Poland and elsewhere. Indeed, many of the things we value about modern liberal society – free thought, scientific progress, political equality – have been won largely by intransigence towards the claims of the past. None of them sit comfortably in societies that afford significant moral authority to tradition. And this is to say nothing of the inevitable sacrificing of historical truth when the past is used as an agent of social cohesion.

But notwithstanding the partial resurgence of nationalism, it is not clear there exists in the West today any vehicle for such comprehensive, overarching myths. As with “tribal” politics in general, the politicisation of the past has been divergent rather than unifying because social identity is no longer confined to traditional concepts and categories. A symptom of this, at least in Europe, is that people who bemoan the absence of shared historical identity – whether politicians such as Emmanuel Macron or critics like Douglas Murray – struggle to express what such a thing might actually consist in. Thus they resort to platitudes like “sovereignty, unity and democracy” (Macron), or a rarefied high culture of Cathedrals and composers (Murray).

The reality which needs to be acknowledged, in my view, is that the past will never be an inert space reserved for mere curiosity or the measurement of progress. The human desire for group membership is such that it will always be seized upon as a buttress for identity. The problem we have encountered today is that, when society at large loses its sense of the relevance and meaning of the past, the field is left open to the most divisive interpretations; there is, moreover, no common ground from which to moderate between such conflicting narratives. How to broaden out this conversation, and restore some equanimity to it, might in the present circumstances be an insoluble question. It certainly bears thinking about though.

Consumerism or idealism? Making sense of authenticity

 

One of my favourite moments in cinema comes from Paolo Sorrentino’s film The Great Beauty. The scene is a fashionable get-together on a summer evening, and as the guests gossip over aperitifs, we catch a woman uttering: “Everybody knows Ethiopian jazz is the only kind worth listening to.” The brilliance of this line is not just that it shows the speaker to be a pretentious fool. More than that, it manages to demonstrate the slipperiness of a particular ideal. For what this character is implying, with her reference to Ethiopian jazz, is that she and her tastes are authentic. She appreciates artistic integrity, meaningful expression, and maybe a certain virtuous naivety. And the irony, of course, is that by setting out to be authentic she has merely stumbled into cliché.

I find myself recalling this dilemma when I pass through the many parts of London that seem to be suffering an epidemic of authenticity today. Over the past decade or so, life here and in many other cities has become crammed with nostalgic, sentimental objects and experiences. We’ve seen retro décor in cocktail bars and diners, the return of analogue formats like vinyl and film photography, and a fetishism of the vintage and the hand-made in everything from fashion to crockery. Meanwhile restaurants, bookshops, and social media feeds offer a similarly quaint take on customs from around the globe.

Whether looking back to a 1920s Chicago of leather banquettes and Old Fashioned cocktails, or the wholesome cuisine of a traditional Balkan home, these are so many tokens of an idealised past – attempts to signify that simple integrity which, paradoxically, is the mark of cosmopolitan sophistication. These motifs have long since passed into cliché themselves. Yet the generic bars and coffee shops keep appearing, the LPs are still being reissued, and urban neighborhoods continue being regenerated to look like snapshots of times and places that never quite existed.

The Discount Suit Company, one of London’s many “Prohibition-style cocktail dens” according to TimeOut

There is something jarring about this marriage of the authentic with the commercial and trendy, just as there is when someone announces their love of Ethiopian jazz to burnish their social credentials. We understand there is more to authenticity than just an aura of uniqueness, a vague sense of being true to something, which a product or experience might successfully capture. Authenticity is also defined by what it isn’t: shallow conformity. Whether we find it in the charmingly traditional or in the unusual and eccentric, authenticity implies a defiance of those aspects of our culture that strike us as superficial or contrived.

Unsurprisingly then, most commentators have concluded that what surrounds us today is not authenticity at all. Rather, in these “ready-made generic spaces,” what we see is no less than “the triumph of hive mind aesthetics at the expense of spirit and of soul.” The authentic has become a mere pretense, a façade behind which a homogenized, soulless modernity has consolidated its hold. And this says something about us of course. To partake in such a fake culture suggests we are either unfortunate dupes or, perhaps, something worse. As one critic rather dramatically puts it: “In cultural markets that are all too disappointingly accessible to the masses, the authenticity fetish disguises and renders socially acceptable a raw hunger for hierarchy and power.”

These responses echo a line of criticism going back to the 1970s, which sees the twin ideals of the authentic self and the authentic product as mere euphemisms for the narcissistic consumer and the passing fad. And who can doubt that the prerogative of realising our unique selves has proved susceptible to less-than-unique commercial formulas? This cosmetic notion of authenticity is also applied easily to cultures as a whole. As such, it is well suited to an age of sentimental relativism, when all are encouraged to be tourists superficially sampling the delights of the world.

And yet, if we are too sceptical, we risk accepting the same anaemic understanding of authenticity that the advertisers and trendsetters foist on us. Is there really no value in authenticity beyond the affirmation it gives us as consumers? Is there no sense in which we can live up to this ideal? Does modern culture offer us nothing apart from illusions? If we try to grasp where our understanding of authenticity comes from, and how it governs our relationship with culture, we might find that for all its fallibility it remains something that is worth aiming for. More importantly perhaps, we’ll see that for better or for worse, it’s not a concept we can be rid of any time soon.

 

 

Authenticity vs. mass culture

In the narrowest sense of the word, authenticity applies to things like banknotes and paintings by Van Gogh: it describes whether they are genuine or fake. What do we mean, though, when we say that an outfit, a meal, or a way of life is authentic? Maybe it’s still a question of provenance and veracity – where they originate and whether they are what they claim – but now these properties have taken on a quasi-spiritual character. Our aesthetic intuitions have lured us into much deeper waters, where we grope at values like integrity, humility, and self-expression.

Clearly authenticity in this wider sense cannot be determined by an expert with a magnifying glass. In fact, if we want to grasp how such values can seem to be embodied in our cultural environment – and how this relates to the notion of being an authentic person – we should take a step back. The most basic answers can be found in the context from which the ideal of authenticity emerged, and in which it continues to operate today: Western mass culture.

That phrase – mass culture – might strike you as modern-sounding, recalling as it does a world of consumerism, Hollywood and TV ads. But it simply means a culture in which beliefs and habits are shaped by exposure to the same products and media, rather than by person-to-person interaction. In Europe and elsewhere, this was clearly emerging in the 18th and 19th centuries, in the form of mass media (journals and novels), mass-produced goods, and a middle class seeking novelties and entertainments. During the industrial revolution especially, information and commodities began to circulate at a distinctly modern tempo and scale.

Gradually, these changes heralded a new and somewhat paradoxical experience. On the one hand, the content of this culture – whether business periodicals, novels and plays, or department store window displays – inspired people to see themselves as individuals with their own ambitions and desires. Yet those individuals also felt compelled to keep up with the latest news, fashions and opinions. Ensconced in a technologically driven, commercially-minded society, culture became the site of constant change, behind which loomed an inscrutable mass of people. The result was an anxiety which has remained a feature of art and literature ever since: that of the unique subject being pulled along, puppet-like, by social expectations, or caught up in the gears of an anonymous system.

And one product of that anxiety was the ideal of authenticity. Philosophers like Jean-Jacques Rousseau in the 18th century, Søren Kierkegaard in the 19th, and Martin Heidegger in the 20th, developed ideas of what it meant to be an authentic individual. Very broadly speaking, they were interested in the distinction between the person who conforms unthinkingly, and the person who approaches life on his or her own terms. This was never a question of satisfying the desire for uniqueness vis-à-vis the crowd, but an insistence that there were higher concepts and goals in relation to which individuals, and perhaps societies, could realise themselves.

John Ruskin’s illustrations of Gothic architecture, published in The Stones of Venice (1851)

Others, though, approached the problem from the opposite angle. The way to achieve an authentic way of being, they thought, was collectively, through culture. They emphasised the need for shared values that are not merely instrumental – values more meaningful than making money, saving time, or seeking social status. The most famous figures to attempt this in the 19th century were John Ruskin and William Morris, and the way they went about it was very telling indeed. They turned to the past and, drawing a direct link between aesthetics and morality, sought forms of creativity and production that seemed to embody a more harmonious existence among individuals.

For Morris, the answer was a return to small-scale, pre-industrial crafts. For Ruskin, medieval Gothic architecture was the model to be emulated. Although their visions of the ideal society differed greatly, both men praised loving craftsmanship, poetic expressiveness, imperfection and integrity – and viewed them as social as well as artistic virtues. The contrast with the identical commodities coming off factory production lines could hardly be more emphatic. In Ruskin’s words, whereas cheap wholesale goods forced workers “to make cogs and compasses of themselves,” the contours of the Gothic cathedral showed “the life and liberty of every workman who struck the stone.”

 

 

The authentic dilemma

In Ruskin and Morris we can see the outlines of our own understanding of authenticity today. Few of us share their moral and social vision (Morris was a utopian socialist, Ruskin a paternalist Christian), but they were among the first to articulate a particular intuition that arises from the experience of mass culture – one that leads us to idealise certain products and pastimes as embodiments of a more free-spirited and nourishing, often bygone world. Our basic sense of what it means to be an authentic individual is rooted in this same ground: a defiance of the superficial and materialistic considerations that the world seems to impose on us.

Thanks to ongoing technological change, mass culture has impressed each new generation with these same tensions. The latest installment, of course, has been the digital revolution. Many of us find something impersonal in cultural products that exist only as binary code and appear only on a screen – a coldness somehow worsened by their convenience. The innocuous branding of digital publishing companies, with cuddly names like Spotify and Kindle, struggles to hide the bloodless efficiency of the algorithm. This is stereotypically contrasted with the soulful pleasures of, say, the authentic music fan, poring over the sleeve notes of his vinyl record on the top deck of the bus.

But this hackneyed image immediately recalls the dilemma we started with, whereby authenticity itself gets caught up in the web of fashion and consumerist desire. So when did ideals become marketing tools? The prevailing narrative emphasises the commodification of leisure in the early 20th century, the expansion of mass media into radio and cinema, and the development of modern advertising techniques. Yet, on a far more basic level, authenticity was vulnerable to this contradiction from the very beginning.

Ideals are less clear-cut in practice than they are on the page. For Ruskin and Morris, the authenticity of certain products and aesthetics stemmed from their association with a whole other system of values and beliefs. To appreciate them was effectively to discard the imperatives of mass culture and commit yourself to a different way of being. But no such clear separation exists in reality. We are quite capable of recognizing and appreciating authenticity when it is served to us by mass culture itself – and we can do so without even questioning our less authentic motives and desires.

Hi-tech Victorian entertainment: the Panorama. (Source: Wikimedia commons)

Thus, by the time Ruskin published “On the Nature of Gothic” in 1851, Britain had long been in the grip of a mass phenomenon known as the Gothic Revival – a fascination with Europe’s Christian heritage manifest in everything from painting and poetry to fashion and architecture. Its most famous monument would be the building from which the new industrial society was managed and directed: the Houses of Parliament in Westminster. Likewise, nodding along to Ruskin’s noble sentiments did not prevent bourgeois readers from enjoying modern conveniences and entertainments, and merely justified their disdain for mass-produced goods as cheap and common.

From then until now, to be “cultured” has to some degree implied a mingling of nostalgia and novelty, efficiency and sentimentality. Today’s middle-classes might resent their cultural pursuits becoming generic trends, but also know that their own behavior mirrors this duplicity. The artisanal plate of food is shared on Facebook, a yoga session begins a day of materialistic ambition, and the Macbook-toting creative expresses in their fashion an air of old-fashioned simplicity. It’s little wonder boutique coffee shops the world over look depressingly similar, seeing as most of their customers happily share the same environment on their screens.

Given this tendency to pursue conflicting values simultaneously, there is really nothing to stop authentic products and ideas becoming fashionable in their own right. And once they do so, of course, they have started their inevitable descent into cliché. But crucially, this does not mean that authenticity is indistinguishable from conformity and status-seeking itself. In fact, it can remain meaningful even alongside these tendencies.

Performing the authentic

A few years ago, I came across a new, elaborately designed series of Penguin books. With their ornate frontispieces and tactile covers, these “Clothbound Classics” seemed to be recalling the kind of volume that John Ruskin himself might have read. On closer inspection, though, these objects really reflected the desires of the present. The antique design elements were balanced with modern ones, so as to produce a carefully crafted simulacrum: a copy for which no original has ever existed. Deftly straddling the nostalgia market and the world of contemporary visuals, these were books for people who now did most of their reading from screens.

Volumes from Penguin’s “Clothbound Classics” series

As we’ve seen, to be authentic is to aspire to a value more profound than mere expediency – one that we often situate in the obsolete forms of the past. This same sentimental quality, however, also makes for a very good commodity. We often find that things are only old or useless insofar as this allows them to be used as novelties or fashion statements. And such appropriation is only too easy when the aura of authenticity can be summoned, almost magically, by the manipulation of symbols: the right typeface on a menu, the right degree of saturation in a photograph, the right pattern on a book cover.

This is where our self-deceiving relationship with culture comes into closer focus. How is it we can be fooled by what are clearly just token gestures towards authenticity, couched in ulterior motives like making money or grabbing our attention? The reason is that, in our everyday interactions with culture, we are not going around as judges but as imaginative social beings who appreciate such gestures. We recognise that they have a value simply as reminders of ideals that we hold in common, or that we identify with personally. Indeed, buying into hints and suggestions is how ideals remain alive amidst the disappointments and limitations of lived reality.

In his essay “A is for Authentic,” Design Museum director Deyan Sudjic expands this idea by portraying culture as a series of choreographed rituals and routines, which demonstrate not so much authenticity as our aspirations towards it. From the homes we inhabit to the places we shop and the clothes we wear, Sudjic suggests, “we live much of our lives on a sequence of stage sets, modeled on dreamlike evocations of the world that we would like to live in rather than the world as it is.”

This role-play takes us away from the realities of profit and loss, necessity and compromise, and into a realm where those other notions like humility and integrity have the place they deserve. For Sudjic, the authentic charm of a period-themed restaurant, for instance, allows us to “toy with the idea that the rituals of everyday life have more significance than, in truth, we suspect that they really do.” We know we are not going to find anything like pure, undiluted authenticity, free from all pretense. But we can settle for something that acknowledges the value of authenticity in a compelling way – something “authentic in its artistic sincerity.” That is enough for us to play along.

Steven Poole makes a similar point about the ideal of being an authentic person, responding to the uncompromising stance that Jean-Paul Sartre takes on this issue. In Sartre’s Being and Nothingness, there is a humorous vignette in which he caricatures the mannerisms of a waiter in a café. In Sartre’s eyes, this man’s contrived behaviour shows that he is performing a role rather than being his authentic self. But Poole suggests that, “far from being deluded that he really is a waiter,” maybe Sartre’s dupe is aware that he is acting, and is just enjoying it.

Social life is circumscribed by performance and gesture to the extent that, were we to dig down in an effort to find some authentic bedrock, we would simply be taking up another role. Our surroundings and possessions are part of that drama too – products like books and Gothic cathedrals are ultimately just props we use to signal towards a hypothetical ideal. So yes, authenticity is a fiction. But insofar as it allows us to express our appreciation of values we regard as important, it can be a useful one.

Between thought and expression

Regardless of the benefits, though, our willingness to relax judgment for the sake of gesture has obvious shortcomings. The recent craze for the authentic, with its countless generic trends, has demonstrated them clearly. Carried away by the rituals of consumerism, we can end up embracing little more than a pastiche of authenticity, apparently losing sight of the bigger picture of sterile conformity in which those interactions are taking place. Again, the suspicion arises that authenticity itself is a sham. For how can it be an effective moral standard if, when it comes to actually consuming culture, we simply accept whatever is served up to us?

I don’t think this picture is entirely right, though. Like most of our ideals, authenticity has no clear and permanent outline, but exists somewhere between critical thought and social conventions. Yet these two worlds are not cut off from each other. We do still possess some awareness when we are immersed in everyday life, and the distinctions we make from a more detached perspective can, gradually and unevenly, sharpen that awareness. Indeed, even the most aggressive criticism of authenticity today is, at least implicitly, grounded in this possibility.

One writer, for instance, describes the vernacular of “reclaimed wood, Edison bulbs, and refurbished industrial lighting” which has become so ubiquitous in modern cities, calling it “a hipster reduction obsessed with a superficial sense of history and the remnants of industrial machinery that once occupied the neighbourhoods they take over.” The pretense of authenticity has allowed the emergence of zombie-like cultural forms: deracinated, fake, and sinister in their social implications. “From Bangkok to Beijing, Seoul to San Francisco,” he writes, this “tired style” is catering to “a wealthy, mobile elite, who want to feel like they’re visiting somewhere ‘authentic’ while they travel.”

This is an effective line of attack because it clarifies a vague unease that many will already feel in these surroundings. But crucially, it can only do this by appealing to a higher standard of authenticity. Like most recent critiques of this kind, it combines aesthetic revulsion at a soulless, monotonous landscape with moral condemnation of the social forces responsible, and thus reads like an updated version of John Ruskin’s arguments. In other words, the critique leverages the very intuitions that lead consumers, however erroneously, to find certain gestures and symbols appealing – and, in doing so, brings those intuitions into sharper focus.

This is the fundamental thing to understand about authenticity: it is so deeply ingrained in our ways of thinking about culture, and in our worldview generally, that it is both highly corruptible and impossible to dispense with. Since our basic desire for authenticity doesn’t come from advertisers or philosophers, but from the experience of mass culture itself, we can manipulate and refine that desire but we can’t suppress it. And almost regardless of what we do, it will continue to find expression in any number of ways.

A portrait posted by socialite Kendall Jenner on Instagram in 2015, typical of the new mannerist, sentimental style

This has been vividly demonstrated, for instance, in the relatively new domain of social media. Here the tensions of mass culture have, in a sense, risen afresh, with person-to-person interaction taking place within the same apparatus that circulates mass media and social trends. Thus a paradigm of authentic expression has emerged which in some places verges on outright romanticism: consider the phenomenon of baring your soul to strangers on Facebook, or the mannerist yet sentimental style of portrait that is so popular on Instagram. Yet this paradigm still functions precisely along the lines we identified earlier. Everybody knows it is ultimately a performance, but everyone is willing to go along with it.

Authenticity has also become “the stardust of this political age.” The sprouting of a whole crop of unorthodox, anti-establishment politicians on both sides of the Atlantic is taken to mean that people crave conviction and a human touch. Yet even here it seems we are dealing not so much with authentic personas as with authentic products. For their followers, such leaders are an ideal standard against which culture can be judged, as well as symbolic objects that embody an ideology – much as handcrafted goods were for William Morris’ socialism, or Gothic architecture was for Ruskin’s Christianity.

Moreover, where these figures have broadened their appeal beyond their immediate factions, it is again because mass culture has allowed them to circulate as recognisable and indeed fashionable symbols of authenticity. One of the most intriguing objects I’ve come across recently is a “bootlegged” Nike t-shirt, made by the anonymous group Bristol Street Wear in support of the politician Jeremy Corbyn. Deliberately or not, their use of one of the most iconic commercial designs in history is an interesting comment on that trade-off between popularity and integrity which is such a feature of authenticity in general.

The bootleg t-shirt produced by Bristol Street Wear during the 2017 General Election campaign. Photograph: Victoria & Albert Museum, London

These are just cursory observations; my point is that the ideal of authenticity is pervasive, and that for this very reason, any expression of it risks being caught up in the same system of superficial motives and ephemeral trends that it seeks to oppose. This does not make authenticity an empty concept. But it does mean that, ultimately, it should be seen as a form of aspiration, rather than a goal which can be fully realised.

Invisible Lives: Ethics between Europe and Africa

In the afternoon our house settles into a decadent air. My sisters’ children are asleep, there is the lingering smell of coffee, the corridors are in shade with leaves moving silently outside the windows. In my room light still pours in from the electric blue sky of the Eastern Cape. There is a view of the town, St Francis Bay, clustered picturesquely in the orthodox Dutch style, thatched roofs and gleaming whitewashed walls hugging the turquoise of the Indian Ocean.

This town, as I hear people say, is not really like South Africa. Most of its occupants are down over Christmas from the northern Highveld cities. During these three weeks the town’s population quadruples, the shopping centres, bars and beaches filling with more or less wealthy holidaymakers. They are white South Africans – English- and Afrikaans-speaking – a few African millionaires, and, recently, a number of integrated middle-class Africans too. The younger generations are Americanised, dressing like it was Orange County. There are fun runs and triathlons on an almost daily basis, and dance music drifts across the town every night.

But each year it requires a stronger act of imagination, or repression, to ignore the realities of the continent to which this place is attached. Already, the first world ends at the roadside, where families of pigs and goats tear open trash bags containing health foods and House and Leisure. Holidaymakers stock up on mineral water at the vast Spar supermarket, no longer trusting their taps. At night, the darkness of power cuts is met with the reliable whirring of generators.

And from where I sit at my desk I can make out, along the worn-out roads, impoverished African men loping in twos or threes towards the margins of town after their day of construction work, or of simply waiting at the street corner to be picked up for odd jobs. Most of them are headed to Sea Vista, or KwaNomzamo, third-world townships like those that gather around all of South Africa’s towns and cities, like the faded edges of a photograph.

When I visited South Africa as a child, this ragged frontier seemed normal, even romantic. Then, as I grew used to gazing at the world from London, my African insights became a source of tension. The situation felt rotten, unaccountable. But if responsibility comes from proximity, how can the judgment that demands it come from somewhere far removed? And who is being judged, anyway? In a place where the only truly shared experience is instability, judicial words like ‘inequality’ must become injudicious ones like ‘headfuck’. That is what South Africa is to an outsider: an uncanny dream where you feel implicated yet detached, unable to ignore or to understand.

–––––––––

Ethics is an inherently privileged pursuit, requiring objectivity, critical distance from a predicament. If, as Thomas Nagel says, objective judgment is ‘a set of concentric spheres, progressively revealed as we detach from the contingencies of the self,’ then ethics assumes the right to reside in some detached outer sphere, a non-person looking down at the human nuclei trapped in their lesser orbits.

In his memoir Lost and Found in Johannesburg, Mark Gevisser uses another aerial view, a 1970s street guide, to recollect the divisions of apartheid South Africa. Areas designated for different races are placed on separate pages, or the offending reality of a black settlement is simply left blank. These omissions represented the outer limits of ethical awareness, as sanctioned by the state.

Gevisser, raised as a liberal, English-speaking South African, had at least some of the detachment implied by his map. Apartheid was the creation of the Afrikaner people, whose insular philosophy became bureaucratic reality in 1948, by virtue of their being just over half of South Africa’s white voters. My parents grew up within its inner circle, a world with no television and no loose talk at parties, tightly embraced by the National Party and by God himself through his Dutch Reformed church.

It was a prison of memory – the Afrikaners had never escaped their roots as the hopeless dregs of Western Europe that had coalesced on the tip of Africa in the 17th century. Later, the British colonists would call them ‘rock spiders’. They always respected a leader who snubbed the outside world, like Paul Kruger, who in the late 19th century called someone a liar for claiming to have sailed around the earth, which of course was flat. Their formation of choice was the laager, a circular fort of settlers’ wagons, with guns trained at the outside.

By the time my father bought the house in St Francis in 1987, the world’s opinions had long been flooding in. Apartheid’s collapse was under way, brought about, ironically, by dependence on African labour and international trade. My family lived in Pretoria, where they kept a revolver in the glove compartment. We left seven years later, when I was three, part of the first wave of a great diaspora of white South Africans to the English-speaking world.

–––––––––

From my half-detached perspective, the rhythms of South African history appear deep and unbending. The crude patchwork of apartheid dissolved only to reform as a new set of boundaries, distinct spheres of experience sliding past each other. Even as places like St Francis boomed, the deprived rural population suddenly found itself part of a global economy, and flooded into peripheral townships and squatter camps. During the year, when there is no work in St Francis, these are the ghosts who break into empty mansions to steal taps, kettles, and whatever shred of copper they can find.

This is how Patricia and her family moved to KwaNomzamo, near the poor town of Humansdorp, about 20 minutes’ drive from St Francis. Patricia is our cleaner, a young woman with bright eyes. She is Coloured, an ethnicity unique to South Africa, which draws its genes from African and Malay slaves, the indigenous San and Khoikhoi people of the Cape, and the Afrikaners, whose language they share. This is the deferential language of the past – ‘ja Mevrou,’ Patricia says in her lilting accent.

I have two images of Patricia. The first is a mental one of her home in KwaNomzamo, one of the tin boxes they call ‘disaster housing’, planted neatly in rows beside the sprawl of the apartheid-era ‘location’. This image is dominated by Patricia’s disabled mother, who spends her days here, mute and motionless like a character from an absurdist drama. Beside this is the actual photograph Patricia asked us to take at her boyfriend’s house, where they assumed a Madonna-like pose with their three-month-old child.

These memories drive apart the different perspectives in me like nothing else. The relationships between middle-class South Africans and their domestic staff today are a genuine strand of solidarity in an otherwise confusing picture. But from my European viewpoint, always aware of history and privilege, even empathy is just another measure of injustice, of difference. This mindset is calibrated from a distance: someone who brings it to actual relationships is not an attractive prospect, nor an ethical one. Self-aware is never far from self-absorbed.

–––––––––

The danger usually emphasised by ethics is becoming trapped in a subjective viewpoint, seeing the world from too narrow an angle. But another problem is the philosophical shrinking act sometimes known as false objectivity. If you already have a detached perspective, the most difficult part of forming a judgment is understanding the personal motives of those involved. ‘Reasons for action,’ as Nagel says, ‘have to be reasons for individuals’. The paradox is that a truly objective judgment has to be acceptable from any viewpoint, otherwise it is just another subjective judgment.

In Britain, hardship seems to exist for our own judicial satisfaction. Ethics are a spectator sport mediated by screens, a televised catharsis implying moral certainty. War, natural disasters, the boats crossing the Mediterranean – there’s not much we can offer these images apart from such Manichean responses as blind sympathy or outrage, and these we offer largely to our consciences. Looking out becomes another way of looking in.

The journalist R.W. Johnson noted that after liberation, foreign papers lost interest in commissioning stories about South Africa. Just as well, since it soon became a morass of competing anxieties, the idealism of the ‘rainbow nation’ corroded by grotesque feats of violence and corruption: I am not unusual in having relatives who have been murdered. Against this background, the pigs and potholes among the mansions of St Francis are like blood coughed into a silk handkerchief, signs of a hidden atrophy already far progressed.

Alison and Tim are the sort of young South Africans – and there remain many – whose optimism has always been the antidote to all this. They are Johannesburgers proud of their cosmopolitan city. One evening last Christmas, I sat with Tim in a St Francis bar that served craft beer and staged an indie band in the corner. This is not really like South Africa, he said, pointing to the entirely white crowd. Then he told me he, too, is thinking of leaving.

South Africa’s currency, the Rand, crashed in December after President Jacob Zuma fired his Finance Minister on a whim. You could not go anywhere without hearing about this. Everyone is looking for something to export, Tim said, a way to earn foreign currency before it becomes impossible to leave. He has a family to think of – and yes, he admitted several drinks later, it bothers him that you could wake any night with a gun to your head.

‘More often in the first world / one wakes from not to the nightmare’, writes the American poet Kathy Fagan. There is such a thing as a shared dream, but even nightmares that grow from the same source tend to grow apart. They are personal, invisible from outside.

This article was first published by The Junket on 29th February 2016.