What space architecture says about us

With the recent expedition of Nasa’s Perseverance rover to Mars, I’ve taken an interest in space architecture; more specifically, habitats for people on the moon or the Red Planet. The subject first grabbed my attention earlier this year, when I saw that a centuries-old forestry company in Japan is developing wooden structures for future space colonies. Space architecture is not as other-worldly as you might think. In various ways, it holds a revealing mirror to life here on Earth.

Designing human habitats for Mars is more than just a technical challenge (though protecting against intense radiation and minus 100°C temperatures is, of course, a technical challenge). It’s also an exercise in anthropology. To ask what a group of scientists or pioneers will need from their Martian habitats is to ask what human beings need to be healthy, happy and productive. And we aren’t just talking about the material basics here.

As Jonathan Morrison reported in the Times last weekend, Nasa is taking inspiration from the latest polar research bases. According to architects like Hugh Broughton, researchers working in these extreme environments need creature comforts. The fundamental problem, says Broughton, is “how architecture can respond to the human condition.” The extreme architect has to consider “how you deal with isolation, how you create a sense of community… how you support people in the darkness.”

I found these words disturbingly relatable: not just in light of the pandemic, which has forced us all into a kind of polar isolation, but in light of the wider problem of anomie in modern societies. Broughton’s questions are the same ones we tend to ask as we observe stubbornly high rates of depression, loneliness, self-medication, and so on. Are we all now living in an extreme environment?

Many architects in the modernist period dreamed that they could tackle such issues through the design of the built environment. But the problem of what people need in order to flourish confronted them in a much harder form. Given the complexity of modern societies, trying to facilitate a vision of human flourishing through architecture started to look a lot like forcing society into a particular mould.

The “master households” designed by Walter Gropius in the 1920s and 30s illustrate the dilemma. Gropius insisted his blueprints, which reduced private family space in favour of communal living, reflected the emerging socialist character of modern individuals. At the same time, he implied that this transformation in lifestyle needed the architect as its midwife.

Today architecture has largely abandoned the dream of a society engineered by experts and visionaries. But heterotopias like research stations and space colonies still offer something of a paradise for the philosophical architect. In contrast to the messy complexity of society at large, these small communities have a very specific shared purpose. They offer clearly defined parameters for architects to address the problem of what human beings need.

Sometimes the solutions to this profound question, however, are almost comically mundane. Morrison’s Times report mentions some features of recent polar bases:

At the Scott Base, due to be completed in 2027, up to 100 residents might while away the hours in a cafeteria and even a Kiwi-themed pub, while Halley VI… boasts a gym, library, large canteen, bar and mini cinema.

If this turns out to be the model, then a future Mars colony will be a lot like a cruise ship. This doesn’t reflect a lack of imagination on the architects’ part though. It points to the fact that people don’t just want sociability, stimulation and exercise as such – they want familiar forms of these things. So a big part of designing habitats for space pioneers will involve replicating institutions from their original, earthbound cultures. In this sense, Martian colonies won’t be a fresh start for humanity any more than the colonisation of the Americas was. 

Finally, it’s worth saying something about the politics of space habitats. It seems inevitable that whichever regime sends people to other planets will use the project as a means of legitimation: the government(s) and corporations involved will want us to be awed by their achievement. And this will be done by turning the project into a media spectacle. 

The recent Perseverance expedition has already shown this potential: social media users were thrilled to hear audio of Martian winds, and to see a Martian horizon with Earth sparkling in the distance (the image, alas, turned out to be a fake). The first researchers or colonists on Mars will likely be reality TV stars, their everyday lives an on-going source of fascination for viewers back home. 

The lunar base in Kubrick’s 2001: A Space Odyssey

This means space habitats won’t just be designed for the pioneers living in them, but also for remote visual consumption on Earth. The aesthetics of these structures will not, therefore, be particularly novel. Thanks to Hollywood, we already have established ideas of what space exploration should look like, and space architecture will try to satisfy these expectations. Beyond that, it will simply try to project a more futuristic version of the good life as we know it through pop culture: comfort, luxury and elegance. 

We already see this, I think, in the Mars habitat designed by Xavier De Kestelier of Hassell Studio, which features sweeping open-plan spaces with timber flooring, glass walls and minimalist furniture. It resembles a luxury spa more than a rugged outpost of civilisation. But this was already anticipated, with characteristic flair, by Stanley Kubrick in his 1968 sci-fi classic 2001: A Space Odyssey. In Kubrick’s imagined lunar base, there is a Hilton hotel hosting the stylish denizens of corporate America. The task of space architects will be to design this kind of enchanting fantasy, no less than to meet the needs of our first Martian settlers.

The double nightmare of the cat-lawyer

Analysing internet memes tends to be self-defeating: mostly their magic comes from a fleeting, blasé irony which makes you look like a fool if you try to pin it down. But sometimes a gem comes along that’s too good to let pass. Besides, the internet’s endless stream of found objects, jokes and observations is ultimately a kind of glorious collective artwork, somewhere between Dada collage and an epic poem composed by a lunatic. And like all artworks, this one has themes and motifs worth exploring.

Which brings me to cat-lawyer. The clip of the Texas attorney who, thanks to a visual filter, manages to take the form of a fluffy kitten in a Zoom court hearing, has gone superviral. The hapless attorney, Rod Ponton, claims he’s been contacted by news outlets around the world. “I always wanted to be famous for being a great lawyer,” he reflected, “now I’m famous for appearing in court as a cat.”

The video clearly recalls the similarly sensational case of Robert Kelly, the Korea expert whose study was invaded by his two young children during a live interview with the BBC. What makes both clips so funny is the pretence of public formality – already under strain in the video-call format, since people are really just smartly dressed in their homes – being punctured by the frivolity of childhood. Ridiculously, the victims try to maintain a sense of decorum. The punctilious Kelly ignores his rampaging infants and mumbles an apology; the beleaguered Ponton, his saucer-like kitten’s eyes shifting nervously, insists he’s happy to continue the hearing (“I’m not a cat,” he reassures the judge, a strong note of desperation in his voice).

These incidents don’t become so famous just because they’re funny, though. Like a lot of comedy, they offer a light-hearted, morally acceptable outlet for impulses that often appear in much darker forms. We are essentially relishing the humiliation of Ponton and Kelly, much as the roaming mobs of “cancel culture” relish the humiliation of their targets, but we expect the victims to recognise their own embarrassment as a public good. The thin line between such jovial mockery and the more malign search for scapegoats is suggested by the fact that people have actually tried to discredit both men. Kelly was criticised for how he handled his daughter during his ordeal, while journalists have dredged up old harassment allegations against Ponton.

But there are other reasons why, in the great collective fiction of internet life, cat-lawyer is an interesting character. As I’ve previously written at greater length, online culture carries a strong strain of the grotesque. The strange act of projecting the self into digital space, both liberating and anxiety-inducing, has spurred forms of expression that blur the boundaries of the human and of social identity. In this way, internet culture joins a long artistic tradition where surreal, monstrous or bizarre beings give voice to repressed aspects of the human imagination. Human/animal transformations like the cat-lawyer have always been a part of this motif.

Of course it’s probably safe to assume that Ponton’s children, and not Ponton himself, normally use the kitten filter. But childhood and adolescence are where we see the implications of the grotesque most clearly. Bodily transformation and animal characters are a staple of adolescent fiction, because teenagers tend to interpret them in light of their growing awareness of social boundaries, and of their own subjectivity. Incidentally, I remember having this response to a particularly cheesy series of pulp novels for teens called Animorphs. But the same ideas are being explored, whether playfully or disturbingly, in gothic classics like Frankenstein and the tales of E.T.A. Hoffmann, in the films of David Lynch, or indeed in the way people use filters and face-changing apps on social media.

The cat-lawyer pushes these buttons too: his wonderful, mesmerising weirdness is a familiar expression of the grotesque. And this gels perfectly with the comedy of interrupted formality and humiliation. The guilty expression on his face makes it feel like he has, by appearing as a cat, accidentally exposed some embarrassing private fetish in the workplace.

Perhaps the precedent this echoes most clearly is Kafka’s “Metamorphosis,” where the long-suffering salesman Gregor Samsa finds he has turned into an insect. Recall that Samsa’s family resents his transformation not just because he is ghastly, but because his ghastliness makes him useless in a world which demands respectability and professionalism. It is darkly absurd, but unsettling too: it awakens anxieties about the aspects of ourselves that we conceal from public view.

The cat-lawyer’s ordeal is a similar kind of double nightmare: a surreal incident of transformation, an anxiety dream about being publicly exposed. Part of its appeal is that it lets us appreciate these strange resonances by cloaking them in humour. 

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter both styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1917 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly science was not just producing strange new ways of seeing the world, but through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but as the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.

 

Train-splaining a new world order

This article was originally published by The Critic on August 4th 2020.

“We have great ambitions for night trains in France,” said transport minister Jean-Baptiste Djebbari in June. It was a curious statement. When it comes to infrastructure, the language of ambition is usually reserved for projects that convey scale, speed and technological prowess. Europe’s dwindling network of sleeper trains, by contrast, has long been considered a charming relic in an age of ever cheaper, faster and more atomised travel.

Not any longer. On Bastille Day, president Emmanuel Macron confirmed that sleeper trains would be returning to French rails, and in so doing, he was merely joining a continental trend. In January, the first sleeper service since 2003 departed Vienna’s Westbahnhof for Brussels. Its provider, the Austrian ÖBB network, had already resurrected routes to Germany, Italy and Switzerland. A new night train linking states on the European Union’s eastern periphery commenced in June, and is already increasing services to meet a growing demand – as are sleeper routes connecting the Nordic countries to Germany. The Swedish government last month committed to fund new services linking Stockholm and Malmö with Hamburg and Brussels.

This piqued my interest, because I’ve long felt that railways offer vivid windows into the states across which they roam. They tend to exhibit attitudes to public service provision and capital-intensive infrastructure, but they also say a great deal about the nature and extent of a society’s interrelatedness, its pace of life, and indeed its ambition.

On its face, the return of sleeper trains signals the rise of flygskam – a popular Swedish coinage meaning “flight shame,” part of the growing environmental conscience of European governments and consumers. In recent months, Covid-19 has also been boosting demand. And it remains true that continental Europe’s investment in all forms of rail leaves the UK’s patchy, overcrowded and overpriced networks in the shade (let’s not even mention HS2).

But just as Britain’s rail headaches say a great deal about us as a country – our uncertainty over the proper roles of the public and private sector, our incorrigible NIMBYism and our longstanding neglect of the nation beyond London – so it would only be a little facetious to say that sleeper trains capture something deeper about the European Geist today.

At the height of its 19th century confidence, the steam locomotive was the ultimate symbol of Europe’s headlong rush into modernity. Europe’s near-manic desire to control the globe was likewise measured in yards and metres of railway track. Now, as Bruno Maçães eloquently argues, Europe has reached a different inflection point: it is coming to realize that the values it once took to be universal are merely those of its own “civilization state.” Relinquishing any sense of global mission, liberal-minded Europeans now seek to cultivate, in Maçães’ words, ‘a specific way of life: uncommitted, free, detached, aesthetic.’

Surely there’s no better metaphor for this inward turn than the tranquilising comforts of a slow-moving sleeper train. With the world around it growing increasingly chaotic and nasty, I picture Europe seated in the dining car with a Kindle edition of Proust, ordering the vegetarian option, and finally gazing half-drunk into the sunset. Would you not, dear reader, prefer that to the unseemly crush of your 6am Ryanair flight? Would you not prefer it to arriving anywhere at all?

Certainly, writers who step on board a night train cannot help but mention their “nostalgic” or “romantic” appeal – that is, if they don’t simply wallow in kitsch sentimentality. Consider one such account in The Guardian:

“I wake in the pre-dawn light – still inky blue in the compartment. I lie there, feeling the train rock beneath me and then push up the window blind with a foot. I’m rolling through misty flatlands. The landscape spooling past. Austria.”

But perhaps we don’t need to be figurative about this. After all, a quasi-national European consciousness, based around a common purpose like environmentalism, is undoubtedly something the EU would like to foster. And railways, which are to nations what skeletons are to bodies, have always been a choice tool for such unification. So it should not surprise us that the return of sleeper trains comes partly under the auspices of the European Commission’s Green Deal, with 2021 slated as “the European Year of Rail.”

The distinctiveness of train culture in Europe comes into sharper focus when we consider its troubled cousin across the Atlantic, the United States. There too the westwards expansion of the railway was once a crucial component, both practically and symbolically, in the creation of a unified nation. Yet today the railway can be seen, like almost everything in American life, as an emblem of estrangement.

The so-called “flyover states,” those swathes of the continental heartland not visited by coastal elites, are in many cases states crossed by the long-distance Amtrak service. But taking the Amtrak, especially overnight, is viewed as a profound eccentricity. Last year a not entirely ironic New York Times Magazine feature reported the experience as though it belonged to another planet. ‘Train people,’ writes our correspondent, ‘are content to stare out the window for hours, like indoor cats … Train people are also individuals for whom small talk is as invigorating as a rail of cocaine.’

It is largely within Blue America – the coastal strips and the urbanised Midwest around Chicago – that high-speed links after the European fashion are being planned. Meanwhile, Elon Musk and others are racing to complete the first “hyperloop” service: a flashy, futuristic transport project of the kind loved by celebrity entrepreneurs, which will use vacuum technology to send passenger pods through tubes at over 750 mph (destinations San Francisco, Las Vegas, Orlando).

Of course, no discussion of modern rail systems would be complete without China, where the staggering proliferation of high-speed networks in recent decades (think two-thirds of the world’s total) illustrates a scale and dynamism of which the west can only dream. These are a typical product of the Chinese economic model, which suppresses consumer spending in favour of state-managed export and investment as an engine of growth. That being said, China’s semi-private developers have still borrowed prodigiously, so that a number of rail projects have recently ground to a halt under a crushing debt burden.

Such vaulting ambition seems a world away from European decadence, but in one sense it is not. Railways also comprise a crucial element of the New Silk Road initiative, whereby China’s power is projected across the Eurasian landmass through infrastructure projects and trade. With over thirty Chinese cities already connected with Europe by rail, it may not be long before Chinese freight carriages and European sleeper carriages routinely share the same tracks.

Anti-racism and the long shadow of the 1970s

This essay was originally published by UnHerd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today?” The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism so as to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still places a disproportionate emphasis on those it calls “white” to adopt a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering reeducation. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both of which, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017). I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no “deep further fact” or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn’t refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit’s agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity.  In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transferal of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood have become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: “There is still a difference between my life and the lives of other people. But the difference is less.” I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don’t get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and like I say, I don’t think that commitment could be anything more than theoretical). In fact, it’s possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit’s claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.

Coronavirus and the spectre of the closed network

How will the world be reshaped by coronavirus? Answers to this question have almost become a genre unto themselves. Such speculation – even if it is just speculation – can be valuable, and not just insofar as it helps us to grapple with the particular threat facing us. Moments of unexpected shock like this one, when drastic change suddenly seems possible, can shake us out of our engrained ways of thinking and refocus our attention on the forces at play in our lives.

To be sure, the arrival of a global pandemic has not escaped the immense gravitational force of familiar arguments. Many conservative commentators have taken it as confirmation of long-held suspicions over globalisation: a pathogen bred in unhygienic Chinese animal markets seems an obvious reminder that bad things as well as good can spread through porous borders. There is, moreover, nothing quite like the prospect of collapsing supply chains to stir vague longings for autarky. “If Britain were ever isolated from the rest of the world,” a columnist in the UK’s Telegraph opines, “it would need healthy farms.”

No more surprising have been the efforts to redirect discussion of the virus towards issues of prejudice and discrimination. The most cartoonish illustration of this remains the statement of the World Health Organisation director general that “The greatest enemy we face is not the virus itself; it’s the stigma that turns us against each other.”

Neither of these responses is entirely without merit, of course, but they are illustrative of the overly simplistic way we have come to think about the interconnected world in which we live. In the western democracies, a decade of political trench warfare over issues of national sovereignty, immigration and the domestic consequences of a globalised economy has led us to reduce the complex reality of networks and connectivity to questions about openness.

How should we weigh the benefits of mobility and change against those of stability? How flexible should our culture be? What responsibilities do we have towards the rest of the world? Should we be “somewheres” or “anywheres”? These are important questions, but they have absorbed our attention to the extent that we have not kept track of the conditions which allowed them to be posed in the first place. They seem to assume that a capacity for ever-increasing interconnectedness is like an escalator towards an ever more open and fluid world; our decision is merely whether or not we should step onto it.

All the while, however, a different story has been emerging under our noses. Those very debates about the merits of openness have been facilitated by a boom of network technology, and this has itself brought about new kinds and degrees of fragmentation and mistrust. The division fuelled by social media, the cultural isolation manifest in personalised algorithms, the capture of audiences through disinformation and propaganda – these are only the most visible symptoms of that paradox.

Since the 1980s, the emergence of the so-called knowledge economy – where patents and other intellectual property are increasingly reliant on the management and analysis of vast seas of information, made available through connectivity – has exacerbated the social, cultural and economic decoupling of wealthy urban centres from resentful heartlands. As the economist Paul Krugman noted in a recent interview, the internet has made it possible for firms “to actually separate the low value activities from the high value activities, so that your back office operations can be some place where land is cheap and wages are low, but you can keep your corporate headquarters and your high-level technical staff in lower Manhattan.”

The coronavirus’ disruption of our habitual social and economic interactions has led to a dawning realisation that we have already adopted a suite of technologies with immense potential for social fragmentation. In The Atlantic, Ian Bogost points out that the infrastructure already exists for a privileged section of society to retreat into a virtual enclave of remote work, shopping, education and entertainment. Quarantine, he writes, “is just a raw, surprising name for the condition that computer technologies have brought about over the last two decades: making almost everything possible from the quiet isolation of a desk or chair illuminated by an internet-connected laptop or tablet.”

Similarly, much of the cosmopolitanism that exists in our societies today – and which is also centred in the hubs of the knowledge economy – has stemmed from the incentives for migration produced by an abundance of service sector jobs. These, too, could be made redundant by a greater leveraging of connectivity. As Ed Conway recently speculated in the London Times, the coronavirus shock could help to stimulate “a new model of globalization,” based on technologies such as 3D printing, artificial intelligence and robotics. This would allow labour and resources from around the world to be coordinated more efficiently, thereby reducing the unreliability which comes from actual people and things having to be in certain places at certain times. Conway asks us to imagine “hotel rooms in London being cleaned by robots controlled by cleaners in Poland, or lawns in Texas mowed by robots steered by gardeners in Mexico.”

Now, I have no idea whether coronavirus will launch a fourth industrial revolution, or hasten our evolution into housebound recluses sustained by Netflix, Amazon and telecommuting software. My point is this: connectivity does not exist on a simple scale of more and less, and nor does it axiomatically entail a high degree of openness. Rather, connectivity can come in many forms, and the world can continue to become more densely interconnected without any concomitant increase in the freedom or willingness to interact.

The implications of this reality do not favour communitarian “somewheres” any more than the liberal “anywheres,” to return to our troublesome dichotomy. For it means that we can have all the alienating effects of connectivity with none of the benefits. One can easily imagine an expansive network capable of harvesting huge amounts of information, and of coordinating vast resources, but where the majority of people who provide the inputs remain in many respects isolated, with limited ability to use the network for their own ends. They would be “connected,” but as mere nodes, not as agents. The obvious precedent here is China, where technology has enabled a previously unimaginable degree of surveillance and social control, including control over the circulation of information.

We can, of course, still argue about whether we should try to bring about a more cosmopolitan world. But the possibilities raised by coronavirus should, at the very least, drive home the realisation that networks cannot simply be presumed to facilitate the politics of openness. G.W.F. Hegel famously said that the owl of Minerva takes flight at dusk, and we have been proving him correct by arguing over a world on which the sun is already setting.

Reading Antigone in an age of resistance

The play opens with two sisters, Antigone and Ismene, arguing about their duties to family versus those to the state. Their two brothers have just killed each other while leading opposing sides of a civil war in Thebes. Their uncle Creon has now taken charge of the city, and has decreed that one of the brothers, Polynices, is to be denied a funeral: “he must be left unburied, his corpse / carrion for the birds and dogs to tear, / an obscenity for the citizens to behold.”

Ismene chooses obedience to Creon, but Antigone decides to rebel. She casts a symbolic handful of dust over Polynices’ corpse, and when brought before Creon, affirms her action in the name of “the great unwritten, unshakeable traditions” demanding funeral rites for the dead. So begins a confrontation between two headstrong, unflinching protagonists. It will end with Antigone hanging herself in her jail cell, prompting the suicides of Creon’s son (who was engaged to Antigone) and, in turn, of Creon’s wife.

*   *   *

 

“When I see that king in that play, the first name that came to mind was Donald Trump: arrogance, misogyny, tunnel vision.” This was reportedly one audience member’s response to Antigone in Ferguson, a 2018 theatre piece that brought a famous Greek tragedy into the context of US race relations. That tragedy is Sophocles’ Antigone, which I have summarised above. The play is now frequently being used to explore contemporary politics, especially in relation to the theme of resistance. “It’s a story of a woman who finds the courage of her convictions to speak truth to power,” said Carl Cofield, who directed another production of Antigone in New York last year. Cofield drew parallels with the #MeToo movement, Black Lives Matter, and “the resistance to the outcome of the presidential race.”

This reading of Antigone has become increasingly common since the post-war era. Its originator was perhaps Bertolt Brecht’s 1948 adaptation, which imagined a scenario where the German people had risen against Hitler. Since the 1970s Antigone has often been portrayed as a feminist heroine, and the play has served as a call-to-arms in countless non-western contexts too. As Fanny Söderbäck proudly notes: “Whenever and wherever civil liberties are endangered, when the rights or existence of aboriginal peoples are threatened, when revolutions are underway, when injustices take place – wherever she is needed, Antigone appears.”

Such appropriation of a classical figure is by no means unique. It echoes the canonisation of Socrates as a martyr for free speech and civil disobedience, most notably by John Stuart Mill, Mohandas Gandhi and Martin Luther King. And just as this image of Socrates rests on Plato’s Apology of Socrates, but ignores the quite different portrait in the Crito, the “resistance” reading of Antigone bears little resemblance to how the play was originally intended and received.

An audience in 5th century Athens would not have regarded Antigone as subversive towards the authority of the state. In fact, if you accept the conventional dating of the play (441 BC), the Athenian people elected Sophocles to serve as a general immediately after its first performance. Rather, the dramatic impact of Antigone lay in the clash of two traditional visions of justice. Creon’s position at the outset – “whoever places a friend / above the good of his own country, he is nothing” – was not a cue for booing and hissing, but a statement of conventional wisdom. Likewise, Antigone’s insistence on burying her brother was an assertion of divine law, and more particularly, her religious duties as a woman. Thus Creon’s error is not that he defends the prerogatives of the state, but that he makes them incompatible with the claims of the gods.

Sophocles’ protagonists were not just embodiments of abstract principles, though. He was also interested in what motivates individuals to defend a particular idea of justice. Creon, it seems, is susceptible to megalomania and paranoia. And as Antigone famously admits in her final speech, her determination to bury her brother was a very personal obsession, born from her uniquely wretched circumstances.

*   *   *

 

It’s hardly surprising that our intuitive reading of Antigone has changed over more than two millennia. The world we inhabit, and the moral assumptions that guide us through it, are radically different. Moreover, Antigone is one of those works that seem to demand a new interpretation in every epoch. Hegel, for instance, used the play to illustrate his theory of dialectical progress in history. The moral claims of Antigone and Creon – or in Hegel’s scheme, family and state – are both inadequate, but the need to synthesise them cannot be grasped until they have clashed and been found wanting. Simone de Beauvoir also identified both protagonists with flawed outlooks, though in her reading Antigone is a “moral idealist” and Creon a “political realist” – two ways, according to de Beauvoir, of avoiding moral responsibility.

So neither Hegel nor de Beauvoir recognised Antigone as the obvious voice of justice. Then again, they were clearly reading the play with the templates provided by their own moments in history. Hegel’s historical forces belong to the tumultuous conflicts of the early 19th century, in which he had staked out a position as both a monarchist and a supporter of the French Revolution. De Beauvoir’s archetypes belong to Nazi-occupied France – a world of vicious dilemmas in which pacifists, collaborators and resisters had all claimed to act for the greater good, and were all, in her eyes, morally compromised.

Thus, each era tries to understand Antigone using the roles and narratives particular to its own moral universe. And this, I would argue, is a natural part of artistic and political discourse. Such works cannot be quarantined in their original context – they have different resonances for different audiences. Moreover, the question of how one interprets something is always preceded by the question of why one bothers to interpret it at all, and that second question is inevitably bound up with what we consider important in the here and now. Our own moral universe, as I’ve already suggested, is largely defined by the righteousness of resistance and the struggle for freedom. Consequently, works from the past tend to be interpreted according to a narrative where one agent or category of agent suppresses the autonomy of another.

Nonetheless, there are pitfalls here. I think it is important for us to remain aware that our intuitive reading of a play like Antigone is precisely that – our intuitive reading. Otherwise, we may succumb to a kind of wishful thinking. We may end up being so comfortable projecting our values across time that we forget they belong to a contingent moment in history. We might forget, in other words, that our values are the product of a particular set of circumstances, not of some divine edict, and so cannot simply be accepted as right.

Of course we can always try to reason about right and wrong. But if we unthinkingly apply our worldview to people in other eras, we are doing precisely the opposite. We are turning history itself into a vast echo chamber, relieving us of the need to examine or defend our assumptions.

*   *   *

 

The task of guarding against such myopia has traditionally fallen to academic scholarship. And in a sense, this institution has never been better equipped to do it. Since the advent of New Historicism in the 1980s, the importance of the context in which works are made, as well as the context in which they are read, has been widely acknowledged in the humanities. But this has had a peculiarly inverse effect. The apparent impossibility of establishing any objective or timeless lesson in a play like Antigone has only heightened the temptation to claim it for ourselves.

Consider the approach taken by the influential gender theorist Judith Butler in her book Antigone’s Claim (2000). Using modern psychoanalytic concepts, Butler delves into the murky world of family and sexuality in the play (Antigone is the daughter of the infamously incestuous Oedipus, whose “curse” she is said to have inherited). Butler thus unearths “a classical western dilemma” about the treatment of those who do not fit within “normative versions of kinship.”

However, Butler is not interested in establishing any timeless insights about Antigone. As she makes clear throughout her analysis, she is interested in Antigone “as a figure for politics,” and in particular, for the contemporary politics of resistance. “I began to think about Antigone a few years ago,” she says, “as I wondered what had happened to those feminist efforts to confront and defy the state.” She then sets out her aim of using the play to examine contemporary society, asking

what the conditions of intelligibility could have been that would have made [Antigone’s] life possible, indeed, what sustaining web of relations makes our lives possible, those of us who confound kinship in the rearticulation of its terms?

This leads her to compare Antigone’s plight to that of AIDS victims and those in alternative parenting arrangements, while also hinting at “the direction for a psychoanalytic theory” which avoids “heterosexual closure.”

Butler is clearly not guilty, then, of forgetting her own situatedness in history. However, this does raise a question: if one is only interested in the present, why use a work from the past at all? Butler may well answer that such texts are an integral part of the political culture she is criticising. And that is fine, as far as it goes. But this approach seems to risk undermining the whole point of historicism. For although it does not pretend that people in other times had access to the same ideas and beliefs as we do, it does imply that the past is only worth considering in terms of our own ideas and beliefs. And the result is very similar: Antigone becomes, effectively, a play about us.

In other words, Butler’s way of appropriating the past subtly makes it conform to contemporary values. And in doing so, it lays the ground for that echo chamber I described earlier, whereby works from the past merely serve as opportunities to give our own beliefs a sheen of eternal truth. Indeed, elsewhere in the recent scholarship on Antigone, one finds that an impeccably historicist reading can nonetheless end like this:

Thus is the nature of political activism bent on the expansion of human rights and the extension of human dignity. … Antigone is a charter member of a small human community that is “la Résistance,” wherever it pops up in the history of human civilisation. (My emphasis)

Such statements are not just nonsensical, but self-defeating. However valuable ideas like human rights, human dignity, and resistance might be, they do not belong to “the history of human civilisation.” Moreover, it is impossible to understand their value unless one realises this.

The crucial question here is what we do with the knowledge that values differ across time. There is, perhaps, a natural tendency to see this as demanding an assertion of the ultimate validity of our own worldview. In this sense, our desire to portray Antigone as a figure of resistance recalls those theologians who used to scour classical texts for foreshadowings of Christ. I would argue, however, that we should treat the contingency of our beliefs as a warning against excessive certainty. Ideas are always changing in relation to circumstances, and as such, need to be constantly questioned.

Addressing the crisis of work

This article was first published by Arc Digital on December 10th 2018.

There are few ideals as central to the life of liberal democracies as that of stable and rewarding work. Political parties of every stripe make promises and boasts about job creation; even Donald Trump is not so eccentric that he does not brag about falling rates of unemployment. Preparing individuals for the job market is seen as the main purpose of education, and a major responsibility of parents too.

But all of this is starting to ring hollow. Today it is an open secret that, whatever the headline employment figures say, the future of work is beset by uncertainty.

Since the 1980s, the share of national income going to wages has declined in almost every advanced economy (the social democratic Nordic countries are the exception). The decade since the financial crisis of 2007–8 has seen a stubborn rise in youth unemployment, and an increase in “alternative arrangements” characteristic of the gig economy: short-term contracts, freelancing and part-time work. Graduates struggle to find jobs to match their expectations. In many places the salaried middle class is shrinking, leaving a workforce increasingly polarized between low- and high-earners.

Nor do we particularly enjoy our work. A 2013 Gallup survey found that in Western countries only a fifth of people say they are “engaged” at work, with the rest “not engaged” or “actively disengaged.”

The net result is an uptick in resentment, apathy, and despair. Various studies suggest that younger generations are less likely to identify with their career, or profess loyalty to their employer. In the United States, a worrying number of young men have dropped out of work altogether, with many apparently devoting their time to video games or taking prescription medication. And that’s without mentioning the ongoing automation revolution, which will exacerbate these trends. Robotics and artificial intelligence will likely wipe out whole echelons of the current employment structure.

So what to do? Given the complexity of these problems — social, cultural, and economic — we should not expect any single, perfect solution. Yet it would be reckless to hope that, as the economy changes, it will reinvent a model of employment resembling what we have known in the past.

We should be thinking in broad terms about two related questions: in the short term, how could we reduce the strains of precarious or unfulfilling employment? And in the long term, what will we do if work grows increasingly scarce?

One answer involves a limited intervention by the state, aimed at revitalizing the habits of a free-market society — encouraging individuals to be independent, mobile, and entrepreneurial. American entrepreneur Andrew Yang proposes a Universal Basic Income (UBI) paid to all citizens, a policy he dubs “the freedom dividend.” Alternatively, Harvard economist Lawrence Katz suggests improving labor rights for part-time and contracted workers, while encouraging a middle-class “artisan economy” of creative entrepreneurs, whose greatest asset is their “personal flair.”

There are valid intuitions here about what many of us desire from work — namely, autonomy, and useful productivity. We want some control over how our labor is employed, and ideally to derive some personal fulfillment from its results. These values are captured in what political scientist Ian Shapiro has termed “the workmanship ideal”: the tendency, remarkably persistent in Western thought since the Enlightenment, to recognize “the sense of subjective satisfaction that attaches to the idea of making something that one can subsequently call one’s own.”

But if technology becomes as disruptive as many foresee, then independence may come at a steep price in terms of unpredictability and stress. For your labor — or, for that matter, your artisan products — to be worth anything in a constantly evolving market, you will need to dedicate huge amounts of time and energy to retraining. According to some upbeat advice from the World Economic Forum, individuals should now be aiming to “skill, reskill, and reskill again,” perhaps as often as every 2–3 years.

Is it time, then, for more radical solutions? There is a strand of thinking on the left which sees the demise of stable employment very differently. It argues that by harnessing technological efficiency in an egalitarian way, we could all work much less and still have the means to lead more fulfilling lives.

This “post-work” vision, as it is now called, has been gaining traction in the United Kingdom especially. Its advocates — a motley group of Marx-inspired journalists and academics — found an unexpected political platform in Jeremy Corbyn’s Labour Party, which has recently proposed cutting the working week to four days. It has also established a presence in mainstream progressive publications such as The Guardian and New Statesman.

To be sure, there is no coherent, long-term program here. Rather, there is a great deal of blind faith in the prospects of automation, common ownership and cultural revolution. Many in the post-work camp see liberation from employment, usually accompanied by UBI, as the first step in an ill-defined plan to transcend capitalism. Typical in that respect are Alex Williams and Nick Srnicek, authors of Inventing the Future: Postcapitalism and a World Without Work. This blueprint includes open borders and a pervasive propaganda network, and flirts with the possibility of “synthetic forms of biological reproduction” to enable “a newfound equality between the sexes.”

We don’t need to buy into any of this, though, to appreciate the appeal of enabling people to work less. Various thinkers, including Bertrand Russell and John Maynard Keynes, took this to be an obvious goal of technological development. And since employment does not provide many of us with the promised goods of autonomy, fulfillment, productive satisfaction and so on, why shouldn’t we make the time to pursue them elsewhere?

Now, one could say that even this proposition is based on an unrealistic view of human nature. Arguably the real value of work is not enjoyment or even wealth, but purpose: people need routine, structure, a reason to get up in the morning, otherwise they would be adrift in a sea of aimlessness. Or at least some of them would – for another thing employment currently provides is a relatively civilized way for ambitious individuals to compete for resources and social status. Nothing in human history suggests that, even in conditions of superabundance, that competition would stop.

According to this pessimistic view, freedom and fulfillment are secondary concerns. The real question is, in the absence of employment, what belief systems, political mechanisms, and social institutions would make work for all of those idle thumbs?

But the way things are headed, it looks like we are going to need to face that question anyway, in which case our work-centric culture is a profound obstacle to generating good solutions. With so much energy committed to long hours and career success (the former being increasingly necessary for the latter), there is no space for other sources of purpose, recognition, or indeed fulfilment to emerge in an organic way.

The same goes for the economic side of the problem. I am no supporter of UBI – a policy whose potential benefits are dwarfed by the implications of a society where every individual is a client of the state. But if we want to avoid that future, it would be better to explore other arrangements now than to cling to our current habits until we end up there by default. Thus, if for no other reason than to create room for such experiments, the idea of working less is worth rescuing from the margins of the debate.

More to the point, there needs to be a proper debate. Given how deeply rooted our current ideas about employment are, politicians will continue appealing to them. We shouldn’t accept such sedatives. Addressing this problem will likely be a messy and imperfect process however we go about it, and the sooner we acknowledge that the better.

Notes on “The Bowl of Milk”

I normally can’t stand hearing about the working habits of famous artists. Whether by sheer talent or some fiendish work ethic, they tend to be hyper-productive in a way that I could never be. Thankfully, there are counter-examples – like the painter Pierre Bonnard. As you can read in the first room of the Bonnard exhibition now at Tate Modern, he often took years to finish a painting, putting it to one side before coming back to it and reworking it multiple times. He was known to continue tinkering with his paintings when he came across them hanging on the wall of somebody’s house. At the very end of his life, no longer able to paint, he instructed his nephew to change a section of his final work Almond Tree in Blossom (1947).

Maybe this is wishful thinking, but I find that things which have been agonised over acquire a special kind of depth. In many ways Bonnard is not my kind of painter, but his work rewards close attention. There is hardly an inch of his canvases where you do not find different tones layered over each other – layers not only of paint, but of time and effort – creating a luminous sea of brushstrokes which almost swarms in front of your eyes. And this belaboured quality is all the more intriguing given the transience of his subject matter: gardens bursting with euphoric colour, interiors drenched in vibrant light, domestic scenes that capture the briefest of moments during the day.

Nowhere is this tension more pronounced than in The Bowl of Milk (1919). Pictured is a room with a window overlooking the sea, and two tables ranged with items of crockery and a vase of flowers. In the foreground stands a woman wearing a long gown and holding a bowl, presumably for the cat which approaches in the shadows at her feet. Yet there is something nauseating, almost nightmarish about this image. Everything swims with indeterminacy, vanishing from our grasp. So pallid is the light pouring through the window that at first I assumed it was night outside. The objects and figures crowding the room shimmer as though on the point of dissolving into air. The woman’s face is a vague, eyeless mask. The painting is composed so that if you focus on one particular passage, everything else recedes into a shapeless soup in the periphery of your vision. It is a moment of such vivid intensity that one is forced to realise it has been conjured from the depths of fantasy.

*     *     *

 

The woman in The Bowl of Milk is almost certainly Marthe de Méligny, formerly Maria Boursin, Bonnard’s lifelong model and spouse. They met in Paris in 1893, where de Méligny was employed manufacturing artificial flowers for funerals. Some five years later, Bonnard began to exhibit paintings that revealed their intimate domestic life together. These would continue throughout his career, with de Méligny portrayed in various bedrooms, bathrooms and hallways, usually alone, usually nude, and often in front of a mirror.

Pierre Bonnard “Nude in the Bath” (1936). Oil paint on canvas. Paris, musée d’Art moderne.

It was not an uncomplicated relationship: Bonnard is thought to have had affairs, and when the couple eventually married in 1925 de Méligny revealed she had lied about her name and age (she had broken off contact with her family before moving to Paris). They were somewhat isolated. De Méligny is described as having a silent and unnerving presence, and later developed a respiratory disease which forced them to spend periods on the Atlantic coast. Yet Bonnard’s withdrawal from the Parisian art scene, where he had been prominent during his twenties, allowed him to develop his exhaustive, time-laden painting process, and to forge his own style. The paintings of de Méligny seem to relish the freedom enabled by familiarity and seclusion. One of the gems of the current Tate exhibition is a series of nude photographs that the couple took of one another in their garden in the years 1899-1901. In each of these unmistakeably Edenic pictures, we see a bright-skinned body occupying a patch of sunlight, securely framed by shadowy thickets of grass and leaves.

Pierre Bonnard, photographs of Marthe in the garden at Montval (1900-1901). (Source: https://dantebea.com/category/peintures-dessins/pierre-bonnard/page/2/)

The female figure in The Bowl of Milk is far from familiar: she is a flicker of memory, a robed phantasm. But like other portrayals of de Méligny, this painting revels in the erotics of space, whereby the proximity and secrecy of the domestic setting are charged with the presence of a human subject – an effect only heightened by our voyeuristic discomfort at gaining access to this private world. There is no nudity, but a disturbing excess of sensual energy in the gleaming white plates, the crimson anemones, the rich shadows and the luxurious stride of the cat. To describe these details as sexual is to lessen their true impact: they are demonic, signalling the capacity of imagination to terrorise us with our own senses.

*     *     *

 

In 1912 Bonnard bought a painting by Henri Matisse, The Open Window at Collioure (1905). Matisse would soon emerge as one of the leading figures of modern painting, but the two were also friends, maintaining a lively correspondence over several decades. And one can see what inspired Bonnard to make this purchase: doors and windows appear continually in his own work, allowing interior space to be animated by the vitality of the outside world.

Henri Matisse, “The Open Window at Collioure” (1905). Oil paint on canvas. National Gallery of Art, Washington

Pierre Bonnard, “The Studio with Mimosas” (1939-46). Oil paint on canvas. Musée National d’Art Moderne – Centre Pompidou, Paris.

More revealing, though, are the differences we can glean from The Open Window at Collioure. Matisse’s painting, with its flat blocks of garish colour, is straining towards abstraction. As a formal device, the window merely facilitates a jigsaw of squares and rectangles. Such spatial deconstruction and pictorial simplification were intrinsic to the general direction of modernism at this time. This, however, was the direction from which the patient and meticulous Bonnard had partly stepped aside. For he remained under the influence of impressionist painting, which emphasised the subtlety and fluidity of light and colour as a means of capturing the immediacy of sensory experience. Thus, as Juliette Rizzi notes, Bonnard’s use of “framing devices such as doors, mirrors, and horizontal and vertical lines” allows him a compromise of sorts. They do not simplify his paintings so much as provide an angular scaffolding around which he can weave his nebulous imagery.

The window and its slanted rectangles of light are crucial to the strange drama of The Bowl of Milk. Formally, this element occupies the very centre of the composition, holding it in place. But it is also a source of ambiguity. The window is seemingly a portal to another world, flooding the room with uncanny energy. The woman appears stiff, frozen at the edge of a spotlight. It’s as though the scene has been illuminated just briefly – before being buried in darkness again.