What space architecture says about us

With the recent expedition of Nasa’s Perseverance rover to Mars, I’ve taken an interest in space architecture: more specifically, habitats for people on the moon or the Red Planet. The subject first grabbed my attention earlier this year, when I saw that a centuries-old forestry company in Japan is developing wooden structures for future space colonies. Space architecture is not as other-worldly as you might think. In various ways, it holds a revealing mirror to life here on Earth.

Designing human habitats for Mars is more than just a technical challenge (though protecting against intense radiation and minus 100C temperatures is, of course, a technical challenge). It’s also an exercise in anthropology. To ask what a group of scientists or pioneers will need from their Martian habitats is to ask what human beings need to be healthy, happy and productive. And we aren’t just talking about the material basics here. 

As Jonathan Morrison reported in the Times last weekend, Nasa is taking inspiration from the latest polar research bases. According to architects like Hugh Broughton, researchers working in these extreme environments need creature comforts. The fundamental problem, says Broughton, is “how architecture can respond to the human condition.” The extreme architect has to consider “how you deal with isolation, how you create a sense of community… how you support people in the darkness.”

I found these words disturbingly relatable – not just in light of the pandemic, which has forced us all into a kind of polar isolation, but in light of the wider problem of anomie in modern societies. Broughton’s questions are the same ones we tend to ask as we observe stubbornly high rates of depression, loneliness, self-medication, and so on. Are we all now living in an extreme environment?

Many architects in the modernist period dreamed that they could tackle such issues through the design of the built environment. But the problem of what people need in order to flourish confronted them in a much harder form. Given the complexity of modern societies, trying to facilitate a vision of human flourishing through architecture started to look a lot like forcing society into a particular mould.

The “master households” designed by Walter Gropius in the 1920s and 30s illustrate the dilemma. Gropius insisted his blueprints, which reduced private family space in favour of communal living, reflected the emerging socialist character of modern individuals. At the same time, he implied that this transformation in lifestyle needed the architect as its midwife.

Today architecture has largely abandoned the dream of a society engineered by experts and visionaries. But heterotopias like research stations and space colonies still offer something of a paradise for the philosophical architect. In contrast to the messy complexity of society at large, these small communities have a very specific shared purpose. They offer clearly defined parameters for architects to address the problem of what human beings need.

Sometimes the solutions to this profound question, however, are almost comically mundane. Morrison’s Times report mentions some features of recent polar bases:

At the Scott Base, due to be completed in 2027, up to 100 residents might while away the hours in a cafeteria and even a Kiwi-themed pub, while Halley VI… boasts a gym, library, large canteen, bar and mini cinema.

If this turns out to be the model, then a future Mars colony will be a lot like a cruise ship. This doesn’t reflect a lack of imagination on the architects’ part though. It points to the fact that people don’t just want sociability, stimulation and exercise as such – they want familiar forms of these things. So a big part of designing habitats for space pioneers will involve replicating institutions from their original, earthbound cultures. In this sense, Martian colonies won’t be a fresh start for humanity any more than the colonisation of the Americas was. 

Finally, it’s worth saying something about the politics of space habitats. It seems inevitable that whichever regime sends people to other planets will use the project as a means of legitimation: the government(s) and corporations involved will want us to be awed by their achievement. And this will be done by turning the project into a media spectacle. 

The recent Perseverance expedition has already shown this potential: social media users were thrilled to hear audio of Martian winds, and to see a Martian horizon with Earth sparkling in the distance (the image, alas, turned out to be a fake). The first researchers or colonists on Mars will likely be reality TV stars, their everyday lives an on-going source of fascination for viewers back home. 

The lunar base in Kubrick’s 2001: A Space Odyssey

This means space habitats won’t just be designed for the pioneers living in them, but also for remote visual consumption on Earth. The aesthetics of these structures will not, therefore, be particularly novel. Thanks to Hollywood, we already have established ideas of what space exploration should look like, and space architecture will try to satisfy these expectations. Beyond that, it will simply try to project a more futuristic version of the good life as we know it through pop culture: comfort, luxury and elegance. 

We already see this, I think, in the Mars habitat designed by Xavier De Kestelier of Hassell Studio, which features sweeping open-plan spaces with timber flooring, glass walls and minimalist furniture. It resembles a luxury spa more than a rugged outpost of civilisation. But this was already anticipated, with characteristic flair, by Stanley Kubrick in his 1968 sci-fi classic 2001: A Space Odyssey. In Kubrick’s imagined lunar base, there is a Hilton hotel hosting the stylish denizens of corporate America. The task of space architects will be to design this kind of enchanting fantasy, no less than to meet the needs of our first Martian settlers.

How much is a high-status meme worth?

This article was published by Unherd on February 25th 2021.

Today one of the most prestigious institutions in the art world, the 250-year-old auction house Christie’s, is selling a collection of Instagram posts. Or in its own more reserved language, Christie’s is now “the first major auction house to offer a purely digital work.”

The work in question is “Everydays: The First 5000 Days” by the South Carolina-based animation artist Beeple (real name Mike Winkelmann), an assemblage of images he has posted online over the last thirteen-odd years. Whoever acquires “Everydays” won’t get a unique product — the image is a digital file which can be copied like any other. They’ll just be paying for a proof of ownership secured through the blockchain.
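To make that mechanics concrete, here is a minimal sketch in Python of how ownership of a freely copyable file can be tracked as a single ledger entry. This is purely illustrative, not Christie’s or Beeple’s actual mechanism: real NFTs are implemented as smart contracts on a blockchain (for example the ERC-721 standard on Ethereum), and all the names below are hypothetical.

```python
import hashlib

# Toy ledger illustrating the idea behind an NFT: the artwork itself can be
# copied endlessly, but the ledger records exactly one owner for the token
# derived from it. (Illustrative only; real NFTs live in blockchain smart
# contracts, e.g. ERC-721 on Ethereum, not in a Python dict.)
ledger: dict[str, str] = {}  # token id (hash of the work) -> current owner

def mint(artwork: bytes, creator: str) -> str:
    """Register a work and assign its token to the creator."""
    token_id = hashlib.sha256(artwork).hexdigest()
    ledger[token_id] = creator
    return token_id

def transfer(token_id: str, seller: str, buyer: str) -> None:
    """Hand the token to a buyer, but only if the seller really owns it."""
    if ledger.get(token_id) != seller:
        raise ValueError("seller does not own this token")
    ledger[token_id] = buyer

# Anyone can copy the bytes of the image; only the ledger entry is scarce.
token = mint(b"<bytes of everydays.jpg>", "Beeple")
transfer(token, "Beeple", "winning bidder")
print(ledger[token])  # -> "winning bidder"
```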

But more significant than the work’s format is its artistic content. Beeple is opening the way for the traditional art world to embrace internet memes. 

Continue reading here.

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter two styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1916 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly science was not just producing strange new ways of seeing the world, but through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.

 

The Last of the Libertarians

This book review was originally published by Arc Digital on August 31st 2020.

As the world reels from the chaos of COVID-19, it is banking on the power of innovation. We need a vaccine, and before even that, we need new technologies and practices to help us protect the vulnerable, salvage our pulverized economies, and go on with our lives. If we manage to weather this storm, it will be because our institutions prove capable of converting human ingenuity into practical, scalable fixes.

And yet, even if we did not realize it, this was already the position we found ourselves in prior to the pandemic. From global warming to food and energy security to aging populations, the challenges faced by humanity in the 21st century will require new ways of doing things, and new tools to do them with.

So how can our societies foster such innovation? What are the institutions, or more broadly the economic and political conditions, from which new solutions can emerge? Some would argue we need state-funded initiatives to direct our best minds towards specific goals, like the 1940s Manhattan Project that cracked the puzzle of nuclear technology. Others would have us place our faith in the miracles of the free market, with its incentives for creativity, efficiency, and experimentation.

Matt Ridley, the British businessman, author, and science journalist, is firmly in the latter camp. His recent book, How Innovation Works, is a work of two halves. On the one hand it is an entertaining, informative, and deftly written account of the innovations which have shaped the modern world, delivering vast improvements in living standards and opportunity along the way. On the other hand, it is the grumpy expostulation of a beleaguered libertarian, whose reflexive hostility to government makes for a vague and contradictory theory of innovation in general.

Innovation, we should clarify, does not simply mean inventing new things, nor is it synonymous with scientific or technological progress. There are plenty of inventions that do not become innovations — or at least not for some time — because we have neither the means nor the demand to develop them further. Thus, the key concepts behind the internal combustion engine and general-purpose computer long preceded their fruition. Likewise, there are plenty of important innovations which are neither scientific nor technological — double-entry bookkeeping, for instance, or the U-bend in toilet plumbing — and plenty of scientific or technological advances which have little impact beyond the laboratory or drawing board.

Innovation, as Ridley explains, is the process by which new products, practices, and ideas catch on, so that they are widely adopted within an industry or society at large. This, he rightly emphasizes, is rarely down to a brilliant individual or blinding moment of insight. It is almost never the result of an immaculate process of design. It is, rather, “a collective, incremental, and messy network phenomenon.”

Many innovations make use of old, failed ideas whose time has come at last. At the moment of realization, we often find multiple innovators racing to be first over the line — as was the case with the steam engine, light bulb, and telegraph. Sometimes successful innovation hinges on a moment of luck, like the penicillin spore which drifted into Alexander Fleming’s petri dish while he was away on holiday. And sometimes a revolutionary innovation, such as the search engine, is strangely anticipated by no one, including its innovators, almost up until the moment it is born.

But in virtually every instance, the emergence of an innovation requires numerous people with different talents, often far apart in space and time. As Ridley describes the archetypal case: “One person may make a technological breakthrough, another work out how to manufacture it, and a third how to make it cheap enough to catch on. All are part of the innovation process and none of them knows how to achieve the whole innovation.”

These observations certainly lend some credence to Ridley’s arguments that innovation is best served by a dynamic, competitive market economy responding to the choices of consumers. After all, we are not very good at guessing from which direction the solution to a problem will come — we often do not even know there was a problem until a solution comes along — and so it makes sense to encourage a multitude of private actors to tinker, experiment, and take risks in the hope of discovering something that catches on.

Moreover, Ridley’s griping about misguided government regulation — best illustrated by Europe’s almost superstitious aversion to genetically modified crops — and about the stultifying influence of monopolistic, subsidy-farming corporations, is not without merit.

But not so fast. Is it not true that many innovations in Ridley’s book drew, at some point in their complex gestation, from state-funded research? This was the case with jet engines, nuclear energy, and computing (not to mention GPS, various products using plastic polymers, and touch-screen displays). Ridley’s habit of shrugging off such contributions with counterfactuals — had not the state done it, someone else would have — misses the point, because the state has basic interests that inevitably bring it into the innovation business.

It has always been the case that certain technologies, however they emerge, will continue their development in a limbo between public and private sectors, since they are important to economic productivity, military capability, or energy security. So it is today with the numerous innovative technologies caught up in the rivalry between the United States and China, including 5G, artificial intelligence, biotechnology, semiconductors, quantum computing, and Ridley’s beloved fracking for shale gas.

As for regulation, the idea that every innovation which succeeds in a market context is in humanity’s best interests is clearly absurd. One thinks of such profitable 19th-century innovations by Western businessmen as exporting Indian opium to the Far East. Ridley tries to forestall such objections with the claim that “To contribute to human welfare … an innovation must meet two tests: it must be useful to individuals, and it must save time, energy, or money in the accomplishment of some task.” Yet there are plenty of innovations which meet this standard and are still destructive. Consider the opium-like qualities of social media, or the subprime mortgage-backed securities which triggered the financial crisis of 2007–8 (an example Ridley ought to know about, seeing as he was chairman of Britain’s ill-fated Northern Rock bank at the time).

Ridley’s weakness in these matters is amplified by his conceptual framework, a dubious fusion of evolutionary theory and dogmatic libertarianism. Fundamentally, he holds that innovation is an extension of evolution by natural selection, “a process of constantly discovering ways of rearranging the world into forms that are unlikely to arise by chance — and that happen to be useful.” (Ridley even has a section on “The ultimate innovation: life itself.”) That same cosmic process, he claims, is embodied in the spontaneous order of the free market, which, through trade and specialization, allows useful innovations to emerge and spread.

This explains why How Innovation Works contains no suggestion about how we should weigh the risks and benefits of different kinds of innovation. Insofar as Ridley makes an ethical case at all, it amounts to a giant exercise in naturalistic fallacy. Though he occasionally notes innovation can be destructive, he more often moves seamlessly from claiming that it is an “inexorable” natural process, something which simply happens, to hailing it as “the child of freedom and the parent of prosperity,” a golden goose in perpetual danger of suffocation.

But the most savage contradictions in Ridley’s theory appear, once again, in his pronouncements on the role of the state. He insists that by definition, government cannot be central to innovation, because it has predetermined goals whereas evolutionary processes do not. “Trying to pretend that government is the main actor in this process,” he says, “is an essentially creationist approach to an essentially evolutionary phenomenon.”

Never mind that many of Ridley’s own examples involve innovators aiming for predetermined goals, or that in his (suspiciously brief) section on the Chinese innovation boom, he concedes in passing that shrewd state investment played a key role. The more pressing question is, what about those crucial innovations for which there is no market demand, and which therefore do not evolve?

Astonishingly, in his afterword on the challenges posed by COVID-19, Ridley has the gall to admonish governments for not taking the lead in innovation. “Vaccine development,” he writes, has been “insufficiently encouraged by governments and the World Health Organisation,” and “ignored, too, by the private sector because new vaccines are not profitable things to make.” He goes on: “Politicians should go further and rethink their incentives for innovation more generally so that we are never again caught out with too little innovation having happened in a crucial field of human endeavour.”

In these lines, we should read not just the collapse of Ridley’s central thesis, but more broadly, the demise of a certain naïve market libertarianism — a worldview that flourished during the 1980s and ’90s, and which, like most dominant intellectual paradigms, came to see its beliefs as reflecting the very order of nature itself. For what we should have learned in 2007–8, and what we have certainly learned this year, is that for all its undoubted wonders the market is always tacitly relying on the state to step in should the need arise.

This does not mean, of course, that the market has no role to play in developing the key innovations of the 21st century. I believe it has a crucial role, for it remains unmatched in its ability to harness the latent power of widely dispersed ideas and skills. But if the market’s potential is not to be snuffed out in a post-COVID era of corporatism and monopoly, then it will need more credible defenders than Ridley. It will need defenders who are aware of its limitations and of its interdependence with the state.

Anti-racism and the long shadow of the 1970s

This essay was originally published by Unherd on August 3rd 2020.

Last month, following a bout of online outrage, the National Museum of African American History and Culture removed an infographic from its website. Carrying the title “Aspects and assumptions of whiteness and white culture in the United States,” the offending chart presented a list of cultural expectations which, apparently, reflect the “traditions, attitudes and ways of life” characteristic of “white people.” Among the items listed were “self-reliance,” “the nuclear family,” “respect authority,” “plan for future” and “objective, rational linear thinking”.

Critics seized on this as evidence that the anti-racism narrative that has taken hold in institutional America is permeated by a bigotry of low expectations. The chart seemed to suggest that African Americans should not be expected to adhere to the basic tenets of modern civil society and intellectual life. Moreover, the notion that prudence, personal responsibility and rationality are inherently white echoes to an uncanny degree the racist claims that have historically been used to justify the oppression of people of African descent.

We could assume, in the interests of fairness, that the problem with the NMAAHC’s chart was a lack of context. Surely the various qualities it ascribes to “white culture” should be read as though followed by a phrase like “as commonly understood in the United States today”? The problem is that the original document which inspired the chart, and which bore the copyright of corporate consultant Judith H. Katz, provides no such caveats.

If we look at Katz’s own career, however, we do find some illuminating context — not just for this particular incident, but also regarding the origins of the current anti-racism movement more broadly. During the 1970s, Katz pioneered a distinctive approach to combatting racism, one that was above all therapeutic and managerial. This approach, as the NMAAHC chart suggests, took little interest in the opinions and experiences of ethnic and racial minorities, but focused on helping white Americans understand their identity.

Katz’s most obvious descendant today is Robin DiAngelo, author of the bestselling White Fragility — a book relating the experiences and methods of DiAngelo’s lucrative career in corporate anti-racism training. Katz too developed a re-education program, “White awareness training,” which, according to her 1978 book White Awareness, “strives to help Whites understand that racism in the United States is a White problem and that being White implies being racist.”

Like DiAngelo, Katz rails against the pretense of individualism and colour blindness, which she regards as strategies for denying complicity in racism. And like DiAngelo, Katz emphasizes the need for exclusively white discussions (the “White-on-White training group”) to avoid turning minorities into teachers, which would be merely another form of exploitation.

Yet the most striking aspect of Katz’s ideas, by contrast to the puritanical DiAngelo, is her insistence that the real purpose of anti-racism training is to enable the psychological liberation and self-fulfillment of white Americans. She consistently discusses the problem of racism in the medicalizing language of sickness and trauma. It is, she says, “a form of schizophrenia,” “a pervasive form of mental illness,” a “disease,” and “a psychological disorder… deeply embedded in White people from a very early age on both a conscious and an unconscious level.” Thus the primary benefit offered by Katz is to save white people from this pathology, by allowing them to establish a coherent identity as whites.

Her program, she repeatedly emphasizes, is not meant to produce guilt. Rather, its premise is that in order to discover “our unique identities,” we must not overlook “[o]ur sexual and racial essences.” Her training allows its subjects to “become more fully human,” to “identify themselves as White and feel good about it.” Or as Katz writes in a journal article: “We must begin to remove the intellectual shackles and psychological chains that keep us in a mental and spiritual bondage. White people have been hurt for too long.”

Reading all of this, it is difficult not to be reminded of the critic Christopher Lasch’s portrayal of 1970s America as a “culture of narcissism”. Lasch was referring to a bundle of tendencies that characterised the hangover from the radicalism of the 1960s: a catastrophising hypochondria that found in everything the signs of impending disaster or decay; a navel-gazing self-awareness which sought expression in various forms of spiritual liberation; and consequently, a therapeutic culture obsessed with self-improvement and personal renewal.

The great prophet of this culture was surely Woody Allen, whose work routinely evoked crippling neuroses, fear of death, and psychiatry as the customary tool for managing the inner tensions of the liberated bourgeois. That Allen treated all of this with layer upon layer of self-deprecating irony points to another key part of Lasch’s analysis. The narcissist of this era retained enough idealism so as to be slightly ashamed of his self-absorption — unless, of course, some way could be found to justify it as a means towards wider social improvement.

And that is what Katz’s white awareness training offered: a way to resolve the tensions between a desire for personal liberation and a social conscience, or more particularly, a new synthesis of ’70s therapeutic culture with the collectivist political currents unleashed in the ’60s.

Moreover, in Katz’s work we catch a glimpse of what the vehicle for this synthesis would be: the managerial structures of the public or private institution, where a paternalistic attitude towards students, employees and the general public could provide the ideal setting for the tenets of “white awareness.” By way of promoting her program, Katz observed in the late ’70s a general trend towards “a more educational role for the psychotherapist… utilizing systemic training as the process by which to meet desired behavior change.” There was, she noted, a “growing demand” for such services.

Which brings us back to the NMAAHC’s controversial chart. It would be wrong to suggest that this single episode allows us to draw a straight line from the culture of narcissism in which Katz’s ideas emerged to the present anti-racism narrative. But the fact that there continues to be so much emphasis placed on the notion of “whiteness” today — the NMAAHC has an entire webpage under this heading, which prominently features Katz’s successor Robin DiAngelo — suggests that progressive politics has not entirely escaped the identity crises of the 1970s.

Today that politics might be more comfortable assigning guilt than Katz was, but it still places a disproportionate emphasis on those it calls “white” to adopt a noble burden of self-transformation, while relegating minorities to the role of a helpless other.

Of course, it is precisely this simplistic dichotomy which allows the anti-racism narrative to jump across borders and even oceans, as we have seen happening recently, into any context where there are people who can be called “white” and an institutional framework for administering reeducation. Already in 1983, Katz was able to promote her “white awareness training” in the British journal Early Child Development and Care, simply swapping her standard American intro for a discussion of English racism.

Then as now, the implication is that from the perspective of “whiteness,” the experience of African-Americans and of ethnic minorities in a host of other places is somehow interchangeable. This, I think, can justifiably be called a kind of narcissism.

Why I’m not giving up on my ego

This spring, I finally got round to reading Derek Parfit’s famous work, Reasons and Persons. Published in 1984, the book is often cited as a key inspiration for subsequent developments in moral philosophy, notably the field of population ethics and the Effective Altruism movement. (Both of which, incidentally, are closely associated with Oxford University, the institution where Parfit himself worked until his death in 2017). I found Reasons and Persons every bit the masterpiece many have made it out to be – a work not just of rich insight, but also of persuasive humility and charm. For this reason, and because some themes of the book resonate with certain cultural trends today, I thought it would be worth saying something about why Parfit did not win me over to his way of seeing the world.

In Reasons and Persons, Parfit takes on three main issues:

  1. He makes numerous arguments against the self-interest theory of rationality, which holds that what is most rational for any individual to do is whatever will benefit him or her the most;
  2. He argues for a Reductionist theory of identity, according to which there is no “deep further fact” or metaphysical essence underpinning our existence as individual persons, only the partial continuity of psychological experiences across time;
  3. He argues for the moral significance of future generations, and searches (unsuccessfully, by his own admission) for the best way to recognise that significance in our own decisions.

I want to consider (2), Parfit’s Reductionist view of identity. On my reading, this was really the lynchpin of the whole book. According to Parfit, we are inclined to believe there is a “deep further fact” involved in personal identity – that our particular bodies and conscious minds constitute an identity which is somehow more than the sum of these parts. If your conscious mind (your patterns of thought, memories and intentions) managed somehow to survive the destruction of your body (including your brain), and to find itself in a replica body, you may suspect that this new entity would not be you. Likewise if your body continued with some other mind. In either case some fundamental aspect of your personhood, perhaps a metaphysical essence or soul or self, would surely have perished along the way.

Parfit says these intuitions are wrong: there simply is no further fact involved in personal identity. In fact, as regards both a true understanding of reality and what we should value (or “what really matters,” as he puts it), Parfit thinks the notion of persons as bearers of distinct identities can be dispensed with altogether.

What really matters about identity, he argues, is nothing more than the psychological continuity that characterises our conscious minds; and this can be understood without reference to the idea of a person at all. If your body were destroyed and your mind transferred to a replica body, this would merely be “about as bad as ordinary survival.” Your mind could even find itself combined with someone else’s mind, in someone else’s body, which would no doubt present some challenges. In both cases, though, whether the new entity would “really be you” is an empty question. We could describe what had taken place, and that would be enough.

Finally, once we dispense with the idea of a person as bearer of a distinct identity, we notice how unpersonlike our conscious minds really are. Psychological continuity is, over the course of a life, highly discontinuous. Thought patterns, memories and intentions form overlapping “chains” of experience, and each of these ultimately expires or evolves in such a way that, although there is never a total rupture, our future selves might as well be different people.

As I say, I found these claims about identity to be the lynchpin of Reasons and Persons. Parfit doesn’t refer to them in the other sections of his book, where he argues against self-interest and for the moral significance of future generations. But you can hardly avoid noticing their relevance for both. Parfit’s agenda, ultimately, is to show that ethics is about the quality of human experiences, and that all experiences across time and space should have the same moral significance. Denying the sanctity of personal identity provides crucial support for that agenda. Once you accept that the notion of an experience being your experience is much less important than it seems, it is easier to care more about experiences happening on the other side of the planet, or a thousand years in the future.

But there is another reason I was especially interested in Parfit’s treatment of identity. In recent years, some friends and acquaintances of mine have become fascinated by the idea of escaping from the self or ego, whether through neo-Buddhist meditation (I know people who really like Sam Harris) or the spiritualism of Eckhart Tolle. I’m also aware that various subcultures, notably in Silicon Valley, have become interested in the very Parfitian idea of transhumanism, whereby the transferal of human minds to enhanced bodies or machines raises the prospect of superseding humanity altogether. Add to these the new conceptions of identity emerging from the domain of cultural politics – in particular, the notion of gender fluidity and the resurgence of racial essentialism – and it seems to me we are living at a time when the metaphysics of selfhood and personhood have become an area of pressing uncertainty.

I don’t think it would be very productive to make Reasons and Persons speak to these contemporary trends, but they did inform my own reading of the book. In particular, they led me to notice something about Parfit’s presentation of the Reductionist view.

In the other sections of Reasons and Persons, Parfit makes some striking historical observations. He argues for a rational, consequentialist approach to ethics by pointing out that in the modern world, our actions affect a far larger number of people than they did in the small communities where our traditional moral systems evolved. He reassures us of the possibility of moral progress by claiming that ethics is still in its infancy, since it has only recently broken free from a religious framework. In other words, he encourages us to situate his ideas in a concrete social and historical context, where they can be evaluated in relation to the goal of maximising human flourishing.

But this kind of contextualisation is entirely absent from Parfit’s treatment of identity. What he offers us instead is, ironically, a very personal reason for accepting the Reductionist view:

Is the truth depressing? Some may find it so. But I find it liberating, and consoling. When I believed that my existence was such a further fact, I seemed imprisoned in myself. My life seemed like a glass tunnel, through which I was moving faster every year, and at the end of which there was darkness. When I changed my view, the walls of my glass tunnel disappeared. I now live in the open air. There is still a difference between my life and the lives of other people. But the difference is less. Other people are closer. I am less concerned about the rest of my own life, and more concerned about the lives of others.

Parfit goes on to explain how accepting the Reductionist view helps him to reimagine his relationship to those who will be living after he has died. Rather than thinking “[a]fter my death, there will be no one living who will be me,” he can now think:

Though there will later be many experiences, none of these experiences will be connected to my present experiences by chains of such direct connections as those involved in experience-memory, or in the carrying out of an earlier intention.

There is certainly a suggestion here that, as I said earlier, the devaluation of personal identity supports a moral outlook which grants equal importance to all experiences across time and space. But there is no consideration of what it might be like if a significant number of people in our societies did abandon the idea of persons as substantive, continuous entities with real and distinct identities.

So what would that be like? Well, I don’t think the proposition makes much sense. As soon as we introduce the social angle, we see that Parfit’s treatment of identity is lacking an entire dimension. His arguments make us think about our personal identity in isolation, to show that in certain specific scenarios we imagine a further fact where there is none. But in social terms, our existence does involve a further fact – or rather, a multitude of further facts: facts describing our relations with others and the institutions that structure them. We are sons and daughters, parents, spouses, friends, citizens, strangers, worshippers, students, teachers, customers, employees, and so on. These are not necessarily well-defined categories, but they suggest the extent to which social life is dependent on individuals apprehending one another not in purely empirical terms, but in terms of roles with associated expectations, allowances and responsibilities.

And that, crucially, is also how we tend to understand ourselves – how we interpret our desires and formulate our motivations. The things we value, aim for, think worth doing, and want to become, inevitably take their shape from our impressions of the social world we inhabit, with its distinctive roles and practices.

We emulate people we admire, which does not mean we want to be exactly like them, but that they perform a certain role in a way that we identify with. There is some aspect of their identity, as we understand it, that we want to incorporate into our own. Likewise, when we care about something, we are typically situating ourselves in a social milieu whose values and norms become part of our identity. Such is the case with raising a family, being successful in some profession, or finding a community of interest like sport or art or playing with train sets. It is also the case, I might add, with learning meditation or studying philosophy in order to write a masterpiece about ethics.

There is, of course, a whole other tradition in philosophy that emphasises this interdependence of the personal and the social, from Aristotle and Hegel to Hannah Arendt and Alasdair MacIntyre. This tradition is sometimes called communitarian, by which is meant, in part, that it views the roles provided by institutions as integral to human flourishing. But the objection to Parfit I am trying to make here is not necessarily ethical.

My objection is that we can’t, in any meaningful sense, be Reductionists, framing our experiences and decisions as though they belong merely to transient nodes of psychological connectivity. Even if we consider personhood an illusion, it is an illusion we cannot help but participate in as soon as we begin to interact with others and to pursue ends in the social world. Identity happens, whether we like it or not: other people regard us in a certain way, we become aware of how they regard us, and in our ensuing negotiation with ourselves about how to behave, a person is born.

This is, of course, one reason that people find escaping the self so appealing: the problem of how to present ourselves in the world, and of deciding which values to consider authentically our own, can be a source of immense neurosis and anxiety. But the psychological dynamics from which all of this springs are a real and inescapable part of being human (there is a reason Buddhist sages have often lived in isolation – something I notice few of their contemporary western descendants do). You can go around suppressing these thoughts by continuously telling yourself they do not amount to a person or self, but then you would just be repeating the fallacy identified by Parfit – putting the emphasis on personhood rather than on experiences. Meanwhile, if you actually want to find purpose and fulfilment in the world, you will find yourself behaving like a person in all but name.

To truly step outside our identities by denying any further fact in our existence (or, for that matter, by experiencing the dissolution of the ego through meditation, or fantasising about being uploaded to a machine) is at most a private, intermittent exercise. And even then, our desire to undertake this exercise, our reasons for thinking it worthwhile, and the things we hope to achieve in the process, are firmly rooted in our histories as social beings. You must be a person before you can stop being a person.

Perhaps these complications explain why Parfit is so tentative in his report of what it is like to be a Reductionist: “There is still a difference between my life and the lives of other people. But the difference is less.” I interpret his claim that we should be Reductionists as the echo of an age-old wisdom: don’t get so caught up in your own personal dramas that you overlook your relative insignificance and the fact that others are, fundamentally, not so different to you. But this moral stance does not follow inevitably from a theoretical commitment to Reductionism (and like I say, I don’t think that commitment could be anything more than theoretical). In fact, it’s possible to imagine some horrific beliefs being just as compatible with the principle that persons do not really exist. Parfit’s claim that Reductionism makes him care more about humanity in general seems to betray his own place in the tradition of universalist moral thought – a tradition in which the sanctity of persons (and indeed of souls) has long been central.

As for my friends who like to step away from the self through meditation, if this helps them stay happy and grounded, more power to them. But I don’t think this could ever obviate the importance of engaging in another kind of reflection: one that recognises life as a journey we must all undertake as real persons living in a world with others, and which requires us to struggle to define who we are and want to be. This is not easy today, because the social frameworks that have always been necessary for persons, like so many climbing flowers, to grow, are now in a state of flux (but that is a subject for another time). Still, difficult as it may be, the road awaits.

Reading Antigone in an age of resistance

The play opens with two sisters, Antigone and Ismene, arguing about their duties to family versus those to the state. Their two brothers have just killed each other while leading opposing sides of a civil war in Thebes. Their uncle Creon has now taken charge of the city, and has decreed that one of the brothers, Polynices, is to be denied a funeral: “he must be left unburied, his corpse / carrion for the birds and dogs to tear, / an obscenity for the citizens to behold.”

Ismene chooses obedience to Creon, but Antigone decides to rebel. She casts a symbolic handful of dust over Polynices’ corpse, and when brought before Creon, affirms her action in the name of “the great unwritten, unshakeable traditions” demanding funeral rites for the dead. So begins a confrontation between two headstrong, unflinching protagonists. It will end with Antigone hanging herself in her jail cell, leading to the suicide both of Creon’s son (who was engaged to Antigone), and consequently of his wife.

*   *   *


“When I see that king in that play, the first name that came to mind was Donald Trump: arrogance, misogyny, tunnel vision.” This was reportedly one audience member’s response to Antigone in Ferguson, a 2018 theatre piece that brought a famous Greek tragedy into the context of US race relations. That tragedy is Sophocles’ Antigone, which I have summarised above. The play is now frequently used to explore contemporary politics, especially in relation to the theme of resistance. “It’s a story of a woman who finds the courage of her convictions to speak truth to power,” said Carl Cofield, who directed another production of Antigone in New York last year. Cofield drew parallels with the #MeToo movement, Black Lives Matter, and “the resistance to the outcome of the presidential race.”

This reading of Antigone has become increasingly common since the post-war era. Its originator was perhaps Bertolt Brecht’s 1948 adaptation, which imagined a scenario where the German people had risen against Hitler. Since the 1970s Antigone has often been portrayed as a feminist heroine, and the play has served as a call-to-arms in countless non-western contexts too. As Fanny Söderbäck proudly notes: “Whenever and wherever civil liberties are endangered, when the rights or existence of aboriginal peoples are threatened, when revolutions are underway, when injustices take place – wherever she is needed, Antigone appears.”

Such appropriation of a classical figure is by no means unique. It echoes the canonisation of Socrates as a martyr for free speech and civil disobedience, most notably by John Stuart Mill, Mohandas Gandhi and Martin Luther King. And just as this image of Socrates rests on Plato’s Apology of Socrates, but ignores the quite different portrait in the Crito, the “resistance” reading of Antigone bears little resemblance to how the play was originally intended and received.

An audience in 5th-century Athens would not have regarded Antigone as subversive towards the authority of the state. In fact, if you accept the conventional dating of the play (441 BC), the Athenian people elected Sophocles to serve as a general immediately after its first performance. Rather, the dramatic impact of Antigone lay in the clash of two traditional visions of justice. Creon’s position at the outset – “whoever places a friend / above the good of his own country, he is nothing” – was not a cue for booing and hissing, but a statement of conventional wisdom. Likewise, Antigone’s insistence on burying her brother was an assertion of divine law and, more particularly, of her religious duties as a woman. Thus Creon’s error is not that he defends the prerogatives of the state, but that he makes them incompatible with the claims of the gods.

Sophocles’ protagonists were not just embodiments of abstract principles, though. He was also interested in what motivates individuals to defend a particular idea of justice. Creon, it seems, is susceptible to megalomania and paranoia. And as Antigone famously admits in her final speech, her determination to bury her brother was a very personal obsession, born from her uniquely wretched circumstances.

*   *   *


It’s hardly surprising that our intuitive reading of Antigone has changed over more than two millennia. The world we inhabit, and the moral assumptions that guide us through it, are radically different. Moreover, Antigone is one of those works that seem to demand a new interpretation in every epoch. Hegel, for instance, used the play to illustrate his theory of dialectical progress in history. The moral claims of Antigone and Creon – or in Hegel’s scheme, family and state – are both inadequate, but the need to synthesise them cannot be grasped until they have clashed and been found wanting. Simone de Beauvoir also identified both protagonists with flawed outlooks, though in her reading Antigone is a “moral idealist” and Creon a “political realist” – two ways, according to de Beauvoir, of avoiding moral responsibility.

So neither Hegel nor de Beauvoir recognised Antigone as the obvious voice of justice. Then again, they were clearly reading the play with the templates provided by their own moments in history. Hegel’s historical forces belong to the tumultuous conflicts of the early 19th century, in which he had staked out a position as both a monarchist and a supporter of the French Revolution. De Beauvoir’s archetypes belong to Nazi-occupied France – a world of vicious dilemmas in which pacifists, collaborators and resisters had all claimed to act for the greater good, and were all, in her eyes, morally compromised.

Thus, each era tries to understand Antigone using the roles and narratives particular to its own moral universe. And this, I would argue, is a natural part of artistic and political discourse. Such works cannot be quarantined in their original context – they have different resonances for different audiences. Moreover, the question of how one interprets something is always preceded by the question of why one bothers to interpret it at all, and that second question is inevitably bound up with what we consider important in the here and now. Our own moral universe, as I’ve already suggested, is largely defined by the righteousness of resistance and the struggle for freedom. Consequently, works from the past tend to be interpreted according to a narrative where one agent or category of agent suppresses the autonomy of another.

Nonetheless, there are pitfalls here. I think it is important for us to remain aware that our intuitive reading of a play like Antigone is precisely that – our intuitive reading. Otherwise, we may succumb to a kind of wishful thinking. We may end up being so comfortable projecting our values across time that we forget they belong to a contingent moment in history. We might forget, in other words, that our values are the product of a particular set of circumstances, not of some divine edict, and so cannot simply be accepted as right.

Of course we can always try to reason about right and wrong. But if we unthinkingly apply our worldview to people in other eras, we are doing precisely the opposite. We are turning history itself into a vast echo chamber, relieving us of the need to examine or defend our assumptions.

*   *   *


The task of guarding against such myopia has traditionally fallen to academic scholarship. And in a sense, this institution has never been better equipped to do it. Since the advent of New Historicism in the 1980s, the importance of the context in which works are made, as well as the context in which they are read, has been widely acknowledged in the humanities. But this has had a paradoxical effect. The apparent impossibility of establishing any objective or timeless lesson in a play like Antigone has only heightened the temptation to claim it for ourselves.

Consider the approach taken by the influential gender theorist Judith Butler in her book Antigone’s Claim (2000). Using modern psychoanalytic concepts, Butler delves into the murky world of family and sexuality in the play (Antigone is the daughter of the infamously incestuous Oedipus, whose “curse” she is said to have inherited). Butler thus unearths “a classical western dilemma” about the treatment of those who do not fit within “normative versions of kinship.”

However, Butler is not interested in establishing any timeless insights about Antigone. As she makes clear throughout her analysis, she is interested in Antigone “as a figure for politics,” and in particular, for the contemporary politics of resistance. “I began to think about Antigone a few years ago,” she says, “as I wondered what had happened to those feminist efforts to confront and defy the state.” She then sets out her aim of using the play to examine contemporary society, asking

what the conditions of intelligibility could have been that would have made [Antigone’s] life possible, indeed, what sustaining web of relations makes our lives possible, those of us who confound kinship in the rearticulation of its terms?

This leads her to compare Antigone’s plight to that of AIDS victims and those in alternative parenting arrangements, while also hinting at “the direction for a psychoanalytic theory” which avoids “heterosexual closure.”

Butler is clearly not guilty, then, of forgetting her own situatedness in history. However, this does raise the question: if one is only interested in the present, why use a work from the past at all? Butler may well answer that such texts are an integral part of the political culture she is criticising. And that is fine, as far as it goes. But this approach seems to risk undermining the whole point of historicism. For although it does not pretend that people in other times had access to the same ideas and beliefs as we do, it does imply that the past is only worth considering in terms of our own ideas and beliefs. And the result is very similar: Antigone becomes, effectively, a play about us.

In other words, Butler’s way of appropriating the past subtly makes it conform to contemporary values. And in doing so, it lays the ground for that echo chamber I described earlier, whereby works from the past merely serve as opportunities to give our own beliefs a sheen of eternal truth. Indeed, elsewhere in the recent scholarship on Antigone, one finds that an impeccably historicist reading can nonetheless end like this:

Thus is the nature of political activism bent on the expansion of human rights and the extension of human dignity. … Antigone is a charter member of a small human community that is “la Résistance,” wherever it pops up in the history of human civilisation. (My emphasis)

Such statements are not just nonsensical, but self-defeating. However valuable ideas like human rights, human dignity, and resistance might be, they do not belong to “the history of human civilisation.” Moreover, it is impossible to understand their value unless one realises this.

The crucial question here is what we do with the knowledge that values differ across time. There is, perhaps, a natural tendency to see this as demanding an assertion of the ultimate validity of our own worldview. In this sense, our desire to portray Antigone as a figure of resistance recalls those theologians who used to scour classical texts for foreshadowings of Christ. I would argue, however, that we should treat the contingency of our beliefs as a warning against excessive certainty. Ideas are always changing in relation to circumstances, and as such, need to be constantly questioned.

The Forgotten Books of Dorothea Tanning

This article was first published by MutualArt on 4 April 2019

It has often been said that Dorothea Tanning had two careers in her exceptionally long life: first as a visual artist, then as a writer. At the current Tate Modern exhibition of Tanning’s paintings and sculptures, you can read her statement that it was after the death of her husband Max Ernst in 1976 that she “gave full rein to her long felt compulsion to write.” The decades before her own death in 2012 were increasingly dedicated to literature, as she produced two memoirs, a novel, and two well-regarded collections of poetry.

Nonetheless, it would be truer to say that word and image went hand-in-hand throughout Tanning’s career. She published a steady stream of texts during the height of her visual output from the 1940s until the 1970s. Moreover, as the wealth of literary allusions in her paintings suggests, she drew constant inspiration from the hoard of books she and Ernst kept in their home. Tanning told the New York Times in 1995: “All my life I’ve been on the fence about whether to be an artist or writer.”

But the most overlooked aspect of Tanning’s literary-artistic career is her involvement in numerous books of poetry and printmaking in France from the 1950s onwards. These include collaborations with several French authors, and two books of Tanning’s own French poetry and prints – Demain (1963) and En chair et en or (1974).

These works deserve more attention. For one thing, the etchings and lithographs Tanning produced for these books amount to a significant and distinctive part of her oeuvre. According to Clare Elliott, curator of an upcoming show of Tanning’s graphic works at the Menil Collection in Houston, her prints “achieve a variety of visual effects impossible to achieve with other materials. Ranging from dreamlike representation to near total abstraction, they reveal the breadth of her formal innovation.”

What is more, a closer look at Tanning’s bookmaking years can give us a unique perspective on her as an artist – her working methods, her outlook, and her relationship to the movement she was most influenced by, Surrealism.


Book mania

Arriving in Paris in 1950, Tanning discovered a thriving scene around the beau livre, or limited edition artist’s book. “Paris in the first fifty years of our century spawned more beau livres than the rest of the world together,” she recalled in 1983. “To call it mania would not have surprised or displeased anyone.” Mostly these books were collaborations between an artist and a poet, “with mutual admiration as the basic glue that held them together,” as well as an editor who normally bankrolled the project.

Tanning dove straight into this milieu. In 1950 she produced a series of lithographs, Les 7 Périls Spectraux (The 7 Spectral Perils), to accompany text by the Surrealist poet André Pieyre de Mandiargues. Here we can recognise several motifs from Tanning’s early paintings – most notably in Premier peril, where a female figure with a dishevelled mask of hair presses herself against an open door, which is also the cover of a book. But with her combination of visual textures, Tanning achieves a new depth in these images, showing her embrace of the lithographic process in all its layered intricacy.

As the collaborations continued during the 1950s and 60s, Tanning’s printmaking ambitions grew. Like many artists before her, she discovered in etching and lithography a seemingly limitless arena for experimentation, attempting a wide range of techniques and compositions. And in 1963 she went a step further, replacing the poetry of other authors with her own.

Dorothea Tanning, “Frontispiece for Demain” and “Untitled for Demain” (1963). Courtesy of the Dorothea Tanning Foundation.

The result was Demain (Tomorrow), a book of six etchings and a poem in French dispersed across several pages. Though modest in size – just ten centimetres square – it is a punchy work of Surrealism. The poem progresses through a series of menacing images, as language breaks down in the presence of time and memory. It concludes: “The night chews its bone / My house asks itself / And deplores / Tonight, bath of mud / Evening fetish of a hundred thousand years, / My vampire.” The etchings convey a similar sense of dissolution, with vague forms emerging from a fog of aquatint.

Making Demain involved frustrations any printmaker could recognise. Tanning would later describe watching her printer, Georges Visat, “wiping colours on the little plates while I stood by, always imploring for another try. There must have been fifty of these.” She was, however, thrilled by the result: “For my own words my own images – what more could one ask?”

Eleven years later Tanning produced En chair et en or (Of flesh and gold), a more substantial and, in every respect, more accomplished book. Its ten etchings, in which curvaceous, almost-human figures are suspended above landscapes of pale yellow and blue, show us what to expect from the accompanying poem. Everything expresses a sense of poise, a dazzling, enigmatic tension:

Body and face drift
Down with nightfall, unnoticed.
Draw near, draw nearer
Your destination.

Gradually, Tanning introduces notes of violence and desire, culminating in the striking final stanza: “Death on a weekend / Opened the dance like a vein / Flaming flesh and gold.”


Second languages

Dorothea Tanning, “Quoi de plus,” from “En chair et en or” (1974). Courtesy of the Dorothea Tanning Foundation.

By the time of En chair et en or, we can identify some characteristic features in Tanning’s printmaking and poetry. Her etchings typically present coarse background textures, ghostly colours, and loosely organic forms. Her poems, meanwhile, reveal her exposure to the international Surrealist movement during the 1940s. (In “Demain,” for instance, there are direct echoes of the Mexican poet Octavio Paz.)

But this is not the most insightful way to approach Tanning’s books. For what really appealed to her, an English-speaking painter, about printmaking and French poetry was the opportunity to escape familiar forms of expression.

“Much of this work, and etchings that follow, have to do with chance,” she wrote about one of her collaborations, “for so many things can happen to a copper plate, depending on how you treat it, that implications are myriad.” Very few artists master the printmaking process to the degree that they know exactly what they are going to get at the end of it, but for Tanning this was part of its allure. In her comments about printmaking, she often used words like “discovery” and “adventure.” Unpredictability, in other words, was a creative asset.

The same can be said of her poetry in this period. The Irish playwright Samuel Beckett claimed that he wrote in French precisely because he did not know it as well as English, and so was less confined by conventional style and idiom. Likewise, it is striking how raw and immediate Tanning’s French poetry is by comparison with her later work in English.

All of this resonates with what originally drew Tanning to Surrealism – in her often quoted phrase from 1936, “the limitless expanse of POSSIBILITY.” In its earliest and most dramatic phase, an important aim of Surrealism had been for artists to loosen their control over expression, thus allowing more spontaneous, expansive forms of communication and meaning. This is what printmaking and French – both, in a sense, second languages – allowed Tanning to do.

Notes on The Artist’s Studio

The series of paintings known as Concetto spaziale, by the Argentine-Italian artist Lucio Fontana, is one of those moments in art history whose significance is easily overlooked today. It is difficult to imagine how radical these works must have looked during the 1960s: plain white canvases presenting nothing more than one or a few slits where Fontana slashed the surface with a blade. Moreover, as I realised when I reviewed an exhibition featuring Fontana in 2015 (you can read that review here), it is only by considering the atmosphere of post-war Europe that one can grasp how freighted with purpose and symbolism this simple gesture had been.

But there are always new ways of looking at an artwork. The other evening I was visiting some galleries near Piccadilly and found myself, unexpectedly, confronted by one of the Concetto spaziale paintings once more. Only I wasn’t looking at the painting itself, but at a series of photographs that showed Fontana in his studio making it. Where previously there had been the stark aura of an iconic artwork, now there was melodrama and a wry sense of humour. The images, taken by the Italian photographer Ugo Mulas, were arranged in a climactic sequence. First we see Fontana poised at some distance from the canvas, Stanley knife in hand, his tense wrist and neatly folded sleeve suggesting the commencement of a long-anticipated act. There is a mood of ritual silence in the room, heightened by the soft light pouring through a large window. Then Fontana is approaching the canvas uncertainly, and making the first incision on its white surface – a moment pictured first in wide-angle, then close-up. Finally, the deed done, he lingers in a ceremonious bowing posture, the canvas now divided by a metre-long cleft.

Installation shot of Ugo Mulas, Lucio Fontana, L’Attesa, Milano 1-6, 1964 (2019). Modern print. Gelatin silver print on baritated paper. Edition of 8. Courtesy of Robilant+Voena.

These are just some of the photographs Mulas took of artists in their studios during the 1960s and 70s, which can be seen at Robilant+Voena gallery on Dover Street. Much like Fontana’s paintings, Mulas’ photographs require one to step imaginatively backwards in time; they now appear so classical in style, and so gorgeous in tone, that one can overlook their more subtle aspects. In particular, I get the sense Mulas was aware of his role as a myth-maker. His images playfully pander to the romance surrounding the artist’s studio – the setting where, in the popular imagination, unusual individuals go to perform some exotic and mysterious process of magic.

*   *   *


I have always been fascinated by studios, probably because I grew up with one at home. This was my mother’s studio. It was located between the kitchen and my brother’s bedroom, but I was always aware that it was a different kind of room from the others in the house. A place of inspiration, yes: a realm of coffee, bookshelves, and classical music. But also a site of labour, which smelled of turpentine and had a cold cement floor, a place where my old clothes became rags to wipe etching plates. Above all it was (and remains) a very particular setting, shaped by the contingencies of one person’s working life as it had evolved over many years.

Insofar as artists’ studios really are special, mysterious places, it is because of this particularity. This is rarely reflected, though, in the photography and journalism that surrounds them. Rather, studios tend to attract attention according to how well they embody a particular conception of the artist as an outsider, an unconventional or even otherworldly being. One studio that fits this template belongs to the monk-like painter Frank Auerbach, who has worked in the same dank cell in Mornington Crescent more or less every day since 1954 (Auerbach once quipped that age had finally forced him to reduce his working year, from 365 days to 364). Not only is the room cramped and barely furnished, but to the delight of various photographers over the years, Auerbach’s scraping technique has left the floor coated in layer upon layer of calcified paint. This is nothing, however, compared to the iconic lair of Francis Bacon – a disaster zone that resembled a trash-heap more closely than a studio, and captured perfectly Bacon’s persona as a chaotic, doomed madman.

Jorge Lewinski, “Frank Auerbach,” 1965. © The Lewinski Archive at Chatsworth.

Perry Ogden, “Francis Bacon’s 7 Reece Mews studio, London, 1998.”

The fact is, of course, that studios are often highly utilitarian spaces – clean, carefully organised, with most consideration going to practical questions such as storage and lighting. Some artists are messy, but their clutter is not qualitatively different to that which exists in many workspaces. And yet, even the apparently humdrum reality of a studio can produce a mystifying effect. Journalists and visitors often dwell precisely on the most ordinary, relatable aspects of an artist’s working life, thereby implicitly reinforcing the idea that an artist is something other than ordinary. In one feature on “Secrets of the Studio,” for instance, we learn that Grayson Perry likes to “collapse in an armchair and listen to the Archers,” while George Shaw “pretty much work[s] office hours.”

This paradox was observed by Roland Barthes in his wonderful essay “The Writer on Holiday.” After noting the tendency of the press to dwell on such domestic aspects of a writer’s life as their holidays, diet, and the colour of their pyjamas, Barthes concludes:

Far from the details of his daily life bringing nearer to me the nature of his inspiration and making it clearer, it is the whole mythical singularity of his condition which the writer emphasises by such confidences. For I cannot but ascribe to some superhumanity the existence of beings vast enough to wear blue pyjamas at the very moment when they manifest themselves as universal conscience […].

Sometimes artists themselves appear to use this trick. Wolfgang Tillmans’ photograph Studio still life, c, 2014 shows a very ordinary desk spread with several computers, a keyboard, Sellotape, Post-it notes, and so on. There is just a suggestion of bohemia conveyed by the beer bottle, cigarette packs and ashtray. It is tempting to interpret this image, especially when shown alongside Tillmans’ other works, as a subtle piece of self-glorification – a gesture of humility that makes the artist seem all the more remarkable for being a real human being.

Wolfgang Tillmans, “Studio still life, c, 2014.”

*   *   *


We shouldn’t be too cynical, though. The various romantic tropes that surround artists are not always and entirely tools of mystification, and nor do they show, as Barthes suggested, “the glamorous status bourgeois society liberally grants its spiritual representatives” in order to render them harmless. Such “myths” also offer a way of pointing towards, and navigating around, a deeper reality of which we are aware: that artistic production, at least in its modern form, is a very personal thing. This is why we will always have the sense, when seeing or entering a studio, that we are intruders in a place of esoteric ritual.

As I said, the beauty of a studio lies in its particularity. Does this mean, then, that one cannot appreciate a studio without becoming familiar with it? Not entirely. I was recently lent a copy of the architect MJ Long’s book Artists’ Studios, in which she chronicles the numerous spaces she designed for artists during her career. These include some of the most colourful and, indeed, most widely mythologised studios out there. But as an architect, Long is uniquely well placed to tell us the specific practical and personal considerations behind them. As such, she is able to bring out their genuinely poetic aspects without falling into cliché.

That poetry is captured, I think, in some notes left by Long’s husband and partner, Sandy Wilson, to encourage her to write her book. He briefly summarises a few of their studio projects, and the artists who commissioned them, as follows:

Kitaj, scholar-artist worked surrounded by books and the works of his friends. In his studio books lie open on the floor at the foot of each easel like paving stones in a Japanese garden.

Blake works in a sort of wonderland mirroring and embodying his magical mystery world of icons that feed into his imagination.

A dance photographer required a pure vacuum charged with light but no physical sense of place whatsoever.

Auerbach’s studio is the locked cell of the dedicated solitary.

Ben Johnson requires the clinical conditions of the operating theatre shared with meticulous operatives in a planned programme of execution.


Notes on “Why Liberalism Failed”

Patrick Deneen’s Why Liberalism Failed was one of the most widely discussed political books last year. In a crowded field of authors addressing the future of liberalism, Deneen stood out as a lightning rod, thanks to his withering, full-frontal attack on the core principles and assumptions of liberal philosophy. And yet, when I recently went back and read the many reviews of Why Liberalism Failed, I came out feeling slightly dissatisfied. Critics of the book seemed all too able to shrug off its most interesting claims, and to argue instead on grounds more comfortable to them.

Part of the problem, perhaps, is that Deneen’s book is not all that well written. His argument is more often a barrage of polemical statements than a carefully constructed analysis. Still, the objective is clear enough. He is taking aim at the liberal doctrine of individual freedom, which prioritises the individual’s right to do, be, and choose as he or she wishes. This “voluntarist” notion of freedom, Deneen argues, has shown itself to be not just destructive, but in certain respects illusory. On that basis he claims we would be better off embracing the constraints of small-scale community life.

Most provocatively, Deneen contends that liberal societies, while claiming merely to create conditions in which individuals can exercise their freedom, in fact mould people to see themselves and to act in a particular way. Liberalism, he argues, grew out of a particular idea of human nature, which posited, above all, that people want to pursue their own ends. It imagined our natural and ideal condition as that of freely choosing individual actors without connection to any particular time, place, or social context. For Deneen, this is a dangerous distortion – human flourishing also requires things at odds with personal freedom, such as self-restraint, committed relationships, and membership of a stable and continuous community. But once our political, economic, and cultural institutions are dedicated to individual choice as the highest good, we ourselves are encouraged to value that freedom above all else. As Deneen writes:

Liberalism began with the explicit assertion that it merely describes our political, social, and private decision making. Yet… what it presented as a description of human voluntarism in fact had to displace a very different form of human self-understanding and experience. In effect, liberal theory sought to educate people to think differently about themselves and their relationships.

Liberal society, in other words, shapes us to behave more like the human beings imagined by its political and economic theories.

It’s worth reflecting for a moment on what is being argued here. Deneen is saying our awareness of ourselves as freely choosing agents is, in fact, a reflection of how we have been shaped by the society we inhabit. It is every bit as much of a social construct as, say, a view of the self that is defined by religious duties, or by membership of a particular community. Moreover, valuing choice is itself a kind of constraint: it makes us less likely to adopt decisions and patterns of life which might limit our ability to choose in the future – even if we are less happy as a result. Liberalism makes us unfree, in a sense, to do anything apart from maximise our freedom.

*   *   *


Reviewers of Why Liberalism Failed did offer some strong arguments in defence of liberalism, and against Deneen’s communitarian alternative. These tended to focus on material wealth, and on the various forms of suffering and oppression inherent to non-liberal ways of life. But they barely engaged with his claims that our reverence for individual choice amounts to a socially determined and self-defeating idea of freedom. Rather, they tended to take the freely choosing individual as a given, which often meant they failed to distinguish between the kind of freedom Deneen is criticising – that which seeks to actively maximise choice – and simply being free from coercion.

Thus, writing in the New York Times, Jennifer Szalai didn’t see what Deneen was griping about. She pointed out that

nobody is truly stopping Deneen from doing what he prescribes: finding a community of like-minded folk, taking to the land, growing his own food, pulling his children out of public school. His problem is that he apparently wants everyone to do these things

Meanwhile, at National Review, David French argued that liberalism in the United States actually incentivises individuals to “embrace the most basic virtues of self-governance – complete your education, get married, and wait until after marriage to have children.” And how so? With the promise of greater “opportunities and autonomy.” Similarly, Deirdre McCloskey, in a nonetheless fascinating rebuttal of Why Liberalism Failed, jumped between condemnation of social hierarchy and celebration of the “spontaneous order” of the liberal market, without acknowledging that she seemed to be describing two systems which shape individuals to behave in certain ways.

So why does this matter? Because it matters, ultimately, what kind of creatures we are – which desires we can think of as authentic and intrinsic to our flourishing, and which ones stem largely from our environment. The desire, for instance, to be able to choose new leaders, new clothes, new identities, new sexual partners – do these reflect the unfolding of some innate longing for self-expression, or could we in another setting do just as well without them?

There is no hard and fast distinction here, of course; the desire for a sports car is no less real and, at bottom, no less natural than the desire for friendship. Yet there is a moral distinction between the two, and a system which places a high value on the freedom to fulfil one’s desires has to remain conscious of such distinctions. The reason is, firstly, that many kinds of freedom conflict with other personal and social goods; and secondly, that there may come a time when a different system offers more by way of prosperity and security. In both cases, it is important to be able to say what amounts to an essential form of freedom, and what does not.

*   *   *


Another common theme among Deneen’s critics was to question his motivation. His Catholicism, in particular, was widely implicated, with many reviewers insinuating that his promotion of close-knit community was a cover for a reactionary social and moral order. Here’s Hugo Drochon writing in The Guardian:

it’s clear that what he wants… is a return to “updated Benedictine forms” of Catholic monastic communities. Like many who share his worldview, Deneen believes that if people returned to such communities they would get back on a moral path that includes the rejection of gay marriage and premarital sex, two of Deneen’s pet peeves.

Similarly, Deirdre McCloskey:

We’re to go back to preliberal societies… with the church triumphant, closed corporate communities of lovely peasants and lords, hierarchies laid out in all directions, gays back in the closet, women in the kitchen, and so forth.

Such insinuations strike me as unjustified – these views do not actually appear in Why Liberalism Failed – but they are also understandable. For Deneen does not clarify the grounds of his argument. His critique of liberalism is made in the language of political philosophy, and seems to be consequentialist: liberalism has failed, because it has destroyed the conditions necessary for human flourishing. And yet whenever Deneen is more specific about just what has been lost, one hears the incipient voice of religious conservatism. In sexual matters, Deneen looks back to “courtship norms” and “mannered interaction between the sexes”; in education, to “comportment” and “the revealed word of God.”

I don’t doubt that Deneen’s religious beliefs colour his views, but nor do I think his entire case springs from some dastardly deontological commitment to Catholic moral teaching. Rather, I would argue that these outbursts point to a much more interesting tension in his argument.

My sense is that the underpinnings of Why Liberalism Failed come from virtue ethics – a philosophy whose stock has fallen somewhat since the Enlightenment, but which reigned supreme in antiquity and medieval Christendom. In Deneen’s case, what is important to grasp is Aristotle’s linking of three concepts: virtue, happiness, and the polis or community. The highest end of human life, says Aristotle, is happiness (or flourishing). And the only way to attain that happiness is through consistent action in accordance with virtue – in particular, through moderation and honest dealing. But note, virtues are not rules governing action; they are principles that one must possess at the level of character and, especially, of motivation. Also, it is not that virtue produces happiness as a consequence; the two are coterminous – to be virtuous is to be happy. Finally, the pursuit of virtue/happiness can only be successful in a community whose laws and customs are directed towards this same goal. For according to Aristotle:

to obtain a right training for goodness from an early age is a hard thing, unless one has been brought up under right laws. For a temperate and hardy way of life is not a pleasant thing to most people, especially when they are young.

The problem comes, though, when one has to provide a more detailed account of what the correct virtues are. For Aristotle, and for later Christian thinkers, this was provided by a natural teleology – a belief that human beings, as part of a divinely ordained natural order, have a purpose which is intrinsic to them. But this crutch is not really available in a modern philosophical discussion. And so more recent virtue ethicists, notably Alasdair MacIntyre, have shifted the emphasis away from a particular set of virtues with a particular purpose, and towards virtue and purpose as such. What matters for human flourishing, MacIntyre argued, is that individuals be part of a community or tradition which offers a deeply felt sense of what it is to lead a good life. Living under a shared purpose, as manifest in the social roles and duties of the polis, is ultimately more important than the purpose itself.

This seems to me roughly the vision of human flourishing sketched out in Why Liberalism Failed. Yet I’m not sure Deneen has fully reconciled himself to the relativism that is entailed by abandoning the moral framework of a natural teleology. This is a very real problem – for why should we not accept, say, the Manson family as an example of virtuous community? – but one which is difficult to resolve without overtly metaphysical concepts. And in fact, Deneen’s handling of human nature does strain in that direction, as when he looks forward to

the only real form of diversity, a variety of cultures that is multiple yet grounded in human truths that are transcultural and hence capable of being celebrated by many peoples.

So I would say that Deneen’s talk of “courtship norms” and “comportment” is similar to his suggestion that the good life might involve “cooking, planting, preserving, and composting.” Such specifics are needed to refine what is otherwise a dangerously vague picture of the good life.