The Rise and Fall of the Creative Class

This essay was first published at The Pathos of Things newsletter. Subscribe here.

There is nothing inherently creative about vintage furniture, repurposed industrial materials or menus in typewriter font, but if you found yourself in a coffee shop with all of these elements present, then “creative” would be a common way to describe the vibe. Even more so if there was a yoga studio upstairs and a barista with neck tattoos. 

This visual language could also be called trendy or hipster, but the connotations are much the same. It is meant to evoke the imagined lifestyle of an urban creative – someone in or around the arts and media crowd – as it might have looked in Hackney or Williamsburg circa 2010. It signifies an attitude that is cultured but not elitist, cosmopolitan but not corporate, ethical but not boring, laid-back but still aspirational. In its upmarket versions (think more plants, more exotic words on the menu), the “creative” idiom implies a kind of refined hedonism, an artistic appreciation of beautiful and meaningful experiences.

Whether creatives can actually be found in such settings is beside the point, for once a lifestyle has been distilled into aesthetics it can be served to anyone, like an espresso martini. Indeed, the generic symbols of the creative lifestyle – suspended ceiling lights with large bulbs and metal hoods are an obvious example – have now spread everywhere, into Pizza Express restaurants and bankers’ apartments. 

The strange thing is that this triumph of the creative class in the realm of cultural capital has gone hand in hand with its economic evisceration. If you did see an actual creative in our imagined coffee shop – a photographer perhaps, or dare I say a writer – he or she would most likely be working frantically on a laptop, occupied with some form of glorified gig-economy job, or struggling to get a beleaguered business off the ground, or grinding away at a commercial sideline that squeezes out actual creative work.

Everyone wants to buy into the dream of the creative lifestyle, or at least drink cocktails in a place that evokes it, but for most creatives this is as much a fantasy as it is for everyone else.

If there is one institution that can help us understand this state of affairs, it is the Soho House network of private members’ clubs. Founded by the restaurateur Nick Jones, the first Soho House opened in 1995 on London’s Greek Street. It joined a number of exclusive new venues aimed at arts and media professionals, offering, as its website tells us, a place for “like-minded creative thinkers to meet, relax, have fun and grow” – or at least those deemed worthy of membership. In 2003, a Soho House opened in New York’s Meatpacking District, one of the first steps in a dizzying expansion which has seen some forty members’ clubs appear everywhere from West Hollywood to Barcelona, Miami to Mumbai.

In terms of aesthetics, Soho House did a lot to define the “creative” style. Ilse Crawford’s interior design for the New York venue became a landmark of sorts. Ranging over six floors of a converted warehouse, whose raw industrial features were emphasised rather than played down, it announced the hipster affinity for obsolete, forgotten or simply nostalgic spaces. A bedroom where graffiti had been left on the wall was apparently a members’ favourite. Crawford’s furnishings likewise set the trend for combining antiques, modern design classics and found objects in an eclectic approach that tried to be both modern and comfortable.

For all its apparent authenticity and bohemian flavour, this style has since been exported around the world as seamlessly as McDonald’s, not least within the Soho House empire itself. The brand prides itself on giving every venue a local accent, but it has really shown the uncanny way that a design formula can make very different settings look the same.

In my reading, what all this illustrates is the emergence of a new creative elite – film producers and actors, fashion designers and models, publishers and magazine editors, musicians, advertising executives and so on – whose ambition and self-confidence were such that they did not want to merge with the existing circles of privilege. The exclusivity of these members’ clubs, buttressing the special status of “creativity,” was not about keeping the plebs out. It was about drawing a distinction with the philistines of the City of London and Wall Street, and with the stale elitism of Old Boys’ Clubs.

“Unlike other members’ clubs, which often focus on wealth and status,” explained the Soho House website a few years ago, “we aim to assemble communities of members that have something in common: namely, a creative soul.” When they first appeared, the brand’s distinctive aesthetics drew a contrast, above all, with the slick corporate interiors of luxury hotels in the 1990s. “No suits” was both the dress code and a principle for assessing membership applications, and those deemed “too corporate” have on occasion been purged. Another functionally equivalent measure was the “no-assholes rule,” though this did not stop Harvey Weinstein from winning a place on the initial New York membership.

But crucially, there is a wider context for the appearance of this creative elite. The first Soho House opened against the backdrop of rising excitement about the “creative industries,” a term adopted by Britain’s New Labour government in 1998. This idea was hopelessly baggy, grouping together advertising and architecture with antiques and software development. Nonetheless, it distilled a sense that, for the post-industrial societies of the west, the future belonged to those adept in the immaterial realms of communication, meaning and desire. In technical terms, the organising principle for this vision was to be the creation and control of intellectual property.  

Economists insisted that, beyond a certain income threshold, people wanted to spend their money on artistic and cultural products. A supporting framework of information technology, university expansion, globalisation and consumer-driven economic growth was coming into view. And a glimpse of this exciting future had already appeared in the Cool Britannia of the 1990s, with its iconoclastic Young British Artists, its anthemic pop bands, its cult films and edgy fashion designers. 

Institutions like Soho House provided a new language of social status to express these dreams of a flourishing creative class, a language that was glamorous, decadent, classy and fun. The song Ilse Crawford used when pitching her interior design ideas – Jane Birkin and Serge Gainsbourg’s 69 Année Érotique – captures it nicely, as does a British designer’s attempt to explain the club to the New York Times: “Think of the Racquet Club but with supermodels walking through the lobby.” The appeal of this world still echoes in the fantasy of the creative lifestyle, and surely played a part in persuading so many in my own generation, the younger millennials, to enter creative vocations.

So what happened? Put simply, the Great Financial Crisis of 2007-8 happened, and then the Great Recession, rudely interrupting the dreams of limitless growth to which the hopes of the creative industries were tied. In the new economy that formed from the wreckage, there was still room for a small elite who could afford Soho House’s membership fees, but legions of new graduates with creative aspirations faced a very different prospect. 

Theirs was a reality of unpaid internships and mounting student debts, more precarious and demanding working lives, a lot of freelancing, poor income prospects, and reliance on family or second jobs to subsidise creative careers. Of course some of the pressures on young people in the arts and media, like high living costs and cuts in public funding, vary from place to place: there is a reason so many moved from the US and UK to Berlin. By and large though, the post-crash world starved and scattered the creative professions, squeezing budgets and forcing artists into the grim “independence” of self-promotion on digital platforms.

One result of this was a general revulsion at capitalism, which partly explains why artisan ideals and environmentalism became so popular in creative circles. But despite this scepticism, and even as career prospects withered, the creative lifestyle maintained its appeal. In fact, the 2010s saw it taking off like never before.

Young people couldn’t afford houses, but they had ready access to travel through EasyJet and AirBnB, to content through Spotify and Netflix, to a hectic nightlife through cheap Ubers, and they could curate all of these experiences for the world via Instagram. They could, in other words, enjoy a bargain version of the cultured hedonism that Soho House offered its members. The stage-sets for this lifestyle consumerism were the increasingly generic “creative” spaces, with their exposed brick walls and Chesterfield armchairs, that multiplied in fashionable urban districts around the world.

Perhaps the best illustration of this perverse situation is the development of the Soho House empire. Alongside its exclusive members’ clubs, the company now owns a plethora of trendy restaurant chains for the mass market. You can also get a taste of the Soho House lifestyle through its branded cosmetics, its interior design products, or a trip to one of its spas. With new membership models, freelancers can take their place among the massed vintage chairs and lamps of the brand’s boutique workspaces. There was even talk of going into student accommodation.

And so an institution that symbolised the promise of a flourishing creative class now increasingly markets the superficial trappings of success. As a kind of compensation for the vocational opportunities that never materialised, creatives can consume their dreams in the form of lifestyle, though even this does not make them special. The 2010s were also the decade when the corporate barbarians broke into the hipster citadels, occupying the clothes, bars and apartments which the creative class made desirable, and pricing them out in the process.

In one sense, though, the “creative industries” vision was correct. Intellectual property really is the basis for growth and high incomes in today’s economy; see, for instance, the longstanding ambition of the Chinese state to transition “from made in China to designed in China.” But the valuable intellectual property is increasingly concentrated in the tech sector. It is largely because IT and software are included that people can still claim the creative industries are an exciting area of job creation.

The tech world is, of course, a very creative place, but it represents a different paradigm of creativity to the arts and media vocations we inherited from the late-20th century. We are living in a time when this new creativity is rapidly eclipsing the old, as reflected by the drop in arts and humanities students, especially in the US and UK, in favour of STEM subjects. Whether tech culture will also inherit the glamour of the declining creative milieu I can’t say, but those of us bred into the old practices can only hope our new masters will find some use for us.


Design for Dictators

This essay was first published at The Pathos of Things newsletter. Subscribe here.

The 1937 World Fair in Paris was the stage for one of the great symbolic confrontations of the 20th century. On either side of the unfortunately titled Avenue of Peace, with the Eiffel Tower in the immediate background, the pavilions of Nazi Germany and the Soviet Union faced one another. The former was a soaring cuboid of limestone columns, crowned with the brooding figure of an eagle clutching a swastika; the latter was a stepped podium supporting an enormous statue of a man and woman holding a hammer and sickle aloft.

This is, at first glance, the perfect illustration of an old Europe being crushed in the antagonism of two ideological extremes: Communism versus National Socialism, Stalin versus Hitler. But on closer inspection, the symbolism becomes less clear-cut. For one thing, there is a striking degree of formal similarity between the two pavilions. And when you think about it, these are strange monuments for states committed, in one case, to the glorification of the German race, and in the other, to the emancipation of workers from bourgeois domination. As was noted by the Nazi architect Albert Speer, who designed the German structure, both pavilions took the form of a simplified neoclassicism: a modern interpretation of ancient Greek, Roman, and Renaissance architecture.

These paradoxes point to some of the problems faced by totalitarian states of the 1920s and 30s in their efforts to use design as a political tool. They all believed in the transformative potential of aesthetics, regarding architecture, uniforms, graphic design and iconography as means for reshaping society and infusing it with a sense of ideological purpose. All used public space and ceremony to mobilise the masses. Italian Fascist rallies were politicised total artworks, as were those of the Nazis, with their massed banners, choreographed movements, and feverish oratory broadcast across the nation by radio. In Moscow, revolutionary holidays included the ritual of crowds filing past new buildings and displays of city plans, saluting the embodiments of Stalin’s mission to “build socialism.”

The beginnings of all this, as I wrote last week, can be seen in the Empire Style of Napoleon Bonaparte, a design language intended to cultivate an Enlightenment ethos of reason and progress. But whereas it is not surprising that, in the early 19th century, Napoleon assumed this language should be neoclassical, the return to that genre more than a century later revealed the contradictions of the modernising state more than its power.

One issue was the fraught nature of transformation itself. The regimes of Mussolini, Hitler and Stalin all wished to present themselves as revolutionary, breaking with the past (or at least a rhetorically useful idea of the past) while harnessing the Promethean power of mass politics and technology. Yet it had long been evident that the promise of modernity came with an undertow of alienation, stemming in particular from the perceived loss of a more rooted, organic form of existence. This tension had already been ingrained in modern design through the medieval nostalgia of the Gothic revival and the arts and crafts movement, currents that carried on well into the 20th century; the Bauhaus, for instance, was founded on the model of the medieval guild.

This raised an obvious dilemma. Totalitarian states were inclined to brand themselves with a distinct, unified style, in order to clearly communicate their encompassing authority. But how can a single style represent the potency of modernity – of technology, rationality and social transformation – while also compensating for the insecurity produced by these same forces? The latter could hardly be neglected by regimes whose first priority was stability and control.

Another problem was that neither the designer nor the state can choose how a given style is received by society at large. People have expectations about how things ought to look, and a framework of associations that informs their response to any designed object. Influencing the public therefore means engaging it partly on its own terms. Not only does this limit what can be successfully communicated through design, it raises the question of whether communication is even possible between more radical designers and a mass audience, groups who are likely to have very different aesthetic intuitions. This too was already clear by the turn of the 20th century, as various designers who tried to develop a socialist style, from William Morris to the early practitioners of art nouveau in Belgium, found themselves working for a small circle of progressive bourgeois clients.

Constraints like these decided much about the character of totalitarian design. They were least obvious in Mussolini’s Italy, since the Fascist mantra of restoring the grandeur of ancient Rome found a natural expression in modernised classical forms, the most famous example being the Palazzo della Civiltà Italiana in Rome. The implicit elitism of this enterprise was offset by the strikingly modern style of military dress Mussolini had pioneered in the 1920s, a deliberate contrast with the aristocratic attire of the preceding era. The Fascist blend of ancient and modern was also flexible enough to accommodate more radical designers such as Giuseppe Terragni, whose work for the regime included innovative collages and buildings like the Casa del Fascio in Como.

The situation in the Soviet Union was rather different. The aftermath of the October Revolution of 1917 witnessed an incredible florescence of creativity, as artists and designers answered the revolution’s call to build a new world. But as Stalin consolidated his dictatorship in the early 1930s, he looked upon cultural experimentation with suspicion. In theory Soviet planners still hoped the urban environment could be a tool for creating a socialist society, but the upheaval caused by Stalin’s policies of rapid industrial development and the new atmosphere of conservatism ultimately cautioned against radicalism in design.

Then there was the awkward fact that the proletariat on whose behalf the new society would be constructed showed little enthusiasm for the ideas of the avant garde. When it came to building the industrial city of Magnitogorsk, for instance, the regime initially requested plans from the German Modernist Ernst May. But after enormous effort on May’s part, his functionalist approach to workers’ housing was eventually rejected for its abstraction and meanness. As Stephen Kotkin writes, “for the Soviet authorities, no less than many ordinary people, their buildings had to ‘look like something,’ had to make one feel proud, make one see that the proletariat… would have its attractive buildings.”

By the mid-1930s, the architectural establishment had come to the unlikely conclusion that a grandiose form of neoclassicism was the true expression of Soviet Communism. This was duly adopted as Stalin’s official style. Thus the Soviet Union became the most reactionary of the totalitarian states in design terms, smothering a period of extraordinary idealism in favour of what were deemed the eternally valid forms of ancient Greece and Rome. The irony was captured by Stalin’s decision to demolish one of the most sacred buildings of the Russian Orthodox Church, the Cathedral of Christ the Saviour in Moscow, and erect in its place a Palace of the Soviets. Having received proposals from some of Europe’s most celebrated progressive architects, the regime instead chose Boris Iofan to build a gargantuan neoclassical structure topped by a statue of Lenin (the project was abandoned some years later). Iofan himself had previously worked for Mussolini’s regime in Libya.

If Stalinism ended up being represented by a combination of overcrowded industrial landscapes and homages to the classical past, this was more stylistic unity than Nazi Germany was able to achieve. Hitler’s regime was pulled in at least three directions, between its admiration for modern technology, its obsession with the culture of an imagined Nordic Volk (which, in a society traumatised by war and economic ruin, functioned partly as a retreat from modernity), and Germany’s own tradition of monumental neoclassicism inherited from the Enlightenment. Consequently there was no National Socialist style, but an assortment of ideological solutions in different contexts.

Despite closing the Bauhaus on coming to power in 1933, the Nazis imitated that school’s sleek functionalist aesthetic in their industrial and military design, including the Volkswagen cars designed to travel on the much-vaunted Autobahn. Yet the citizens who worked in these modern factories were sometimes provided housing in the Heimatstil, an imitation of a traditional rural vernacular. Propaganda could be printed in a Gothic Blackletter typeface or broadcast through mass-produced radios. But the absurdity of Nazi ideology was best demonstrated by the fact that, like Stalin, Hitler could not conceive of a monumental style to embellish his regime that did not continue in the cosmopolitan neoclassical tradition inspired by the ancient Mediterranean. The cut-stone embodiments of the Third Reich, including Hitler’s imagined imperial capital of Germania, were projected in the stark neoclassicism of Speer’s pavilion for the Paris World Fair. It was only in the regime’s theatrical public ceremonies that these clashing ideas were integrated into something like a unified aesthetic experience, as the goose-stepping traditions of Prussian militarism were updated with Hugo Boss uniforms and the crypto-Modernist swastika banner.

Of course it was not contradictions of style that ended the three classic totalitarian regimes; it was the destruction of National Socialism and Fascism in the Second World War, and Stalin’s death in 1953. Still, it seems safe to say that no state after them saw in design the same potential for a transformative mass politics. 

Dictatorships did make use of design in the later parts of the 20th century, but that is a subject for another day. As in the western world, they were strongly influenced by Modernism. A lot of concrete was poured, some of it into quite original forms – in Tito’s Yugoslavia for instance – and much of it into impoverished grey cityscapes. Stalinist neoclassicism continued sporadically in the Communist world, and many opulent palaces were constructed, in a partial reversion to older habits of royalty. Above all though, the chaos of ongoing urbanisation undermined any pretence of the state to shape the aesthetic environment of most of its citizens, a loss of control symbolised by the fate of the great planned capitals of the 1950s, Le Corbusier’s Chandigarh and Lúcio Costa’s Brasilia, which overflowed their margins with satellite cities and slums.

In the global market society of recent decades, the stylistic pluralism of the mega-city is the overwhelming pattern (or lack of pattern), seen even in the official buildings of an authoritarian state like China. On the other hand, I’ve recently argued elsewhere that various repressive regimes have found a kind of signature style in the spectacular works of celebrity architects, the purpose of which is not to set them apart but to confirm their rightful place in the global economic and financial order. But today the politics of built form feel like an increasingly marginal leftover from an earlier time. It has long been in the realm of media that aesthetics play their most important political role, a role that will only continue to grow.


The Lost Magic of the Seas

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Many British people will hear about Felixstowe for the first time this month, thanks to a planned workers’ strike that promises yet more economic pain. Located on the Suffolk coast, Felixstowe is the site of the UK’s biggest container port; almost half of the goods coming and going from our shores pass through here, stowed away in brightly coloured shipping containers that resemble enormous Lego bricks.

The absence of Felixstowe from the national vocabulary speaks volumes about the era we live in. Britain is an island after all, and its various port towns have been central to its history for centuries. Now we are more dependent on the sea than ever (around ninety percent of the world’s traded goods travel by ship), but we barely realise it.  

So what happened? That is the question I want to consider today, with the help of David Abulafia’s The Boundless Sea, an epic history of human activity on the ocean. One of the themes in this book is the relationship between the intimate and the global: how our sense of what is valuable or important is tied up with our impressions of the world at large. 

Container ports appear in the final, slim section of The Boundless Sea, where Abulafia describes the disappearance, since the 1950s, of the ancient maritime patterns he has detailed for some 900 pages. “By the beginning of the 21st century,” he writes, “the ocean world of the last four millennia had ceased to exist.”

Given the dramatic nature of this change – a mass extinction of seafaring cultures around the world – the treatment is strikingly brief. Then again, this is a useful reminder that modernity is a tiny slice of time containing enormous transformations.

Container ports symbolise this rupture from the past: mechanised coastal nodes where huge vessels, each bearing thousands of standardised containers, load and unload goods from around the world. In contrast to the lively port towns that litter The Boundless Sea, container ports “are not centres of trade inhabited by a colourful variety of people from many backgrounds, but processing plants in which machinery, not men, do the heavy work and no one sees the cargoes… sealed inside their big boxes.” Felixstowe, says Abulafia, is “a great machine.”


Who crossed the oceans before the container ships did? Polynesian navigators explored the vastness of the Pacific over millennia, with only the stars for a compass. Bronze Age Egyptians ventured down the Red Sea in search of frankincense and myrrh. Merchants in sewn-plank boats spread Buddhism and Islam in the southern Indian Ocean, even as Vikings set out from their Greenland farmsteads in search of narwhal tusks. In the early modern era, pirates, traders and profit-hungry explorers swarmed the coasts of Africa and the Americas. These examples are just a drop in the ocean of Abulafia’s sweeping narrative. 

But despite its enormous scope, there is a golden thread running through this book, uniting different eras and pulling continents together: the human desire for rare, beautiful, and exceptionally useful things.

The main protagonists of maritime history are merchants, since buying and selling has been the most common reason to cross the seas. But what is difficult to grasp today, when even the most mundane products have supply lines spanning the oceans, is the special value which has often been attached to seaborne goods, especially before the 18th century. Some cities, most famously Rome, did rely on short-distance shipping for basic needs like food. And some products, like English wool or Chinese ceramics, were crossing the water in large volumes centuries ago. But generally the risks and expenses of taking to sea, especially over large distances, demanded that merchants focus on the most sought-after goods. And conversely, goods were particularly precious if they could only be delivered by ship.

So seaborne cargoes show us what was considered valuable in the places they docked, or at least among the elites of those places. The human history of the oceans is in large part a catalogue of highly prized things: ornate weapons and exotic animals, spices and textiles, materials like sandalwood and ivory, or foodstuffs like honey, oil and figs. Of course that catalogue also includes human beings reduced to the status of objects, such as eunuchs, performers and slaves.

If the value of such things was generally financial for merchants, it took many forms in the cultures where they arrived. Before the ocean could be reliably traversed with steamships and (eventually) aeroplanes, foreign products bore the mystery of unknown lands. They often became tokens of social status, symbols of spiritual significance, or preferred forms of sensual pleasure and beauty. Ivory from African elephants and North Atlantic walruses was a treasured material for religious sculpture in medieval Europe, just as red Portuguese cloth was prized by West African elites in the 17th century.

This traffic in desirable objects made the world we know today. The European expansion that began in the late-15th century was driven by the prospect of delivering expensive goods in ever-larger quantities, making them accessible to an ever-larger market. These included products only available in East Asia, like silk, spices and high-quality ceramics, and those that could only be produced with slave labour in tropical climates, such as sugar, coffee and tobacco.

Once the Spanish had established a Pacific route between the Americas and the Philippines, the first truly global networks appeared. The volume of maritime trade began to grow, and one of the foundations of modern capitalism was in place. Abulafia aptly describes Chinese junks arriving in Spanish Manila as “the 16th century equivalent of a floating department store.” Among the items in their holds were “linen and cotton cloth, hangings, coverlets, tapestries, metal goods including copper kettles, gunpowder, wheat flour, fresh and preserved fruits, decorated writing cases, gilded benches, live birds and pack animals.”

But no less dramatic than the growing movement of goods, people and ideas was the emergence, for the first time, of a global consciousness. This is strikingly visualised by the maps that accompany each of the fifty-one chapters of The Boundless Sea. In the first half of the book, these maps show the relatively small regions in which maritime connections existed, with the exception of the world’s oldest trans-oceanic network in the Indian Ocean. In the second half, the maps zoom dizzyingly outwards, eventually incorporating the entire world. 

That world map is something we take for granted in an era of instant communication and accessible satellite imagery, but for most of history, huge swathes of the globe were completely unknown to any given group of people. To be fully aware of our species’ planetary parameters marks nothing less than a revolution in how human beings understand themselves. And one of the driving forces behind that revolution was the ambition to bring desirable (and profitable) things from across the ocean. 

But if trade underpinned seafaring ways of life throughout history, it finally led to their extinction. More and more shipping did not just make formerly exotic goods commonplace, it eventually made most states integrate their economies into a global marketplace, so that seafaring became more like a conveyor belt than a culture. This culminated in the container ships that now have the oceans almost to themselves, their efficiencies of scale rendering other forms of seaborne trade obsolete.

In the age of the container, most products do not even come from a particular place. They are devised, extracted, processed, manufactured and assembled in many different places, so as to achieve the lowest cost. Even things that do come from distant lands no longer have the same aura of the unfamiliar, since the world is now almost entirely visible through imagery and media. 

And that is where this story provides an important insight into the way we design, exchange and value objects today. In consumer societies, enormous resources are devoted to engineering desire, by making products appear uncommon and exclusive. We are used to thinking of this practice as peculiarly modern, and in many ways it is. But maybe we should also see it as an attempt to recreate something of the lost value that, for most of human history, belonged to things from across the ocean.


The Consolations of Green Design

This essay was first published at The Pathos of Things newsletter. Subscribe here.

I recently found myself browsing a Financial Times feature about “great tech for greener living,” a selection of stylish items for the principled customer. They included an oak iPhone stand sustainably crafted by Polish artisans (holding your phone “at a perfect 25-degree tilt”); wireless earphones with a wood inlay by House of Marley, the eco-friendly studio founded by Bob Marley’s son Rohan; and an app, Ethy, that audits brands for their environmental credentials.

This is a good snapshot of the environmental consciousness that has emerged among upmarket consumers. They still want fashionable, functional and beautiful products, but these qualities are no longer enough. The casual pillaging of the planet that once lay concealed behind the shiny exterior of consumer goods is gradually coming into focus, so that every object now carries the risk of moral contamination. The devil, we have learned, is in the detail: “Even the Scandinavian-style minimalist interiors that seem so pure and clean,” writes sustainability consultant Edwin Datschefski, have a “hidden ugliness – formaldehyde in the plywood and mdf, hexavalent chromium pollution from tanning leather, and damage to communities and the landscape from mining the pigments used in white paint.”

And designers are more than happy to remove that taint of evil. Increasingly, the green ethos is providing design with a sense of mission not seen since the Modernist era of the 1920s-60s. With Modernism, the goal was to harness the power of mass-production to improve the material and aesthetic conditions of ordinary people. For green design, it is to minimise the environmental damage, as well as the human exploitation, caused by a product in each stage of its lifecycle: materials, supply, manufacturing, use and disposal. The two movements share a vision of design as a moral crusade, as well as a certain phobic quality; green designers tend to avoid any suggestion of industry and labour with the same fastidiousness that Modernists applied to cleanliness and hygiene.

This sense of purpose has delivered some notable achievements in the 21st century. Most obviously, green design has consistently generated ingenious new materials and methods, from timber skyscrapers and lampshades made of sugar to the use of mycelium, a fungal substrate, for 3D-printed architectural elements. This year’s winners of the Earthshot sustainable design prize include a seaweed-based alternative to plastic packaging, and a flat-pack greenhouse that will allow small-scale farmers to produce higher yields using much less water. Green designers have also shown an interest in humanising production, preferring to use less alienating forms of labour and trying to integrate aspects of local heritage from the regions where they work.

Last but not least, green design is good at artistic propaganda. Its back catalogue is full of works that communicate the ideals of environmentalism in evocative and inspiring ways, such as Stuart Haygarth’s chandelier made from recycled prescription glasses, or Tomas Gabzdil Libertiny’s extraordinary honeycomb vases, each of which is manufactured by bees inside a hive over the course of a week.

Yet there is often an air of unreality about green design, a not-quite-right feeling that starts to nag at you the more you think about it. The problem is most apparent in the grand philosophical ambitions that frequently emanate from the movement. According to its theorists, the mission of green design is nothing less than the transformation of the relationship between humanity and nature, rejecting the modern (and Modernist) project of shaping the world for our own ends and recognising ourselves as natural and ecologically limited beings. A few examples from the archives of Domus magazine will give a sense of this discourse. In 1997 one author demanded a “realisation that man will be able to sustain himself only if the self-regulating ecosystem of the universe continues and is not disrupted by man’s intervention.” More recently, former MoMA design director Emilio Ambasz told the magazine that “Building inevitably changes Nature… into a human-made nature. The goal should be to reduce and, if possible, to compensate for our intrusion in the Vegetal Kingdom.” Finally, consider the words of the eminent furniture design and research duo Formafantasma:

sustainability is a strong utopia because it goes beyond modernity. It’s remote from twentieth-century culture and fully inserted in our new way of understanding our relationship to nature. […] Contemporary civilisation has a growing awareness that we can continue to live only if we work together with other living beings. As designers, but above all as human beings, we have to take care not only of ourselves, but all the other species on the planet. 

All of this sounds excellent, but there is a yawning gap between these lofty aspirations and what green design actually does for the most part, which is to develop marginal alternatives, communicate ideas, and as that Financial Times feature suggests, offer boutique products to those who can afford ethics as a lifestyle choice. What to make of this discrepancy? It raises the possibility that green design has become trapped in a comfortable role which is less about changing the world than legitimising a consumer culture which is really not very green. With eye-catching sustainable product lines and utopian language, big brands can trumpet their green ambitions even as they keep plying their destructive trade in garments, furniture and cars. Occasionally buying eco-friendly goods is an excellent way to feel better about all the other things you buy. It’s almost like the indulgences sold by the medieval church: pay a bit more, fear a bit less for your soul.

There is surely some truth in this cynical interpretation, although I wouldn’t pin the blame on the designers. Like all of us, they have to reconcile many conflicting desires in their lives, including the desire for financial security and for success in their craft. Developing a practice with integrity is admirable, even if it can only serve a small audience. In any case, there is a more generous and, I think, equally plausible way of understanding the role of green design.

The burden of living in a complex society is the knowledge of one’s powerlessness to change the systems in which one is trapped. Reducing the environmental impact of our material culture is perhaps the ultimate example of this, since it ultimately hinges on countless technical issues. At scale, improvements tend to come less from green design than from the greening of design, or techniques that do better than the alternatives without fully solving the problem; architecture that passively regulates temperature, for instance, or electric cars. Progress depends on questions such as: will the more sustainable fibres being developed by Scandinavian companies become a viable alternative to cotton? Will electricity ever be capable of replacing fossil fuels in the most energy-intensive manufacturing processes? How much can we reduce the CO2 emissions associated with cement? This trajectory is bound to be slow, messy, frustrating, tragic, and uncertain of success. But for the time being, it’s all we’ve got. 

Against this background, green design can be seen as a kind of informal arrangement between designers and consumers that allows each party to express ideals reality cannot accommodate. These include hope, imagination, and above all responsibility. You could say this is a fiction, but as long as no one mistakes it for an answer to the world’s problems, it seems like a valuable fiction. Besides, it’s better than just making and buying more crap.


How the Internet Turned Sour: Jon Rafman and the Closing of the Digital Frontier

This essay was first published by IM1776 on 17th August 2021.

A tumble-drier is dragged out into someone’s garden and filled with something heavy — a brick perhaps. After setting it spinning, a figure in a camouflage jacket and protective face visor retreats from the camera frame. Immediately the machine begins to shudder violently, and soon disintegrates as parts fly off onto the surrounding lawn. 

This is the opening shot of Mainsqueeze, a 2014 video collage by the Canadian artist Jon Rafman. What comes after is no less unsettling: a young woman holds a small shellfish, stroking it affectionately, before placing it on the ground and crushing it slowly under her heel; an amateur bodybuilder, muscles straining grotesquely, splits a watermelon between his thighs. 

Rafman, concerned about the social and existential impact of technology on contemporary life, discovered these and many other strange performances while obsessively trawling the subaltern corners of the internet — communities of trolls, pranksters and fetishists. The artist’s aim, however, isn’t to ridicule these characters as freaks; on the contrary, he maintains: “The more marginal, the more ephemeral the culture is, the more fleeting the object is… the more it can actually reflect and reveal ‘culture at large.’” What looks at first like a glimpse into the perverse fringes is really meant to be a portrait of online culture in general: a fragmented world of niche identities and uneasy escapism, where humor and pleasure carry undercurrents of aggression and despair. With such an abundance of stimulation, it’s difficult to say where satisfaction ends and enslavement begins.

Even as we joke about the pathologies of online life, we often lose sight of the depressing arc the internet revolution has followed during the past decade. It’s impossible to know exactly what lies behind the playful tone of Twitter and the carefree images of Instagram, but judging by the personal stories we hear, there’s no shortage of addiction (to social media, porn, smartphones), identity crisis, and anxiety about being judged or exposed. It seems much of our online existence is now characterized by the same sense of hyper-alert boredom, claustrophobia and social estrangement that Rafman found at the margins of the internet years ago.

Indeed, the destructive impulses of Rafman’s trolls seem almost quaint by comparison to the shaming and malicious gossip we take for granted on social media. And whereas a plurality of outlooks and personalities was once the glory of the internet, today every conceivable subject, from art and sports to haircuts, food, and knitting, is reified as a divisive issue within a vast political metanarrative.

In a somewhat ironic twist, last year Rafman himself was dropped or suspended by numerous galleries following accusations of inappropriate sexual behavior, leveled through the anonymous Instagram account Surviving the Artworld (which publishes allegations of abusive behavior in the art industry). The accusers say they felt taken advantage of by the artist; Rafman insists that there was a misunderstanding. It’s always hard to know what to make of such cases, but that social media now serves as a mechanism for this kind of summary justice seems symptomatic of the social disintegration portrayed in works like Mainsqueeze.

Even if these accusations mark the end of Rafman’s career, his efforts to document online culture now seem more valuable than ever. His art gives us a way of thinking about the internet and its discontents that goes beyond manipulative social media algorithms, ideological debasement or the culture wars. The artist’s work shows the evolution of the virtual realm above all as a new chapter of human experience, seeking to represent the structures of feeling that made this world so enticing and, ultimately, troubled.

The first video by Rafman I came across reminded me of Swift’s Gulliver’s Travels. Begun in 2008, the visionary Kool-Aid Man in Second Life consists of a series of tours through the virtual world platform Second Life, where users have designed a phantasmagorical array of settings in which their avatars can lead, as the name suggests, another life. In the video, our guide is Rafman’s own avatar, the famous Kool-Aid advertising mascot (a jug of red liquid with a weird rictus grin) — a protagonist that reminds us we’ve entered an era where, as Rafman puts it, “different symbols float around equally and free from the weight of history.” For the entire duration, Kool-Aid Man wanders around aimlessly in a surreal, artificial universe, sauntering through magical forests and across empty plains, through run-down cityscapes and futuristic metropolises, placidly observing nightclub dance floors, ancient temples, and the endless stages where the denizens of Second Life perform their sexual fantasies.

Kool-Aid Man in Second Life is best viewed against the backdrop of the great migration onto the internet which started in the mid-2000s, facilitated by emerging tech giants like Amazon, Google and Facebook. For the great majority of people, this was when the internet ceased being merely a toolbox for particular tasks and became part of everyday life (the art world jargon for this was ‘post-internet’). The artwork can be seen as a celebration of the curiosity, fun, and boundless sense of possibility that accompanied this transition. Humanity was stepping en masse out of the limits of physical space, and what it found was both trivial and sublime: a kitsch world of selfies and cute animals as well as effortless new forms of association and access to knowledge. The euphoric smile of Kool-Aid Man speaks to the birth of online mass culture as an innocent adventure.

Similar themes also appear in Rafman’s more famous (and ongoing) early work The Nine Eyes of Google Street View, in which the artist collects peculiar images captured by Google Maps’ vehicles. Scenes include a magnificent stag bounding down a coastal highway, a clown stepping into a minibus, a lone woman breastfeeding her child in a desolate landscape of dilapidated buildings. As in Rafman’s treatment of Second Life, such eclectic scenes are juxtaposed to portray the internet as an emotional voyage of discovery, marked by novel combinations of empathy and detachment, sincerity and irony, humour and desire. But in hindsight, no less striking than the spirit of wonder in these works are the ways they seem to anticipate the unravelling of online culture.

If there’s something ominous about the ornate dream palaces of Second Life, it comes from our intuition that the stimulation and belonging offered by this virtual community are also a measure of alienation. The internet gives us relations with people and things that have the detached simplicity of a game, which only become more appealing as we find niches offering social participation and identity. But inevitably, these ersatz lives become a form of compulsive retreat from the difficulties of the wider world and a source of personal and social tension. Rafman’s Second Life is a vivid metaphor for how virtual experience tempts us with the prospect of a weightless existence, one that can’t possibly be realised and must, ultimately, lead to resentment.

Equally prescient was Rafman’s emphasis on the breakdown of meaning, as words, images, and symbols of all kinds become unmoored from any stable context. Today, all ‘content’ presents itself much like the serendipitous scenes in The Nine Eyes of Google Street View – an arbitrary jumble of trivial and profound, comic and tragic impressions, stripped of semantic coherence and flattened into passing flickers of stimulation. Symbols are no longer held firm in their meaning by clearly defined contexts where we might expect to find them, but can be endlessly mixed and refashioned in the course of online communication. This has been a great source of creativity, most obviously in the form of memes, but it has also produced neurosis. Today’s widespread sensitivity to the alleged violence concealed in language and representation, and the resulting desire to police expression, seems to reflect deep anxiety about a world where nothing has fixed significance.

These more ominous trends dominate the next phase of Rafman’s work, where we find pieces like Mainsqueeze. Here Rafman plunges us into the sordid underworld of the internet, a carnival of adolescent rebellion and perverse obsessions. A sequence of images showing a group of people passed-out drunk, one with the word “LOSER” scrawled on his forehead, captures the overall tone. In contrast to Rafman’s Second Life, where the diversity of the virtual realm could be encompassed by a single explorer, we now find insular and inaccessible communities, apparently basking in an angry sense of estrangement from the mainstream of culture. Their various transgressive gestures — swastikas, illicit porn, garish make-up — seem tinted with desperation, as though they’re more about finding boundaries than breaking them.

This portrayal of troll culture has some unsettling resonances with the boredom and anxiety of internet life today. According to Rafman himself, however, the wider relevance of these outcasts concerns their inability to confront the forces shaping their frustrated existence. Trapped in a numbing cycle of distraction, their subversive energy is channelled into escapist rituals rather than any kind of meaningful criticism of the society they seem to resent. Seen from this perspective, online life comes to resemble a form of unknowing servitude, a captive state unable to grasp the conditions of its own deprivation.

All of this points to the broader context which is always dimly present in Rafman’s work: the architecture of the virtual world itself, through which Silicon Valley facilitated the great migration onto the internet over the past fifteen-odd years. In this respect, Rafman’s documentation of Second Life becomes even more interesting, since that platform really belonged to the pre-social media cyberpunk era, making the work a eulogy for the utopian ethos of the early internet, with its dreams of transcending the clutches of centralised authority. The power that would crush those dreams is represented, of course, by Rafman’s Google Street View car — the outrider of big tech on its endless mission to capitalise on all the information it can gather.

But how does this looming corporate presence relate to the disintegration of online culture traced by Rafman? The artist’s comments about misdirected critical potential suggest one depressing possibility: the internet is a power structure which sustains itself through our distraction, addiction and alienation. We might think of Huxley’s Brave New World, but with shitposting and doom-scrolling instead of the pleasure-drug soma. Rafman’s most recent animation work, Disaster under the Sun, seems to underscore this dystopian picture. We are given a God’s-eye perspective over a featureless grey landscape, where crowds of faceless human forms attack and merge into one another, their activities as frantic and vicious as they are lacking any apparent purpose.

It’s certainly true that the internet giants have gained immense wealth and power while overseeing the profound social and political dislocations of the last decade. But it’s also true that there are limits to how far they can benefit from anarchy. This might explain why we are now seeing the emergence of something like a formal constitutional structure to govern the internet’s most popular platforms, such as Facebook, whose Oversight Board now even provides a court of appeal for its users — but also Twitter, Google, and now PayPal. The consolidation of centralized authority over the internet resembles the closing of a frontier, as a once-lawless space of discovery, chaos and potential is settled and brought under official control.

Rafman’s work allows us to grasp how this process of closure has also been a cultural and psychological one. We have seen how, in his art, the boundlessness of the virtual realm, and our freedom within it, are portrayed not just as a source of wonder but also of disorientation and insecurity. There have been plenty of indications that these feelings of flux have made people anxious to impose order, whether in the imagined form of conspiracy theories or by trying to enforce new norms and moral codes.

This isn’t to say that growing regulation will relax the tensions that have overtaken online culture. Given the divergence of identities and worldviews illustrated by Rafman’s depiction of the marginal internet, it seems highly unlikely that official authority can be impartial; drawing boundaries will involve taking sides and identifying who must be considered subversive. But all of this just emphasises that the revolutionary first chapter of internet life is drawing to a close. For better or worse, the particular spirit of discovery that marked the crossing of this frontier will never return.

Tooze and the Tragedy of the Left

Adam Tooze is one of the most impressive public intellectuals of our time. No other writer has the Columbia historian’s skill for laying bare the political, economic and financial sinews that tie together the modern world.

Tooze’s new book, Shutdown: How Covid Shook the World’s Economy, provides everything his readers have come to expect: a densely woven, relentlessly analytical narrative that uncovers the inner workings of a great crisis – in this case, the global crisis sparked by the Covid pandemic in 2020.

But Shutdown provides something else, too. It shows with unusual clarity that, for all his dry detachment and attention to detail, Tooze’s view of history is rooted in a deep sense of tragedy.

Towards the end of the book, Tooze reflects on the escalating “polycrisis” of the 21st century – overlapping political, economic and environmental conflagrations:

In an earlier period of history this sort of diagnosis might have been coupled with a forecast of revolution. If anything is unrealistic today, that prediction surely is. Indeed, radical reform is a stretch. The year 2020 was not a moment of victory for the left. The chief countervailing force to the escalation of global tension in political, economic, and ecological realms is therefore crisis management on an ever-larger scale, crisis-driven and ad hoc. … It is the choice between the third- and fourth-best options.

This seems at first typical of Tooze’s hard-nosed realism. He has long presented readers with a world shaped by “crisis management on an ever-larger scale.” Most of his work focuses on what, in Shutdown, he calls “functional elites” – small networks of technocratic professionals wielding enormous levers of power, whether in the Chinese Communist Party or among the bureaucrats and bankers of the global financial system.

These authorities, Tooze emphasises, are unable or unwilling to reform the dynamics of “heedless global growth” which keep plunging the world into crisis. But their ability to act in moments of extreme danger – the ability of the US Federal Reserve, for instance, to calm financial markets by buying assets at a rate of $1 million per second, as it did in March last year – is increasingly our last line of defence against catastrophe. The success or failure of these crisis managers is the difference between our third- and fourth-best options.

But when Tooze notes that radical change would have been thinkable “in an earlier period of history,” it is not without pathos. It calls to mind a historical moment that looms large in Tooze’s work. 

That moment is the market revolution of the 1980s, the birth of neoliberalism. For Tooze, this did not just bring about an economic order based on privatisation, the free movement of goods and capital, the destruction of organised labour and the dramatic growth of finance.

More fundamentally, neoliberalism was about what Tooze calls “depoliticisation.” As the west’s governing elites were overtaken by dogmas about market efficiency, the threat of inflation and the dangers of government borrowing, they hard-wired these principles into the framework of globalisation. Consequently, an entire spectrum of possibilities concerning how wealth and power might be distributed was closed off to democratic politics.

And so the inequalities created by the neoliberal order became, as Tony Blair said of globalisation, as inevitable as the seasons. Or in Thatcher’s more famous formulation, There Is No Alternative.

Tooze’s view of the present exists in the shadow of this earlier failure; it is haunted by what might have been. As he bitterly observes in Shutdown, it might appear that governments have suddenly discovered the joys of limitless spending, but this is only because the political forces that once made them nervous about doing so – most notably, a labour movement driving inflation through wage demands – have long since been “eviscerated.”

But it seems to me that Tooze’s tragic worldview reveals a trap facing the left today. It raises the question: what does it mean to accept, or merely to suspect, that radical change is off the table? 

We glimpse an answer of sorts when Tooze writes about how 2020 vindicated his own political movement, the environmentalist left. The pandemic, he claims, showed that huge state intervention against climate change and inequality is not just necessary, but possible. With all the talk of “Building Back Better” and “Green Deals,” centrist governments appear to be getting the message. Even Wall Street is “learning to love green capitalism.”

Of course, as per the tragic formula, Tooze does not imagine this development will be as transformative as advertised. A green revolution from the centre will likely be directed towards a conservative goal: “Everything must change so that everything remains the same.” The climate agenda, in other words, is being co-opted by a mutating neoliberalism. 

But if we follow the thrust of Tooze’s analysis, it’s difficult to avoid the conclusion that realistic progressives should embrace this third-best option. Given the implausibility of a genuine “antisystemic challenge” – and in light of the fragile systems of global capitalism, geopolitics and ecology which are now in play – it seems the best we can hope for is enlightened leadership by “functional elites.”

This may well be true. But I think the price of this bargain will be higher than Tooze acknowledges. 

Whether it be climate, state investment, or piecemeal commitments to social justice, the guardians of the status quo have not accepted the left’s diagnosis simply because they realise change is now unavoidable. Rather, these policies are appealing because, with all their moral and existential urgency, they can provide fresh justification for the unaccountable power that will continue to be wielded by corporate, financial and bureaucratic interests. 

In other words, now that the free-market nostrums of neoliberalism 1.0 are truly shot, it is the left’s narratives of crisis that will offer a new basis for depoliticisation – another way of saying There Is No Alternative.

And therein lies the really perverse tragedy for a thinker like Tooze. If he believes the choice is survival on these terms or not at all, then he will have to agree.

Disaster Junkies

We live in an era where catastrophe looms large in the political imagination. On the one side, we find hellacious visions of climate crisis and ecological collapse; on the other, grim warnings of social disintegration through plummeting birth rates, mass immigration and crime. Popular culture’s vivid post-apocalyptic worlds, from Cormac McCarthy’s The Road to Margaret Atwood’s The Handmaid’s Tale, increasingly echo in political discourse – most memorably in Donald Trump’s 2017 inauguration speech on the theme of “American Carnage.” For more imaginative doom-mongers there are various technological dystopias to contemplate, whether AI run amok, a digital surveillance state, or simply the replacement of physical experience with virtual surrogates. Then in 2020, with the eruption of a global pandemic, catastrophe crossed from the silver screen to the news studio, as much of the world sat transfixed by a profusion of statistics, graphs and harrowing reports of sickness and death.

If you are anything like me, the role of catastrophe in politics and culture raises endless fascinating questions. How should we explain our visceral revulsion at fellow citizens dying en masse from an infectious disease, and our contrasting apathy to other forms of large-scale suffering and death? Why can we be terrified by climate change without necessarily feeling a commensurate urgency to do something about it? Why do certain political tribes obsess over certain disasters?

It was questions like these that led me to pick up Niall Ferguson’s new book, Doom: The Politics of Catastrophe. I did this somewhat nervously, it must be said. I found one of Ferguson’s previous books extremely boring, and tend to cringe at his use of intellectual gimmicks – like his idea that the past success of Western civilisation can be attributed to six “killer apps.” Then again, Ferguson’s contrarianism does occasionally produce an interesting perspective, such as his willingness to weigh the negative aspects of the British Empire against the positive, as historians do with most other empires. But as I say, it was really the subject of this latest book that drew me in.

I might as well say upfront that I found it very disappointing. This is going to be a bad review – though hopefully not a pointless one. The flaws of this book can, I think, point us towards a richer understanding of catastrophe than Ferguson himself offers.

Firstly, Doom is not really about “the politics of catastrophe” as I understand that phrase. A few promising questions posed in the introduction – “Why do some societies and states respond to catastrophe so much better than others? Why do some fall apart, most hold together, and a few emerge stronger? Why does politics sometimes cause catastrophe?” – are not addressed in any sustained way. What this book is really about is the difficulty of predicting and mitigating statistically irregular events which cause excess deaths. That sounds interesting enough, to be sure, but there’s just one fundamental problem: Ferguson never gets to grips with what actually makes such events catastrophic, leaving a rather large hole where the subject of the book should be. 

The alarm bells start ringing when Ferguson introduces the book as “a general history of catastrophe” and, in case we didn’t grasp how capacious that sounds, tells us it will include:

not just pandemics but all kinds of disasters, from the geological (earthquakes) to the geopolitical (wars), from the biological (pandemics) to the technological (nuclear accidents). Asteroid strikes, volcanic eruptions, extreme weather events, famines, catastrophic accidents, depressions, revolutions, wars, and genocides: all life – and much death – is here.

You may be asking if there is really much of a relationship, throughout all the ages of history, between asteroid strikes, nuclear accidents and revolutions – and I’d say this gets to a pretty basic problem with tackling a subject like this. Writing about catastrophe (or disaster – the two are used as synonyms) requires finding a way to coherently group together the extremely diverse phenomena that might fall into this category. It requires, in other words, developing an understanding of what catastrophe actually means, in a way that allows for useful parallels between its different manifestations. 

Ferguson seems to acknowledge this when he rounds off his list by asking “For how else are we to see our disaster [i.e. Covid] – any disaster – in proper perspective?” Yet his concept of catastrophe turns out to be circular, inconsistent and inadequate. Whatever aspect of catastrophe Ferguson happens to be discussing in a particular chapter becomes, temporarily, his definition of catastrophe as such. When he is talking about mortality, mortality becomes definitive of catastrophe (“disaster, in the sense of excess mortality, can take diverse forms and yet pose similar challenges”). Likewise when he is showing how infrequent and therefore hard to predict catastrophes are (“the rare, large scale disasters that are the subject of this book”). In Ferguson’s chapter seeking similarities between smaller and larger disasters, he seems happy to simply accept whatever is viewed as a disaster in the popular memory: the Titanic, Chernobyl, the loss of NASA’s space shuttle Challenger shortly after launch. 

This is not nitpicking. I’m not expecting the metaphysical rigour of Immanuel Kant. I like an ambitious, wide-ranging discussion, even if that means sacrificing some depth. But attempting this without any real thesis, or even a firm conceptual framework, risks descending into a series of aimless and confusing digressions which don’t add up to anything. And that is more or less what happens in this book.

Consider Ferguson’s chapter on “The Psychology of Political Incompetence.” After a plodding and not especially relevant summary of Tolstoy’s concluding essay in War and Peace, Ferguson briefly introduces the idea that political leaders’ power is curtailed by the bureaucratic structures they inhabit. He then cuts to a discussion of the role of ideology in creating disastrous food shortages, by way of supporting Amartya Sen’s argument that democratic regimes respond better to famines than non-democratic ones. It’s not clear how this relates to the theme of bureaucracy and leadership, but this is one of the few sections where Ferguson is actually addressing something like “the politics of catastrophe;” and when he poses the interesting question of “why Sen’s theory does not apply to all forms of disaster” it feels like we are finally getting somewhere.

Alas, as tends to be the case in this book, Ferguson doesn’t answer the question, but embarks on a series of impromptu arguments against straw men. A winding discussion of British ineptness during the two World Wars brings him to the conclusion that “Democracy may insure a country against famine; it clearly does not insure against military disaster.” Who said that it does? Then Ferguson suddenly returns to the issue of individual leadership, arguing that “it makes little sense” to hold Churchill solely responsible for the fall of Singapore to the Japanese in 1942. Again, who said we should? Ferguson then rounds off the chapter with an almost insultingly cursory discussion of “How Empires Fall,” cramming eight empires into less than five pages, to make the highly speculative argument that imperial collapse is as unpredictable as various other kinds of disaster.

Insofar as anything holds this book together, it is the thin sinews of statistical probability models and network science. These do furnish a few worthwhile insights. Many of the events Ferguson classes as disasters follow power-law distributions, which is to say they have no typical scale: most are small, but the occasional enormous one dwarfs all the rest, and no bell-curve average tells you how bad the worst case might be. Big disasters are therefore essentially impossible to predict. In many cases, this is because they emerge from complex systems – natural, economic and social – which can unexpectedly amplify small events into enormous ones. In hindsight, these often seem to have been entirely predictable, and the Cassandras who warned of them are vindicated. But a regime that listened to every Cassandra would incur significant political costs in preparing for disasters that usually won’t materialize.
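To see what that distinction means in practice, here is a minimal toy simulation of my own – nothing of the kind appears in Ferguson’s book, and the library, parameter values and variable names are purely illustrative assumptions – contrasting power-law-distributed “disaster sizes” with bell-curve ones of the same average:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Heavy-tailed "disaster sizes": a Pareto (power-law) distribution with tail
# exponent alpha = 1.5, chosen arbitrarily so that extremes dominate.
alpha = 1.5
power_law = rng.pareto(alpha, n) + 1.0

# Bell-curve sizes with the same mean, for contrast.
bell_curve = rng.normal(loc=power_law.mean(), scale=1.0, size=n)

for name, sizes in [("power law", power_law), ("bell curve", bell_curve)]:
    top_1_percent = np.sort(sizes)[-n // 100:]
    print(f"{name:10s} mean={sizes.mean():8.2f} "
          f"largest={sizes.max():12.2f} "
          f"share of total in top 1% of events={top_1_percent.sum() / sizes.sum():5.1%}")
```

Run it and the bell-curve sample behaves as intuition expects: the largest event is barely bigger than the average. In the power-law sample the largest event is typically hundreds of times the average, and the top one per cent of events account for a large share of the total – which is roughly what it means to say that the run of ordinary disasters tells you very little about the scale of the worst one.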

I also liked Ferguson’s observation that the key factor determining the scale of a disaster, in terms of mortality, is “whether or not there is contagion – that is, some way of propagating the initial shock through the biological networks of life or the social networks of humanity.” But his other useful comments about networks come in a single paragraph, and can be quoted without much further explanation:

If Cassandras had higher centrality [in the network], they might be more often heeded. If erroneous doctrines [i.e. misinformation] spread virally through a large social network, effective mitigation of disaster becomes much harder. Finally… hierarchical structures such as states exist principally because, while inferior to distributed networks when it comes to innovation, they are superior when it comes to defence.

I’m not sure it was necessary to have slogged through an entire chapter on network science, recycled from Ferguson’s last book, The Square and the Tower, to understand these points.

But returning to my main criticism, statistical and network analysis doesn’t really allow for meaningful parallels between different kinds of catastrophe. This is already evident in the introduction, when Ferguson states that “disaster takes too many forms for us to process with conventional approaches to risk mitigation. No sooner have we focused our minds on the threat of Salafi jihad than we find ourselves in a financial crisis originating in subprime mortgages.” As this strange comment suggests, the implied perspective of the book is that of a single government agency tasked with predicting everything from financial crises and terrorist attacks to volcanic eruptions and genocides. But no such agency exists, of course, for the simple reason that when you zoom in from lines plotted on a graph, the illusion that these risks are similar dissolves into a range of totally different phenomena attached to various concrete situations. The problem is absurdly illustrated when, having cited a statistical analysis of 315 conflicts between 1820 and 1950, Ferguson declares that in terms of predictability, “wars do indeed resemble pandemics and earthquakes. We cannot know in advance when or where a specific event will strike, nor on what scale.” Which makes it sound like we simply have no way of knowing whether the next conflict is more likely to break out in Gaza or Switzerland.

In any case, there is something patently inadequate about measuring catastrophe in terms of mortality figures and QALYs (quality-adjusted life years), as though the only thing we have in common is a desire to live for as long as possible. Not once is the destruction of culture or ways of life mentioned in the book, despite the fact that throughout history these forms of loss have loomed large in people’s sense of catastrophe. Ferguson even mentions several times that the most prolific causes of mortality are often not recognised as catastrophes – but does not seem to grasp the corollary that catastrophe is about something more than large numbers of deaths. 

Indeed, maybe the best thing that can be said about Doom is that its shortcomings help us to realise what does need to be included in an understanding of catastrophe. Throughout the book, we see such missing dimensions flicker briefly into view. In his discussion of the flu pandemic of the late 1950s, Ferguson notes in passing that the Soviet launch of the Sputnik satellite in October 1957 “may help to explain why the memory of the Asian flu has faded” in the United States. This chimes with various other hints that this pandemic was not really perceived as a catastrophe. But why? And in what sense was it competing with the Cold War in the popular imagination? Likewise, Ferguson mentions that during the 1930s the lawyer Basil O’Connor used “the latest techniques in advertising and fundraising” to turn the “horrific but relatively rare disease” of polio into “the most feared affliction of the age.” This episode is briefly contrasted to the virtual silence of the American media and political class over AIDS during the 1980s. 

In fact, unacknowledged catastrophes are themselves an unacknowledged theme of the book, re-emerging in several intriguing mentions of the opioid epidemic in the United States, with its associated “deaths of despair.” At the same time as there was “obsessive discussion” of global warming among the American elite, Ferguson points out, “the chance of dying from an overdose was two hundred times greater than the chance of being killed by a cataclysmic storm.” He also describes the opioid crisis as “the biggest disaster of the Obama presidency,” and suggests that although “the media assigned almost no blame to Obama” for it, “such social trends did much to explain Donald J. Trump’s success.” Finally, Ferguson notes that during the current Covid crisis, the relative importance of protecting the vulnerable from the disease versus maintaining economic activity became an active front in the American culture war. 

The obvious implication of all this is that, while Ferguson does not really engage with “the politics of catastrophe,” the concept and reality of catastrophe is inherently political. There isn’t really an objective measure of catastrophe: the concept implies judging the nature and consequences of an event to be tragic. Whether or not something meets this standard often depends on who it affects and whether it fits into the emotionally compelling narratives of the day. The AIDS and opioid epidemics initially went unrecognized because their victims were homosexuals and working-class people respectively. To take another example, the 1921 pogrom against the affluent African American community in Tulsa, Oklahoma, was for the longest time barely known about, let alone mourned (except of course by African Americans themselves); yet a hundred years later it is being widely recognised as an atrocity. Last week’s volcanic eruption in the Democratic Republic of Congo, which may have left 20,000 people homeless, would probably be acknowledged as catastrophic by a Westerner who happened to read about it in the news. But we are much more likely to be aware of, and emotionally invested in, the disastrous Israeli-Palestinian conflict of recent weeks. 

Catastrophe, in other words, is inextricably bound up with popular perception and imagination. It is rooted in the emotions of fear, anger, sadness, horror and titillation with which certain events are experienced, remembered or anticipated. This is how we can make sense of apathy to the late-1950s flu pandemic: such hazards, as Ferguson mentions, were still considered a normal part of life rather than an exceptional danger, and people’s minds were focused on the potential escalation of the Cold War. Hence also the importance of the media in determining whether and how disasters become embedded in public discourse. While every culture has its religious and mythical visions of catastrophe (a few are mentioned in a typically fleeting discussion near the start of Doom), today Netflix and the news media have turned us into disaster junkies, giving form and content to our apocalyptic impulses. The Covid pandemic has been a fully mediated experience, an epic rollercoaster of the imagination, its personal and social significance shaped by a constant drumbeat of new information. It is because climate change cannot be made to fit this urgent tempo that it has been cast instead as a source of fatalism and dread, always looming on the horizon and inspiring millions with a sense of terrified helplessness.

Overlooking the central role of such cultural and political narratives probably meant that Ferguson’s Doom was doomed from the start. For one thing, this missing perspective immediately shows the problem with trying to compare catastrophes across all human history. Yes, there are fascinating patterns even at this scale, like the tendency of extreme ideological movements to emerge in the midst of disasters – whether the flagellant orders that sprang from the 14th century Black Death, or the spread of Bolshevism in the latter part of the First World War. But to really understand any catastrophe, we have to know what it meant to the people living through it, and this means looking at the particulars of culture, politics and religion which vary enormously between epochs. This, I would argue, is why Ferguson’s attempt to compare the Athenian plague of the late 5th century BC to the Black Death in medieval England feels rather superficial. 

And whatever the historical scope, statistics simply don’t get close to the imaginative essence of catastrophe. Whether or not a disaster actually happens is incidental to its significance in our lives; many go unnoticed, others transform culture through mere anticipation. Nor do we experience catastrophes as an aggregate of death-fearing individuals. We do so as social beings whose concerns are much more elaborate and interesting than mere life and death.

Tradition with a capital T: Dylan at 80

It’s December 1963, and a roomful of liberal luminaries are gathered at New York’s Americana Hotel. They are here for the presentation of the Emergency Civil Liberties Committee’s prestigious Tom Paine Award, an accolade which, a year earlier, had been accepted by esteemed philosopher and anti-nuclear campaigner Bertrand Russell. If any in the audience have reservations about this year’s recipient, a 22-year-old folk singer called Bob Dylan, their skepticism will soon be vindicated. 

In what must rank as one of the most cack-handed acceptance speeches in history, an evidently drunk Dylan begins with a surreal digression about the attendees’ lack of hair, his way of saying that maybe it’s time they made room for some younger voices in politics. “You people should be at the beach,” he informs them, “just relaxing in the time you have to relax. It is not an old people’s world.” Not that it really matters anyway, since, as Dylan goes on to say, “There’s no black and white, left and right to me anymore; there’s only up and down… And I’m trying to go up without thinking of anything trivial such as politics.” Strange way to thank an organisation which barely survived the McCarthyite witch-hunts, but Dylan isn’t finished. To a mounting chorus of boos, he takes the opportunity to express sympathy for Lee Harvey Oswald, the assassin who had shot president John F. Kennedy less than a month earlier. “I have to be honest, I just have to be… I got to admit honestly that I, too, saw some of myself in him… Not to go that far and shoot…”

Stories like this one have a special status in the world of Bobology, or whatever we want to call the strange community-cum-industry of critics, fans and vinyl-collecting professors who have turned Dylan into a unique cultural phenomenon. The unacceptable acceptance speech at the Americana is among a handful of anecdotes that dramatize the most iconic time in his career – the mid-’60s period when Dylan rejected/betrayed/transcended (delete as you see fit) the folk movement and its social-justice-oriented vision of music. 

For the benefit of the uninitiated, Dylan made his name in the early ’60s as a politically engaged troubadour, writing protest anthems that became the soundtrack of the Civil Rights movement. He even performed as a warm-up act for Martin Luther King Jnr’s “I Have a Dream” speech at the 1963 March on Washington. Yet no sooner had Dylan been crowned “the conscience of a generation” than he started furiously trying to wriggle out of that role, most controversially through his embrace of rock music. In 1965, Dylan plugged in to play an electric set at the Newport Folk Festival (“the most written about performance in the history of rock,” writes biographer Clinton Heylin), leading to the wonderful though apocryphal story of folk stalwart Pete Seeger trying to cleave the sound cables with an axe. Another famous confrontation came at the Manchester Free Trade Hall in 1966, where angry folkies pelted Dylan with cries of “Judas!” (a moment whose magic really rests on Dylan’s response, as he turns around to his electric backing band and snarls “play it fuckin’ loud”). 

In the coming days, as the Bobologists celebrate their master’s 80th birthday, we’ll see how Dylan’s vast and elaborate legend remains anchored in this original sin of abandoning the folk community. I like the Tom Paine Award anecdote because it makes us recall that, for all his prodigious gifts, Dylan was little more than an adolescent when these events took place – a chaotic, moody, often petulant young man. What has come to define Dylan, in a sense, is a commonplace bout of youthful rebellion which has been elevated into a symbolic narrative about a transformative moment in cultural history. 

Still, we can hardly deny its power as a symbolic narrative. Numerous writers have claimed that Dylan’s rejection of folk marks a decisive turning point in the counterculture politics of the ’60s, separating the collective purpose and idealism of the first half of the decade, as demonstrated in the March on Washington, from the bad acid trips, violent radicalism and disillusionment of the second. Hadn’t Dylan, through some uncanny intuition, sensed this descent into chaos? How else can we explain the radically different mood of his post-folk albums? The uplifting “Come gather ’round people/ Wherever you roam” is replaced by the sneering “How does it feel/ to be on your own,” and the hopeful “The answer, my friend, is blowin’ in the wind” by the cynical “You don’t need a weatherman to know which way the wind blows.” Or was Dylan, in fact, responsible for unleashing the furies of the late-’60s? That last lyric, after all, provided the name for the militant activist cell The Weathermen.

More profound still, Dylan’s mid-’60s transformation seemed to expose a deep fault line in the liberal worldview, a tension between two conceptions of freedom and authenticity. The folk movement saw itself in fundamentally egalitarian and collectivist terms, as a community of values whose progressive vision of the future was rooted in the shared inheritance of the folk tradition. Folkies were thus especially hostile to the rising tide of mass culture and consumerism in America. And clearly, had Dylan merely succumbed to the cringeworthy teenybopper rock ’n’ roll which was then topping the charts, he could have been written off as a sell-out. But Dylan’s first three rock records – the “Electric Trilogy” of Bringing It All Back Home, Highway 61 Revisited and Blonde on Blonde – are quite simply his best albums, and probably some of the best albums in the history of popular music. They didn’t just signal a move towards a wider market of consumers; they practically invented rock music as a sophisticated and artistically credible form. And the key to this was a seductive vision of the artist as an individual set apart, an anarchic fount of creativity without earthly commitments, beholden only to the sublime visions of his own interior world. 

It was Dylan’s lyrical innovations, above all, that carried this vision. His new mode of social criticism, as heard in “Gates of Eden” and “It’s Alright, Ma (I’m Only Bleeding),” was savage and indiscriminate, condemning all alike and refusing to offer any answers. Redemption came instead from the imaginative power of the words and images themselves – the artist’s transcendent “thought dreams,” his spontaneous “skippin’ reels of rhyme” – his ability to laugh, cry, love and express himself in the face of a bleak and inscrutable world.

Yes, to dance beneath the diamond sky with one hand waving free
Silhouetted by the sea, circled by the circus sands
With all memory and fate driven deep beneath the waves

Here is the fantasy of artistic individualism with which Dylan countered the idealism of folk music, raising a dilemma whose acuteness can still be felt in writing on the subject today. 

But for a certain kind of Dylan fan, to read so much into the break with folk is to miss the magician’s hand in the crafting of his own legend. Throughout his career, Dylan has shown a flair for mystifying his public image (some would say a flair for dishonesty). His original folksinger persona was precisely that – a persona he copied from his adolescent hero Woody Guthrie, from the pitch of his voice and his workman’s cap to the very idea of writing “topical” songs about social injustice. From his first arrival on the New York folk scene, Dylan intrigued the press with fabrications about his past, mostly involving running away from home, travelling with a circus and riding on freight trains. (He also managed to persuade one of his biographers, Robert Shelton, that he had spent time working as a prostitute, but the less said about that yarn the better). Likewise, Dylan’s subsequent persona as the poet of anarchy drew much of its effect from the drama of his split with the folk movement, and so it’s no surprise to find him fanning that drama, both at the time and long afterwards, with an array of facetious, hyperbolic and self-pitying comments about what he was doing. 

When the press tried to tap into Dylan’s motivations, he tended to swat them away with claims to the effect that he was just “a song and dance man,” a kind of false modesty (always delivered in a tone of preening arrogance) that fed his reputation for irreverence. He told the folksinger Joan Baez, among others, that his interest in protest songs had always been cynical – “You know me. I knew people would buy that kind of shit, right? I was never into that stuff” – despite numerous confidants from Dylan’s folk days insisting he had been obsessed with social justice. Later, in his book Chronicles: Volume One, Dylan made the opposite claim, insisting both his folk and post-folk phases reflected the same authentic calling: “All I’d ever done was sing songs that were dead straight and expressed powerful new realities. … My destiny lay down the road with whatever life invited, had nothing to do with representing any kind of civilisation.” He then complained (and note that modesty again): “It seems like the world has always needed a scapegoat – someone to lead the charge against the Roman Empire.” Incidentally, the “autobiographical” Chronicles is a masterpiece of self-mythologizing, where, among other sleights of hand, Dylan cuts back and forth between different stages of his career, neatly evading the question of how and why his worldview evolved.

Nor, of course, was Dylan’s break with folk his last act of reinvention. The rock phase lasted scarcely two years, after which he pivoted towards country music, first with the austere John Wesley Harding and then with the bittersweet Nashville Skyline. In the mid-1970s, Dylan recast himself as a travelling minstrel, complete with face paint and flower-decked hat, on the Rolling Thunder Revue tour. At the end of that decade he emerged as a born-again Christian playing gospel music, and shortly afterwards as an infidel (releasing an album titled Infidels). In the ’90s he appeared, among other guises, as a blues revivalist, while his more recent gestures include a kitsch Christmas album and a homage to Frank Sinatra. If there’s one line that manages to echo through the six decades of Dylan’s career, it must be “strike another match, go start anew.” 

This restless drive to wrong-foot his audience makes it tempting to see Dylan as a kind of prototype for the shape-shifting pop idol, anticipating the likes of David Bowie and Kate Bush, not to mention the countless fading stars who refresh their wardrobes and their political causes in a desperate clinging to relevance. Like so many readings of Dylan, this one inevitably doubles back, concertina-like, to the original break with folk. That episode can now be made to appear as the sudden rupture with tradition that gave birth to the postmodern celebrity, a paragon of mercurial autonomy whose image can be endlessly refashioned through the media.

But trying to fit Dylan into this template reveals precisely what is so distinctive about him. Alongside his capacity for inventing and reinventing himself as a cultural figure, there has always been a sincere and passionate devotion to the forms and traditions of the past. Each of the personae in Dylan’s long and winding musical innings – from folk troubadour to country singer to roadshow performer to bluesman to roots rocker to jazz crooner – has involved a deliberate engagement with some aspect of the American musical heritage, as well as with countless other cultural influences from the U.S. and beyond. This became most obvious from the ’90s onwards, with albums such as Good As I Been to You and World Gone Wrong, composed entirely of covers and traditional folk songs – not to mention “Love and Theft”, a title whose quotation marks point to a book by the historian Eric Lott about blackface minstrelsy in nineteenth-century America. But these later works just made explicit what he had been doing all along.

“What I was into was traditional stuff with a capital T,” writes Dylan about his younger self in Chronicles. The unreliability of that book has already been mentioned, but the phrase is a neat way of describing his approach to borrowing from history. Dylan’s personae are never “traditional” in the sense of adhering devoutly to a moribund form; nor would it be quite right to say that he makes older styles his own. Rather, he treats tradition as an invitation to performance and pastiche, as though standing by the costume cupboard of history and trying on a series of eye-catching but not-quite-convincing disguises, always with a nod and a wink. I remember hearing Nashville Skyline for the first time and being slightly bemused at what sounded like an entirely artless imitation of country music; I was doubly bemused to learn this album had been recorded and released in 1969, the year of Woodstock and a year when Dylan was actually living in Woodstock. But it soon occurred to me that this was Dylan’s way of swimming against the tide. He may have lit the fuse of the high ’60s, but by the time the explosion came he had already moved on, not forward but back, recognising where his unique contribution as a musician really lay: in an ongoing dance with the spirits of the past, part eulogy and part pantomime. I then realised this same dance was happening in his earlier folk period, and in any number of his later chapters.

“The madly complicated modern world was something I took little interest in” – Chronicles again – “What was swinging, topical and up to date for me was stuff like the Titanic sinking, the Galveston flood, John Henry driving steel, John Hardy shooting a man on the West Virginia line.” We know this is at least partly true, because this overtly mythologized, larger-than-life history, this traditional stuff with a capital T, is never far away in Dylan’s music. The Titanic, great floods, folk heroes and wild-west outlaws all appear in his catalogue, usually with a few deliberate twists to imbue them with a more biblical grandeur, and to remind us not to take our narrator too seriously. It’s even plausible that he really did take time out from beatnik life in Greenwich Village to study 19th century newspapers at the New York Public Library, not “so much interested in the issues as intrigued by the language and rhetoric of the times.” Dylan is nothing if not a ventriloquist, using his various musical dummies to recall the languages of bygone eras. 

And if we look more closely at the Electric Trilogy, the infamous reinvention that sealed Dylan’s betrayal of folk, we find that much of the innovation on those albums fits into a twelve-bar blues structure, while their rhythms recall the R&B that Dylan had performed as a teenager in Hibbing, Minnesota. Likewise, it’s often been noted that their lyrical style, based on chains of loosely associated or juxtaposed images, shows not just the influence of the Beats, but also of the French symbolist poet Arthur Rimbaud, the German radical playwright Bertolt Brecht, and the bluesman Robert Johnson. This is to say nothing of the content of the lyrics, which feature an endless stream of allusions to history, literature, religion and myth. Songs like “Tombstone Blues” make an absurd parody of their own intertextuality (“The ghost of Belle Starr she hands down her wits/ To Jezebel the nun she violently knits/ A bald wig for Jack the Ripper who sits/ At the head of the chamber of commerce”). For all its iconoclasm, Dylan’s novel contribution to songwriting in this phase was to bring contemporary America into dialogue with a wider universe of cultural riches. 

Now consider this. Could it be that even Dylan’s disposable approach to his own persona, far from heralding the arrival of the modern media star, is itself a tip of the hat to some older convention? The thought hadn’t occurred to me until I dipped into the latest round of Bobology marking Dylan’s 80th. There I found an intriguing lecture by the critic Greil Marcus about Dylan’s relationship to blues music (and it’s worth recalling that, by his own account, the young Dylan only arrived at folk music via the blues of Lead Belly and Odetta). “The blues,” says Marcus, “mandate that you present a story on the premise that it happened to you, so it has to be written [as] not autobiography but fiction.” He explains:

words first came from a common store of phrases, couplets, curses, blessings, jokes, greetings, and goodbyes that passed anonymously between blacks and whites after the Civil War. From that, the blues said, you craft a story, a philosophy lesson, that you present as your own: This happened to me. This is what I did. This is how it felt.

Is this where we find a synthesis of those two countervailing tendencies in Dylan’s career – on to the next character, back again to the “common store” of memories? Weaving a set of tropes into a fiction, which you then “present as your own,” certainly works as a description of how Dylan constructs his various artistic masks, not to mention many of his songs. It would be satisfying to imagine that this practice is itself a refashioned one – and as a way of understanding where Dylan is coming from, probably no less fictitious than all the others.

Europe’s empty moral gestures

The story of the Opium Wars in mid-19th century China has been told in many ways, but the account which has always stayed with me is the short one given by W.G. Sebald in The Rings of Saturn. In just a few pages, and with his novelist’s eye for arresting detail, Sebald portrays the European incursion into the Celestial Empire as a tragic meeting of two hubristic and uncomprehending civilisations. 

The Opium Wars unfolded amid efforts to keep China open to European commercial interests, after the Chinese had tried to limit the British opium trade through Canton. In the Second Opium War of the 1850s, the British, soon to be joined by the French, sent an expeditionary force to make the ailing Qing Emperor Hsien-feng come to terms. The Europeans, Sebald notes, saw themselves as the righteous bearers of those necessary conditions for progress, “Christian evangelism and free trade.” Having marched inland from Canton, however, they were baffled when the Emperor’s delegates demanded they pay homage, in order to fulfil “the immemorial obligations toward the Son of Heaven of envoys from satellite powers.” 

If the Europeans felt any sense of superiority, they were disabused of it when they came across the glorious Yuan Ming Yuan gardens near Peking; Sebald speculates that the horrific looting and destruction they carried out there may have been driven by shame at the achievements of the Chinese. But a steeper fall from grace awaited the decaying Qing court, where “the ritualisation of imperial power was at its most elaborate: at the same time, that power itself was by now almost completely hollowed out.” Sebald portrays the decline of the Qing as an increasingly empty and deluded going-through-the-motions of imperial splendour – a fate illustrated by the magnificent yet miserable funeral cortege of Hsien-feng, which bore the Emperor’s body on foot for three weeks through a rugged, rain-lashed autumn countryside. 

I often think of these scenes when there are debates over the European stance towards China, as in December when the EU concluded its investment deal, and again this week with the publication of the UK’s new foreign policy review (which also acknowledged the importance of economic relations with China). The question, we like to think, is whether we ought to be trading with an authoritarian state which has just crushed democracy in Hong Kong, and is carrying out a vast and brutal persecution of its Muslim Uighur minority. Sadly though, the real question is not whether we ought to, but given the increasing dominance of China in the global economy, whether we have any choice. 

I don’t have a clear-cut answer, but the predicament itself reveals how radical the shift in global power towards the east really is. Western nations may no longer see themselves as agents of Christian evangelism, but their sense of their role in the world is defined by a post-1945 liberalism which has much the same effect. As Europeans, we conceive of morality in universal terms, such that atrocities taking place in other nations cannot simply be ignored. Only, we no longer have either the means or the desire to launch crusades. Likewise with free trade: whereas this apparent boon to humanity long justified westerners, and especially the British and Americans, in their global supremacy, now Chinese investment and access to Chinese markets is desperately sought as a means of reviving stagnant economies. 

“Jesus Christ is Free Trade, and Free Trade is Jesus Christ” – so said Dr John Bowring, British Governor of Hong Kong at the time of the Second Opium War. Some 150 years later, the same supposed inseparability of economic liberalism and moral salvation was used by the United States in support of China’s entry to the World Trade Organisation, the argument being that Chinese political liberalisation would inevitably follow. That the imperatives of trade and moral conscience can no longer be rhetorically aligned in the east testifies to the world historical shift we are living through.

Meanwhile, the events described by Sebald were the beginning of what is known among Chinese elites as “the century of national humiliation” – a century that ended, of course, with Mao Zedong’s establishment of a Communist regime in 1949. References to that century of humiliation are apparently much in use to legitimise the current regime of Xi Jinping, since they offer a salutary contrast to his assertive use of China’s superpower status on the world stage.

Thanks to changes in technology and statecraft, it is unlikely that depredations like those of the Opium Wars will be visited on Europe by China. Perhaps there will be lesser humiliations: perhaps European states will be brought to heel by crippling Chinese economic sanctions or cyber attacks; perhaps the Elgin Marbles, having finally been returned from London to Greece, will in due course be sent on to Beijing as security against loans. 

But it does look as though, in its various declarations and debates on human rights abuses in China, Europe will increasingly resemble the late Qing court described by Sebald, with its ritualised fulfilments of an increasingly empty power. Imposing sanctions on individuals and companies profiting from the nightmare in Xinjiang, as the EU’s foreign ministers did this week, is certainly morally justifiable, but seems somewhat perfunctory given the investment deal so eagerly concluded just a few months ago. The euphemisms, equivocations and parliamentary protests about human rights which accompanied that deal – like the sheepish use of the term “values” in the UK’s foreign policy review – point to a future in which western nations’ concern for universal justice is increasingly ceremonial and toothless. 

Of course, as Sebald’s account of the Opium Wars suggests, the west’s universal moral mission was always inconsistent and self-serving – something Chinese officials remind us of when they cynically justify the atrocities in Xinjiang as promoting the emancipation of women. But now Europeans will be increasingly guilty of a different kind of hypocrisy: directing moral criticism at other states safe in the knowledge that no one expects them to do anything about it.

What space architecture says about us

With the recent expedition of Nasa’s Perseverance rover to Mars, I’ve taken an interest in space architecture; more specifically, habitats for people on the moon or the Red Planet. The subject first grabbed my attention earlier this year, when I saw that a centuries-old forestry company in Japan is developing wooden structures for future space colonies. Space architecture is not as other-worldly as you might think. In various ways, it holds a revealing mirror to life here on Earth. 

Designing human habitats for Mars is more than just a technical challenge (though protecting against intense radiation and temperatures of minus 100°C is, of course, a technical challenge). It’s also an exercise in anthropology. To ask what a group of scientists or pioneers will need from their Martian habitats is to ask what human beings need to be healthy, happy and productive. And we aren’t just talking about the material basics here. 

As Jonathan Morrison reported in the Times last weekend, Nasa is taking inspiration from the latest polar research bases. According to architects like Hugh Broughton, researchers working in these extreme environments need creature comforts. The fundamental problem, says Broughton, is “how architecture can respond to the human condition.” The extreme architect has to consider “how you deal with isolation, how you create a sense of community… how you support people in the darkness.”

I found these words disturbingly relatable; not just in light of the pandemic, which has forced us all into a kind of polar isolation, but in light of the wider problem of anomie in modern societies. Broughton’s questions are the same ones we tend to ask as we observe stubbornly high rates of depression, loneliness, self-medication, and so on. Are we all now living in an extreme environment?

Many architects in the modernist period dreamed that they could tackle such issues through the design of the built environment. But the problem of what people need in order to flourish confronted them in a much harder form. Given the complexity of modern societies, trying to facilitate a vision of human flourishing through architecture started to look a lot like forcing society into a particular mould.

The “master households” designed by Walter Gropius in the 1920s and 30s illustrate the dilemma. Gropius insisted his blueprints, which reduced private family space in favour of communal living, reflected the emerging socialist character of modern individuals. At the same time, he implied that this transformation in lifestyle needed the architect as its midwife. 

Today architecture has largely abandoned the dream of a society engineered by experts and visionaries. But heterotopias like research stations and space colonies still offer something of a paradise for the philosophical architect. By contrast to the messy complexity of society at large, these small communities have a very specific shared purpose. They offer clearly defined parameters for architects to address the problem of what human beings need. 

Sometimes the solutions to this profound question, however, are almost comically mundane. Morrison’s Times report mentions some features of recent polar bases:

At the Scott Base, due to be completed in 2027, up to 100 residents might while away the hours in a cafeteria and even a Kiwi-themed pub, while Halley VI… boasts a gym, library, large canteen, bar and mini cinema.

If this turns out to be the model, then a future Mars colony will be a lot like a cruise ship. This doesn’t reflect a lack of imagination on the architects’ part though. It points to the fact that people don’t just want sociability, stimulation and exercise as such – they want familiar forms of these things. So a big part of designing habitats for space pioneers will involve replicating institutions from their original, earthbound cultures. In this sense, Martian colonies won’t be a fresh start for humanity any more than the colonisation of the Americas was. 

Finally, it’s worth saying something about the politics of space habitats. It seems inevitable that whichever regime sends people to other planets will use the project as a means of legitimation: the government(s) and corporations involved will want us to be awed by their achievement. And this will be done by turning the project into a media spectacle. 

The recent Perseverance expedition has already shown this potential: social media users were thrilled to hear audio of Martian winds, and to see a Martian horizon with Earth sparkling in the distance (the image, alas, turned out to be a fake). The first researchers or colonists on Mars will likely be reality TV stars, their everyday lives an ongoing source of fascination for viewers back home. 

The lunar base in Kubrick’s 2001: A Space Odyssey

This means space habitats won’t just be designed for the pioneers living in them, but also for remote visual consumption on Earth. The aesthetics of these structures will not, therefore, be particularly novel. Thanks to Hollywood, we already have established ideas of what space exploration should look like, and space architecture will try to satisfy these expectations. Beyond that, it will simply try to project a more futuristic version of the good life as we know it through pop culture: comfort, luxury and elegance. 

We already see this, I think, in the Mars habitat designed by Xavier De Kestelier of Hassel Studio, which features sweeping open-plan spaces with timber flooring, glass walls and minimalist furniture. It resembles a luxury spa more than a rugged outpost of civilisation. But this was already anticipated, with characteristic flair, by Stanley Kubrick in his 1968 sci-fi classic 2001: A Space Odyssey. In Kubrick’s imagined lunar base, there is a Hilton hotel hosting the stylish denizens of corporate America. The task of space architects will be to design this kind of enchanting fantasy, no less than to meet the needs of our first Martian settlers.