The Architecture of Autocracy

This article was originally published by Unherd in November 2022. Read it here.

For a building project marketed like a Hollywood blockbuster, the latest footage from the deserts of northwestern Saudi Arabia is a little underwhelming. A column of trucks is moving sand, a row of diggers poking at the barren landscape like toys arranged on a beach. The soundtrack, an epic swirl of fast-paced, rising strings, doesn’t really belong here.

Still, the video got its message across: it’s really happening. The widest aerial shots reveal an enormous groove in the sand, stretching to the horizon. We are seeing the birth of “The Line”, an insanely ambitious project for a city extending 170km through the desert, sandwiched in a narrow space between two immense walls. The new construction footage is an update on the viral CGI trailers that overwhelmed the internet last year, showing us glimpses of what life will be like inside this linear chasm of a city: a city where there will be no cars or roads, where every amenity is always a five-minute walk away, and where, according to one planning document, there could be robot maids.

This scheme sounds mad enough, but The Line is only the centrepiece of a much bigger development, called Neom (a blend of neo and mustaqbal, Arabic for “future”). Neom will be a semi-autonomous state, encompassing 26,000 square kilometres of desert with new resorts and tech industry centres.

There may be no philosopher kings, but there are sci-fi princes. The dreams of Mohammed bin Salman, crown prince of Saudi Arabia and chairman of the Neom board, make the techno-futurism of Silicon Valley look down to earth. Bin Salman is especially fond of the cyberpunk genre of science fiction, which involves gritty hi-tech dystopias. He has enlisted a number of prominent Hollywood visual specialists for the Neom project, including Olivier Pron of Marvel’s Guardians of the Galaxy franchise. A team of consultants was asked to develop science-fiction aesthetics for a tourist resort, resulting in “37 options, arranged alphabetically from ‘Alien Invasion’ to ‘Utopia’”. One proposal for a luxury seaside destination, which featured a glowing beach of crushed marble, was deemed insufficiently imaginative.

Such spectacular indulgence must be causing envy among the high-flying architects and creative consultants not yet invited to join the project — if there are any left. But it also makes the moral dimension difficult to ignore: how should we judge those jumping on board bin Salman’s gravy train? Saudi Arabia — in case anyone has forgotten in the years since the journalist Jamal Khashoggi was murdered at its consulate in Istanbul — is a brutal authoritarian state.

In recent weeks, this has prompted some soul-searching in the architecture community, with several stinging rebukes aimed at Neom. Writing in Dezeen, the urbanist Adam Greenfield asks firms such as Morphosis, the California-based architects designing The Line, to consider “whether the satisfaction of working on this project, and the compensation that attends the work, will ever compensate you for your participation in an ecological and moral atrocity”. Ouch. Greenfield’s intervention came a week after Rowan Moore asked in The Observer: “When will whatever gain that might arise from the creation of extraordinary buildings cease to outweigh the atrocities that go with them?”

You see, bin Salman’s blank slate in the desert was not actually blank (they never are); settlements belonging to the Huwaitat tribespeople have been ruthlessly flattened to make space for Neom. One man leading resistance to the clearances, Abdul Rahim al-Huwaiti, was killed by security forces in 2020, and three others have been sentenced to death. Critics also point to the absurd pretence that The Line is an eco-friendly project, given the industrial operations needed to build and maintain a city for nine million people in searing desert temperatures.

There is an obvious parallel here with the criticism of celebrities and commentators taking part in the World Cup in Qatar, another unsavoury petro-state. International sporting events are notorious for giving legitimacy to dictatorships, so why wouldn’t we see architectural monuments in the same way? With Neom there is barely a distinction to draw. Zaha Hadid Architects, the British firm that designed one of the Qatari football stadiums — a project synonymous with the shocking treatment of migrant construction workers — is also working on one of the Neom sites, an artificial ski resort that will host the 2029 Asian Winter Games.

In the 21st century, Western architects have helped to burnish the image of repressive regimes, especially those big-name architects who specialise in spectacular monumental buildings. Zaha Hadid was the most wide-ranging: her trademark swooshing structures include a museum honouring the ruling family in Azerbaijan and a conference hall in Muammar Gaddafi’s Libya (never completed due to Gaddafi’s demise). But the biggest patrons of globetrotting architects have been the Arab Gulf States — especially Qatar, Saudi Arabia and the United Arab Emirates — along with China. Among countless examples in these regions, the most infamous is probably Rem Koolhaas’s China Central Television headquarters in Beijing, a suitably sinister-looking building for a state organisation that shapes the information diet of hundreds of millions of people each day.

The uncomfortable truth is that autocrats and architects share complementary motivations. The former use architecture to glorify their regimes, both domestically and internationally, whereas the latter are attracted to the creative freedom that only unconstrained state power can provide. In democratic societies, there is always tension between the grand visions of architects and the numerous interest groups that have a say in the final result. Why compromise with planning restrictions and irate neighbours when there is a dictator who, as Greenfield puts it, “offers you a fat purse for sharing the contents of your beautiful mind with the world”?

This is not just speculation. As Koolhaas himself stated: “What attracts me about China is that there is still a state. There is something that can take initiative on a scale and of a nature that almost nobody that we know of today could even afford or contemplate.”

But really this relationship between architect and state is a triangle, with financial interests making up the third pole. Despite the oft-repeated line that business loves the stability offered by the rule of law, when it comes to building things, the money-men are as fond of the autocrat’s empty canvas as the architects are. When he first pitched the Neom project to investors in 2017, bin Salman told them: “Imagine if you are the governor of New York without having any public demands. How much would you be able to create for the companies and the private sector?”

This points us to the deeper significance of the Gulf States and China as centres of high-profile architecture. These were crucial regions for post-Nineties global capitalism: the good illiberal states. Celebrity architects brought to these places the same spectacular style of building that was appearing in Europe and North America; each landmark “iconic” and distinct but, in their shared scale and audacity, also placeless and generic. Such buildings essentially provided a seal of legitimacy for the economic and financial networks of globalisation. Can this regime’s values really be so different to ours, an investor might say, when they have a museum by Jean Nouvel, or an arts centre by Norman Foster? British architects build football stadiums and skyscrapers in Qatar and Saudi Arabia, while those governments own football stadiums and skyscrapers in Britain, such as The Shard and Newcastle’s St James’s Park.

This is not to suggest some sort of conspiracy: the ethical issues of working for repressive states have often been debated by architects. When the tide of liberal capitalism seemed to be coming in around the world, they could say, and believe, that their buildings were optimistic gestures, representing a hoped-for convergence around a single global modernity. It is the collapse of those illusions over the last decade that makes such reasoning look increasingly suspect.

With Neom, bin Salman is making explicit the publicity value of architecture, by pushing it to a whole new degree. Aware that breakthroughs in clean energy would essentially render his kingdom a stranded asset, he is trying to rebrand Saudi Arabia as a high-tech green state. He offers investors a package they, like many architects, dream about: breathtaking novelty and innovation, combined with sustainability and an apparent humanistic concern.

But ironically, what bin Salman has really shown is that architects are increasingly unnecessary for conveying political messages. They are being replaced by those masters of unreality who use digital technology to the same ends, like the Marvel film magicians creating a vision of Neom in the global imagination. Whether or not a city like The Line actually exists is almost beside the point in terms of its publicity value. After all, this is an era where the superhero realm of Wakanda is praised as a depiction of Africa, and where America tore itself apart for four years over a wall that never actually came into being.

Likewise, given the technological challenges involved, we can be certain the vast furrow appearing in the Saudi desert will never become The Line as portrayed in the promotional videos. But videos will be enough to project the desired image of an innovative, progressive state. That bin Salman himself might really believe in his futuristic city, encouraged by his army of paid-up designers, will only make him a better salesman.

Design for the End Times

This essay was first published at The Pathos of Things newsletter. Subscribe here.

YouTube is one of the most powerful educational tools ever created; so powerful, in fact, it can teach someone as inept as myself to fix things. I am slightly obsessed with DIY tutorials. Your local Internet handyman talks you through the necessary gear, then patiently demonstrates how to wire appliances, replace car batteries or plaster walls. I’ve even fantasised that, one day, these strangers with power tools will help me build a house.

To feel self-sufficient is deeply satisfying, though I have to admit there are more hysterical motives here too. I’ve always been haunted by the complacency of life in a reasonably well-functioning modern society, where we rely for our most basic needs on dazzlingly complex supply chains and financial arrangements. If everything went tits-up and we had to eke out an existence amidst the rubble of modernity, I would be almost useless; the years I have spent reading books about arcane subjects would be worth even less than they are today. Once the Internet goes down, I will not even have YouTube to teach me how to make a crossbow.

But what if, instead of becoming more competent, you could simply create a technologically advanced bubble to shelter from the chaos of a collapsing society? Welcome to the world of post-apocalyptic hideouts for the super-rich, one of the wackiest and most morbidly fascinating design fields to flourish in the last decade.

In a recent book, Survival of the Richest, media theorist Douglas Rushkoff describes the growing demand for these exclusive refuges. Rushkoff was invited to a private conference where a group of billionaires, from “the upper echelon of the tech investing and hedge fund world,” interrogated him about survival strategies:

New Zealand or Alaska? Which region will be less impacted by the coming climate crisis? … Which was the greater threat: climate change or biological warfare? How long should one plan to be able to survive with no outside help? Should a shelter have its own air supply? What is the likelihood of groundwater contamination?

Apparently these elite preppers were especially vexed by the problem of ensuring the loyalty of their armed security personnel, who would be necessary “to protect their compounds from raiders as well as angry mobs.” One of their solutions was “making guards wear disciplinary collars of some kind in return for their survival.”

There is now a burgeoning industry of luxury bunker specialists, such as the former defence contractor Larry Hall, to address such dilemmas. In Kansas, Hall has managed to convert a 1960s nuclear missile silo into a “Survival Condo,” where seventy-five clients can allegedly survive three to five years of nuclear winter. As I learned from this tour (thanks again, YouTube), Hall’s bunker is essentially a 200-foot cylinder sunk into the ground, lined with concrete and divided into fifteen floors. There are seven floors of luxury living quarters, practical areas such as food stores and medical facilities, and leisure amenities including a bar, a library, and a cinema with 3,000 films. Energy comes from multiple renewable sources.

What to make of this undertaking? There is a surreal quality to the designers’ efforts to simulate a familiar environment, which presumably have less to do with the realities of post-apocalyptic life than with marketing to potential buyers. The swimming pool area is adorned with artificial boulders and umbrellas (yes, underground umbrellas), there are classrooms where children can continue their school syllabus (because who knows when those credentials will come in handy), and each client is provided with pre-downloaded Internet content based on keywords of their choice. You can even push a shopping trolley around a pitiful approximation of a supermarket. Honestly, it’s like the world never ended, except that a lack of space for toilet paper means you have to use bidet toilets.

But there is another way to look at these pretences of normality. Dystopian projections tend to reflect the social conditions from which they emerge: just as my own dreams of self-sufficiency are no doubt the standard insecurities of an alienated knowledge worker, luxury bunkers are merely an extension of the gated communities and exclusive lifestyles that many of the super-wealthy already inhabit. As Rushkoff suggests, these escape strategies smack of an outlook which has been “rejecting the collective polity all along, and embracing the hubristic notion that with enough money and technology, the world can be redesigned to one’s personal specifications.” From this perspective, society already resembles a savage horde lurking beyond the gate. Perhaps the ambition of retreating into an underground pleasure palace defended by armed guards is less a dystopia than a utopia.

One of Britain’s own nuclear refuges from the Cold War era – a vast underground complex in Wiltshire, including sixty miles of roads and a BBC recording studio – will apparently be difficult to convert into a luxury bunker because it has been listed as a historic structure. On the other hand, I suppose the post-apocalyptic property developers could charge extra for a heritage asset.

The overlap between escaping catastrophe and simply abandoning society is even more evident in the “seasteading” movement, which aims to create autonomous cities on the oceans. This project was hatched in 2008 by Google engineer Patri Friedman, grandson of the influential free-market economist Milton Friedman, with funding from tech investor Peter Thiel. The idea was that communities floating in international waters could serve as laboratories for new forms of libertarian self-governance, away from the clutches of centralised states. But as the movement evolved into different strands, the rhetoric became increasingly apocalyptic. A group called Ocean Builders, for instance, has presented the floating homes it is designing in Panama as a “lifeboat” to escape disasters such as the Covid pandemic, as well as government tyranny.

These “SeaPods” have much in common with the luxury bunkers back on terra firma. Designed by Dutch architect Koen Olthuis, they consist of a streamlined capsule elevated above the surface of the ocean, with steps leading down the inside of a floating pole to give access to a small block of underwater rooms. They are imagined as lavishly crafted, exclusive products, reminiscent of holiday retreats, with autonomous supplies of energy, food and water. The only problem is that such designs need to be trialled in coastal waters, and for some reason governments have not been very receptive to anarcho-capitalist tax-dodgers trying to establish sovereign entities along their shorelines.

But entertaining as it is to ridicule these schemes, there is a danger that it becomes a kind of avoidance strategy. A far-reaching social collapse is not actually a far-fetched possibility (on a regional level, it has already occurred numerous times in living memory), and even the United Nations has embraced the speculative prepper mindset. Anticipating the potential effects of climate change, the UN is backing another version of seasteading, with modular floating islands designed by the fashionable architect Bjarke Ingels.

All civilisations must come to an end eventually, and ours is fairly fragile. In complex systems like those we rely on for basic goods and materials, a breakdown in one area of the network can have dramatic destabilising effects for the rest. We have already seen glimpses of this with medical shortages during the pandemic, and soaring energy costs due to the Ukraine war. How far we will fall in the event of a sudden collapse depends on the back-up systems in place. One of the more intriguing figures offering emergency retreats for the wealthy, the American businessman J.C. Cole, is also trying to develop a network of local farms to provide a broader population with a sustainable food supply. Cole witnessed the collapse of the Soviet Union in the early 1990s, and inferred that Latvia experienced less violence because people were able to grow food on their dachas.

But perhaps the most unnerving prospect, as well as the most likely, is that collapse won’t be sudden or dramatic. There won’t be a moment to rush into a bunker, or to use emergency mechanic skills acquired from YouTube. Rather, the fabric of civilised life will fray slowly, with people making adjustments and improvising solutions to specific problems as they appear. Only gradually will we transition into an entirely different kind of society.

In South Africa, the country where I was born and where I am writing now, this process seems to be going on all the time. There are pockets of extraordinary wealth here, but public infrastructure is crumbling. The power cuts out for several hours every day, so many businesses have essentially gone off-grid with their own generators. The rail network has largely disappeared. Private security companies have long since replaced various policing functions, even among the middle class, and better-organised neighbourhoods coordinate their own patrols. Meanwhile in poorer areas, people still inhabit a quasi-modern world of mobile phones and branded products, but face a constant struggle with badly maintained water, electricity and sewage systems. Housing is often improvised and travel involves paying for a seat on a minibus.

The point is not that South Africa is collapsing – it still has a lot going for it, and besides, most of its population has never enjoyed first-world comforts – but that this is how it might look if, like many civilisations in the past, the advanced societies of today were to “collapse” gradually, over generations. It would be a slow-motion version of the polarisation between survivors and rejects that we see in the escape plans of the super-rich. And though we would realise things are not as they should be, we would keep hoping the decline was just temporary. Only in the distant future, after a new civilisation had arisen, would people say that we lived through a kind of apocalypse.

Anyway, merry Christmas everyone.


Infinite Style

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Ever since Charles Frederick Worth sat down to have his picture taken wearing a dark beret, thereby inviting comparisons with Rembrandt, we have liked to think of fashion design as a glamorous art form.

Worth, whose photograph also reveals a drooping moustache and fur collar, is generally regarded as the first modern fashion designer. He made gowns for the high-society women of late-19th century Europe, and greased the wheels of a nascent consumerism; some of his clients, like France’s Empress Eugénie, boasted about never wearing the same dress twice.

Since then high fashion has featured a long series of artistic visionaries, from the ingenious Coco Chanel to the stylistic rebel Alexander McQueen. The names of such designers, immortalised as luxury brands, confer a sense of prestige long after their death. Like famous artists, designers can command staggering prices: earlier this year, a limited run of 200 Nike Air Force 1 sneakers, designed by the recently deceased Virgil Abloh, fetched $25 million at Sotheby’s. One can judge the high status of the profession by the fact that celebrities become designers almost as often as they become authors.

But the aura surrounding the most celebrated designers gives a misleading impression of the fashion business. It is a feature of most design forms that artistic creativity and craft are constantly being outpaced by technological change, and fashion is no exception.

Today the world’s biggest fashion retailer is a company founded by a former search-engine specialist. That company is Shein, launched by Chris Xu in 2008 and based in Nanjing, China. It has no stores and no style to call its own, but it does have a shrewd understanding of online culture and an innovative logistics operation.

In the last few years, Shein has developed a model of “real-time fashion.” It uses algorithms to trawl the oceans of social media content, picking up new trends as they emerge. These styles are imitated by its design team, before sophisticated supply-chain software assigns orders to workshops in Guangdong. New items can be turned around in just ten days, and as many as 6,000 of them appear on Shein’s website daily (it is the most visited fashion site in the world, despite the company not even selling its goods in China itself). These bargain-priced clothes are marketed on TikTok and YouTube by paid celebrity influencers like Katy Perry, and by legions of Gen-Z girls who film themselves trying on their latest “Shein haul”.

There is barely a trace of the traditional fashion designer in this operation: no artistic vision or coherent identity. In a kind of postmodern twist, the task of generating new fashions has been outsourced to the consumers, who influence one another on social media to build up their own personal brands. Shein just picks the winning horses and encourages the trends to take off. It still relies on the artistic efforts of others – the company has stolen ideas from a bewildering array of artists and designers – but only once the virtual hive-mind has hinted at their desirability.

This is certainly a case of technology crowding out creativity. Big-data analytics and cloud-based software fill roles that once needed an aesthetic sense and an intuition for where culture is headed. But the major technological change behind all of this relates to social media, which has enabled a shift of authority from designers to influencers.

This shift has been a long time in coming, since modern commercial design has always been bound up with the mimetic aspect of culture, or the desire to copy those with status. Josiah Wedgwood, the British ceramicist and entrepreneur, noted in the late-18th century that after a member of the royal family began using one of his ranges, it rapidly spread “almost over the whole Globe.” He concluded that influential patrons could be “as necessary to the sale of an article of Luxury, as real Elegance and beauty.”

No design form has exploited this insight more effectively than fashion. Charles Worth made his name through the patronage of glamorous figures like Empress Eugénie, while his successors benefited from a mass media that publicised what the smart set was wearing. Eventually fashion grew entwined with an advertising industry whose strategy was to associate a certain look with social status and sex appeal. Ad campaigns since the 1960s have generally been one long procession of the young, thin, carefree, glamorous and wealthy.

In other words, fashion has long relied on our desire to be like other people. Social media marks the ultimate triumph of this mimetic impulse, allowing influencers to cast a charismatic spell over their followers. Over the past decade, brands have increasingly been tapping into that relationship for marketing purposes, and who can blame them: in October last year the Chinese influencer Li Jiaqi, famous for his make-up styling videos, sold $1.9 billion worth of goods during a single live stream.

But influencers are not blank canvases on which advertisers can paint their own messages. They construct their personas with distinct aesthetics, lifestyles and political causes. As such, they encroach on the creative agency of designers in a way that celebrities and models in earlier decades did not. Shein is only the most dramatic example of a brand allowing social media to determine its design vision; even at the most exalted fashion houses, influencers get involved in the design of the garments they wear, and occupy the front row of runway shows.

Shein has also revealed that influencing is becoming something much more habitual and amorphous. It is a common culture in which young people participate, seeking recognition and generating their own fashions and norms. The countless “Shein haul” videos on social media are a theatrical genre in their own right, a kind of ritual where would-be internet celebs copy each other and seek to be copied in turn. Shein’s key insight is that in this world, the designer’s vision counts for little. The goal is to provide an abundance of cheap props for the online theatre, and to follow the plotline wherever it leads.

Nonetheless, the ad campaigns of luxury brands did pave the way for Shein. By wrapping consumers in a cocoon of aspirational imagery, they fuelled a culture of appearances with little regard for the quality of products or the people who manufacture them. Hence Shein’s garments are apparently no less desirable for being poorly made in dubious working conditions. As for the supposed environmentalism of Gen-Z, it clearly doesn’t outweigh the shame of being called “cheugy,” or out of date.

Altogether, Shein’s real-time fashion revolution points to a less interesting designed environment in the future. Data harvesting and trend reports will increasingly tell designers exactly what we want. That sounds nice, but as the rise of influencers has underscored, what we want is usually what someone else has told us we should want. Expecting designers to follow the herd deprives us of their imagination and insight. We should value design for challenging us, not pandering to us. 

But by the same token, these developments will only enhance the prestige of the artistic designer. The guilty secret of artists is that their appeal depends, to a large degree, on the homogenising forces of mass culture that always appear to be pushing them to the margins. Living amidst an ocean of generic products sharpens our eye for creative brilliance, and turns it into a mark of refinement – especially as ambitious design tends to come with a higher price tag. Besides, the cult of personality bred by the Internet will only feed the romance of celebrity designers, who are themselves becoming social media stars.

So while design in general will be ruled by algorithms and rapidly shifting trends, artistic products will continue their ascent to the status of coveted luxury commodities. Or to put it another way, there will always be space for a few original designers in the top rung of influencers.


The Rise and Fall of the Creative Class

This essay was first published at The Pathos of Things newsletter. Subscribe here.

There is nothing inherently creative about vintage furniture, repurposed industrial materials or menus in typewriter font, but if you found yourself in a coffee shop with all of these elements present, then “creative” would be a common way to describe the vibe. Even more so if there was a yoga studio upstairs and a barista with neck tattoos. 

This visual language could also be called trendy or hipster, but the connotations are much the same. It is meant to invoke the imagined lifestyle of an urban creative – someone in or around the arts and media crowd – as it might have looked in Hackney or Williamsburg circa 2010. It signifies an attitude that is cultured but not elitist, cosmopolitan but not corporate, ethical but not boring, laid-back but still aspirational. In its upmarket versions (think more plants, more exotic words on the menu), the “creative” idiom implies a kind of refined hedonism, an artistic appreciation of beautiful and meaningful experiences.

Whether creatives can actually be found in such settings is beside the point, for once a lifestyle has been distilled into aesthetics it can be served to anyone, like an espresso martini. Indeed, the generic symbols of the creative lifestyle – suspended ceiling lights with large bulbs and metal hoods are an obvious example – have now spread everywhere, into Pizza Express restaurants and bankers’ apartments. 

The strange thing is that this triumph of the creative class in the realm of cultural capital has gone hand in hand with its economic evisceration. If you did see an actual creative in our imagined coffee shop – a photographer perhaps, or dare I say a writer – he or she would most likely be working frantically on a laptop, occupied with some form of glorified gig-economy job, or struggling to get a beleaguered business off the ground, or grinding away at a commercial sideline that squeezes out actual creative work.

Everyone wants to buy into the dream of the creative lifestyle, or at least drink cocktails in a place that invokes it, but for most creatives this is as much a fantasy as it is for everyone else.

If there is one institution that can help us understand this state of affairs, it is the Soho House network of private members’ clubs. Founded by the restaurateur Nick Jones, the first Soho House opened in 1995 on London’s Greek Street. It joined a number of exclusive new venues aimed at arts and media professionals, offering, as its website tells us, a place for “like-minded creative thinkers to meet, relax, have fun and grow” – or at least those deemed worthy of membership. In 2003, a Soho House opened in New York’s Meatpacking District, one of the first steps in a dizzying expansion which has seen some forty members’ clubs appear everywhere from West Hollywood to Barcelona, Miami to Mumbai.

In terms of aesthetics, Soho House did a lot to define the “creative” style. Ilse Crawford’s interior design for the New York venue became a landmark of sorts. Ranging over six floors of a converted warehouse, whose raw industrial features were emphasised rather than played down, it announced the hipster affinity for obsolete, forgotten or simply nostalgic spaces. A bedroom where graffiti had been left on the wall was apparently a members’ favourite. Crawford’s furnishings likewise set the trend for combining antiques, modern design classics and found objects in an eclectic approach that tried to be both modern and comfortable.

For all its apparent authenticity and bohemian flavour, this style has since been exported around the world as seamlessly as McDonald’s, not least within the Soho House empire itself. The brand prides itself on giving every venue a local accent, but it has really shown the uncanny way that a design formula can make very different settings look the same.

In my reading, what all this illustrates is the emergence of a new creative elite – film producers and actors, fashion designers and models, publishers and magazine editors, musicians, advertising executives and so on – whose ambition and self-confidence were such that they did not want to merge with the existing circles of privilege. The exclusivity of these members’ clubs, buttressing the special status of “creativity,” was not about keeping the plebs out. It was about drawing a distinction with the philistines of the City of London and Wall Street, and with the stale elitism of Old Boys Clubs.

“Unlike other members’ clubs, which often focus on wealth and status,” explained the Soho House website a few years ago, “we aim to assemble communities of members that have something in common: namely, a creative soul.” When they first appeared, the brand’s distinctive aesthetics drew a contrast, above all, with the slick corporate interiors of luxury hotels in the 1990s. “No suits” was both the dress code and a principle for assessing membership applications, and those deemed “too corporate” have on occasion been purged. Another functionally equivalent measure was the “no-assholes rule,” though this did not stop Harvey Weinstein from winning a place on the initial New York membership.

But crucially, there is a wider context for the appearance of this creative elite. The first Soho House opened against the backdrop of rising excitement about the “creative industries,” a term adopted by Britain’s New Labour government in 1998. This idea was hopelessly baggy, grouping together advertising and architecture with antiques and software development. Nonetheless, it distilled a sense that, for the post-industrial societies of the west, the future belonged to those adept in the immaterial realms of communication, meaning and desire. In technical terms, the organising principle for this vision was to be the creation and control of intellectual property.  

Economists insisted that, beyond a certain income threshold, people wanted to spend their money on artistic and cultural products. A supporting framework of information technology, university expansion, globalisation and consumer-driven economic growth was coming into view. And a glimpse of this exciting future had already appeared in the Cool Britannia of the 1990s, with its iconoclastic Young British Artists, its anthemic pop bands, its cult films and edgy fashion designers. 

Institutions like Soho House provided a new language of social status to express these dreams of a flourishing creative class, a language that was glamorous, decadent, classy and fun. The song Ilse Crawford used when pitching her interior design ideas – Jane Birkin and Serge Gainsbourg’s 69 Année Érotique – captures it nicely, as does a British designer’s attempt to explain the club to the New York Times: “Think of the Racquet Club but with supermodels walking through the lobby.” The appeal of this world still echoes in the fantasy of the creative lifestyle, and surely played a part in persuading so many in my own generation, the younger millennials, to enter creative vocations.

So what happened? Put simply, the Great Financial Crisis of 2007-8 happened, and then the Great Recession, rudely interrupting the dreams of limitless growth to which the hopes of the creative industries were tied. In the new economy that formed from the wreckage, there was still room for a small elite who could afford Soho House’s membership fees, but legions of new graduates with creative aspirations faced a very different prospect. 

Theirs was a reality of unpaid internships and mounting student debts, more precarious and demanding working lives, a lot of freelancing, poor income prospects, and reliance on family or second jobs to subsidise creative careers. Of course some of the pressures on young people in the arts and media, like high living costs and cuts in public funding, vary from place to place: there is a reason so many moved from the US and UK to Berlin. By and large though, the post-crash world starved and scattered the creative professions, squeezing budgets and forcing artists into the grim “independence” of self-promotion on digital platforms.

One result of this was a general revulsion at capitalism, which partly explains why artisan ideals and environmentalism became so popular in creative circles. But despite this scepticism, and even as career prospects withered, the creative lifestyle maintained its appeal. In fact, the 2010s saw it taking off like never before.

Young people couldn’t afford houses, but they had ready access to travel through EasyJet and Airbnb, to content through Spotify and Netflix, to a hectic nightlife through cheap Ubers, and they could curate all of these experiences for the world via Instagram. They could, in other words, enjoy a bargain version of the cultured hedonism that Soho House offered its members. The stage-sets for this lifestyle consumerism were the increasingly generic “creative” spaces, with their exposed brick walls and Chesterfield armchairs, that multiplied in fashionable urban districts around the world.

Perhaps the best illustration of this perverse situation is the development of the Soho House empire. Alongside its exclusive members’ clubs, the company now owns a plethora of trendy restaurant chains for the mass market. You can also get a taste of the Soho House lifestyle through its branded cosmetics, its interior design products, or a trip to one of its spas. With new membership models, freelancers can take their place among the massed vintage chairs and lamps of the brand’s boutique workspaces. There was even talk of going into student accommodation.

And so an institution that symbolised the promise of a flourishing creative class now increasingly markets the superficial trappings of success. As a kind of compensation for the vocational opportunities that never materialised, creatives can consume their dreams in the form of lifestyle, though even this does not make them special. The 2010s were also the decade when the corporate barbarians broke into the hipster citadels, occupying the clothes, bars and apartments which the creative class made desirable, and pricing them out in the process.

In one sense, though, the “creative industries” vision was correct. Intellectual property really is the basis for growth and high incomes in today’s economy; see, for instance, the longstanding ambition of the Chinese state to transition “from made in China to designed in China.” But the valuable intellectual property is increasingly concentrated in the tech sector. It is largely because IT and software are included that people can still claim the creative industries are an exciting area of job creation.

The tech world is, of course, a very creative place, but it represents a different paradigm of creativity to the arts and media vocations we inherited from the late-20th century. We are living in a time when this new creativity is rapidly eclipsing the old, as reflected by the drop in arts and humanities students, especially in the US and UK, in favour of STEM subjects. Whether tech culture will also inherit the glamour of the declining creative milieu I can’t say, but those of us bred into the old practices can only hope our new masters will find some use for us.


How We Got Hooked on Chips

This essay was first published at The Pathos of Things newsletter. Subscribe here.

As I am making the final edits to this article, the media are reporting that Chinese fighter jets are flying along the narrow strait separating China from Taiwan.

This dramatic gesture, along with other signals of military readiness, raises the spectre of a catastrophic conflict between the world’s two superpowers, China and the United States. I just hope that by the time this is published, you don’t have to read it in a bunker.

The immediate reason for this crisis is an official visit on Wednesday by Nancy Pelosi, the Speaker of the US House of Representatives, to Taiwan, an island that China claims as part of its territory. But remarkably, one of the deeper sources of this tension is a story about design.

Taiwan has something that every powerful nation on the planet wants, and needs access to. It has the Taiwan Semiconductor Manufacturing Company (TSMC), an industrial enterprise that manufactures half of the world’s semiconductors, or computer chips. More importantly, TSMC makes the vast majority of the most advanced logic chips (only South Korea’s Samsung can produce them to a similar standard). These advanced chips provide the computing power behind our most important gadgets, from smartphones and laptops to artificial intelligence, cloud software and state-of-the-art military technology.

Pelosi’s incendiary visit this week reflects a striking fact about the early-21st century: the world’s most powerful states cannot produce for themselves stamp-sized electronic components made in a factory. The precariousness of this situation has hit home in recent years, as trade wars, lockdowns and supply chain disruptions have created a global semiconductor shortage. This cost the car industry an estimated $210 billion last year, while slashing Apple’s output by up to 10 million iPhones. Chip shortages are a major obstacle to US efforts to supply Ukraine with weapons (a Javelin rocket launcher uses around 250 semiconductors).

So why don’t states just make their own advanced chips? They are trying. The US and the European Union are each offering investments of around $50 billion for domestic semiconductor manufacturing. This will largely involve subsidising Intel, the last great American hope for advanced chip-making, in its ambitions to catch up with its rivals in Taiwan and South Korea. Meanwhile China, rapidly gaining ground in the chip race despite US efforts to hamper it, is spending much more than that.

It would be comforting to think that all this is just the result of complacency: that western governments did not realise their dangerous dependence on Taiwanese chips, and will now lessen that dependence. This would allow us to feel slightly more confident that a face-off over Taiwan will not provide the spark for World War Three.

The reality is more sobering. None of the plans being laid now appear likely to diminish the importance of Taiwan. What is more, TSMC represents only one aspect of the west’s dependence on Asia for the future of its chip industry. To understand why, we have to look at how semiconductors became the most astonishing design and engineering achievement of our age.

The surface of a computer chip is like a city, only it is measured not in miles, but in microns. This city is not made from buildings and streets, but from transistors etched in silicon: hundreds of millions of them carefully arranged in every square millimetre. Each transistor is essentially just a gate, which can be opened or closed to regulate an electric current. But billions of transistors, mapped out in a microscopic architecture, can serve as the brain of your smartphone.
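To make that idea a little more concrete, here is a toy sketch in Python — my own illustration, vastly simplified and nothing like real chip design — of how mere on/off gates add up to computation: a NAND gate modelled as two switches in series, and a one-bit adder composed purely of NANDs. Stack up billions of such gates and you have the “brain” in question.

```python
def nand(a: int, b: int) -> int:
    # Two transistor-switches in series: the output is pulled low
    # only when both inputs are on; otherwise it stays high.
    return 0 if (a and b) else 1

# Every other logic function can be composed from NAND alone.
def xor(a: int, b: int) -> int:
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a: int, b: int) -> int:
    return nand(nand(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two one-bit numbers, returning (sum, carry)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is 10 in binary
```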

Recognising our dependence on such artefacts is unnerving. It reveals how the texture of our lives is interwoven with economic and technological forces we can barely comprehend. Semiconductors are everywhere: in contactless cards, household appliances, solar panels, pacemakers, watches, and all kinds of medical equipment and transportation systems. Aside from the logic chips that provide computing ability, semiconductors are needed for functions like memory, power and remote connection. They are the intelligent hardware underpinning the entire virtual universe of the Internet: we think and feel and dream in languages that semiconductors have made possible.

All this is thanks to a somewhat esoteric doctrine known as Moore’s Law. In 1965, Gordon Moore, then research director at Fairchild Semiconductor, predicted that the number of transistors on a computer chip would double every year, an estimate he later revised to every two years. This was, in effect, a prediction of the exponential growth of computing power, as well as its falling cost, and it has proved remarkably accurate to this day. The strange thing is that Moore’s Law has no hard scientific underpinning: it was simply an extrapolation based on Moore’s observations of the early semiconductor industry. But his “law” has become an almost religious mission within that industry, a prescribed rate of progress that every generation of designers and engineers seeks to uphold.
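For a sense of what that extrapolation implies, here is a minimal sketch of the arithmetic in Python. The only inputs are the two-year doubling rule and the commonly cited figure of roughly 2,300 transistors for Intel’s first microprocessor in 1971; this is an illustration, not an industry model.

```python
def moores_law(start_count: float, start_year: int, year: int,
               doubling_period: float = 2.0) -> float:
    """Projected transistor count, assuming a doubling every two years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Intel's 4004 microprocessor (1971) held roughly 2,300 transistors.
for year in (1971, 1991, 2011, 2021):
    print(year, f"{moores_law(2_300, 1971, year):,.0f}")
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is indeed the order of magnitude of today’s largest chips.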

The result has been an incredible series of innovations, allowing transistor density to keep increasing despite regular claims that the laws of physics won’t allow it. Moore’s Law gives semiconductor production its defining characteristic: chips are never something that you can learn how to make once and for all. Tomorrow’s chips are always just around the corner, and they will probably require a whole new technique.

This is why, by the late 1980s, it was becoming financially impossible for the great majority of firms to manufacture advanced chips. Doing so, then as now, means placing huge bets on new ideas that might produce the next breakthrough demanded by Moore’s Law. It means spending billions of dollars on a factory, which needs enough orders so that it can run 24/7 and recoup its costs. And it means doing this in the knowledge that everything will need to be upgraded or replaced in a matter of years.

The solution to this impasse came in 1987 with the founding of TSMC, assisted, ironically enough, by engineers and technology transfers from the United States. The Taiwanese company’s key innovation was to focus purely on manufacturing, allowing all the other firms that want to make chips to specialise in design. With its gigantic order book, TSMC makes enough money to continuously invest in new manufacturing techniques. Meanwhile, companies such as British-based ARM, and Apple and Qualcomm in the US, have focused on designing ever more revolutionary chips to be manufactured by TSMC.

With this basic division of labour in place, the semiconductor industry became a highly specialised, intensely competitive global enterprise. Designing new chips takes millions of research and engineering hours, and much of this work is done in India to make the process faster and cheaper (up-to-date statistics are hard to find, but by 2007 just 57% of engineers at American chip companies were based in the US). The semiconductors are made in Taiwan with Dutch machinery and Japanese chemicals and components, before being taken to China for testing, assembly and installation. And increasingly, the profits needed to keep everything going come from Asian consumers.

This is how Moore’s Law has been upheld, and how we have received ever-improving gadgets over the past three decades. But this chip-making system relies on each of its key points developing intense expertise in a specific area, to deliver constant techno-scientific progress and cost efficiency. TSMC is one of those key points, and its skills and experience cannot simply be copied elsewhere.

It is worth taking a moment to appreciate the mind-bending process, known as Extreme Ultraviolet Lithography, by which TSMC makes the most cutting-edge chips. It involves a droplet of molten tin, half the breadth of a single human hair, falling through a vacuum and being vaporised by a laser that fires fifty thousand times per second. This produces a burst of light which you cannot actually see, since its short wavelength renders it invisible. After bouncing off numerous mirrors, that light will meet the chemically treated surface of a silicon wafer, where, over the course of numerous projections, it will etch the billions of transistors that allow a chip to function. These transistors are hundreds of times smaller than the cells in our bodies, not much larger than molecules.

Now we can begin to grasp why America and Europe are unlikely to replicate TSMC on their own shores. In fact, the Taiwanese company already operates factories in the US, making less advanced chips; but as its founder Morris Chang recently revealed, the lack of industrial expertise there prevents it from being competitive. Chang called the whole idea of an American semiconductor revival “a very expensive exercise in futility.” Analysts have similarly poured scorn on the EU’s chip manufacturing ambitions. 

Given the scale of the challenge, the investments on offer in the US and EU are practically chump change: companies like TSMC and Samsung already spend sums of that size every single year. And those companies get more for their money too: according to one assessment, the cost of building and running a semiconductor plant in the US is one-third higher than in Taiwan, South Korea or Singapore. Another widely cited report estimated that if the US wanted semiconductor self-sufficiency, it would cost over $1 trillion, or twenty times the sum currently on offer.

As for the United Kingdom, the less said the better. The UK finds itself with a potentially pioneering semiconductor plant in Newport, Wales, but has allowed the research facility there to lapse into a storage area, and has been trying to sell it to a Chinese-owned company for several years. As Ed Conway concludes, the UK government’s semiconductor strategy is simply non-existent. Absurdly, the responsibility for devising one was given to the Department for Digital, Culture, Media and Sport.

In short, the US and Europe are a long way from making semiconductors at a scale and with a proficiency that would seriously reduce dependence on TSMC. It is not just about mastering the cutting-edge techniques of today; it is about generating enough revenue to research and develop the techniques of tomorrow. And it is not just about meeting current demand; it is about building capacity for a decade in which global demand is expected to double. Advanced chips will be needed for 5G gaming and video streaming, artificial intelligence and home offices.

I will leave it to the international relations people to assess what this means for US-China relations. The obvious conclusion is that western dependence on TSMC raises the stakes of a potential Chinese invasion of Taiwan, which in turn makes it more likely that the US will provoke China with its support for the island’s independence. But the picture is extremely knotty, given China’s own centrality to the business models of western companies, including chip designers.

What seems more clear is that the politics surrounding semiconductors in the west are highly misleading. Support for domestic chip-making has been tied into a narrative about moderating the excesses of globalisation and rebuilding industry at home. But in practice, we are talking about state subsidies for huge global companies which continue to rely on access to foreign labour markets and consumers. This contradiction is neatly captured by Intel banging the drum for more government investment, while simultaneously lobbying to ensure it can continue taking its technology to China.

Yet this is only fitting, since semiconductors are emblematic of the contradictions of post-1990s globalisation. A system defined by economic openness and expansion ultimately concentrated power in the hands of those supplying the most important resources, whether it be technological expertise or cheap fossil fuels. And even if the system unravels, our dependence on the resources will remain. The events surrounding Taiwan this week are just another reminder of that.


Fake It ‘Til You Break It

This article was originally published by The Critic magazine in November 2022. Read it here.

On Friday the saga of Elizabeth Holmes will move one step closer to its conclusion. Holmes, founder of the ill-fated health tech company Theranos, was convicted of fraud and conspiracy at the start of this year, and she will now receive her sentence. This is bad news for the army of hacks, podcasters and documentary makers who have spent years making hay from the Theranos debacle, a story of a charismatic young woman who fooled a wide swathe of elite America with her vision of innovation as a force for good.

They shouldn’t worry. With impeccable timing, a new tale of investor credulity and disastrous personal ambition has burst into the headlines. Last week, the cryptocurrency exchange FTX dramatically imploded after details of its creative accounting techniques were leaked to a news website, and a rival exchange started a run on its digital tokens. Billions of dollars belonging to investors and customers evaporated more or less overnight. As the news unfolded, all eyes turned to Sam Bankman-Fried, the 30-year-old crypto whizz-kid and Effective Altruism guru who exercised close control over FTX and a powerful grip on the imagination of Silicon Valley investors.

Bankman-Fried has reportedly been taken into custody by authorities in the Bahamas, where FTX was based. I won’t comment on the legal implications of his demise. What we can say, though, is that between him and Holmes, we have mounting evidence that the cult of the disruptive genius at the heart of tech capitalism has become a danger to the public.

Bankman-Fried rose through the crypto world on a wave of high-minded talk and personal charm. He offered financial brilliance along with the image of a scruffy outsider, appearing on stage with Bill Clinton and Tony Blair in a t-shirt and shorts, playing video games during business calls and bragging about sleeping on a beanbag next to his desk. Closely associated with the Effective Altruism movement — a school of ethics that seeks the most rational ways of maximising human wellbeing — Bankman-Fried claimed he was getting stinking rich so he could give it away. He cemented his public profile by cooperating with lawmakers in Washington over crypto regulation, whilst sponsoring various sports teams and donating to the Democratic party. Naturally he also made time to hobnob with celebrities like the supermodel Gisele Bundchen, appointed to lead FTX’s environmental and social initiatives. 

The investors lapped it up. A now-deleted article on the website of Sequoia Capital, the venture capital firm that previously backed PayPal and Google, described the response of its partners when Bankman-Fried pitched to them: “I LOVE THIS FOUNDER”; “I am 10 out of 10”; “YES!!” The problem is, neither they nor anyone else beyond a tight circle of friends had the full picture of FTX’s financial dealings. If they had, they might have seen that customer deposits were being loaned to Alameda Research, another of Bankman-Fried’s companies, to shore up risky investments. In the end, the assets FTX held as collateral were mostly its own digital tokens, whose value crashed when panicked customers tried to withdraw their funds en masse.

Bankman-Fried’s rise and fall shows a more than passing resemblance to the case of Holmes and Theranos. She too cultivated a disruptive image, a Stanford drop-out turned founder at the age of nineteen, wearing black turtlenecks in a weird homage to Steve Jobs. Her promise of a revolutionary blood-testing technology, bypassing scary needles and replacing doctors’ appointments with a trip to Walmart, won her more illustrious supporters than it’s possible to enumerate. They included Barack Obama, Hillary Clinton, Henry Kissinger, Bill Gates, Rupert Murdoch and Betsy DeVos. As revealed by John Carreyrou in his 2015 Wall Street Journal exposé, and by a succession of witnesses at Holmes’ trial, she lied about the capabilities of her so-called Edison machine, keeping her supposedly ground-breaking tech cloaked in secrecy to hide its failures.

Prosecutors demanding a tough sentence for Holmes this week claim she “seized upon investors’ desire to make the world a better place in order to lure them into believing her lies”. We are already seeing similar claims of moral betrayal from Bankman-Fried’s supporters, including a hand-wringing Twitter thread by leading Effective Altruism philosopher William MacAskill. It would be comforting to think that Holmes and Bankman-Fried were just massive grifters — sociopaths preying on the goodwill of others — but this would be letting their backers and the system that produced them off the hook. The real story here is surely the lack of scepticism shown towards these celebrity entrepreneurs once their messianic image was established.

Holmes’ followers had no right to be astonished that Theranos turned out to be a dud, given that clinical scientists were raising red flags about the company’s secrecy long before it became a public scandal. Writing a New Yorker profile of Holmes in 2014, at the height of her fame, Ken Auletta described her explanation of the technology as “comically vague”. Did none of Theranos’ board members stop to ask why their illustrious colleagues included just two people with a medical licence?

With Bankman-Fried the signs were even more obvious. In a now-infamous Bloomberg interview, the founder described part of his business model as a magic box that people stuff money into simply because others are doing so, and which “literally does nothing” apart from generate tokens that can be traded based on the hypothetical potential of said box. The journalist Matt Levine paraphrased this theory as “I’m in the Ponzi business and it’s pretty good”, which Bankman-Fried admitted was “a pretty reasonable response”. 
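
Levine’s paraphrase can be restated as arithmetic. The toy sketch below (Python, with all numbers invented) marks a hypothetical box’s token up while fresh money arrives, then compares the marked “wealth” with the cash actually inside; the gap between the two is the Ponzi.

```python
# A toy "magic box" in the spirit of the interview; every number is invented.
# The box does nothing productive: its token is simply marked up as money flows in.

cash_in_box = 0.0
mark = 1.0        # quoted token price, resting on "hypothetical potential"
supply = 1_000    # tokens outstanding, held by participants

for inflow in [100.0, 200.0, 400.0]:  # three periods of fresh money, then it stops
    cash_in_box += inflow
    mark *= 2                         # hype: each round of buyers doubles the mark

paper_wealth = supply * mark
print(f"marked wealth: ${paper_wealth:,.0f}; cash in the box: ${cash_in_box:,.0f}")
# marked wealth: $8,000; cash in the box: $700 -- if every holder tried to exit
# at the mark, about 91 cents of each marked dollar would turn out not to exist.
```

Early sellers can realise the mark only because later buyers supply the cash, which is exactly the business Levine was naming.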

The entrepreneur Antonio García Martínez has an interesting historical take on the FTX fiasco, pointing out that charisma and speculation are typical in the early stages of a new technological paradigm, before it settles into a more stable, regulated status quo. “Innovation starts in mad genius and grift and bubbles,” he writes, “and ends in establishment institutions that go on to reject the next round of mayhem.” A good point, but hardly a reassuring one, given that there will always be another hyper-ambitious figure promising to open up a new frontier.

We know this because there is such obvious demand in American elite society for individuals who can legitimise tech capitalism, whether through their aura of personal brilliance or by demonstrating its potential for beneficial progress. The reverence for Steve Jobs and Elon Musk is proof of this, but Holmes and Bankman-Fried went a step further by presenting themselves as ambitious prodigies and evangelical do-gooders. Where did they learn this formula for adulation? It’s notable that both are themselves quintessential products of the elite: Holmes’ parents were Washington insiders; Bankman-Fried’s were Stanford law professors.

Ironically, the danger posed by such figures is not so much that they are “disruptive” as that they awaken a deeply conformist desire to worship the glamorous heralds of progress. A desire, it seems, that can make people rather gullible.

According to the latest reports, the collapse of Bankman-Fried’s crypto empire could affect as many as a million creditors, to say nothing of the individuals whose savings were implicated through institutional investors. Still, we can be grateful it happened now, and not at a point where cryptocurrency had grown large enough to pose a systemic threat to the financial system. As for Theranos, the testimony of patients who received false blood-test results ought to be warning enough about what a close call that was. How long will our luck last? Given the ever-growing role of technology in our lives, the next hyped-up young genius may cause more havoc still.

Design for Dictators

This essay was first published at The Pathos of Things newsletter. Subscribe here.

The 1937 World Fair in Paris was the stage for one of the great symbolic confrontations of the 20th century. On either side of the unfortunately titled Avenue of Peace, with the Eiffel Tower in the immediate background, the pavilions of Nazi Germany and the Soviet Union faced one another. The former was a soaring cuboid of limestone columns, crowned with the brooding figure of an eagle clutching a swastika; the latter was a stepped podium supporting an enormous statue of a man and woman holding a hammer and sickle aloft.

This is, at first glance, the perfect illustration of an old Europe being crushed in the antagonism of two ideological extremes: Communism versus National Socialism, Stalin versus Hitler. But on closer inspection, the symbolism becomes less clear-cut. For one thing, there is a striking degree of formal similarity between the two pavilions. And when you think about it, these are strange monuments for states committed, in one case, to the glorification of the German race, and in the other, to the emancipation of workers from bourgeois domination. As was noted by the Nazi architect Albert Speer, who designed the German structure, both pavilions took the form of a simplified neoclassicism: a modern interpretation of ancient Greek, Roman, and Renaissance architecture.

These paradoxes point to some of the problems faced by totalitarian states of the 1920s and 30s in their efforts to use design as a political tool. They all believed in the transformative potential of aesthetics, regarding architecture, uniforms, graphic design and iconography as means for reshaping society and infusing it with a sense of ideological purpose. All used public space and ceremony to mobilise the masses. Italian Fascist rallies were politicised total artworks, as were those of the Nazis, with their massed banners, choreographed movements, and feverish oratory broadcast across the nation by radio. In Moscow, revolutionary holidays included the ritual of crowds filing past new buildings and displays of city plans, saluting the embodiments of Stalin’s mission to “build socialism.”

The beginnings of all this, as I wrote last week, can be seen in the Empire Style of Napoleon Bonaparte, a design language intended to cultivate an Enlightenment ethos of reason and progress. But whereas it is not surprising that, in the early 19th century, Napoleon assumed this language should be neoclassical, the return to that genre more than a century later revealed the contradictions of the modernising state more than its power.

One issue was the fraught nature of transformation itself. The regimes of Mussolini, Hitler and Stalin all wished to present themselves as revolutionary, breaking with the past (or at least a rhetorically useful idea of the past) while harnessing the Promethean power of mass politics and technology. Yet it had long been evident that the promise of modernity came with an undertow of alienation, stemming in particular from the perceived loss of a more rooted, organic form of existence. This tension had already been engrained in modern design through the medieval nostalgia of the Gothic revival and the arts and crafts movement, currents that carried on well into the 20th century; the Bauhaus, for instance, was founded on the model of the medieval guild.

This raised an obvious dilemma. Totalitarian states were inclined to brand themselves with a distinct, unified style, in order to clearly communicate their encompassing authority. But how can a single style represent the potency of modernity – of technology, rationality and social transformation – while also compensating for the insecurity produced by these same forces? The latter could hardly be neglected by regimes whose first priority was stability and control.

Another problem was that neither the designer nor the state can choose how a given style is received by society at large. People have expectations about how things ought to look, and a framework of associations that informs their response to any designed object. Influencing the public therefore means engaging it partly on its own terms. Not only does this limit what can be successfully communicated through design, it raises the question of whether communication is even possible between more radical designers and a mass audience, groups who are likely to have very different aesthetic intuitions. This too was already clear by the turn of the 20th century, as various designers who tried to develop a socialist style, from William Morris to the early practitioners of art nouveau in Belgium, found themselves working for a small circle of progressive bourgeois clients.

Constraints like these decided much about the character of totalitarian design. They were least obvious in Mussolini’s Italy, since the Fascist mantra of restoring the grandeur of ancient Rome found a natural expression in modernised classical forms, the most famous example being the Palazzo della Civiltà Italiana in Rome. The implicit elitism of this enterprise was offset by the strikingly modern style of military dress Mussolini had pioneered in the 1920s, a deliberate contrast with the aristocratic attire of the preceding era. The Fascist blend of ancient and modern was also flexible enough to accommodate more radical designers such as Giuseppe Terragni, whose work for the regime included innovative collages and buildings like the Casa del Fascio in Como.

The situation in the Soviet Union was rather different. The aftermath of the October Revolution of 1917 witnessed an incredible florescence of creativity, as artists and designers answered the revolution’s call to build a new world. But as Stalin consolidated his dictatorship in the early 1930s, he looked upon cultural experimentation with suspicion. In theory Soviet planners still hoped the urban environment could be a tool for creating a socialist society, but the upheaval caused by Stalin’s policies of rapid industrial development and the new atmosphere of conservatism ultimately cautioned against radicalism in design.

Then there was the awkward fact that the proletariat on whose behalf the new society would be constructed showed little enthusiasm for the ideas of the avant garde. When it came to building the industrial city of Magnitogorsk, for instance, the regime initially requested plans from the German Modernist Ernst May. But after enormous effort on May’s part, his functionalist approach to workers’ housing was eventually rejected for its abstraction and meanness. As Stephen Kotkin writes, “for the Soviet authorities, no less than many ordinary people, their buildings had to ‘look like something,’ had to make one feel proud, make one see that the proletariat… would have its attractive buildings.”

By the mid-1930s, the architectural establishment had come to the unlikely conclusion that a grandiose form of neoclassicism was the true expression of Soviet Communism. This was duly adopted as Stalin’s official style. Thus the Soviet Union became the most reactionary of the totalitarian states in design terms, smothering a period of extraordinary idealism in favour of what were deemed the eternally valid forms of ancient Greece and Rome. The irony was captured by Stalin’s decision to demolish one of the most sacred buildings of the Russian Orthodox Church, the Cathedral of Christ the Saviour in Moscow, and erect in its place a Palace of the Soviets. Having received proposals from some of Europe’s most celebrated progressive architects, the regime instead chose Boris Iofan to build a gargantuan neoclassical structure topped by a statue of Lenin (the project was abandoned some years later). Iofan himself had previously worked for Mussolini’s regime in Libya.

If Stalinism ended up being represented by a combination of overcrowded industrial landscapes and homages to the classical past, this was more stylistic unity than Nazi Germany was able to achieve. Hitler’s regime was pulled in at least three directions, between its admiration for modern technology, its obsession with the culture of an imagined Nordic Volk (which, in a society traumatised by war and economic ruin, functioned partly as a retreat from modernity), and Germany’s own tradition of monumental neoclassicism inherited from the Enlightenment. Consequently there was no National Socialist style, but an assortment of ideological solutions in different contexts.

Despite closing the Bauhaus on coming to power in 1933, the Nazis imitated that school’s sleek functionalist aesthetic in their industrial and military design, including the Volkswagen cars designed to travel on the much-vaunted Autobahn. Yet the citizens who worked in these modern factories were sometimes provided with housing in the Heimatstil, an imitation of a traditional rural vernacular. Propaganda could be printed in a Gothic Blackletter typeface or broadcast through mass-produced radios. But the absurdity of Nazi ideology was best demonstrated by the fact that, like Stalin, Hitler could not conceive of a monumental style to embellish his regime that did not continue in the cosmopolitan neoclassical tradition inspired by the ancient Mediterranean. The cut-stone embodiments of the Third Reich, including Hitler’s imagined imperial capital of Germania, were projected in the stark neoclassicism of Speer’s pavilion for the Paris World Fair. It was only in the regime’s theatrical public ceremonies that these clashing ideas were integrated into something like a unified aesthetic experience, as the goose-stepping traditions of Prussian militarism were updated with Hugo Boss uniforms and the crypto-Modernist swastika banner.

Of course it was not contradictions of style that ended the three classic totalitarian regimes; it was the destruction of National Socialism and Fascism in the Second World War, and Stalin’s death in 1953. Still, it seems safe to say that no state after them saw in design the same potential for a transformative mass politics. 

Dictatorships did make use of design in the later parts of the 20th century, but that is a subject for another day. As in the western world, they were strongly influenced by Modernism. A lot of concrete was poured, some of it into quite original forms – in Tito’s Yugoslavia for instance – and much of it into impoverished grey cityscapes. Stalinist neoclassicism continued sporadically in the Communist world, and many opulent palaces were constructed, in a partial reversion to older habits of royalty. Above all though, the chaos of ongoing urbanisation undermined any pretence of the state to shape the aesthetic environment of most of its citizens, a loss of control symbolised by the fate of the great planned capitals of the 1950s, Le Corbusier’s Chandigarh and Lúcio Costa’s Brasília, which overflowed their margins with satellite cities and slums.

In the global market society of recent decades, the stylistic pluralism of the mega-city is the overwhelming pattern (or lack of pattern), seen even in the official buildings of an authoritarian state like China. On the other hand, I’ve recently argued elsewhere that various repressive regimes have found a kind of signature style in the spectacular works of celebrity architects, the purpose of which is not to set them apart but to confirm their rightful place in the global economic and financial order. But today the politics of built form feel like an increasingly marginal leftover from an earlier time. It has long been in the realm of media that aesthetics play their most important political role, a role that will only continue to grow.

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Crisis and Heroic Design

This essay was first published at The Pathos of Things newsletter. Subscribe here.

One of my favourite artefacts is a series of banknotes designed by Herbert Bayer in 1923, during Weimar Germany’s famous hyperinflation. This was the period when, as you might recall from the images in your school history textbook, the German currency devalued so dramatically that people needed wheelbarrows of money to buy a loaf of bread, and cash was a cheaper way to start a fire than kindling.

Bayer’s banknotes, which came in denominations from one million to fifty million Marks, are emblematic of how crises can stimulate innovative design. If it wasn’t for the unusual problem of needing to produce an emergency supply of banknotes, it is unlikely the State Bank of Thuringia would have commissioned Bayer, who was then still a student at the Bauhaus school of design. Bayer had no formal training in typography, but he did have some radical ideas involving highly simplified sans-serif numbers and letters, which he duly used for the banknotes. The descendants of those numbers and letters include the font you are reading right now.

This story resonates with an outlook we might call the heroic theory of design, where designers step up at moments of crisis to change the world. Typefaces don’t seem like a big deal, but Bayer’s ideas were part of a wider movement to radically rethink every area of design for the practical benefit of society as a whole. By 1926, he had developed a “universal alphabet” of lower-case-only, sans-serif letters, to make printing, typing and public communication more efficient and accessible. The Bauhaus (or bauhaus, as Bayer would have put it) was suffused with such urgent, experimental thinking, always framed as a response to the prevailing mood of crisis in Weimar Germany. This is part of the reason it remains the most influential design school in history, despite only operating for fourteen years.

The heroic theory is deeply appealing because it taps into the basic narrative of modern design: the promise of order in a world of constant change. The words “crisis” and “emergency” describe modernity at its most raw and contingent, a “moment of decision” (the original Greek meaning of crisis) when the shape of the future is at stake in a fundamental way. Crises therefore seem to be the moments when we are most in need of design and its order-giving potential, to solve problems and resolve uncertainty in an active, positive manner.

But to what extent can design actually play this heroic role in times of crisis, and under what conditions? This question is of more than academic interest, since we ourselves live in an era defined by multiple crises, from climate and pandemic to war, energy shortages, economic hardship and political turbulence. The German chancellor Olaf Scholz has even invoked the concept of Zeitenwende, “a time of transformation,” which was current during the Weimar years.

The eminent writer Alice Rawsthorn has responded with a new heroic theory of design, first labelled “design as an attitude” (the name of her 2018 book) and more recently “design emergency.” Rejecting the traditional image of design as a commercial discipline, Rawsthorn places her hope in resourceful individuals who find innovative answers to ecological, humanitarian and political problems. More broadly, she encourages us, collectively, to see crisis as an opportunity to actively remake the world for the better. There is even a link to Weimar, as Rawsthorn draws inspiration from another Bauhaus figure, the Hungarian polymath László Moholy-Nagy, who was Bayer’s teacher at the time he designed his banknotes.

The new breed of heroic designers includes the Dutch university student Boyan Slat, who crowd-funded an enormous initiative to tackle ocean pollution, and the duo of Saeed Khurram and Iffat Zafar, doctors who used video conferencing to deliver health care to women in remote Pakistan. Rawsthorn argues that, while many of our systems and institutions have shown themselves no longer fit for purpose, network technology is allowing such enterprising figures to find funding, collaborators and publicity for their ideas.

That point about the empowering potential of networks strikes me as crucial to the plausibility of this outlook. The Internet has definitely made it easier for individual initiatives to have an impact, but can this effect really scale enough to answer the crises we face? Going forward, the heroic theory hinges on this question, because history (including recent history) points in a different direction.

We need to draw a distinction between the general flourishing of creativity and ingenuity in times of crisis, and the most consequential design visions that are widely implemented. The latter, it seems, are overwhelmingly determined by established institutions. Take Bayer again; he could not have put his designs in the hands of millions of people without the assistance of a state bank, any more than he could have actually solved the problem of hyperinflation. Likewise, the immediate impact of the Bauhaus and of Modernism in general, limited as it was, depended on its ability to persuade big manufacturers and municipal governments to adopt its ideas. Margarete Schütte-Lihotzky’s groundbreaking Frankfurt kitchen, which I wrote about recently, owed its success to the boldness of that city’s housing program.

Innovation in general tends to unfold in stages, and often with input from numerous sources, big and small. But in times of crisis, which typically demand large-scale, complex initiatives on a limited timescale, institutions with significant resources and organisational capacity play a decisive role. Insofar as individuals and smaller movements make a difference, they first need the levers of power to be put in their hands, as it were.

Wars are famously powerful engines of innovation, precisely because these are the moments when the state’s resources are most intensely focused. Addressing the problems of infectious disease and abject living conditions in the 19th century required not just city planners and sanitation experts, but governments willing to empower their designs. No one expects plucky outsiders to develop vaccines or mitigate the effects of financial crises. Even on the longer time horizon of climate change, the development of renewable energy requires extensive government and corporate involvement, partly to combat the vested interests of the status quo. The crucial breakthrough may turn out to be the emergence of a “green industrial complex,” a new set of powerful interests with a stake in the energy transition.

This does not mean the answers arrived at in this way are necessarily good ones, and they will certainly bear the stamp of the power structures that produce them. This is why slum clearances in the mid-19th century produced cities designed for property investors, while slum clearances in the mid-20th century produced public housing. That said, it is not straightforward to work out what a good answer to a crisis actually is.

Though crises usually have an underlying material reality, they are ultimately political phenomena: a crisis comes into existence with our perception of it (the “fear itself” that Franklin Roosevelt spoke of during the Great Depression). Thus an “effective” solution is one that addresses perceptions, even if its material results are questionable. Rawsthorn understands this, as did the Modernists of the 1920s, for these approaches to design are about transforming worldviews as much as generating practical solutions. But ultimately, the political nature of crisis only reaffirms the importance of powerful institutions. For better or worse, there tends to be a Hobbesian flight to authority in times of emergency, a search for leaders who can take control of the situation.

Another observation which undermines the heroic theory is that the most important designs in moments of crisis are rarely new ones. As Charles Leadbeater points out in his fascinating comparison of British efforts during the Second World War and in the Covid pandemic (and hat-tip Saloni Dattani for sharing), effective answers tend to come from the repurposing of existing technologies and ideas. This too has a strong institutional component, since knowledge needs to be built up over time before it can be repurposed during a crisis.

By way of illustration, Leadbeater’s remarks about the UK’s failed efforts to design new ventilators in the midst of the pandemic are worth quoting at length:

Code-named Operation Last Gasp when the Prime Minister first enlisted British manufacturers to pivot their production lines from aero engines, vacuum cleaners and racing cars to ventilators, five thousand companies and 7,500 staff responded to the challenge to design new ventilators, in what was billed as a showcase of British engineering prowess. Companies such as Dyson and Babcock joined universities and Formula 1 teams only to find they were sent down blind alleys to design, from scratch, machines that clinicians would not use.

Those in the industry who suggested that it would be more sensible to produce more machines based on existing designs were eventually vindicated… The main usable innovation was a version of an existing British machine which was upgraded so it could be exported.

The 2,500 ventilators the UK procured from abroad were more sophisticated machines needed to sustain people in intensive care for weeks on end. The most famous manufacturer of those high-end machines is the family-owned German company Draeger, founded in Lübeck in 1889, which made the first-ever ventilator, the Pulmotor. The company’s latest product, the Pulmovista 500, visualises the flow of air through the lungs in such detail that clinicians can monitor it in real-time and make minute adjustments to the flow. The company’s chief executive, Stefan Draeger, is the fifth generation of the family to lead the company. You do not invent that kind of capability from scratch in a few weeks.

Even 1920s Modernism, the archetypal heroic design movement, did not emerge ex nihilo. Its foundations were laid in the years before the First World War, through the patronage of German industrial giants like AEG, and in the Deutscher Werkbund before that.

For Rawsthorn’s vision of crisis entrepreneurs to be realised on a bigger scale, network technology would have to replace this institutional development across time with individual collaboration across space. For all the power of open source databases and information sharing, I’m yet to be convinced this is possible.

It remains true, of course, that crisis design which fails to have an immediate impact can still be revolutionary in the longer term. The Bauhaus is an excellent example of this. But it’s interesting to note that the lasting effects of crises on design are not always predictable. The experience of popular mobilisation for the First World War persuaded the survivors of the power of mass media and propaganda. The idea of “built-in obsolescence” – making minor alterations to products so that consumers want to buy the newer version – was widely taken up in response to the Great Depression. Research undertaken during the Second World War led to a boom in the use of plastic materials. Covid, it seems, has prompted the mass adoption of remote working technologies. 

Crises pave the way for such shifts, because by definition, these are moments when we see our current reality as provisional. At times of crisis, like the one we are in now, no one believes that the future will look like the recent past; we have, unconsciously, prepared ourselves for dramatic change. In this space of expectation new forms of design can emerge, though we don’t yet know what they will be.

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Powerless Heritage

This article was originally published by The Critic Magazine in October 2022. Read it here.

Approaching the revamped Battersea Power Station on a sunny autumn morning, you find the area around the building dotted with slack-jawed visitors, peering skywards in awe through the lenses of their smartphones. This masterpiece on the Thames, designed by Giles Gilbert Scott in the early 1930s, has been shrouded by cranes and scaffolding for years, its quartet of cream-coloured chimneys a familiar but unapproachable part of the London skyline. Now, thanks to a consortium of Malaysian property developers and architecture firm WilkinsonEyre, Scott’s building has been restored in all its monumental glory.

The result really is sublime, a vast block of rhythmic brick facades and stepped parapets, resembling something between a cathedral and a fortress. Scott was a versatile architect, adept at combining historicist and modern styles, with an output ranging from Liverpool Cathedral to the red phone box (though he wanted it light blue). At Battersea Power Station, an industrial building somehow presents us with a synthesis of gothic, neoclassical and jazz-age rhetoric. For this we can thank the great British tradition of NIMBYism, since Scott was drafted in to beautify the power station after complaints from Westminster and Chelsea residents about their property values.

This Instagram-ready spectacle comes with a note of unease. For what purpose has this historic structure been so lavishly recreated? Bear in mind that, after being decommissioned in the early 1980s, it sat here for many years without a roof, amid a series of failed proposals to turn it into (among other things) a rubbish incinerator, a theme park and a football stadium.

The short answer is that heritage is being honoured here, but we ought to ask what exactly this means. The redevelopment clearly has little in common with maypoles and William Morris wallpaper. It is, rather, an orgy of commodification. The former power station is now squeezed between woozy sculptural buildings designed by starchitects Frank Gehry and Norman Foster, which are packed with luxury apartments, hotels and retail. Stepping inside Scott’s restored structure, you find a banquet of historical elements — including art deco fluted pillars, steel beams and exposed brickwork — encompassing what otherwise feels a lot like a duty-free shopping area, with brands like Rolex and Cartier taking the prime spots alongside Starbucks and Pret.

Elsewhere in the building, some 250 apartments, penthouses and rooftop villas are being sold for between £1 million and £18 million. Much of the 45,000 square metres of office space will be occupied by the new UK headquarters (sorry, “campus”) of tech giant Apple Inc.

The point of heritage is that people living in a particular place derive their shared identity, in part, from a connection with the history of that place. What we have here is more like zombie heritage, where the past is kept alive in the sterile form of a branded product. It even feels misleading to say the building has been repurposed. Really it has been resurrected as a kind of Madame Tussauds waxwork, to provide a themed backdrop for property investment and shopping.

This is most obvious in the architects’ obsessive attention to “authentic” details. Bricks were sourced from the original suppliers, who made them using traditional methods of hand-moulding and wire-cutting. In the former control rooms, retro panels of dials, buttons and levers have been meticulously restored as decoration for a cocktail bar and a private events space. Someone has even contrived a puff of smoke rising from one of the chimneys. The power station has become its own death mask. 

None of this should be surprising though, since Battersea is just the latest instance in a trend for turning heritage into exclusive real estate. A similar fate has ironically befallen East London’s brutalist landmark Balfron Tower, originally designed by the socialist architect Ernő Goldfinger. Balfron’s council tenants have now been booted out to make room for expensive “heritage flats”, adorned with 1970s period fittings and décor. Likewise, in Manhattan, the latest “skinny-scraper” on 57th Street, also known as Billionaires’ Row, flaunts an art deco profile and interiors, and uses as its base the restored 1920s Steinway Hall.

Back in London, the recent redevelopment of King’s Cross began with the conversion of a Victorian granary into Central Saint Martins art college, and the reinvention of 19th century brick warehouses as the retail complex Coal Drops Yard. The surrounding area now hosts the offices of various cutting-edge multinationals, including Meta and Google. The Battersea architects were involved here too, constructing luxury apartments inside the cylindrical iron frames of what were once gas holders.

It’s easy to imagine why the trappings of heritage might appeal to rich urbanites and corporations. Besides making wealth and power appear more humane, it is just much nicer to feel rooted in the history and fabric of a city than to look down at it, literally and figuratively, from a glass box in the sky. At the same time, because heritage is precious and scarce, it can still serve as a marker of status. This trend has doubtless prevented some fine buildings from being destroyed, and for that much we should be grateful. Marcus Binney, the conservationist who has done more than anyone else to save Battersea Power Station, is thrilled by that site’s redevelopment. He says it will now be “buzzing for years to come”, providing “a giant boost for schemes aimed at bringing Britain’s many industrial landmarks back to life”.

Buzzing or not, what we are seeing today is quite different from the old arrangement, where a historic setting would provide an attractive venue for a restaurant or business park. Heritage can maintain its civic value whilst also being a commercial asset, but increasingly, its civic value is the thing being commercialised. Developers use a whole language of place names, typefaces and design details to emphasise the presence of history, whilst only letting you access it as a consumer or a spectator of the lifestyles of the wealthy. This is privatisation in its metaphysical stage. 

These developments seem especially corrosive in a case like Battersea Power Station, which has a public character due to its monumental presence in the city. To see a better outcome you only need to follow the Thames to the former Bankside Power Station, also designed by Scott, which now serves as the Tate Modern gallery. Bankside has been repurposed in a much more imaginative way, respecting the original structure without treating it as sacred. This flows naturally from the fact that the Tate has a civic function to perform. The same could be said, for instance, about St Pancras International station, or the Camden Roundhouse, or any number of London’s historic museums. 

The new Battersea Power Station, by contrast, recreates an “iconic landmark” (as the estate agents have it) only to infuse it with a luxury ethos that, for all its charms, is anything but unique. The grandeur of Scott’s building, so inspiring at first sight, ultimately becomes another gimmick. 

The Lost Magic of the Seas

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Many British people will hear about Felixstowe for the first time this month, thanks to a planned workers’ strike that promises yet more economic pain. Located on the Suffolk coast, Felixstowe is the site of the UK’s biggest container port; almost half of the goods coming and going from our shores pass through here, stowed away in brightly coloured shipping containers that resemble enormous Lego bricks.

The absence of Felixstowe from the national vocabulary speaks volumes about the era we live in. Britain is an island after all, and its various port towns have been central to its history for centuries. Now we are more dependent on the sea than ever (around ninety percent of the world’s traded goods travel by ship), but we barely realise it.  

So what happened? That is the question I want to consider today, with the help of David Abulafia’s The Boundless Sea, an epic history of human activity on the ocean. One of the themes in this book is the relationship between the intimate and the global: how our sense of what is valuable or important is tied up with our impressions of the world at large. 

Container ports appear in the final, slim section of The Boundless Sea, where Abulafia describes the disappearance, since the 1950s, of the ancient maritime patterns he has detailed for some 900 pages. “By the beginning of the 21st century,” he writes, “the ocean world of the last four millennia had ceased to exist.”

Given the dramatic nature of this change – a mass extinction of seafaring cultures around the world – the treatment is strikingly brief. Then again, this is a useful reminder that modernity is a tiny slice of time containing enormous transformations.

Container ports symbolise this rupture from the past: mechanised coastal nodes where huge vessels, each bearing thousands of standardised containers, load and unload goods from around the world. In contrast to the lively port towns that litter The Boundless Sea, container ports “are not centres of trade inhabited by a colourful variety of people from many backgrounds, but processing plants in which machinery, not men, do the heavy work and no one sees the cargoes… sealed inside their big boxes.” Felixstowe, says Abulafia, is “a great machine.”


Who crossed the oceans before the container ships did? Polynesian navigators explored the vastness of the Pacific over millennia, with only the stars for a compass. Bronze Age Egyptians ventured down the Red Sea in search of frankincense and myrrh. Merchants in sewn-plank boats spread Buddhism and Islam in the southern Indian Ocean, even as Vikings set out from their Greenland farmsteads in search of narwhal tusks. In the early modern era, pirates, traders and profit-hungry explorers swarmed the coasts of Africa and the Americas. These examples are just a drop in the ocean of Abulafia’s sweeping narrative. 

But despite its enormous scope, there is a golden thread running through this book, uniting different eras and pulling continents together: the human desire for rare, beautiful, and exceptionally useful things.

The main protagonists of maritime history are merchants, since buying and selling has been the most common reason to cross the seas. But what is difficult to grasp today, when even the most mundane products have supply lines spanning the oceans, is the special value which has often been attached to seaborne goods, especially before the 18th century. Some cities, most famously Rome, did rely on short-distance shipping for basic needs like food. And some products, like English wool or Chinese ceramics, were crossing the water in large volumes centuries ago. But generally the risks and expenses of taking to sea, especially over large distances, demanded that merchants focus on the most sought-after goods. And conversely, goods were particularly precious if they could only be delivered by ship.

So seaborne cargoes show us what was considered valuable in the places they docked, or at least among the elites of those places. The human history of the oceans is in large part a catalogue of highly prized things: ornate weapons and exotic animals, spices and textiles, materials like sandalwood and ivory, or foodstuffs like honey, oil and figs. Of course that catalogue also includes human beings reduced to the status of objects, such as eunuchs, performers and slaves.

If the value of such things was generally financial for merchants, it took many forms in the cultures where they arrived. Before the ocean could be reliably traversed with steamships and (eventually) aeroplanes, foreign products bore the mystery of unknown lands. They often became tokens of social status, symbols of spiritual significance, or preferred forms of sensual pleasure and beauty. Ivory from African elephants and north Atlantic walruses was a treasured material for religious sculpture in medieval Europe, just as red Portuguese cloth was prized by West African elites in the 17th century.

This traffic in desirable objects made the world we know today. The European expansion that began in the late 15th century was driven by the prospect of delivering expensive goods in ever-larger quantities, making them accessible to an ever-larger market. These included products only available in East Asia, like silk, spices and high-quality ceramics, and those that could only be produced with slave labour in tropical climates, such as sugar, coffee and tobacco.

Once the Spanish had established a Pacific route between the Americas and the Philippines, the first truly global networks appeared. The volume of maritime trade began to grow, and one of the foundations of modern capitalism was in place. Abulafia aptly describes Chinese junks arriving in Spanish Manila as “the 16th century equivalent of a floating department store.” Among the items in their holds were “linen and cotton cloth, hangings, coverlets, tapestries, metal goods including copper kettles, gunpowder, wheat flour, fresh and preserved fruits, decorated writing cases, gilded benches, live birds and pack animals.”

But no less dramatic than the growing movement of goods, people and ideas was the emergence, for the first time, of a global consciousness. This is strikingly visualised by the maps that accompany each of the fifty-one chapters of The Boundless Sea. In the first half of the book, these maps show the relatively small regions in which maritime connections existed, with the exception of the world’s oldest trans-oceanic network in the Indian Ocean. In the second half, the maps zoom dizzyingly outwards, eventually incorporating the entire world. 

That world map is something we take for granted in an era of instant communication and accessible satellite imagery, but for most of history, huge swathes of the globe were completely unknown to any given group of people. To be fully aware of our species’ planetary parameters marks nothing less than a revolution in how human beings understand themselves. And one of the driving forces behind that revolution was the ambition to bring desirable (and profitable) things from across the ocean. 

But if trade underpinned seafaring ways of life throughout history, it finally led to their extinction. More and more shipping did not just make formerly exotic goods commonplace, it eventually made most states integrate their economies into a global marketplace, so that seafaring became more like a conveyor belt than a culture. This culminated in the container ships that now have the oceans almost to themselves, their efficiencies of scale rendering other forms of seaborne trade obsolete.

In the age of the container, most products do not even come from a particular place. They are devised, extracted, processed, manufactured and assembled in many different places, so as to achieve the lowest cost. Even things that do come from distant lands no longer have the same aura of the unfamiliar, since the world is now almost entirely visible through imagery and media. 

And that is where this story provides an important insight into the way we design, exchange and value objects today. In consumer societies, enormous resources are devoted to engineering desire, by making products appear uncommon and exclusive. We are used to thinking of this practice as peculiarly modern, and in many ways it is. But maybe we should also see it as an attempt to recreate something of the lost value that, for most of human history, belonged to things from across the ocean.

This essay was first published at The Pathos of Things newsletter. Subscribe here.