The Sublime Hubris of Tropical Modernism

This review was originally published by Engelsberg Ideas in April 2024.

In December 1958 an All-African People’s Conference was held in Accra, capital of the newly independent Ghana. It brought together delegates from 28 African countries, many of them still European colonies. Their purpose, according to Ghanaian prime minister Kwame Nkrumah, was ‘planning for a final assault upon Imperialism and Colonialism’, so that African peoples could be free and united in the ‘economic and social reconstruction’ of their continent. Above the entrance of the community centre where the conference took place, there was a mural which seemed to echo Nkrumah’s sentiment. Painted by the artist Kofi Antubam, it showed four standing figures along with the slogan: ‘It is good we live together as friends and one people.’

The building was a legacy of Ghana’s own recent colonial history. During the 1940s the UK government’s Colonial Development and Welfare fund had decided to build a number of community centres in what was then the Gold Coast. Most of the funding would come from British businesses active in the region, and the spaces would provide a setting for recreation, education and local administration. The Accra Community Centre, neatly arranged around two rectangular courtyards with colonnaded walkways, was designed by the British Modernist architects Jane Drew and Maxwell Fry. Antubam’s mural calling for amity reads somewhat differently if we consider the circumstances in which it was commissioned. The United Africa Company, the main sponsor of the project, was trying to repair its public relations after its own headquarters had been torched in a protest against price fixing.

The Accra Community Centre is emblematic of the ambiguous role played by Modernist architecture in the immediate post-colonial era. Like so many ideas embraced by the elites of newly independent states, Modernism was a western, largely European doctrine, repurposed as a means of asserting freedom from European rule. ‘Tropical Modernism’, a compelling exhibition at London’s V&A, tries to document this paradoxical moment in architectural history, through an abundance of photographs, drawings, letters, models and other artefacts.

Drew and Fry are the exhibition’s main protagonists, an energetic pair of architects who struggled to implement their vision in Britain but had more success in warmer climes. In addition to the community centre in Accra, they designed numerous buildings in West Africa, most of them educational institutions in Ghana and Nigeria. In the course of this ‘African experiment’, as Architectural Review dubbed it in 1953, they developed a distinctive brand of Modernism, of which the best example is probably Ibadan University in Nigeria. It consisted of horizontal, geometric volumes, often raised on stilts, with piers running rhythmically along their facades and, most characteristically, perforated screens to guard against the sun while allowing for ventilation.

On the basis of this work, Drew and Fry were invited to work on the planning of Chandigarh, the new capital of the state of Punjab in India, which had just secured its own independence from Britain. Here they worked alongside Le Corbusier, the leading Modernist architect, on what was undoubtedly one of the most influential urban projects of the 20th century. Drew and Fry also helped to establish Tropical Architecture courses at London’s Architectural Association and MIT in Massachusetts, where many architects from post-colonial nations would receive training.

Not that those students passively accepted what they were taught. The other major theme of the exhibition concerns the ways that Indian and Ghanaian designers adopted, adapted and challenged the Modernist paradigm, and the complex political atmosphere surrounding these responses. Both Nkrumah and Jawaharlal Nehru, India’s first prime minister, preferred bold and bombastic forms of architecture to announce their regimes’ modernising aspirations. This Le Corbusier duly provided, with his monumental capitol buildings at Chandigarh, while Nkrumah summoned Victor Adegbite back from Harvard to design Accra’s Black Star Square. In India, however, figures such as Achyut Kanvinde and Raj Rewal would in the coming decades forge their own modern styles, borrowing skilfully from that country’s diverse architectural traditions. At Ghana’s own design school, KNUST, it was the African American architect J Max Bond who encouraged a similar approach to national heritage, telling students to ‘assume a broader place in society, as consolidators, innovators, propagandists, activists, as well as designers’.

As is often the case, the most interesting critique came not from an architect, but an eccentric. In Chandigarh, the highway inspector Nek Chand spent years gathering scraps of industrial and construction material, which he secretly recycled into a vast sculpture garden in the woods. His playful figures of ordinary people and animals stand as a kind of riposte to the city’s inhuman scale.

One question raised by all of this, implicitly but persistently, is how we should view the notion of Modernism as a so-called International Style. In the work of Drew, Fry and Le Corbusier it lived up to that label, though not necessarily in a good way. Certainly, these designers tried diligently to adapt their buildings to new climatic conditions and to incorporate visual motifs from local cultures. In light of these efforts, it is all the more striking that the results still resemble placeless technocratic gestures, albeit sometimes rather beautiful and ingenious ones. We could also speak of an International Style with respect to the ways that these ideas and methods spread: through evangelism, émigrés and centres of education. It’s important to emphasise, though the V&A show doesn’t, that these forms of transmission were typical of Modernism everywhere.

By the 1930s, Le Corbusier was corresponding or collaborating with architects as far afield as South Africa and Brazil (and the latter was surely the original Tropical Modernism). Likewise, a handful of European exiles, often serving as professors, played a wildly disproportionate role in taking the International Style everywhere from Britain and the US to Kenya and Israel.

If Modernism was international, its Tropical phase shows that it was not, as many of its adherents believed, a universal approach to architecture, rooted in scientific rationality. Watching footage at the exhibition of Indian women transporting wet concrete on their heads for Chandigarh’s vast pyramids of progress, one is evidently seeing ideas whose visionary appeal has far outstripped the actual conditions in the places where they were applied. As such, Modernism was at least a fitting expression of the ill-judged policies of rapid, state-led economic development that were applied across much of the post-colonial world. Their results differed, but Ghana’s fate was especially tragic. A system where three quarters of wage earners worked for the state was painfully vulnerable to a collapse in the price of its main export, cocoa, which duly came in the 1960s. Nkrumah’s regime fell to a coup in 1966, along with his ambitions of pan-African leadership and the country’s Modernist experiment. Those buildings had signified ambition and idealism, but also hubris.

London: Zombie Capital

This essay appeared in my regular newsletter, The Pathos of Things, in November 2023. Subscribe here

In a world of mass media, we are ruled by the tyranny of comparison. We are surrounded by images of beauty and style, of fulfilment and success, that make us feel inadequate by contrast. Sometimes it is even an image of ourselves, a glimpse of what we once were or could have been, that we yearn to emulate.

What if something similar could happen to a city? Could an entire metropolis be oppressed by an idealised version of itself, as seen in films, advertising, and the imagination of the wider world? This seems to be the case in London today, and no doubt in other famous cities too. A caricature of the British capital, part period drama and part Richard Curtis romcom, has been sold to a global audience of nostalgic Anglophiles. And because London is where the UK welcomes the world, all the other twee fantasies of Englishness are expected here as well. Increasingly, the city is being warped under the pressure of its own whimsical image.

Consider the Londoner, a new themed resort in Macau, off the south coast of China. Visitors are treated to every English cliché imaginable, from Scotch eggs and scones to David Beckham and the Spice Girls, not to mention replicas of various London landmarks. A correspondent for the Times describes it thusly:

As the actor playing Her Majesty waves demurely from a balcony, Grenadier Guards and a few Metropolitan Police bobbies dance to fanfare all around the Crystal Palace — a glass-topped atrium inspired by the building which once adorned Hyde Park. All the while, hundreds of people delightedly video this twice-daily performance… from either side of a replica of Eros from Piccadilly Circus.

I make pretend calls from red telephone boxes before boarding an imported, 1966 Routemaster bus. Looming near that is a full-size duplicate of the Elizabeth Tower, aka the home of Big Ben; behind, the building’s intricate lower façade apes the Houses of Parliament. Airport transfers involve vintage Rolls-Royces.

An obvious farce, yes, but a farce that many people love. Besides, the Londoner is far from the only simulacrum of British culture in China. Imitations of prestigious public schools and colleges have been cropping up around the country. One university in Hebei is modelled on Hogwarts.

The TV and film industries are of course the greatest purveyors of fantasy Britannia. Much as western audiences like exotic portrayals of the East, foreign audiences like old-fashioned portraits of England and its capital city, normally involving the upper class. Global hits include Downton Abbey and The Crown, Harry Potter and James Bond, and of course Titanic and Notting Hill. When I arrived in Shanghai two years ago, the first person I spoke to referenced the 1980s comedy Yes Minister, which seemed rather niche, but I think it still proves the point.

It just so happens that many Europeans, Brits included, like this picture of Englishness too, but the enormous markets beyond Europe would be enough to justify it in commercial terms. As the FT’s Stephen Bush puts it, “left to the market alone, the UK depicted on screen will look rather like the India presented by The Best Exotic Marigold Hotel: not a country that any of its citizens would readily recognise, but one that reflects foreign customers’ idea of it.”

What goes for the screen increasingly goes for London itself, as the city is transformed to realise the commercial potential of its brand. Its historic centre has become a theme park rather like the one in Macau, catering to tourism, shopping and hospitality to the extent that it no longer really seems to belong to the city. Most Londoners can’t afford a drink in these areas, let alone to live or run a business there. The capital’s most famous monuments, including the buildings where the country is governed, are so tied up with marketing and merchandise that it’s sometimes surprising to be reminded they are real places.

The irony, of course, is that much of London’s living history – such as the independent bars and shops which thrived in Soho even a decade ago – has been strangled by this process. It is being replaced by something I’ve previously called zombie heritage, whereby the city’s architecture comes to resemble a series of waxworks recreating an older London for commercial purposes. In many cases, such as the revamped Battersea Power Station (luxury apartments and shopping) or the recent redevelopment of King’s Cross (prestigious art school, corporate offices and shopping), these new “old” places have essentially been designed as giant magnets for global wealth.

And make no mistake, there is money to be made from Anglomania. In 2016, an academic at the London School of Economics estimated that the Harry Potter franchise generated £4 billion for London in that year alone. The question is whether such returns can justify, in economic or social terms, surrendering much of the capital to tourism and novelty consumption.

In a recent essay, Deyan Sudjic has also noted the “creeping fossilisation” of London, as the city’s productive capacities are crowded out by the hawking of heritage. Sudjic acidly observes that “the Old War Office building on Whitehall, from which Winston Churchill led the defence of Britain against Hitler, has become the Raffles OWO hotel where bed and breakfast starts at over £1,000 a night.” Meanwhile in Camden, “facsimile punks” perform for tourists “as if they were Beefeaters on parade.”

If this trend continues, encouraged by white-collar professionals evacuating the centre to work remotely, then London’s old urban core will eventually resemble a cross between Austin Powers and Hogwarts: a moribund museum of British kitsch, stretching from Shepherd’s Bush to Hackney, from Kentish Town to Southwark. By way of warning, Sudjic cites Venice, the infamous case of a city frozen in the form of a luxury destination. Maybe it’s already too late, for right next to the Londoner in Macau is another hotel and shopping complex modelled on a famous city: the Venetian.

The Lost Art of Leisure

This essay was published by the New Statesman in May 2023, under the headline “You Should Only Work Four Hours a Day.”

Decades ago, Roland Barthes quipped that “one is a writer as Louis XIV was king, even on the toilet”. He was mocking the way literary types like to distinguish themselves from the mass of working people. According to Barthes, writers insist that their productive activities are not limited to any time and place, but flow constantly like an “involuntary secretion”.

Well, we are all writers now, at least in this sense. Stealing a few holiday hours to work on an article used to be my party trick. Now I find that, on Mondays and Fridays when many office buildings stand empty, my salaried comrades are sending emails from an Airbnb somewhere. Come the weekend, they might close their laptops, but they don’t stop checking their phones.

Of course this hardly compares with the instability further down the pay scale. Around one in seven British workers now do gig-economy jobs like Uber or Amazon delivery at least once a week, according to research for the Trades Union Congress, many of them on top of full-time employment.

Work today is fluid, overflowing its traditional boundaries and seeping into new domains. Meditation and exercise look suspiciously like personal optimisation. Artistic vocations centre on tireless self-promotion to a virtual audience. A movement of “homesteaders” churning their own butter and knitting their own jumpers are simply cosplaying older forms of work, and probably posting the results on Instagram.

With the help of our digital tools, we are adapting ourselves to productivity as involuntary secretion. The result is an evisceration of personal life and an epidemic of burnout.

Our diffuse working culture has attracted plenty of critiques. The problem is most of them share the basic outlook that enabled the spread of work to begin with. Should we recognise “quiet quitting” as a justified response to unreasonable demands by employers? Is rest a form of “resistance”? Do we all just need a better “work-life balance”? These arguments present life as a two-way split between work and some nondescript realm of personal freedom, the question being how we can reclaim time from one for the sake of the other.

As long as the alternative to work remains just a negative space, work will continue leaching into it. What we are missing is a real counterbalance: a positive vision of leisure.

Properly speaking, leisure is not rest or entertainment, though it can provide both. It is not mere fun, though it ought to be satisfying. Its forms change over time, but it generally involves elements of play, fantasy and connection with other people or the natural world. Most importantly, leisure is superfluous to our worldly needs and ambitions: something we do not as a means to any end, but simply for its own sake.

Truly mass participation in leisure was a striking feature of British life in the early 20th century. People played in brass bands and raced pigeons. They learned to dance and performed in plays and choirs. In 1926 nearly 4,000 working-class anglers from Birmingham took part in a single fishing competition along 20-odd miles of river. During the 1930s, as the historian Ross McKibbin writes, “one of the great sights of the English weekend were the fleets of cyclists riding countrywards along the arterial roads of the major towns”.

People still do these things, of course, but they do them as hobbies. The hobby belongs to a culture defined by work: it is a creature of downtime and a quirk of character. Hobbies rely on individual enthusiasm, so they often collapse in the face of stress or time pressure. Besides, we tend to judge them by the unleisurely criteria of self-improvement. Physical and intellectual pursuits are admirable, since they bring fitness and cultural capital. Excessive interest in bird watching marks you out as an eccentric.

Taking the superfluous seriously is a brave act in a utilitarian world, so leisure needs its own social legitimacy to thrive. This used to come from class-based associational life, with its clubs, unions and organised religion. If video games and social media smack of pseudo-leisure, it is because they are often part of a lonely struggle with the productivity impulse: they palliate restless and atomised minds. Maybe the only forms of leisure with a more than marginal role in popular culture today are amateur football, travel and the pub.

Aristotle thought a political community should exist to provide the conditions for leisure, which he saw as the key to human flourishing. At the very least, it is crucial for a balanced existence. Meaningful work, entertainment and indulgence all have their place, but they become destructive in excess. Life should be more than an on/off switch. Leisure is the space for conversation and reflection, friendship and loyalty, playfulness and joie de vivre. These are not qualities we can develop because we want them on our CVs: they are by-products of doing something for its own sake.

In a more civilised society, leisure would define our identities as much as labour does. To see what a distant prospect that is, try to imagine a politician talking about activities that might bring satisfaction to our lives half as much as he or she talks about “ordinary working people” or “hard-working families”. Celebrating leisure would be branded out-of-touch, but that is because we have accepted the disgraceful assumption that enjoyable pastimes are only for those who can afford them.

Asset-holding baby boomers are the masters of leisure today, using retirement for tourism, sport and artistic dabbling. Good for them. Still, we should resist the idea that such opportunities must be earned by decades of graft. This morality feels natural only because we don’t acknowledge our common interest in leisure. We accept everyone wants higher pay, so why treat activities that enrich our culture as an extravagance?

The struggle to keep work in its proper place has already consumed a generation: the lifestyle guru Tim Ferriss published his bestseller The 4-Hour Workweek in 2007. It seems not all of us want to be our productive selves even on the toilet.

But it’s equally clear that blank slots carved out of our personal timetables are too flimsy: you cannot beat discipline with discipline. It would be better if we combined our productive energies and channelled them towards reviving the art of leisure.

Fully Automated Desert Dystopia

This is my latest newsletter published at Substack. Read more and subscribe here.

It takes some chutzpah, you would think, for Saudi Arabia to portray itself as a modern, forward-thinking state. Ruled by crown prince Mohammed bin Salman, the country is an authoritarian theocratic monarchy. Political parties are outlawed. No religion other than Islam can be openly practised, and apostasy is legally punishable by death. Women have only been allowed to drive cars since 2018.

But there are other ways, outside the western liberal paradigm, for a regime to assert its progressive credentials – especially if it happens to control fifteen percent of the world’s known oil reserves. As bin Salman has shown, one effective way to harness the romance of “the future” is through design.

Witness Neom, a spectacular plan to fill the Saudi desert with hi-tech cities and resorts. One of these is already famous: “The Line,” advertised as a radically new kind of city. Nothing has been built yet, but the hype has been endless, and very successful in publicity terms. A new documentary by the Discovery Channel (or rather, a promotional film in the guise of a documentary) is just the latest instalment of it.

The Line promises to make science fiction reality. Two enormous parallel walls, taller than the Empire State building, will run for 170km through the desert. In the narrow gap between them will be a “vertical city” for nine million people. That means, in effect, the population of greater London living in a single, very long skyscraper, just three times wider than a football pitch. Everything residents need will be accessible within five minutes, it will all be powered by renewable energy, fully automated, entirely car-free, etc. etc.
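
To get a feel for what those figures imply, here is a rough back-of-envelope calculation of my own (the 200-metre width and the comparison figure for Greater London are assumptions based on commonly reported numbers, not anything from Neom’s own documents):

```python
# Rough back-of-envelope check of The Line's implied population density.
# Assumed figures: 170 km length, ~200 m width (roughly three football-pitch
# widths), 9 million residents. These are reported claims, not official data.

length_km = 170
width_km = 0.2
population = 9_000_000

footprint_km2 = length_km * width_km   # ~34 km2 of ground footprint
density = population / footprint_km2   # people per km2

print(f"Footprint: {footprint_km2:.0f} km2")
print(f"Implied density: {density:,.0f} people per km2")
# For comparison, Greater London works out at roughly 5,700 people per km2.
```

On those assumptions the implied density is far beyond that of any existing city, which is part of what makes the proposal read more like science fiction than planning.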

“Progressive,” “futuristic” and “poetic” is how The Line is described in the new film. As bin Salman himself puts it, “we have the cash, we have the land, we have the stability,” and now “we want to create the new civilisation for tomorrow.”

Digital rendering of life inside “The Line.” (Image: Neom)

The crown prince’s campaign for cultural prestige does not end there. His regime’s $650 billion Public Investment Fund is being ploughed into an array of fashionable consumer industries and green technologies, from coffee and vaping to electric cars and hydrogen-powered buses.

Bin Salman could be compared to the “Enlightened despots” of the 18th century, rulers who used their absolute power to enact progressive reforms. And just as Enlightenment philosophers were happy to act as consultants for Frederick the Great or Catherine the Great, a long list of high-profile architects and designers have flocked to bin Salman’s court. Many of them, including the erstwhile enfant terrible and Archigram founder Peter Cook, can be seen praising their client’s imagination and insight in the Discovery Channel film. 

As I wrote last year, Neom has revealed a certain synergy between designers and autocrats. Unaccountable rulers like bin Salman offer vast resources and creative freedom; his architects can implement “any kind of technology… or urban design solution,” as one project manager puts it. That is an opportunity ambitious designers dream about, and in return, many are only too happy to deliver a grandiose project that glorifies their employer’s power. If all this can be presented as vital to the future of humanity, so much the better.

But I think the enthusiasm for Neom reflects something deeper than artistic vanity, something in the makeup of modern design itself. By and large, designers like to create things that are functionally efficient, rational, and optimised for specific outcomes. That applies to urban planning as much as product design, the obvious difference being that in something as big and messy as a city, there is rarely an opportunity to start from scratch. New developments have to fit into an existing landscape that has evolved chaotically over time. 

With Neom, there are no such constraints. The designer’s love of functionality and order can be indulged to an enormous extent. Consider the fantasies of one planner, who wants to use her “passion as a tool for positive social change,” as reported by Bloomberg:

Imagine a sixth grader, she says. When he wakes up, his home will scan his metabolism. Because he had too much sugar the night before, the refrigerator will suggest porridge instead of the granola bar he wanted. Outside he’ll find a swim lane instead of a bus stop. Carrying a waterproof backpack, he’ll breaststroke the whole way to school. … If all goes well, she says, residents can expect an extra 10 years of ‘healthy life expectancy.’

This is human life reduced to a design problem, its smooth functioning almost indistinguishable from that of the technology that surrounds and supports it. The same tendency is apparent in the Discovery Channel film, where architects discuss The Line as though it were a new smartphone, rather than a supposed home for millions of people.

Digital rendering of life inside “The Line.” (Image: Neom)

This attitude recalls some of the worst excesses of Modernism, such as the “Functional City” discourse launched by CIAM in the 1930s. Here urban life was separated out into discrete “functions,” as though society was something that could be reorganised into labelled drawers. Today’s urbanism is based on very different ideas, but there is no reason to think the results will be any less remote from people’s real needs. A city is simply too complex a thing to be re-engineered from the top down; attempting to do so is pure arrogance.

The Line resembles nothing so much as a setting for a cookie-cutter Hollywood sci-fi. When a plan is consistently sold as “futuristic,” it generally means the designers are more interested in the concept and the aesthetics than the practical reality. One can only hope they are aware that little of it will actually be built, and are just happy to go along with a petro-dictator’s publicity stunt. The other possibility – that they think they can imagine a new society into being – would be far worse.

The Architecture of Autocracy

This article was originally published by Unherd in November 2022. Read it here.

For a building project marketed like a Hollywood blockbuster, the latest footage from the deserts of northwestern Saudi Arabia is a little underwhelming. A column of trucks is moving sand, a row of diggers poking at the barren landscape like toys arranged on a beach. The soundtrack, an epic swirl of fast-paced, rising strings, doesn’t really belong here.

Still, the video got its message across: it’s really happening. The widest aerial shots reveal an enormous groove in the sand, stretching to the horizon. We are seeing the birth of “The Line”, an insanely ambitious project for a city extending 170km through the desert, sandwiched in a narrow space between two immense walls. The new construction footage is an update on the viral CGI trailers that overwhelmed the internet last year, showing us glimpses of what life will be like inside this linear chasm of a city: a city where there will be no cars or roads, where every amenity is always a five-minute walk away, and where, according to one planning document, there could be robot maids.

This scheme sounds mad enough, but The Line is only the centrepiece of a much bigger development, called Neom (a blend of neo and mustaqbal, Arabic for “future”). Neom will be a semi-autonomous state, encompassing 26,000 square kilometres of desert with new resorts and tech industry centres.

There may be no philosopher kings, but there are sci-fi princes. The dreams of Mohammed bin Salman, crown prince of Saudi Arabia and chairman of the Neom board, make the techno-futurism of Silicon Valley look down to earth. Bin Salman is especially fond of the cyberpunk genre of science fiction, which involves gritty hi-tech dystopias. He has enlisted a number of prominent Hollywood visual specialists for the Neom project, including Olivier Pron of Marvel’s Guardians of the Galaxy franchise. A team of consultants was asked to develop science-fiction aesthetics for a tourist resort, resulting in “37 options, arranged alphabetically from ‘Alien Invasion’ to ‘Utopia’”. One proposal for a luxury seaside destination, which featured a glowing beach of crushed marble, was deemed insufficiently imaginative.

Such spectacular indulgence must be causing envy among the high-flying architects and creative consultants not yet invited to join the project — if there are any left. But it also makes the moral dimension difficult to ignore: how should we judge those jumping on board bin Salman’s gravy train? Saudi Arabia — in case anyone has forgotten in the years since the journalist Jamal Khashoggi was murdered at its consulate in Istanbul — is a brutal authoritarian state.

In recent weeks, this has prompted some soul-searching in the architecture community, with several stinging rebukes aimed at Neom. Writing in Dezeen, the urbanist Adam Greenfield asks firms such as Morphosis, the California-based architects designing The Line, to consider “whether the satisfaction of working on this project, and the compensation that attends the work, will ever compensate you for your participation in an ecological and moral atrocity”. Ouch. Greenfield’s intervention came a week after Rowan Moore asked in The Observer: “When will whatever gain that might arise from the creation of extraordinary buildings cease to outweigh the atrocities that go with them?”

You see, bin Salman’s blank slate in the desert was not actually blank (they never are); settlements belonging to the Huwaitat tribespeople have been ruthlessly flattened to make space for Neom. One man leading resistance to the clearances, Abdul Rahim al-Huwaiti, was killed by security forces in 2020, and three others have been sentenced to death. Critics also point to the absurd pretence that The Line is an eco-friendly project, given the industrial operations needed to build and maintain a city for nine million people in searing desert temperatures.

There is an obvious parallel here with the criticism of celebrities and commentators taking part in the World Cup in Qatar, another unsavoury petro-state. International sporting events are notorious for giving legitimacy to dictatorships, so why wouldn’t we see architectural monuments in the same way? With Neom there is barely a distinction to draw. Zaha Hadid Architects, the British firm that designed one of the Qatari football stadiums — a project synonymous with the shocking treatment of migrant construction workers — is also working on one of the Neom sites, an artificial ski resort that will host the 2029 Asian Winter Games.

In the 21st century, Western architects have helped to burnish the image of repressive regimes, especially those big-name architects who specialise in spectacular monumental buildings. Zaha Hadid was the most wide-ranging: her trademark swooshing structures include a museum honouring the ruling family in Azerbaijan and a conference hall in Muammar Gaddafi’s Libya (never completed due to Gaddafi’s demise). But the biggest patrons of globetrotting architects have been the Arab Gulf States — especially Qatar, Saudi Arabia and the United Arab Emirates — along with China. Among countless examples in these regions, the most infamous is probably Rem Koolhaas’s headquarters for China Central Television in Beijing, a suitably sinister-looking building for a state organisation that shapes the information diet of hundreds of millions of people each day.

The uncomfortable truth is that autocrats and architects share complementary motivations. The former use architecture to glorify their regimes, both domestically and internationally, whereas the latter are attracted to the creative freedom that only unconstrained state power can provide. In democratic societies, there is always tension between the grand visions of architects and the numerous interest groups that have a say in the final result. Why compromise with planning restrictions and irate neighbours when there is a dictator who, as Greenfield puts it, “offers you a fat purse for sharing the contents of your beautiful mind with the world?”

This is not just speculation. As Koolhaas himself stated: “What attracts me about China is that there is still a state. There is something that can take initiative on a scale and of a nature that almost nobody that we know of today could even afford or contemplate.”

But really this relationship between architect and state is a triangle, with financial interests making up the third pole. Despite the oft-repeated line that business loves the stability offered by the rule of law, when it comes to building things, the money-men are as fond of the autocrat’s empty canvas as the architects are. When he first pitched the Neom project to investors in 2017, bin Salman told them: “Imagine if you are the governor of New York without having any public demands. How much would you be able to create for the companies and the private sector?”

This points us to the deeper significance of the Gulf States and China as centres of high-profile architecture. These were crucial regions for post-Nineties global capitalism: the good illiberal states. Celebrity architects brought to these places the same spectacular style of building that was appearing in Europe and North America; each landmark “iconic” and distinct but, in their shared scale and audacity, also placeless and generic. Such buildings essentially provided a seal of legitimacy for the economic and financial networks of globalisation. Can this regime’s values really be so different to ours, an investor might say, when they have a museum by Jean Nouvel, or an arts centre by Norman Foster? British architects build football stadiums and skyscrapers in Qatar and Saudi Arabia, while those governments own football stadiums and skyscrapers in Britain, such as The Shard and Newcastle’s St James’s Park.

This is not to suggest some sort of conspiracy: the ethical issues of working for repressive states have often been debated by architects. When the tide of liberal capitalism seemed to be coming in around the world, they could say, and believe, that their buildings were optimistic gestures, representing a hoped-for convergence around a single global modernity. It is the collapse of those illusions over the last decade that makes such reasoning look increasingly suspect.

With Neom, bin Salman is making explicit the publicity value of architecture, by pushing it to a whole new degree. Aware that breakthroughs in clean energy would essentially render his kingdom a stranded asset, he is trying to rebrand Saudi Arabia as a high-tech green state. He offers investors a package they, like many architects, dream about: breath-taking novelty and innovation, combined with sustainability and an apparent humanistic concern.

But ironically, what bin Salman has really shown is that architects are increasingly unnecessary for conveying political messages. They are being replaced by those masters of unreality who use digital technology to the same ends, like the Marvel film magicians creating a vision of Neom in the global imagination. Whether or not a city like The Line actually exists is almost beside the point in terms of its publicity value. After all, this is an era where the superhero realm of Wakanda is praised as a depiction of Africa, and where America tore itself apart for four years over a wall that never actually came into being.

Likewise, given the technological challenges involved, we can be certain the vast furrow appearing in the Saudi desert will never become The Line as portrayed in the promotional videos. But videos will be enough to project the desired image of an innovative, progressive state. That bin Salman himself might really believe in his futuristic city, encouraged by his army of paid-up designers, will only make him a better salesman.

Design for the End Times

This essay was first published at The Pathos of Things newsletter. Subscribe here.

YouTube is one of the most powerful educational tools ever created; so powerful, in fact, that it can teach someone as inept as me to fix things. I am slightly obsessed with DIY tutorials. Your local Internet handyman talks you through the necessary gear, then patiently demonstrates how to wire appliances, replace car batteries or plaster walls. I’ve even fantasised that, one day, these strangers with power tools will help me build a house.

To feel self-sufficient is deeply satisfying, though I have to admit there are more hysterical motives here too. I’ve always been haunted by the complacency of life in a reasonably well-functioning modern society, where we rely for our most basic needs on dazzlingly complex supply chains and financial arrangements. If everything went tits-up and we had to eke out an existence amidst the rubble of modernity, I would be almost useless; the years I have spent reading books about arcane subjects would be worth even less than they are today. Once the Internet goes down, I will not even have YouTube to teach me how to make a crossbow.

But what if, instead of becoming more competent, you could simply create a technologically advanced bubble to shelter from the chaos of a collapsing society? Welcome to the world of post-apocalyptic hideouts for the super-rich, one of the most whacky and morbidly fascinating design fields to flourish in the last decade.

In a recent book, Survival of the Richest, media theorist Douglas Rushkoff describes the growing demand for these exclusive refuges. Rushkoff was invited to a private conference where a group of billionaires, from “the upper echelon of the tech investing and hedge fund world,” interrogated him about survival strategies:

New Zealand or Alaska? Which region will be less impacted by the coming climate crisis? … Which was the greater threat: climate change or biological warfare? How long should one plan to be able to survive with no outside help? Should a shelter have its own air supply? What is the likelihood of groundwater contamination?

Apparently these elite preppers were especially vexed by the problem of ensuring the loyalty of their armed security personnel, who would be necessary “to protect their compounds from raiders as well as angry mobs.” One of their solutions was “making guards wear disciplinary collars of some kind in return for their survival.”

There is now a burgeoning industry of luxury bunker specialists, such as the former defence contractor Larry Hall, to address such dilemmas. In Kansas, Hall has managed to convert a 1960s silo for launching nuclear missiles into a “Survival Condo,” where seventy-five clients can allegedly survive three to five years of nuclear winter. As I learned from this tour (thanks again, YouTube), Hall’s bunker is essentially a 200-foot cylinder sunk into the ground, lined with concrete and divided into fifteen floors. There are seven floors of luxury living quarters, practical areas such as food stores and medical facilities, and leisure amenities including a bar, a library, and a cinema with 3,000 films. Energy comes from multiple renewable sources.

What to make of this undertaking? There is a surreal quality to the designers’ efforts to simulate a familiar environment, which presumably have less to do with the realities of post-apocalyptic life than marketing to potential buyers. The swimming pool area is adorned with artificial boulders and umbrellas (yes, underground umbrellas), there are classrooms where children can continue their school syllabus (because who knows when those credentials will come in handy), and each client is provided with pre-downloaded Internet content based on keywords of their choice. You can even push a shopping trolley around a pitiful approximation of a supermarket. Honestly, it’s like the world never ended, except that a lack of space for toilet paper means you have to use bidet toilets.

But there is another way to look at these pretences of normality. Dystopian projections tend to reflect the social conditions from which they emerge: just as my own dreams of self-sufficiency are no doubt the standard insecurities of an alienated knowledge worker, luxury bunkers are merely an extension of the gated communities and exclusive lifestyles that many of the super-wealthy already inhabit. As Rushkoff suggests, these escape strategies smack of an outlook which has been “rejecting the collective polity all along, and embracing the hubristic notion that with enough money and technology, the world can be redesigned to one’s personal specifications.” From this perspective, society already resembles a savage horde lurking beyond the gate. Perhaps the ambition of retreating into an underground pleasure palace defended by armed guards is less a dystopia than a utopia.

One of Britain’s own nuclear refuges from the Cold War era – a vast underground complex in Wiltshire, including sixty miles of roads and a BBC recording studio – will apparently be difficult to convert into a luxury bunker because it has been listed as a historic structure. On the other hand, I suppose the post-apocalyptic property developers could charge extra for a heritage asset.

The overlap between escaping catastrophe and simply abandoning society is even more evident in the “seasteading” movement, which aims to create autonomous cities on the oceans. This project was hatched in 2008 by Google engineer Patri Friedman, grandson of the influential free-market economist Milton Friedman, with funding from tech investor Peter Thiel. The idea was that communities floating in international waters could serve as laboratories for new forms of libertarian self-governance, away from the clutches of centralised states. But as the movement evolved into different strands, the rhetoric became increasingly apocalyptic. A group called Ocean Builders, for instance, has presented the floating homes it is designing in Panama as a “lifeboat” to escape disasters such as the Covid pandemic, as well as government tyranny.

These “SeaPods” have much in common with the luxury bunkers back on terra firma. Designed by Dutch architect Koen Olthuis, they consist of a streamlined capsule elevated above the surface of the ocean, with steps leading down the inside of a floating pole to give access to a small block of underwater rooms. They are imagined as lavishly crafted, exclusive products, reminiscent of holiday retreats, with autonomous supplies of energy, food and water. The only problem is such designs need to be trialled in coastal waters, and for some reason governments have not been very receptive to anarcho-capitalist tax-dodgers trying to establish sovereign entities along their shorelines.

But entertaining as it is to ridicule these schemes, there is a danger that it becomes a kind of avoidance strategy. It is not actually far-fetched to acknowledge the possibility of a far-reaching social collapse (on a regional level, it has already occurred numerous times in living memory), and even the United Nations has embraced the speculative prepper mindset. Anticipating the potential effects of climate change, the UN is backing another version of seasteading, with modular floating islands designed by the fashionable architect Bjarke Ingels.

All civilisations must come to an end eventually, and ours is fairly fragile. In complex systems like those we rely on for basic goods and materials, a breakdown in one area of the network can have dramatic destabilising effects for the rest. We have already seen glimpses of this with medical shortages during the pandemic, and soaring energy costs due to the Ukraine war. How far we will fall in the event of a sudden collapse depends on the back-up systems in place. One of the more intriguing figures offering emergency retreats for the wealthy, the American businessman J.C. Cole, is also trying to develop a network of local farms to provide a broader population with a sustainable food supply. Cole witnessed the collapse of the Soviet Union in the early 1990s, and inferred that Latvia experienced less violence because people were able to grow food on their dachas.

But perhaps the most unnerving prospect, as well as the most likely, is that collapse won’t be sudden or dramatic. There won’t be a moment to rush into a bunker, or to use emergency mechanic skills acquired from YouTube. Rather, the fabric of civilised life will fray slowly, with people making adjustments and improvising solutions to specific problems as they appear. Only gradually will we transition into an entirely different kind of society.

In South Africa, the country where I was born and where I am writing now, this process seems to be going on all the time. There are pockets of extraordinary wealth here, but public infrastructure is crumbling. The power cuts out for several hours every day, so many businesses have essentially gone off-grid with their own generators. The rail network has largely disappeared. Private security companies have long since replaced various policing functions, even among the middle class, and better-organised neighbourhoods coordinate their own patrols. Meanwhile in poorer areas, people still inhabit a quasi-modern world of mobile phones and branded products, but face a constant struggle with badly maintained water, electricity and sewage systems. Housing is often improvised and travel involves paying for a seat on a minibus.

The point is not that South Africa is collapsing – it still has a lot going for it, and besides, most of its population has never enjoyed first-world comforts – but that this is how it might look if, like many civilisations in the past, the advanced societies of today were to “collapse” gradually, over generations. It would be a slow-motion version of the polarisation between survivors and rejects that we see in the escape plans of the super-rich. And though we would realise things are not as they should be, we would keep hoping the decline was just temporary. Only in the distant future, after a new civilisation had arisen, would people say that we lived through a kind of apocalypse.

Anyway, merry Christmas everyone.

How We Got Hooked on Chips

This essay was first published at The Pathos of Things newsletter. Subscribe here.

As I am making the final edits to this article, the media are reporting that Chinese fighter jets are flying along the narrow strait separating China from Taiwan.

This dramatic gesture, along with other signals of military readiness, raises the spectre of a catastrophic conflict between the world’s two superpowers, China and the United States. I just hope that by the time this is published, you don’t have to read it in a bunker.

The immediate reason for this crisis is an official visit on Wednesday by Nancy Pelosi, the Speaker of the US House of Representatives, to Taiwan, an island that China claims as part of its territory. But remarkably, one of the deeper sources of this tension is a story about design.

Taiwan has something that every powerful nation on the planet wants, and needs access to. It has the Taiwan Semiconductor Manufacturing Company (TSMC), an industrial enterprise that manufactures half of the world’s semiconductors, or computer chips. More importantly, TSMC makes the vast majority of the most advanced logic chips (only South Korea’s Samsung can produce them to a similar standard). These advanced chips provide the computing power behind our most important gadgets, from smartphones and laptops to artificial intelligence, cloud software and state-of-the-art military technology.

Pelosi’s incendiary visit this week reflects a striking fact about the early 21st century: the world’s most powerful states cannot provide for themselves the stamp-sized electronic components made in a factory. The precariousness of this situation has hit home in recent years, as trade wars, lockdowns and supply chain disruptions have created a global semiconductor shortage. This cost the car industry an estimated $210 billion last year, while slashing Apple’s output by up to 10 million iPhones. Chip shortages are a major obstacle to US efforts to supply Ukraine with weapons (a Javelin rocket launcher uses around 250 semiconductors).

So why don’t states just make their own advanced chips? They are trying. The US and the European Union are each offering investments of around $50 billion for domestic semiconductor manufacturing. This will largely involve subsidising Intel, the last great American hope for advanced chip-making, in its ambitions to catch up with its rivals in Taiwan and South Korea. Meanwhile China, rapidly gaining ground in the chip race despite US efforts to hamper it, is spending much more than that.

It would be comforting to think that all this is just the result of complacency: that western governments did not realise their dangerous dependence on Taiwanese chips, and will now lessen that dependence. This would allow us to feel slightly more confident that a face-off over Taiwan will not provide the spark for World War Three.

The reality is more sobering. None of the plans being laid now appear likely to diminish the importance of Taiwan. What is more, TSMC represents only one aspect of the west’s dependence on Asia for the future of its chip industry. To understand why, we have to look at how semiconductors became the most astonishing design and engineering achievement of our age.

The surface of a computer chip is like a city, only it is measured not in miles, but in microns. This city is not made from buildings and streets, but from transistors etched in silicon: hundreds of millions of them carefully arranged in every square millimetre. Each transistor is essentially just a gate, which can be opened or closed to regulate an electric current. But billions of transistors, mapped out in a microscopic architecture, can serve as the brain of your smartphone.

Recognising our dependence on such artefacts is unnerving. It reveals how the texture of our lives is interwoven with economic and technological forces we can barely comprehend. Semiconductors are everywhere: in contactless cards, household appliances, solar panels, pacemakers, watches, and all kinds of medical equipment and transportation systems. Aside from the logic chips that provide computing ability, semiconductors are needed for functions like memory, power and remote connection. They are the intelligent hardware underpinning the entire virtual universe of the Internet: we think and feel and dream in languages that semiconductors have made possible.

All this is thanks to a somewhat esoteric doctrine known as Moore’s Law. In 1965, the research director Gordon Moore predicted that the number of transistors on a computer chip would double every year, an estimate he later revised to every two years. This was, in effect, a prediction of the exponential growth of computing power, as well as its falling cost, and it has proved remarkably accurate to this day. The strange thing is that Moore’s Law has no hard scientific underpinning: it was simply an extrapolation based on Moore’s observations of the early semiconductor industry. But his “law” has become an almost religious mission within that industry, a prescribed rate of progress that every generation of designers and engineers seeks to uphold.
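
To illustrate how quickly that doubling compounds, here is a minimal sketch of my own in Python (the 1965 starting figure of roughly 64 components is an assumption for illustration, as is the strict two-year doubling):

```python
# Minimal sketch of Moore's Law as an exponential projection.
# Assumptions: ~64 components per chip in 1965, doubling every two years.

def projected_transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Projected transistors per chip if the count doubles every `doubling_years` years."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1965, 1985, 2005, 2025):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

Carried forward six decades, the projection lands in the tens of billions of transistors per chip, which is the right order of magnitude for today’s largest designs; the point, though, is the compounding rather than the exact numbers.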

The result has been an incredible series of innovations, allowing transistor density to keep increasing despite regular claims that the laws of physics won’t allow it. Moore’s Law gives semiconductor production its defining characteristic: chips are never something that you can learn how to make once and for all. Tomorrow’s chips are always just around the corner, and they will probably require a whole new technique.

This is why, by the late 1980s, it was becoming financially impossible for the great majority of firms to manufacture advanced chips. Doing so, then as now, means placing huge bets on new ideas that might produce the next breakthrough demanded by Moore’s Law. It means spending billions of dollars on a factory, which needs enough orders so that it can run 24/7 and recoup its costs. And it means doing this in the knowledge that everything will need to be upgraded or replaced in a matter of years.

The solution to this impasse came in 1987 with the founding of TSMC, assisted, ironically enough, by engineers and technology transfers from the United States. The Taiwanese company’s key innovation was to focus purely on manufacturing, allowing all the other firms that want to make chips to specialise in design. With its gigantic order book, TSMC makes enough money to continuously invest in new manufacturing techniques. Meanwhile, companies such as British-based ARM, and Apple and Qualcomm in the US, have focused on designing ever more revolutionary chips to be manufactured by TSMC.

With this basic division of labour in place, the semiconductor industry became a highly specialised, intensely competitive global enterprise. It takes millions of research and engineering hours to design new chips, and much of that work is now done in India to make the process faster and cheaper (up-to-date statistics are hard to find, but by 2007 just 57% of engineers at American chip companies were based in the US). The semiconductors are made in Taiwan with Dutch machinery and Japanese chemicals and components, before being taken to China for testing, assembly and installation. And increasingly, the profits needed to keep everything going come from Asian consumers.

This is how Moore’s Law has been upheld, and how we have received ever-improving gadgets over the past three decades. But this chip-making system relies on each of its key points developing intense expertise in a specific area, to deliver constant techno-scientific progress and cost efficiency. TSMC is one of those key points, and its skills and experience cannot simply be copied elsewhere.

It is worth taking a moment to appreciate the mind-bending process, known as Extreme Ultraviolet Lithography, by which TSMC makes the most cutting-edge chips. It involves a droplet of molten tin, half the size of a single hair’s breadth, falling through a vacuum and being vaporised by a laser that fires fifty thousand times per second. This produces a burst of light which you cannot actually see, since its short wavelength renders it invisible. After bouncing off numerous mirrors, that light will meet the chemically treated surface of a silicon wafer, where, over the course of numerous projections, it will etch the billions of transistors that allow a chip to function. These transistors are hundreds of times smaller than the cells in our bodies, not much larger than molecules.

Now we can begin to grasp why America and Europe are unlikely to replicate TSMC on their own shores. In fact, the Taiwanese company already operates factories in the US, making less advanced chips; but as its founder Morris Chang recently revealed, the lack of industrial expertise there prevents it from being competitive. Chang called the whole idea of an American semiconductor revival “a very expensive exercise in futility.” Analysts have similarly poured scorn on the EU’s chip manufacturing ambitions. 

Given the scale of the challenge, the investments on offer in the US and EU are practically chump change. Sums of that size are already spent by companies like TSMC and Samsung every single year. And those companies get more for their money too: according to one assessment, the cost of building and running a semiconductor plant in the US is one-third higher than in Taiwan, South Korea or Singapore. Another widely cited report estimated that if the US wanted semiconductor self-sufficiency, it would cost over $1 trillion, or twenty times the sum currently on offer.

As for the United Kingdom, the less said the better. The UK finds itself with a potentially pioneering semiconductor plant in Newport, Wales, but has allowed the research facility there to lapse into a storage area, and has been trying to sell it to a Chinese-owned company for several years. As Ed Conway concludes, the UK government’s semiconductor strategy is simply non-existent. Absurdly, the responsibility for devising one was given to the Department for Digital, Culture, Media and Sport.

In short, the US and Europe are a long way from making semiconductors at a scale and with a proficiency that would seriously reduce dependence on TSMC. It is not just about mastering the cutting-edge techniques of today; it is about generating enough revenue to research and develop the techniques of tomorrow. And it is not just about meeting current demand; it is about building capacity for a decade in which global demand is expected to double. Advanced chips will be needed for 5G gaming and video streaming, artificial intelligence and home offices.

I will leave it to the international relations people to assess what this means for US-China relations. The obvious conclusion is that western dependence on TSMC raises the stakes of a potential Chinese invasion of Taiwan, which in turn makes it more likely that the US will provoke China with its support for the island’s independence. But the picture is extremely knotty, given China’s own centrality to the business models of western companies, including chip designers.

What seems clearer is that the politics surrounding semiconductors in the west are highly misleading. Support for domestic chip-making has been tied into a narrative about moderating the excesses of globalisation and rebuilding industry at home. But in practice, we are talking about state subsidies for huge global companies which continue to rely on access to foreign labour markets and consumers. This contradiction is neatly captured by Intel banging the drum for more government investment, while simultaneously lobbying to ensure it can continue taking its technology to China.

Yet this is only fitting, since semiconductors are emblematic of the contradictions of post-1990s globalisation. A system defined by economic openness and expansion ultimately concentrated power in the hands of those supplying the most important resources, whether it be technological expertise or cheap fossil fuels. And even if the system unravels, our dependence on the resources will remain. The events surrounding Taiwan this week are just another reminder of that.

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Fake It ‘Til You Break It

This article was originally published by The Critic magazine in November 2022. Read it here.

On Friday the saga of Elizabeth Holmes will move one step closer to its conclusion. Holmes, founder of the ill-fated health tech company Theranos, was convicted of fraud and conspiracy at the start of this year, and she will now receive her sentence. This is bad news for the army of hacks, podcasters and documentary makers who have spent years making hay from the Theranos debacle, a story of a charismatic young woman who fooled a wide swathe of elite America with her vision of innovation as a force for good.

They shouldn’t worry. With impeccable timing, a new tale of investor credulity and disastrous personal ambition has burst into the headlines. Last week, the cryptocurrency exchange FTX dramatically imploded after details of its creative accounting techniques were leaked to a news website, and a rival exchange started a run on its digital tokens. Billions of dollars belonging to investors and customers evaporated more or less overnight. As the news unfolded, all eyes turned to Sam Bankman-Fried, the 30-year-old crypto whizz-kid and Effective Altruism guru who exercised close control over FTX and a powerful grip on the imagination of Silicon Valley investors.

Bankman-Fried has reportedly been taken into custody by authorities in the Bahamas, where FTX was based. I won’t comment on the legal implications of his demise. What we can say, though, is that between him and Holmes, we have mounting evidence that the cult of the disruptive genius at the heart of tech capitalism has become a danger to the public.

Bankman-Fried rose through the crypto world on a wave of high-minded talk and personal charm. He offered financial brilliance along with the image of a scruffy outsider, appearing on stage with Bill Clinton and Tony Blair in a t-shirt and shorts, playing video games during business calls and bragging about sleeping on a beanbag next to his desk. Closely associated with the Effective Altruism movement — a school of ethics that seeks the most rational ways of maximising human wellbeing — Bankman-Fried claimed he was getting stinking rich so he could give it away. He cemented his public profile by cooperating with lawmakers in Washington over crypto regulation, whilst sponsoring various sports teams and donating to the Democratic party. Naturally he also made time to hobnob with celebrities like the supermodel Gisele Bundchen, appointed to lead FTX’s environmental and social initiatives. 

The investors lapped it up. A now-deleted article on the website of Sequoia Capital, the venture capital firm that previously backed PayPal and Google, described the response of its partners when Bankman-Fried pitched to them: “I LOVE THIS FOUNDER”; “I am 10 out of 10”; “YES!!” The problem is, neither they nor anyone else beyond a tight circle of friends had the full picture of FTX’s financial dealings. If they had, they might have seen that customer deposits were being loaned to Alameda Research, another of Bankman-Fried’s companies, to shore up risky investments. In the end, the assets FTX held as collateral were mostly its own digital tokens, whose value crashed when panicked customers tried to withdraw their funds en masse.

Bankman-Fried’s rise and fall bears a more than passing resemblance to the case of Holmes and Theranos. She too cultivated a disruptive image, a Stanford drop-out turned founder at the age of nineteen, wearing black turtlenecks in a weird homage to Steve Jobs. Her promise of a revolutionary blood-testing technology, bypassing scary needles and replacing doctors’ appointments with a trip to Walmart, won her more illustrious supporters than it’s possible to enumerate. They included Barack Obama, Hillary Clinton, Henry Kissinger, Bill Gates, Rupert Murdoch and Betsy DeVos. As revealed by John Carreyrou in his 2015 Wall Street Journal exposé, and by a succession of witnesses at Holmes’ trial, she lied about the capabilities of her so-called Edison machine, keeping her supposedly ground-breaking tech cloaked in secrecy to hide its failures.

Prosecutors demanding a tough sentence for Holmes this week claim she “seized upon investors’ desire to make the world a better place in order to lure them into believing her lies”. We are already seeing similar claims of moral betrayal from Bankman-Fried’s supporters, including a hand-wringing Twitter thread by leading Effective Altruism philosopher William MacAskill. It would be comforting to think that Holmes and Bankman-Fried were just massive grifters — sociopaths preying on the goodwill of others — but this would be letting their backers and the system that produced them off the hook. The real story here is surely the lack of scepticism shown towards these celebrity entrepreneurs once their messianic image was established.

Holmes’ followers had no right to be astonished that Theranos turned out to be a dud, given that clinical scientists were raising red flags about the company’s secrecy long before it became a public scandal. Writing a New Yorker profile of Holmes in 2014, at the height of her fame, Ken Auletta described her explanation of the technology as “comically vague”. Did none of Theranos’ board members stop to ask why their illustrious colleagues included just two people with a medical licence?

With Bankman-Fried the signs were even more obvious. In a now-infamous Bloomberg interview, the founder described part of his business model as a magic box that people stuff money into simply because others are doing so, and which “literally does nothing” apart from generate tokens that can be traded based on the hypothetical potential of said box. The journalist Matt Levine paraphrased this theory as “I’m in the Ponzi business and it’s pretty good”, which Bankman-Fried admitted was “a pretty reasonable response”. 

The entrepreneur Antonio García Martínez has an interesting historical take on the FTX fiasco, pointing out that charisma and speculation are typical in the early stages of a new technological paradigm, before it settles into a more stable, regulated status quo. “Innovation starts in mad genius and grift and bubbles,” he writes, “and ends in establishment institutions that go on to reject the next round of mayhem.” A good point, but hardly a reassuring one, given that there will always be another hyper-ambitious figure promising to open up a new frontier.

We know this because there is such obvious demand in American elite society for individuals who can legitimise tech capitalism, whether through their aura of personal brilliance or by demonstrating its potential for beneficial progress. The reverence for Steve Jobs and Elon Musk is proof of this, but Holmes and Bankman-Fried went a step further by presenting themselves as ambitious prodigies and evangelical do-gooders. Where did they learn this formula for adulation? It’s notable that both are themselves quintessential products of the elite: Holmes’ parents were Washington insiders; Bankman-Fried’s were Stanford law professors.

Ironically, the danger posed by such figures is not so much that they are “disruptive”, but that they awaken a deeply conformist desire to worship the glamorous heralds of progress. A desire, it seems, that can make people rather gullible. 

According to the latest reports, the collapse of Bankman-Fried’s crypto empire could affect as many as a million creditors, to say nothing of the individuals whose savings were implicated through institutional investors. Still, we can be grateful it happened now, and not at a point where cryptocurrency had become large enough to pose a systemic threat to the financial system. As for Theranos, the testimony of patients who received false blood-test results ought to be warning enough about what a close call that was. How long will our luck last? Given the ever-growing role of technology in our lives, the next hyped-up young genius may cause more havoc still.

Design for Dictators

This essay was first published at The Pathos of Things newsletter. Subscribe here.

The 1937 World Fair in Paris was the stage for one of the great symbolic confrontations of the 20th century. On either side of the unfortunately titled Avenue of Peace, with the Eiffel Tower in the immediate background, the pavilions of Nazi Germany and the Soviet Union faced one another. The former was a soaring cuboid of limestone columns, crowned with the brooding figure of an eagle clutching a swastika; the latter was a stepped podium supporting an enormous statue of a man and woman holding a hammer and sickle aloft.

This is, at first glance, the perfect illustration of an old Europe being crushed in the antagonism of two ideological extremes: Communism versus National Socialism, Stalin versus Hitler. But on closer inspection, the symbolism becomes less clear-cut. For one thing, there is a striking degree of formal similarity between the two pavilions. And when you think about it, these are strange monuments for states committed, in one case, to the glorification of the German race, and in the other, to the emancipation of workers from bourgeois domination. As was noted by the Nazi architect Albert Speer, who designed the German structure, both pavilions took the form of a simplified neoclassicism: a modern interpretation of ancient Greek, Roman, and Renaissance architecture.

These paradoxes point to some of the problems faced by totalitarian states of the 1920s and 30s in their efforts to use design as a political tool. They all believed in the transformative potential of aesthetics, regarding architecture, uniforms, graphic design and iconography as means for reshaping society and infusing it with a sense of ideological purpose. All used public space and ceremony to mobilise the masses. Italian Fascist rallies were politicised total artworks, as were those of the Nazis, with their massed banners, choreographed movements, and feverish oratory broadcast across the nation by radio. In Moscow, revolutionary holidays included the ritual of crowds filing past new buildings and displays of city plans, saluting the embodiments of Stalin’s mission to “build socialism.”

The beginnings of all this, as I wrote last week, can be seen in the Empire Style of Napoleon Bonaparte, a design language intended to cultivate an Enlightenment ethos of reason and progress. But whereas it is not surprising that, in the early 19th century, Napoleon assumed this language should be neoclassical, the return to that genre more than a century later revealed the contradictions of the modernising state more than its power.

One issue was the fraught nature of transformation itself. The regimes of Mussolini, Hitler and Stalin all wished to present themselves as revolutionary, breaking with the past (or at least a rhetorically useful idea of the past) while harnessing the Promethean power of mass politics and technology. Yet it had long been evident that the promise of modernity came with an undertow of alienation, stemming in particular from the perceived loss of a more rooted, organic form of existence. This tension had already been engrained in modern design through the medieval nostalgia of the Gothic revival and the arts and crafts movement, currents that carried on well into the 20th century; the Bauhaus, for instance, was founded on the model of the medieval guild.

This raised an obvious dilemma. Totalitarian states were inclined to brand themselves with a distinct, unified style, in order to clearly communicate their encompassing authority. But how can a single style represent the potency of modernity – of technology, rationality and social transformation – while also compensating for the insecurity produced by these same forces? The latter could hardly be neglected by regimes whose first priority was stability and control.

Another problem was that neither the designer nor the state can choose how a given style is received by society at large. People have expectations about how things ought to look, and a framework of associations that informs their response to any designed object. Influencing the public therefore means engaging it partly on its own terms. Not only does this limit what can be successfully communicated through design, it raises the question of whether communication is even possible between more radical designers and a mass audience, groups who are likely to have very different aesthetic intuitions. This too was already clear by the turn of the 20th century, as various designers who tried to develop a socialist style, from William Morris to the early practitioners of art nouveau in Belgium, found themselves working for a small circle of progressive bourgeois clients.

Constraints like these decided much about the character of totalitarian design. They were least obvious in Mussolini’s Italy, since the Fascist mantra of restoring the grandeur of ancient Rome found a natural expression in modernised classical forms, the most famous example being the Palazzo della Civiltà Italiana in Rome. The implicit elitism of this enterprise was offset by the strikingly modern style of military dress Mussolini had pioneered in the 1920s, a deliberate contrast with the aristocratic attire of the preceding era. The Fascist blend of ancient and modern was also flexible enough to accommodate more radical designers such as Giuseppe Terragni, whose work for the regime included innovative collages and buildings like the Casa del Fascio in Como.

The situation in the Soviet Union was rather different. The aftermath of the October Revolution of 1917 witnessed an incredible florescence of creativity, as artists and designers answered the revolution’s call to build a new world. But as Stalin consolidated his dictatorship in the early 1930s, he looked upon cultural experimentation with suspicion. In theory Soviet planners still hoped the urban environment could be a tool for creating a socialist society, but the upheaval caused by Stalin’s policies of rapid industrial development and the new atmosphere of conservatism ultimately cautioned against radicalism in design.

Then there was the awkward fact that the proletariat on whose behalf the new society would be constructed showed little enthusiasm for the ideas of the avant garde. When it came to building the industrial city of Magnitogorsk, for instance, the regime initially requested plans from the German Modernist Ernst May. But after enormous effort on May’s part, his functionalist approach to workers’ housing was eventually rejected for its abstraction and meanness. As Stephen Kotkin writes, “for the Soviet authorities, no less than many ordinary people, their buildings had to ‘look like something,’ had to make one feel proud, make one see that the proletariat… would have its attractive buildings.”

By the mid-1930s, the architectural establishment had come to the unlikely conclusion that a grandiose form of neoclassicism was the true expression of Soviet Communism. This was duly adopted as Stalin’s official style. Thus the Soviet Union became the most reactionary of the totalitarian states in design terms, smothering a period of extraordinary idealism in favour of what were deemed the eternally valid forms of ancient Greece and Rome. The irony was captured by Stalin’s decision to demolish one of the most sacred buildings of the Russian Orthodox Church, the Cathedral of Christ the Saviour in Moscow, and erect in its place a Palace of the Soviets. Having received proposals from some of Europe’s most celebrated progressive architects, the regime instead chose Boris Iofan to build a gargantuan neoclassical structure topped by a statue of Lenin (the project was abandoned some years later). Iofan himself had previously worked for Mussolini’s regime in Libya.

If Stalinism ended up being represented by a combination of overcrowded industrial landscapes and homages to the classical past, this was more stylistic unity than Nazi Germany was able to achieve. Hitler’s regime was pulled in at least three directions, between its admiration for modern technology, its obsession with the culture of an imagined Nordic Volk (which, in a society traumatised by war and economic ruin, functioned partly as a retreat from modernity), and Germany’s own tradition of monumental neoclassicism inherited from the Enlightenment. Consequently there was no National Socialist style, but an assortment of ideological solutions in different contexts.

Despite closing the Bauhaus on coming to power in 1933, the Nazis imitated that school’s sleek functionalist aesthetic in their industrial and military design, including the Volkswagen cars designed to travel on the much-vaunted Autobahn. Yet the citizens who worked in these modern factories were sometimes provided with housing in the Heimatstil, an imitation of a traditional rural vernacular. Propaganda could be printed in a Gothic Blackletter typeface or broadcast through mass-produced radios. But the absurdity of Nazi ideology was best demonstrated by the fact that, like Stalin, Hitler could not conceive of a monumental style to embellish his regime that did not continue in the cosmopolitan neoclassical tradition inspired by the ancient Mediterranean. The cut-stone embodiments of the Third Reich, including Hitler’s imagined imperial capital of Germania, were projected in the stark neoclassicism of Speer’s pavilion for the Paris World Fair. It was only in the regime’s theatrical public ceremonies that these clashing ideas were integrated into something like a unified aesthetic experience, as the goose-stepping traditions of Prussian militarism were updated with Hugo Boss uniforms and the crypto-Modernist swastika banner.

Of course it was not contradictions of style that ended the three classic totalitarian regimes; it was the destruction of National Socialism and Fascism in the Second World War, and Stalin’s death in 1953. Still, it seems safe to say that no state after them saw in design the same potential for a transformative mass politics. 

Dictatorships did make use of design in the later parts of the 20th century, but that is a subject for another day. As in the western world, they were strongly influenced by Modernism. A lot of concrete was poured, some of it into quite original forms – in Tito’s Yugoslavia for instance – and much of it into impoverished grey cityscapes. Stalinist neoclassicism continued sporadically in the Communist world, and many opulent palaces were constructed, in a partial reversion to older habits of royalty. Above all though, the chaos of ongoing urbanisation undermined any pretence of the state to shape the aesthetic environment of most of its citizens, a loss of control symbolised by the fate of the great planned capitals of the 1950s, Le Corbusier’s Chandigarh and Lúcio Costa’s Brasilia, which overflowed their margins with satellite cities and slums.

In the global market society of recent decades, the stylistic pluralism of the mega-city is the overwhelming pattern (or lack of pattern), seen even in the official buildings of an authoritarian state like China. On the other hand, I’ve recently argued elsewhere that various repressive regimes have found a kind of signature style in the spectacular works of celebrity architects, the purpose of which is not to set them apart but to confirm their rightful place in the global economic and financial order. But today the politics of built form feel like an increasingly marginal leftover from an earlier time. It has long been in the realm of media that aesthetics play their most important political role, a role that will only continue to grow.


Crisis and Heroic Design

This essay was first published at The Pathos of Things newsletter. Subscribe here.

One of my favourite artefacts is a series of banknotes designed by Herbert Bayer in 1923, during Weimar Germany’s famous hyperinflation. This was the period when, as you might recall from the images in your school history textbook, the German currency devalued so dramatically that people needed wheelbarrows of money to buy a loaf of bread, and cash was a cheaper way to start a fire than kindling.

Bayer’s banknotes, which came in denominations from one-million to fifty-million Marks, are emblematic of how crises can stimulate innovative design. If it wasn’t for the unusual problem of needing to produce an emergency supply of banknotes, it is unlikely the State Bank of Thuringia would have commissioned Bayer, who was then still a student at the Bauhaus school of design. Bayer had no formal training in typography, but he did have some radical ideas involving highly simplified sans-serif numbers and letters, which he duly used for the banknotes. The descendants of those numbers and letters include the font you are reading right now.

This story resonates with an outlook we might call the heroic theory of design, where designers step up at moments of crisis to change the world. Typefaces don’t seem like a big deal, but Bayer’s ideas were part of a wider movement to radically rethink every area of design for the practical benefit of society as a whole. By 1926, he had developed a “universal alphabet” of lower-case-only, sans-serif letters, to make printing, typing and public communication more efficient and accessible. The Bauhaus (or bauhaus, as Bayer would have put it) was suffused with such urgent, experimental thinking, always framed as a response to the prevailing mood of crisis in Weimar Germany. This is part of the reason it remains the most influential design school in history, despite only operating for fourteen years.

The heroic theory is deeply appealing because it taps into the basic narrative of modern design: the promise of order in a world of constant change. The words “crisis” and “emergency” describe modernity at its most raw and contingent, a “moment of decision” (the original meaning of the Greek krisis) when the shape of the future is at stake in a fundamental way. Crises therefore seem to be the moments when we are most in need of design and its order-giving potential, to solve problems and resolve uncertainty in an active, positive manner.

But to what extent can design actually play this heroic role in times of crisis, and under what conditions? This question is of more than academic interest, since we ourselves live in an era defined by multiple crises, from climate and pandemic to war, energy shortages, economic hardship and political turbulence. The German chancellor Olaf Scholz has even invoked the concept of Zeitenwende, “a time of transformation,” which was current during the Weimar years.

The eminent writer Alice Rawsthorn has responded with a new heroic theory of design, first labelled “design as an attitude” (the name of her 2018 book) and more recently “design emergency.” Rejecting the traditional image of design as a commercial discipline, Rawsthorn places her hope in resourceful individuals who find innovative answers to ecological, humanitarian and political problems. More broadly, she encourages us, collectively, to see crisis as an opportunity to actively remake the world for the better. There is even a link to Weimar, as Rawsthorn draws inspiration from another Bauhaus figure, the Hungarian polymath László Moholy-Nagy, who was Bayer’s teacher at the time he designed his banknotes.

The new breed of heroic designers includes the Dutch university student Boyan Slat, who crowd-funded an enormous initiative to tackle ocean pollution, and the duo of Saeed Khurram and Iffat Zafar, doctors who used video conferencing to deliver health care to women in remote Pakistan. Rawsthorn argues that, while many of our systems and institutions have shown themselves no longer fit for purpose, network technology is allowing such enterprising figures to find funding, collaborators and publicity for their ideas.

That point about the empowering potential of networks strikes me as crucial to the plausibility of this outlook. The Internet has definitely made it easier for individual initiatives to have an impact, but can this effect really scale enough to answer the crises we face? Going forward, the heroic theory hinges on this question, because history (including recent history) points in a different direction.

We need to draw a distinction between the general flourishing of creativity and ingenuity in times of crisis, and the most consequential design visions that are widely implemented. The latter, it seems, are overwhelmingly determined by established institutions. Take Bayer again; he could not have put his designs in the hands of millions of people without the assistance of a state bank, any more than he could have actually solved the problem of hyperinflation. Likewise, the immediate impact of the Bauhaus and of Modernism in general, limited as it was, depended on its ability to persuade big manufacturers and municipal governments to adopt its ideas. Margarete Schütte-Lihotzky’s groundbreaking Frankfurt kitchen, which I wrote about recently, owed its success to the boldness of that city’s housing program.

Innovation in general tends to unfold in stages, and often with input from numerous sources, big and small. But in times of crisis, which typically demand large-scale, complex initiatives on a limited timescale, institutions with significant resources and organisational capacity play a decisive role. Insofar as individuals and smaller movements make a difference, they first need the levers of power to be put in their hands, as it were.

Wars are famously powerful engines of innovation, precisely because these are the moments when the state’s resources are most intensely focused. Addressing the problems of infectious disease and abject living conditions in the 19th century required not just city planners and sanitation experts, but governments to empower their designs. No one expects plucky outsiders to develop vaccines or mitigate the effects of financial crises. Even on the longer time horizon of climate change, the development of renewable energy is requiring extensive government and corporate involvement, partly to combat the vested interests of the status quo. The crucial breakthrough may turn out to be the emergence of a “green industrial complex,” a new set of powerful interests with a stake in the energy transition.

This does not mean the answers arrived at in this way are necessarily good ones, and they will certainly bear the stamp of the power structures that produce them. This is why slum clearances in the mid-19th century produced cities designed for property investors, while slum clearances in the mid-20th century produced public housing. That said, it is not straightforward to work out what a good answer to a crisis actually is.

Though crises usually have an underlying material reality, they are ultimately political phenomena: a crisis comes into existence with our perception of it (the “fear itself” that Franklin Roosevelt spoke of during the Great Depression). Thus an “effective” solution is one that addresses perceptions, even if its material results are questionable. Rawsthorn understands this, as did the Modernists of the 1920s, for these approaches to design are about transforming worldviews as much as generating practical solutions. But ultimately, the political nature of crisis only reaffirms the importance of powerful institutions. For better or worse, there tends to be a Hobbesian flight to authority in times of emergency, a search for leaders who can take control of the situation.

Another observation which undermines the heroic theory is that the most important designs in moments of crisis are rarely new ones. As Charles Leadbeater points out in his fascinating comparison of British efforts during the Second World War and in the Covid pandemic (and hat-tip Saloni Dattani for sharing), effective answers tend to come from the repurposing of existing technologies and ideas. This too has a strong institutional component, since knowledge needs to be built up over time before it can be repurposed during a crisis.

By way of illustration, Leadbeater’s remarks about the UK’s failed efforts to design new ventilators in the midst of the pandemic are worth quoting at length:

Code-named Operation Last Gasp when the Prime Minister first enlisted British manufacturers to pivot their production lines from aero engines, vacuum cleaners and racing cars to ventilators, five thousand companies and 7,500 staff responded to the challenge to design new ventilators, in what was billed as a showcase of British engineering prowess. Companies such as Dyson and Babcock joined universities and Formula 1 teams only to find they were sent down blind alleys to design, from scratch, machines that clinicians would not use.

Those in the industry who suggested that it would be more sensible to produce more machines based on existing designs were eventually vindicated… The main usable innovation was a version of an existing British machine which was upgraded so it could be exported.

The 2,500 ventilators the UK procured from abroad were more sophisticated machines needed to sustain people in intensive care for weeks on end. The most famous manufacturer of those high-end machines is the family-owned German company Draeger, founded in Lubeck in 1889, which made the first-ever ventilator, the Pulmotor. The company’s latest product, the Pulmovista 500, visualises the flow of air through the lungs in such detail that clinicians can monitor it in real-time and make minute adjustments to the flow. The company’s chief executive, Stefan Draeger, is the fifth generation of the family to lead the company. You do not invent that kind of capability from scratch in a few weeks.

Even 1920s Modernism, the archetypal heroic design movement, did not emerge ex nihilo. Its foundations were laid in the years before the First World War, through the patronage of German industrial giants like AEG, and in the Deutscher Werkbund before that.

For Rawsthorn’s vision of crisis entrepreneurs to be realised on a bigger scale, network technology would have to replace this institutional development across time with individual collaboration across space. For all the power of open source databases and information sharing, I’m yet to be convinced this is possible.

It remains true, of course, that crisis design which fails to have an immediate impact can still be revolutionary in the longer term. The Bauhaus is an excellent example of this. But it’s interesting to note that the lasting effects of crises on design are not always predictable. The experience of popular mobilisation for the First World War persuaded the survivors of the power of mass media and propaganda. The idea of “built-in obsolescence” – making minor alterations to products so that consumers want to buy the newer version – was widely taken up in response to the Great Depression. Research undertaken during the Second World War led to a boom in the use of plastic materials. Covid, it seems, has prompted the mass adoption of remote working technologies. 

Crises pave the way for such shifts, because by definition, these are moments when we see our current reality as provisional. At times of crisis, like the one we are in now, no one believes that the future will look like the recent past; we have, unconsciously, prepared ourselves for dramatic change. In this space of expectation new forms of design can emerge, though we don’t yet know what they will be.
