The Architecture of Autocracy

This article was originally published by Unherd in November 2022. Read it here.

For a building project marketed like a Hollywood blockbuster, the latest footage from the deserts of northwestern Saudi Arabia is a little underwhelming. A column of trucks is moving sand, a row of diggers poking at the barren landscape like toys arranged on a beach. The soundtrack, an epic swirl of fast-paced, rising strings, doesn’t really belong here.

Still, the video got its message across: it’s really happening. The widest aerial shots reveal an enormous groove in the sand, stretching to the horizon. We are seeing the birth of “The Line”, an insanely ambitious project for a city extending 170km through the desert, sandwiched in a narrow space between two immense walls. The new construction footage is an update on the viral CGI trailers that overwhelmed the internet last year, showing us glimpses of what life will be like inside this linear chasm of a city: a city where there will be no cars or roads, where every amenity is always a five-minute walk away, and where, according to one planning document, there could be robot maids.

This scheme sounds mad enough, but The Line is only the centrepiece of a much bigger development, called Neom (a blend of neo and mustaqbal, Arabic for “future”). Neom will be a semi-autonomous state, encompassing 26,000 square kilometres of desert with new resorts and tech industry centres.

There may be no philosopher kings, but there are sci-fi princes. The dreams of Mohammed bin Salman, crown prince of Saudi Arabia and chairman of the Neom board, make the techno-futurism of Silicon Valley look down to earth. Bin Salman is especially fond of the cyberpunk genre of science fiction, with its gritty hi-tech dystopias. He has enlisted a number of prominent Hollywood visual specialists for the Neom project, including Olivier Pron of Marvel’s Guardians of the Galaxy franchise. A team of consultants was asked to develop science-fiction aesthetics for a tourist resort, resulting in “37 options, arranged alphabetically from ‘Alien Invasion’ to ‘Utopia’”. One proposal for a luxury seaside destination, which featured a glowing beach of crushed marble, was deemed insufficiently imaginative.

Such spectacular indulgence must be causing envy among the high-flying architects and creative consultants not yet invited to join the project — if there are any left. But it also makes the moral dimension difficult to ignore: how should we judge those jumping on board bin Salman’s gravy train? Saudi Arabia — in case anyone has forgotten in the years since the journalist Jamal Khashoggi was murdered at its consulate in Istanbul — is a brutal authoritarian state.

In recent weeks, this has prompted some soul-searching in the architecture community, with several stinging rebukes aimed at Neom. Writing in Dezeen, the urbanist Adam Greenfield asks firms such as Morphosis, the California-based architects designing The Line, to consider “whether the satisfaction of working on this project, and the compensation that attends the work, will ever compensate you for your participation in an ecological and moral atrocity”. Ouch. Greenfield’s intervention came a week after Rowan Moore asked in The Observer: “When will whatever gain that might arise from the creation of extraordinary buildings cease to outweigh the atrocities that go with them?”

You see, bin Salman’s blank slate in the desert was not actually blank (they never are); settlements belonging to the Huwaitat tribespeople have been ruthlessly flattened to make space for Neom. One man leading resistance to the clearances, Abdul Rahim al-Huwaiti, was killed by security forces in 2020, and three others have been sentenced to death. Critics also point to the absurd pretence that The Line is an eco-friendly project, given the industrial operations needed to build and maintain a city for nine million people in searing desert temperatures.

There is an obvious parallel here with the criticism of celebrities and commentators taking part in the World Cup in Qatar, another unsavoury petro-state. International sporting events are notorious for giving legitimacy to dictatorships, so why wouldn’t we see architectural monuments in the same way? With Neom there is barely a distinction to draw. Zaha Hadid Architects, the British firm that designed one of the Qatari football stadiums — a project synonymous with the shocking treatment of migrant construction workers — is also working on one of the Neom sites, an artificial ski resort that will host the 2029 Asian Winter Games.

In the 21st century, Western architects have helped to burnish the image of repressive regimes, especially those big-name architects who specialise in spectacular monumental buildings. Zaha Hadid was the most wide-ranging: her trademark swooshing structures include a museum honouring the ruling family in Azerbaijan and a conference hall in Muammar Gaddafi’s Libya (never completed due to Gaddafi’s demise). But the biggest patrons of globetrotting architects have been the Arab Gulf States — especially Qatar, Saudi Arabia and the United Arab Emirates — along with China. Among countless examples in these regions, the most infamous is probably Rem Koolhaas’s headquarters for China Central Television in Beijing, a suitably sinister-looking building for a state organisation that shapes the information diet of hundreds of millions of people each day.

The uncomfortable truth is that autocrats and architects share complementary motivations. The former use architecture to glorify their regimes, both domestically and internationally, whereas the latter are attracted to the creative freedom that only unconstrained state power can provide. In democratic societies, there is always tension between the grand visions of architects and the numerous interest groups that have a say in the final result. Why compromise with planning restrictions and irate neighbours when there is a dictator who, as Greenfield puts it, “offers you a fat purse for sharing the contents of your beautiful mind with the world?”

This is not just speculation. As Koolhaas himself stated: “What attracts me about China is that there is still a state. There is something that can take initiative on a scale and of a nature that almost nobody that we know of today could even afford or contemplate.”

But really this relationship between architect and state is a triangle, with financial interests making up the third pole. Despite the oft-repeated line that business loves the stability offered by the rule of law, when it comes to building things, the money-men are as fond of the autocrat’s empty canvas as the architects are. When he first pitched the Neom project to investors in 2017, bin Salman told them: “Imagine if you are the governor of New York without having any public demands. How much would you be able to create for the companies and the private sector?”

This points us to the deeper significance of the Gulf States and China as centres of high-profile architecture. These were crucial regions for post-Nineties global capitalism: the good illiberal states. Celebrity architects brought to these places the same spectacular style of building that was appearing in Europe and North America; each landmark “iconic” and distinct but, in their shared scale and audacity, also placeless and generic. Such buildings essentially provided a seal of legitimacy for the economic and financial networks of globalisation. Can this regime’s values really be so different to ours, an investor might say, when they have a museum by Jean Nouvel, or an arts centre by Norman Foster? British architects build football stadiums and skyscrapers in Qatar and Saudi Arabia, while those governments own football stadiums and skyscrapers in Britain, such as The Shard and Newcastle’s St James’s Park.

This is not to suggest some sort of conspiracy: the ethical issues of working for repressive states have often been debated by architects. When the tide of liberal capitalism seemed to be coming in around the world, they could say, and believe, that their buildings were optimistic gestures, representing a hoped-for convergence around a single global modernity. It is the collapse of those illusions over the last decade that makes such reasoning look increasingly suspect.

With Neom, bin Salman is pushing the publicity value of architecture to a whole new level, and making it explicit. Aware that breakthroughs in clean energy would essentially render his kingdom a stranded asset, he is trying to rebrand Saudi Arabia as a high-tech green state. He offers investors a package they, like many architects, dream about: breathtaking novelty and innovation, combined with sustainability and an apparent humanistic concern.

But ironically, what bin Salman has really shown is that architects are increasingly unnecessary for conveying political messages. They are being replaced by those masters of unreality who use digital technology to the same ends, like the Marvel film magicians creating a vision of Neom in the global imagination. Whether or not a city like The Line actually exists is almost beside the point in terms of its publicity value. After all, this is an era where the superhero realm of Wakanda is praised as a depiction of Africa, and where America tore itself apart for four years over a wall that never actually came into being.

Likewise, given the technological challenges involved, we can be certain the vast furrow appearing in the Saudi desert will never become The Line as portrayed in the promotional videos. But videos will be enough to project the desired image of an innovative, progressive state. That bin Salman himself might really believe in his futuristic city, encouraged by his army of paid-up designers, will only make him a better salesman.

Design for the End Times

This essay was first published at The Pathos of Things newsletter. Subscribe here.

YouTube is one of the most powerful educational tools ever created; so powerful, in fact, that it can teach someone as inept as me to fix things. I am slightly obsessed with DIY tutorials. Your local Internet handyman talks you through the necessary gear, then patiently demonstrates how to wire appliances, replace car batteries or plaster walls. I’ve even fantasised that, one day, these strangers with power tools will help me build a house.

To feel self-sufficient is deeply satisfying, though I have to admit there are more hysterical motives here too. I’ve always been haunted by the complacency of life in a reasonably well-functioning modern society, where we rely for our most basic needs on dazzlingly complex supply chains and financial arrangements. If everything went tits-up and we had to eke out an existence amidst the rubble of modernity, I would be almost useless; the years I have spent reading books about arcane subjects would be worth even less than they are today. Once the Internet goes down, I will not even have YouTube to teach me how to make a crossbow.

But what if, instead of becoming more competent, you could simply create a technologically advanced bubble to shelter from the chaos of a collapsing society? Welcome to the world of post-apocalyptic hideouts for the super-rich, one of the wackiest and most morbidly fascinating design fields to flourish in the last decade.

In a recent book, Survival of the Richest, media theorist Douglas Rushkoff describes the growing demand for these exclusive refuges. Rushkoff was invited to a private conference where a group of billionaires, from “the upper echelon of the tech investing and hedge fund world,” interrogated him about survival strategies:

New Zealand or Alaska? Which region will be less impacted by the coming climate crisis? … Which was the greater threat: climate change or biological warfare? How long should one plan to be able to survive with no outside help? Should a shelter have its own air supply? What is the likelihood of groundwater contamination?

Apparently these elite preppers were especially vexed by the problem of ensuring the loyalty of their armed security personnel, who would be necessary “to protect their compounds from raiders as well as angry mobs.” One of their solutions was “making guards wear disciplinary collars of some kind in return for their survival.”

There is now a burgeoning industry of luxury bunker specialists, such as the former defence contractor Larry Hall, to address such dilemmas. In Kansas, Hall has managed to convert a 1960s silo for launching nuclear missiles into a “Survival Condo,” where seventy-five clients can allegedly survive three to five years of nuclear winter. As I learned from this tour (thanks again, YouTube), Hall’s bunker is essentially a 200-foot cylinder sunk into the ground, lined with concrete and divided into fifteen floors. There are seven floors of luxury living quarters, practical areas such as food stores and medical facilities, and leisure amenities including a bar, a library, and a cinema with 3,000 films. Energy comes from multiple renewable sources.

What to make of this undertaking? There is a surreal quality to the designers’ efforts to simulate a familiar environment, which presumably have less to do with the realities of post-apocalyptic life than with marketing to potential buyers. The swimming pool area is adorned with artificial boulders and umbrellas (yes, underground umbrellas), there are classrooms where children can continue their school syllabus (because who knows when those credentials will come in handy), and each client is provided with pre-downloaded Internet content based on keywords of their choice. You can even push a shopping trolley around a pitiful approximation of a supermarket. Honestly, it’s like the world never ended, except that a lack of space for toilet paper means you have to use bidet toilets.

But there is another way to look at these pretences of normality. Dystopian projections tend to reflect the social conditions from which they emerge: just as my own dreams of self-sufficiency are no doubt the standard insecurities of an alienated knowledge worker, luxury bunkers are merely an extension of the gated communities and exclusive lifestyles that many of the super-wealthy already inhabit. As Rushkoff suggests, these escape strategies smack of an outlook which has been “rejecting the collective polity all along, and embracing the hubristic notion that with enough money and technology, the world can be redesigned to one’s personal specifications.” From this perspective, society already resembles a savage horde lurking beyond the gate. Perhaps the ambition of retreating into an underground pleasure palace defended by armed guards is less a dystopia than a utopia.

One of Britain’s own nuclear refuges from the Cold War era – a vast underground complex in Wiltshire, including sixty miles of roads and a BBC recording studio – will apparently be difficult to convert into a luxury bunker because it has been listed as a historic structure. On the other hand, I suppose the post-apocalyptic property developers could charge extra for a heritage asset.

The overlap between escaping catastrophe and simply abandoning society is even more evident in the “seasteading” movement, which aims to create autonomous cities on the oceans. This project was hatched in 2008 by Google engineer Patri Friedman, grandson of the influential free-market economist Milton Friedman, with funding from tech investor Peter Thiel. The idea was that communities floating in international waters could serve as laboratories for new forms of libertarian self-governance, away from the clutches of centralised states. But as the movement evolved into different strands, the rhetoric became increasingly apocalyptic. A group called Ocean Builders, for instance, has presented the floating homes it is designing in Panama as a “lifeboat” to escape disasters such as the Covid pandemic, as well as government tyranny.

These “SeaPods” have much in common with the luxury bunkers back on terra firma. Designed by Dutch architect Koen Olthuis, they consist of a streamlined capsule elevated above the surface of the ocean, with steps leading down the inside of a floating pole to give access to a small block of underwater rooms. They are imagined as lavishly crafted, exclusive products, reminiscent of holiday retreats, with autonomous supplies of energy, food and water. The only problem is that such designs need to be trialled in coastal waters, and for some reason governments have not been very receptive to anarcho-capitalist tax-dodgers trying to establish sovereign entities along their shorelines.

But entertaining as it is to ridicule these schemes, there is a danger that mockery becomes a kind of avoidance strategy. A far-reaching social collapse is not actually far-fetched (on a regional level, it has already occurred numerous times in living memory), and even the United Nations has embraced the speculative prepper mindset. Anticipating the potential effects of climate change, the UN is backing another version of seasteading, with modular floating islands designed by the fashionable architect Bjarke Ingels.

All civilisations must come to an end eventually, and ours is fairly fragile. In complex systems like those we rely on for basic goods and materials, a breakdown in one area of the network can have dramatic destabilising effects for the rest. We have already seen glimpses of this with medical shortages during the pandemic, and soaring energy costs due to the Ukraine war. How far we will fall in the event of a sudden collapse depends on the back-up systems in place. One of the more intriguing figures offering emergency retreats for the wealthy, the American businessman J.C. Cole, is also trying to develop a network of local farms to provide a broader population with a sustainable food supply. Cole witnessed the collapse of the Soviet Union in the early 1990s, and inferred that Latvia experienced less violence because people were able to grow food on their dachas.

But perhaps the most unnerving prospect, as well as the most likely, is that collapse won’t be sudden or dramatic. There won’t be a moment to rush into a bunker, or to use emergency mechanic skills acquired from YouTube. Rather, the fabric of civilised life will fray slowly, with people making adjustments and improvising solutions to specific problems as they appear. Only gradually will we transition into an entirely different kind of society.

In South Africa, the country where I was born and where I am writing now, this process seems to be going on all the time. There are pockets of extraordinary wealth here, but public infrastructure is crumbling. The power cuts out for several hours every day, so many businesses have essentially gone off-grid with their own generators. The rail network has largely disappeared. Private security companies have long since replaced various policing functions, even among the middle class, and better-organised neighbourhoods coordinate their own patrols. Meanwhile in poorer areas, people still inhabit a quasi-modern world of mobile phones and branded products, but face a constant struggle with badly maintained water, electricity and sewage systems. Housing is often improvised and travel involves paying for a seat on a minibus.

The point is not that South Africa is collapsing – it still has a lot going for it, and besides, most of its population has never enjoyed first-world comforts – but that this is how it might look if, like many civilisations in the past, the advanced societies of today were to “collapse” gradually, over generations. It would be a slow-motion version of the polarisation between survivors and rejects that we see in the escape plans of the super-rich. And though we would realise things are not as they should be, we would keep hoping the decline was just temporary. Only in the distant future, after a new civilisation had arisen, would people say that we lived through a kind of apocalypse.

Anyway, merry Christmas everyone.


How We Got Hooked on Chips

This essay was first published at The Pathos of Things newsletter. Subscribe here.

As I am making the final edits to this article, the media are reporting that Chinese fighter jets are flying along the narrow strait separating China from Taiwan.

This dramatic gesture, along with other signals of military readiness, raises the spectre of a catastrophic conflict between the world’s two superpowers, China and the United States. I just hope that by the time this is published, you don’t have to read it in a bunker.

The immediate reason for this crisis is an official visit on Wednesday by Nancy Pelosi, the Speaker of the US House of Representatives, to Taiwan, an island that China claims as part of its territory. But remarkably, one of the deeper sources of this tension is a story about design.

Taiwan has something that every powerful nation on the planet wants, and needs access to. It has the Taiwan Semiconductor Manufacturing Company (TSMC), an industrial enterprise that manufactures half of the world’s semiconductors, or computer chips. More importantly, TSMC makes the vast majority of the most advanced logic chips (only South Korea’s Samsung can produce them to a similar standard). These advanced chips provide the computing power behind our most important gadgets, from smartphones and laptops to artificial intelligence, cloud software and state-of-the-art military technology.

Pelosi’s incendiary visit this week reflects a striking fact about the early 21st century: the world’s most powerful states cannot provide themselves with stamp-sized electronic components made in a factory. The precariousness of this situation has hit home in recent years, as trade wars, lockdowns and supply chain disruptions have created a global semiconductor shortage. This cost the car industry an estimated $210 billion last year, while slashing Apple’s output by up to 10 million iPhones. Chip shortages are a major obstacle to US efforts to supply Ukraine with weapons (a Javelin rocket launcher uses around 250 semiconductors).

So why don’t states just make their own advanced chips? They are trying. The US and the European Union are each offering investments of around $50 billion for domestic semiconductor manufacturing. This will largely involve subsidising Intel, the last great American hope for advanced chip-making, in its ambitions to catch up with its rivals in Taiwan and South Korea. Meanwhile China, rapidly gaining ground in the chip race despite US efforts to hamper it, is spending much more than that.

It would be comforting to think that all this is just the result of complacency: that western governments did not realise their dangerous dependence on Taiwanese chips, and will now lessen that dependence. This would allow us to feel slightly more confident that a face-off over Taiwan will not provide the spark for World War Three.

The reality is more sobering. None of the plans being laid now appear likely to diminish the importance of Taiwan. What is more, TSMC represents only one aspect of the west’s dependence on Asia for the future of its chip industry. To understand why, we have to look at how semiconductors became the most astonishing design and engineering achievement of our age.

The surface of a computer chip is like a city, only it is measured not in miles, but in microns. This city is not made from buildings and streets, but from transistors etched in silicon: hundreds of millions of them carefully arranged in every square millimetre. Each transistor is essentially just a gate, which can be opened or closed to regulate an electric current. But billions of transistors, mapped out in a microscopic architecture, can serve as the brain of your smartphone.
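To make that gate metaphor a little more concrete, here is a toy sketch of my own (an illustration, not anything a chip designer would actually write): treat a transistor’s digital role as a boolean switch, and a handful of switches compose into the logic gates from which all computation can be built.

```python
# Toy model of a transistor's digital role: an on/off gate.
# Illustrative only; real chips arrange billions of these in etched silicon.

def nand(a: bool, b: bool) -> bool:
    """Roughly two transistors in series: output drops only when both gates open."""
    return not (a and b)

# NAND is "functionally complete": every other logic operation, and hence
# all computation, can be built by wiring NANDs together.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

assert and_(True, True) and not and_(True, False)
assert or_(False, True) and not or_(False, False)
```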

Recognising our dependence on such artefacts is unnerving. It reveals how the texture of our lives is interwoven with economic and technological forces we can barely comprehend. Semiconductors are everywhere: in contactless cards, household appliances, solar panels, pacemakers, watches, and all kinds of medical equipment and transportation systems. Aside from the logic chips that provide computing ability, semiconductors are needed for functions like memory, power and remote connection. They are the intelligent hardware underpinning the entire virtual universe of the Internet: we think and feel and dream in languages that semiconductors have made possible.

All this is thanks to a somewhat esoteric doctrine known as Moore’s Law. In 1965, the research director Gordon Moore predicted that the number of transistors on a computer chip would double every year, an estimate he later revised to every two years. This was, in effect, a prediction of the exponential growth of computing power, as well as its falling cost, and it has proved remarkably accurate to this day. The strange thing is that Moore’s Law has no hard scientific underpinning: it was simply an extrapolation based on Moore’s observations of the early semiconductor industry. But his “law” has become an almost religious mission within that industry, a prescribed rate of progress that every generation of designers and engineers seeks to uphold.
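As a back-of-envelope illustration of what a strict two-year doubling implies (the 1971 baseline of roughly 2,300 transistors, the count of Intel’s first microprocessor, is my own addition, not a figure from this essay):

```python
# Moore's Law as a naive two-year doubling, projected from an assumed baseline.
# Baseline: ~2,300 transistors on the Intel 4004 (1971) -- an illustrative choice.

def projected_transistors(year: int, base_year: int = 1971,
                          base_count: int = 2_300,
                          doubling_period: float = 2.0) -> float:
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")

# The 2021 projection comes out around 77 billion -- the right order of
# magnitude for the largest chips actually shipping in the early 2020s.
```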

The result has been an incredible series of innovations, allowing transistor density to keep increasing despite regular claims that the laws of physics won’t allow it. Moore’s Law gives semiconductor production its defining characteristic: chips are never something that you can learn how to make once and for all. Tomorrow’s chips are always just around the corner, and they will probably require a whole new technique.

This is why, by the late 1980s, it was becoming financially impossible for the great majority of firms to manufacture advanced chips. Doing so, then as now, means placing huge bets on new ideas that might produce the next breakthrough demanded by Moore’s Law. It means spending billions of dollars on a factory, which needs enough orders so that it can run 24/7 and recoup its costs. And it means doing this in the knowledge that everything will need to be upgraded or replaced in a matter of years.

The solution to this impasse came in 1987 with the founding of TSMC, assisted, ironically enough, by engineers and technology transfers from the United States. The Taiwanese company’s key innovation was to focus purely on manufacturing, allowing all the other firms that want to make chips to specialise in design. With its gigantic order book, TSMC makes enough money to continuously invest in new manufacturing techniques. Meanwhile, companies such as British-based ARM, and Apple and Qualcomm in the US, have focused on designing ever more revolutionary chips to be manufactured by TSMC.

With this basic division of labour in place, the semiconductor industry became a highly specialised, intensely competitive global enterprise. It takes millions of research and engineering hours to design new chips, and much of this work is done in India to make the process faster and cheaper (up-to-date statistics are hard to find, but by 2007 just 57% of engineers at American chip companies were based in the US). The semiconductors are made in Taiwan with Dutch machinery and Japanese chemicals and components, before being taken to China for testing, assembly and installation. And increasingly, the profits needed to keep everything going come from Asian consumers.

This is how Moore’s Law has been upheld, and how we have received ever-improving gadgets over the past three decades. But this chip-making system relies on each of its key points developing intense expertise in a specific area, to deliver constant techno-scientific progress and cost efficiency. TSMC is one of those key points, and its skills and experience cannot simply be copied elsewhere.

It is worth taking a moment to appreciate the mind-bending process, known as Extreme Ultraviolet Lithography, by which TSMC makes the most cutting-edge chips. It involves a droplet of molten tin, half the breadth of a single hair, falling through a vacuum and being vaporised by a laser that fires fifty thousand times per second. This produces a burst of light which you cannot actually see, since its short wavelength renders it invisible. After bouncing off numerous mirrors, that light will meet the chemically treated surface of a silicon wafer, where, over the course of numerous projections, it will etch the billions of transistors that allow a chip to function. These transistors are hundreds of times smaller than the cells in our bodies, not much larger than molecules.
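Two hedged numbers help put that invisibility claim in perspective; the 13.5 nm wavelength is the standard figure for EUV light (my addition), while the pulse rate is the one quoted above.

```python
# Rough arithmetic on the EUV process described above.
euv_wavelength_nm = 13.5     # standard EUV wavelength (not stated in the essay)
visible_min_nm = 380         # approximate short (violet) edge of human vision
pulses_per_second = 50_000   # the fifty thousand laser pulses quoted above

print(f"EUV light is ~{visible_min_nm / euv_wavelength_nm:.0f}x shorter than "
      f"the shortest wavelength the eye can see, hence invisible.")
print(f"{pulses_per_second * 60:,} tin droplets vaporised per minute.")
```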

Now we can begin to grasp why America and Europe are unlikely to replicate TSMC on their own shores. In fact, the Taiwanese company already operates factories in the US, making less advanced chips; but as its founder Morris Chang recently revealed, the lack of industrial expertise there prevents it from being competitive. Chang called the whole idea of an American semiconductor revival “a very expensive exercise in futility.” Analysts have similarly poured scorn on the EU’s chip manufacturing ambitions. 

Given the scale of the challenge, the investments on offer in the US and EU are practically chump change. Companies like TSMC and Samsung already spend sums of that size every single year. And they get more for their money too: according to one assessment, the cost of building and running a semiconductor plant in the US is one-third higher than in Taiwan, South Korea or Singapore. Another widely cited report estimated that if the US wanted semiconductor self-sufficiency, it would cost over $1 trillion, or twenty times the sum currently on offer.
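Laid out explicitly, the arithmetic behind those comparisons (using only the figures quoted above) runs as follows.

```python
# Sanity-checking the figures quoted in the paragraph above.
subsidy_on_offer = 50e9        # ~$50 billion each from the US and the EU
self_sufficiency_cost = 1e12   # the widely cited $1 trillion estimate

print(self_sufficiency_cost / subsidy_on_offer)  # 20.0 -- the "twenty times"

asia_plant_cost = 1.0                        # normalised: Taiwan/South Korea/Singapore
us_plant_cost = asia_plant_cost * (1 + 1/3)  # "one-third higher" in the US
print(f"{us_plant_cost:.2f}x")               # ~1.33x
```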

As for the United Kingdom, the less said the better. The UK finds itself with a potentially pioneering semiconductor plant in Newport, Wales, but has allowed the research facility there to lapse into a storage area, and has been trying to sell it to a Chinese-owned company for several years. As Ed Conway concludes, the UK government’s semiconductor strategy is simply non-existent. Absurdly, the responsibility for devising one was given to the Department for Digital, Culture, Media and Sport.

In short, the US and Europe are a long way from making semiconductors at a scale and with a proficiency that would seriously reduce dependence on TSMC. It is not just about mastering the cutting-edge techniques of today; it is about generating enough revenue to research and develop the techniques of tomorrow. And it is not just about meeting current demand; it is about building capacity for a decade in which global demand is expected to double. Advanced chips will be needed for 5G, gaming and video streaming, artificial intelligence and home offices.

I will leave it to the international relations people to assess what this means for US-China relations. The obvious conclusion is that western dependence on TSMC raises the stakes of a potential Chinese invasion of Taiwan, which in turn makes it more likely that the US will provoke China with its support for the island’s independence. But the picture is extremely knotty, given China’s own centrality to the business models of western companies, including chip designers.

What seems clearer is that the politics surrounding semiconductors in the west are highly misleading. Support for domestic chip-making has been tied into a narrative about moderating the excesses of globalisation and rebuilding industry at home. But in practice, we are talking about state subsidies for huge global companies which continue to rely on access to foreign labour markets and consumers. This contradiction is neatly captured by Intel banging the drum for more government investment, while simultaneously lobbying to ensure it can continue taking its technology to China.

Yet this is only fitting, since semiconductors are emblematic of the contradictions of post-1990s globalisation. A system defined by economic openness and expansion ultimately concentrated power in the hands of those supplying the most important resources, whether it be technological expertise or cheap fossil fuels. And even if the system unravels, our dependence on the resources will remain. The events surrounding Taiwan this week are just another reminder of that.


Fake It ‘Til You Break It

This article was originally published by The Critic magazine in November 2022. Read it here.

On Friday the saga of Elizabeth Holmes will move one step closer to its conclusion. Holmes, founder of the ill-fated health tech company Theranos, was convicted of fraud and conspiracy at the start of this year, and she will now receive her sentence. This is bad news for the army of hacks, podcasters and documentary makers who have spent years making hay from the Theranos debacle, a story of a charismatic young woman who fooled a wide swathe of elite America with her vision of innovation as a force for good.

They shouldn’t worry. With impeccable timing, a new tale of investor credulity and disastrous personal ambition has burst into the headlines. Last week, the cryptocurrency exchange FTX dramatically imploded after details of its creative accounting techniques were leaked to a news website, and a rival exchange started a run on its digital tokens. Billions of dollars belonging to investors and customers evaporated more or less overnight. As the news unfolded, all eyes turned to Sam Bankman-Fried, the 30-year-old crypto whizz-kid and Effective Altruism guru who exercised close control over FTX and a powerful grip on the imagination of Silicon Valley investors.

Bankman-Fried has reportedly been taken into custody by authorities in the Bahamas, where FTX was based. I won’t comment on the legal implications of his demise. What we can say, though, is that between him and Holmes, we have mounting evidence that the cult of the disruptive genius at the heart of tech capitalism has become a danger to the public.

Bankman-Fried rose through the crypto world on a wave of high-minded talk and personal charm. He offered financial brilliance along with the image of a scruffy outsider, appearing on stage with Bill Clinton and Tony Blair in a t-shirt and shorts, playing video games during business calls and bragging about sleeping on a beanbag next to his desk. Closely associated with the Effective Altruism movement — a school of ethics that seeks the most rational ways of maximising human wellbeing — Bankman-Fried claimed he was getting stinking rich so he could give it away. He cemented his public profile by cooperating with lawmakers in Washington over crypto regulation, whilst sponsoring various sports teams and donating to the Democratic party. Naturally he also made time to hobnob with celebrities like the supermodel Gisele Bundchen, appointed to lead FTX’s environmental and social initiatives. 

The investors lapped it up. A now-deleted article on the website of Sequoia Capital, the venture capital firm that previously backed PayPal and Google, described the response of its partners when Bankman-Fried pitched to them: “I LOVE THIS FOUNDER”; “I am 10 out of 10”; “YES!!” The problem is, neither they nor anyone else beyond a tight circle of friends had the full picture of FTX’s financial dealings. If they had, they might have seen that customer deposits were being loaned to Alameda Research, another of Bankman-Fried’s companies, to shore up risky investments. In the end, the assets FTX held as collateral were mostly its own digital tokens, whose value crashed when panicked customers tried to withdraw their funds en masse.

Bankman-Fried’s rise and fall shows a more than passing resemblance to the case of Holmes and Theranos. She too cultivated a disruptive image, a Stanford drop-out turned founder at the age of nineteen, wearing black turtlenecks in a weird homage to Steve Jobs. Her promise of a revolutionary blood-testing technology, bypassing scary needles and replacing doctors’ appointments with a trip to Walmart, won her more illustrious supporters than it’s possible to enumerate. They included Barack Obama, Hillary Clinton, Henry Kissinger, Bill Gates, Rupert Murdoch and Betsy DeVos. As revealed by John Carreyrou in his 2015 Wall Street Journal exposé, and by a succession of witnesses at Holmes’ trial, she lied about the capabilities of her so-called Edison machine, keeping her supposedly ground-breaking tech cloaked in secrecy to hide its failures.

Prosecutors demanding a tough sentence for Holmes this week claim she “seized upon investors’ desire to make the world a better place in order to lure them into believing her lies”. We are already seeing similar claims of moral betrayal from Bankman-Fried’s supporters, including a hand-wringing Twitter thread by leading Effective Altruism philosopher William MacAskill. It would be comforting to think that Holmes and Bankman-Fried were just massive grifters — sociopaths preying on the goodwill of others — but this would be letting their backers and the system that produced them off the hook. The real story here is surely the lack of scepticism shown towards these celebrity entrepreneurs once their messianic image was established.

Holmes’ followers had no right to be astonished that Theranos turned out to be a dud, given that clinical scientists were raising flags about the company’s secrecy long before it became a public scandal. Writing a New Yorker profile of Holmes in 2014, at the height of her fame, Ken Auletta described her explanation of the technology as “comically vague”. Did none of Theranos’ board members stop to ask why their illustrious colleagues included just two people with a medical licence?

With Bankman-Fried the signs were even more obvious. In a now-infamous Bloomberg interview, the founder described part of his business model as a magic box that people stuff money into simply because others are doing so, and which “literally does nothing” apart from generate tokens that can be traded based on the hypothetical potential of said box. The journalist Matt Levine paraphrased this theory as “I’m in the Ponzi business and it’s pretty good”, which Bankman-Fried admitted was “a pretty reasonable response”. 

The entrepreneur Antonio García Martínez has an interesting historical take on the FTX fiasco, pointing out that charisma and speculation are typical in the early stages of a new technological paradigm, before it settles into a more stable, regulated status quo. “Innovation starts in mad genius and grift and bubbles,” he writes, “and ends in establishment institutions that go on to reject the next round of mayhem.” A good point, but hardly a reassuring one, given that there will always be another hyper-ambitious figure promising to open up a new frontier.

We know this because there is such obvious demand in American elite society for individuals who can legitimise tech capitalism, whether through their aura of personal brilliance or by demonstrating its potential for beneficial progress. The reverence for Steve Jobs and Elon Musk is proof of this, but Holmes and Bankman-Fried went a step further by presenting themselves as ambitious prodigies and evangelical do-gooders. Where did they learn this formula for adulation? It’s notable that both are themselves quintessential products of the elite: Holmes’ parents were Washington insiders; Bankman-Fried’s were Stanford law professors.

Ironically, the danger posed by such figures is not so much that they are “disruptive”, but that they awaken a deeply conformist desire to worship the glamorous heralds of progress. A desire, it seems, that can make people rather gullible. 

According to the latest reports, the collapse of Bankman-Fried’s crypto empire could affect as many as a million creditors, to say nothing of the individuals whose savings were implicated through institutional investors. Still, we can be grateful it happened now, and not at a point where cryptocurrency had become large enough to pose a systemic threat to the financial system. As for Theranos, the testimony of patients who received false blood-test results ought to be warning enough about what a close call that was. How long will our luck last? Given the ever-growing role of technology in our lives, the next hyped-up young genius may cause more havoc still.

Design for Dictators

This essay was first published at The Pathos of Things newsletter. Subscribe here.

The 1937 World Fair in Paris was the stage for one of the great symbolic confrontations of the 20th century. On either side of the unfortunately titled Avenue of Peace, with the Eiffel Tower in the immediate background, the pavilions of Nazi Germany and the Soviet Union faced one another. The former was a soaring cuboid of limestone columns, crowned with the brooding figure of an eagle clutching a swastika; the latter was a stepped podium supporting an enormous statue of a man and woman holding a hammer and sickle aloft.

This is, at first glance, the perfect illustration of an old Europe being crushed in the antagonism of two ideological extremes: Communism versus National Socialism, Stalin versus Hitler. But on closer inspection, the symbolism becomes less clear-cut. For one thing, there is a striking degree of formal similarity between the two pavilions. And when you think about it, these are strange monuments for states committed, in one case, to the glorification of the German race, and in the other, to the emancipation of workers from bourgeois domination. As was noted by the Nazi architect Albert Speer, who designed the German structure, both pavilions took the form of a simplified neoclassicism: a modern interpretation of ancient Greek, Roman, and Renaissance architecture.

These paradoxes point to some of the problems faced by totalitarian states of the 1920s and 30s in their efforts to use design as a political tool. They all believed in the transformative potential of aesthetics, regarding architecture, uniforms, graphic design and iconography as means for reshaping society and infusing it with a sense of ideological purpose. All used public space and ceremony to mobilise the masses. Italian Fascist rallies were politicised total artworks, as were those of the Nazis, with their massed banners, choreographed movements, and feverish oratory broadcast across the nation by radio. In Moscow, revolutionary holidays included the ritual of crowds filing past new buildings and displays of city plans, saluting the embodiments of Stalin’s mission to “build socialism.”

The beginnings of all this, as I wrote last week, can be seen in the Empire Style of Napoleon Bonaparte, a design language intended to cultivate an Enlightenment ethos of reason and progress. But whereas it is not surprising that, in the early 19th century, Napoleon assumed this language should be neoclassical, the return to that genre more than a century later revealed the contradictions of the modernising state more than its power.

One issue was the fraught nature of transformation itself. The regimes of Mussolini, Hitler and Stalin all wished to present themselves as revolutionary, breaking with the past (or at least a rhetorically useful idea of the past) while harnessing the Promethean power of mass politics and technology. Yet it had long been evident that the promise of modernity came with an undertow of alienation, stemming in particular from the perceived loss of a more rooted, organic form of existence. This tension had already been engrained in modern design through the medieval nostalgia of the Gothic revival and the arts and crafts movement, currents that carried on well into the 20th century; the Bauhaus, for instance, was founded on the model of the medieval guild.

This raised an obvious dilemma. Totalitarian states were inclined to brand themselves with a distinct, unified style, in order to clearly communicate their encompassing authority. But how can a single style represent the potency of modernity – of technology, rationality and social transformation – while also compensating for the insecurity produced by these same forces? The latter could hardly be neglected by regimes whose first priority was stability and control.

Another problem was that neither the designer nor the state can choose how a given style is received by society at large. People have expectations about how things ought to look, and a framework of associations that informs their response to any designed object. Influencing the public therefore means engaging it partly on its own terms. Not only does this limit what can be successfully communicated through design, it raises the question of whether communication is even possible between more radical designers and a mass audience, groups who are likely to have very different aesthetic intuitions. This too was already clear by the turn of the 20th century, as various designers who tried to develop a socialist style, from William Morris to the early practitioners of art nouveau in Belgium, found themselves working for a small circle of progressive bourgeois clients.

Constraints like these decided much about the character of totalitarian design. They were least obvious in Mussolini’s Italy, since the Fascist mantra of restoring the grandeur of ancient Rome found a natural expression in modernised classical forms, the most famous example being the Palazzo della Civiltà Italiana in Rome. The implicit elitism of this enterprise was offset by the strikingly modern style of military dress Mussolini had pioneered in the 1920s, a deliberate contrast with the aristocratic attire of the preceding era. The Fascist blend of ancient and modern was also flexible enough to accommodate more radical designers such as Giuseppe Terragni, whose work for the regime included innovative collages and buildings like the Casa del Fascio in Como.

The situation in the Soviet Union was rather different. The aftermath of the October Revolution of 1917 witnessed an incredible florescence of creativity, as artists and designers answered the revolution’s call to build a new world. But as Stalin consolidated his dictatorship in the early 1930s, he looked upon cultural experimentation with suspicion. In theory Soviet planners still hoped the urban environment could be a tool for creating a socialist society, but the upheaval caused by Stalin’s policies of rapid industrial development and the new atmosphere of conservatism ultimately cautioned against radicalism in design.

Then there was the awkward fact that the proletariat on whose behalf the new society would be constructed showed little enthusiasm for the ideas of the avant garde. When it came to building the industrial city of Magnitogorsk, for instance, the regime initially requested plans from the German Modernist Ernst May. But after enormous effort on May’s part, his functionalist approach to workers’ housing was eventually rejected for its abstraction and meanness. As Stephen Kotkin writes, “for the Soviet authorities, no less than many ordinary people, their buildings had to ‘look like something,’ had to make one feel proud, make one see that the proletariat… would have its attractive buildings.”

By the mid-1930s, the architectural establishment had come to the unlikely conclusion that a grandiose form of neoclassicism was the true expression of Soviet Communism. This was duly adopted as Stalin’s official style. Thus the Soviet Union became the most reactionary of the totalitarian states in design terms, smothering a period of extraordinary idealism in favour of what were deemed the eternally valid forms of ancient Greece and Rome. The irony was captured by Stalin’s decision to demolish one of the most sacred buildings of the Russian Orthodox Church, the Cathedral of Christ the Saviour in Moscow, and erect in its place a Palace of the Soviets. Having received proposals from some of Europe’s most celebrated progressive architects, the regime instead chose Boris Iofan to build a gargantuan neoclassical structure topped by a statue of Lenin (the project was abandoned some years later). Iofan himself had previously worked for Mussolini’s regime in Libya.

If Stalinism ended up being represented by a combination of overcrowded industrial landscapes and homages to the classical past, this was more stylistic unity than Nazi Germany was able to achieve. Hitler’s regime was pulled in at least three directions, between its admiration for modern technology, its obsession with the culture of an imagined Nordic Volk (which, in a society traumatised by war and economic ruin, functioned partly as a retreat from modernity), and Germany’s own tradition of monumental neoclassicism inherited from the Enlightenment. Consequently there was no National Socialist style, but an assortment of ideological solutions in different contexts.

Despite closing the Bauhaus on coming to power in 1933, the Nazis imitated that school’s sleek functionalist aesthetic in their industrial and military design, including the Volkswagen cars designed to travel on the much-vaunted Autobahn. Yet the citizens who worked in these modern factories were sometimes provided housing in the Heimatstil, an imitation of a traditional rural vernacular. Propaganda could be printed in a Gothic Blackletter typeface or broadcast through mass-produced radios. But the absurdity of Nazi ideology was best demonstrated by the fact that, like Stalin, Hitler could not conceive of a monumental style to embellish his regime that did not continue in the cosmopolitan neoclassical tradition inspired by the ancient Mediterranean. The cut-stone embodiments of the Third Reich, including Hitler’s imagined imperial capital of Germania, were projected in the stark neoclassicism of Speer’s pavilion for the Paris World Fair. It was only in the regime’s theatrical public ceremonies that these clashing ideas were integrated into something like a unified aesthetic experience, as the goose-stepping traditions of Prussian militarism were updated with Hugo Boss uniforms and the crypto-Modernist swastika banner.

Of course it was not contradictions of style that ended the three classic totalitarian regimes; it was the destruction of National Socialism and Fascism in the Second World War, and Stalin’s death in 1953. Still, it seems safe to say that no state after them saw in design the same potential for a transformative mass politics. 

Dictatorships did make use of design in the later parts of the 20th century, but that is a subject for another day. As in the western world, they were strongly influenced by Modernism. A lot of concrete was poured, some of it into quite original forms – in Tito’s Yugoslavia for instance – and much of it into impoverished grey cityscapes. Stalinist neoclassicism continued sporadically in the Communist world, and many opulent palaces were constructed, in a partial reversion to older habits of royalty. Above all though, the chaos of ongoing urbanisation undermined any pretence of the state to shape the aesthetic environment of most of its citizens, a loss of control symbolised by the fate of the great planned capitals of the 1950s, Le Corbusier’s Chandigarh and Lúcio Costa’s Brasilia, which overflowed their margins with satellite cities and slums.

In the global market society of recent decades, the stylistic pluralism of the mega-city is the overwhelming pattern (or lack of pattern), seen even in the official buildings of an authoritarian state like China. On the other hand, I’ve recently argued elsewhere that various repressive regimes have found a kind of signature style in the spectacular works of celebrity architects, the purpose of which is not to set them apart but to confirm their rightful place in the global economic and financial order. But today the politics of built form feel like an increasingly marginal leftover from an earlier time. It has long been in the realm of media that aesthetics play their most important political role, a role that will only continue to grow.


Crisis and Heroic Design

This essay was first published at The Pathos of Things newsletter. Subscribe here.

One of my favourite artefacts is a series of banknotes designed by Herbert Bayer in 1923, during Weimar Germany’s famous hyperinflation. This was the period when, as you might recall from the images in your school history textbook, the German currency devalued so dramatically that people needed wheelbarrows of money to buy a loaf of bread, and cash was a cheaper way to start a fire than kindling.

Bayer’s banknotes, which came in denominations from one million to fifty million Marks, are emblematic of how crises can stimulate innovative design. If it wasn’t for the unusual problem of needing to produce an emergency supply of banknotes, it is unlikely the State Bank of Thuringia would have commissioned Bayer, who was then still a student at the Bauhaus school of design. Bayer had no formal training in typography, but he did have some radical ideas involving highly simplified sans-serif numbers and letters, which he duly used for the banknotes. The descendants of those numbers and letters include the font you are reading right now.

This story resonates with an outlook we might call the heroic theory of design, where designers step up at moments of crisis to change the world. Typefaces don’t seem like a big deal, but Bayer’s ideas were part of a wider movement to radically rethink every area of design for the practical benefit of society as a whole. By 1926, he had developed a “universal alphabet” of lower-case-only, sans-serif letters, to make printing, typing and public communication more efficient and accessible. The Bauhaus (or bauhaus, as Bayer would have put it) was suffused with such urgent, experimental thinking, always framed as a response to the prevailing mood of crisis in Weimar Germany. This is part of the reason it remains the most influential design school in history, despite only operating for fourteen years.

The heroic theory is deeply appealing because it taps into the basic narrative of modern design: the promise of order in a world of constant change. The words “crisis” and “emergency” describe modernity at its most raw and contingent, a “moment of decision” (the original Greek meaning of crisis) when the shape of the future is at stake in a fundamental way. Crises therefore seem to be the moments when we are most in need of design and its order-giving potential, to solve problems and resolve uncertainty in an active, positive manner.

But to what extent can design actually play this heroic role in times of crisis, and under what conditions? This question is of more than academic interest, since we ourselves live in an era defined by multiple crises, from climate and pandemic to war, energy shortages, economic hardship and political turbulence. The German chancellor Olaf Scholz has even invoked the concept of Zeitenwende, “a time of transformation,” which was current during the Weimar years.

The eminent writer Alice Rawsthorn has responded with a new heroic theory of design, first labelled “design as an attitude” (the name of her 2018 book) and more recently “design emergency.” Rejecting the traditional image of design as a commercial discipline, Rawsthorn places her hope in resourceful individuals who find innovative answers to ecological, humanitarian and political problems. More broadly, she encourages us, collectively, to see crisis as an opportunity to actively remake the world for the better. There is even a link to Weimar, as Rawsthorn draws inspiration from another Bauhaus figure, the Hungarian polymath László Moholy-Nagy, who was Bayer’s teacher at the time he designed his banknotes.

The new breed of heroic designers includes the Dutch university student Boyan Slat, who crowd-funded an enormous initiative to tackle ocean pollution, and the duo of Saeed Khurram and Iffat Zafar, doctors who used video conferencing to deliver health care to women in remote Pakistan. Rawsthorn argues that, while many of our systems and institutions have shown themselves no longer fit for purpose, network technology is allowing such enterprising figures to find funding, collaborators and publicity for their ideas.

That point about the empowering potential of networks strikes me as crucial to the plausibility of this outlook. The Internet has definitely made it easier for individual initiatives to have an impact, but can this effect really scale enough to answer the crises we face? Going forward, the heroic theory hinges on this question, because history (including recent history) points in a different direction.

We need to draw a distinction between the general flourishing of creativity and ingenuity in times of crisis, and the most consequential design visions that are widely implemented. The latter, it seems, are overwhelmingly determined by established institutions. Take Bayer again; he could not have put his designs in the hands of millions of people without the assistance of a state bank, any more than he could have actually solved the problem of hyperinflation. Likewise, the immediate impact of the Bauhaus and of Modernism in general, limited as it was, depended on its ability to persuade big manufacturers and municipal governments to adopt its ideas. Margarete Schütte-Lihotzky’s groundbreaking Frankfurt kitchen, which I wrote about recently, owed its success to the boldness of that city’s housing program.

Innovation in general tends to unfold in stages, and often with input from numerous sources, big and small. But in times of crisis, which typically demand large-scale, complex initiatives on a limited timescale, institutions with significant resources and organisational capacity play a decisive role. Insofar as individuals and smaller movements make a difference, they first need the levers of power to be put in their hands, as it were.

Wars are famously powerful engines of innovation, precisely because these are the moments when the state’s resources are most intensely focused. Addressing the problems of infectious disease and abject living conditions in the 19th century required not just city planners and sanitation experts, but governments willing to empower their designs. No one expects plucky outsiders to develop vaccines or mitigate the effects of financial crises. Even on the longer time horizon of climate change, the development of renewable energy is requiring extensive government and corporate involvement, partly to combat the vested interests of the status quo. The crucial breakthrough may turn out to be the emergence of a “green industrial complex,” a new set of powerful interests with a stake in the energy transition.

This does not mean the answers arrived at in this way are necessarily good ones, and they will certainly bear the stamp of the power structures that produce them. This is why slum clearances in the mid-19th century produced cities designed for property investors, while slum clearances in the mid-20th century produced public housing. That said, it is not straightforward to work out what a good answer to a crisis actually is.

Though crises usually have an underlying material reality, they are ultimately political phenomena: a crisis comes into existence with our perception of it (the “fear itself” that Franklin Roosevelt spoke of during the Great Depression). Thus an “effective” solution is one that addresses perceptions, even if its material results are questionable. Rawsthorn understands this, as did the Modernists of the 1920s, for these approaches to design are about transforming worldview as much as generating practical solutions. But ultimately, the political nature of crisis only reaffirms the importance of powerful institutions. For better or worse, there tends to be a Hobbesian flight to authority in times of emergency, a search for leaders who can take control of the situation.

Another observation which undermines the heroic theory is that the most important designs in moments of crisis are rarely new ones. As Charles Leadbeater points out in his fascinating comparison of British efforts during the Second World War and in the Covid pandemic (hat-tip to Saloni Dattani for sharing), effective answers tend to come from the repurposing of existing technologies and ideas. This too has a strong institutional component, since knowledge needs to be built up over time before it can be repurposed during a crisis.

By way of illustration, Leadbeater’s remarks about the UK’s failed efforts to design new ventilators in the midst of the pandemic are worth quoting at length:

Code-named Operation Last Gasp when the Prime Minister first enlisted British manufacturers to pivot their production lines from aero engines, vacuum cleaners and racing cars to ventilators, five thousand companies and 7,500 staff responded to the challenge to design new ventilators, in what was billed as a showcase of British engineering prowess. Companies such as Dyson and Babcock joined universities and Formula 1 teams only to find they were sent down blind alleys to design, from scratch, machines that clinicians would not use.

Those in the industry who suggested that it would be more sensible to produce more machines based on existing designs were eventually vindicated… The main usable innovation was a version of an existing British machine which was upgraded so it could be exported.

The 2,500 ventilators the UK procured from abroad were more sophisticated machines needed to sustain people in intensive care for weeks on end. The most famous manufacturer of those high-end machines is the family-owned German company Draeger, founded in Lübeck in 1889, which made the first-ever ventilator, the Pulmotor. The company’s latest product, the Pulmovista 500, visualises the flow of air through the lungs in such detail that clinicians can monitor it in real-time and make minute adjustments to the flow. The company’s chief executive, Stefan Draeger, is the fifth generation of the family to lead the company. You do not invent that kind of capability from scratch in a few weeks.

Even 1920s Modernism, the archetypal heroic design movement, did not emerge ex nihilo. Its foundations were laid in the years before the First World War, through the patronage of German industrial giants like AEG, and in the Deutscher Werkbund before that.

For Rawsthorn’s vision of crisis entrepreneurs to be realised on a bigger scale, network technology would have to replace this institutional development across time with individual collaboration across space. For all the power of open-source databases and information sharing, I’m yet to be convinced this is possible.

It remains true, of course, that crisis design which fails to have an immediate impact can still be revolutionary in the longer term. The Bauhaus is an excellent example of this. But it’s interesting to note that the lasting effects of crises on design are not always predictable. The experience of popular mobilisation for the First World War persuaded the survivors of the power of mass media and propaganda. The idea of “built-in obsolescence” – making minor alterations to products so that consumers want to buy the newer version – was widely taken up in response to the Great Depression. Research undertaken during the Second World War led to a boom in the use of plastic materials. Covid, it seems, has prompted the mass adoption of remote working technologies. 

Crises pave the way for such shifts, because by definition, these are moments when we see our current reality as provisional. At times of crisis, like the one we are in now, no one believes that the future will look like the recent past; we have, unconsciously, prepared ourselves for dramatic change. In this space of expectation new forms of design can emerge, though we don’t yet know what they will be.


The Kitchen as a Theatre of History

This essay was first published at The Pathos of Things newsletter. Subscribe here.

In Britain, where the saying goes that every man’s home is his castle, we like to see domestic space as something to be improved. Even if we have to save until middle age to own a decent home, we do so, in part, so that we can hand it over to builders for six months, after which there will be fewer carpets and more sunrooms. 

But domestic space is also a medium through which external forces shape us, in what we mistakenly consider our private existence. Nothing illustrates this better than the evolution of the modern kitchen.

In one of my favourite essays, former Design Museum director Deyan Sudjic describes how the British middle-class kitchen was transformed over the course of a century, from the early 1900s until today. Beginning as a “no-man’s land” where suburban housewives maintained awkward relations with their working-class servants, it has become “a domestic shrine to the idea of family life and conviviality.” Whereas the kitchen’s association with work and working people once ensured that it was partitioned, physically and socially, from the rest of the home, today the image of domestic bliss tends to centre on a spacious open-plan kitchen, with its granite-topped islands, its ranks of cupboard doors in crisp colours, its barstools and dining tables.

And in the process of being transformed, the kitchen transformed us. The other thing we find in this space today is an assortment of appliances, from toasters and kettles to expensive blenders and coffee machines, reflecting a certain admiration for efficiency in domestic life. This does not seem so striking in a world where smartphones and laptops are ubiquitous, but as Sudjic points out, the kitchen was the Trojan horse through which the cult of functionality first penetrated the private sphere.

A hundred years ago, sewing machines and radios had to be disguised as antique furniture, lest they contaminate the home with the feeling of a factory. It was after the middle classes began to occupy the formerly menial world of the kitchen that everyday communion with machines became acceptable.

In its most idealised and affluent form, the contemporary kitchen has almost become a parody of the factory. Labour in conditions of mechanised order – the very thing the respectable home once defined itself against – is now a kind of luxury, a form of self-expression and appreciation for the finer things in life. We see the same tendency in the success of cooking shows like MasterChef, and in the design of fashionable restaurants, where the kitchen is made visible to diners like a theatre.

What paved the way for this strange marriage of the therapeutic and the functional was the design of the modern kitchen. During this process, the kitchen was a stage where history’s grand struggles played out on an intimate scale, often refracted through contests over women’s role in society. The central theme of this story is how the disenchanting forces of modern rationality have also produced enchanting visions of their own, visions long associated with social progress but eventually absorbed into the realm of private aspiration.

The principles underpinning the modern kitchen came from the northern United States, where the absence of servants demanded a more systematic approach to domestic work. That approach was defined in the mid-19th century by Catharine Beecher, sister of the novelist Harriet Beecher Stowe. In her hugely popular Treatise on Domestic Economy, addressed specifically to American women, Beecher gave detailed instructions on everything from building a house to raising a child, from cooking and cleaning to gardening and plumbing. Identifying the organised, self-contained space of the ship’s galley as the ideal model for the kitchen, she provided designs for various labour-saving devices, setting in motion the process of household automation.

Beecher promoted an ethic of hard work and self-denial that she derived from a stern Calvinist upbringing. Yet she was also a leading campaigner for educational equality, establishing numerous schools and seminaries for women. Her professional approach to household work was an attempt, within the parameters of her culture, to give women a central role in the national myth of progress, though its ultimate effect was to deepen the association of women with the domestic sphere.

Something similar could be said about Christine Frederick, a former teacher from Boston, who in the early-20th century took some of Beecher’s ideas much further. Frederick’s faith was not Calvinism but the Taylorist doctrines of scientific management being implemented in American factories. What she called “household engineering” involved an obsessive analysis and streamlining of tasks as mundane as dishwashing. “I felt I was working hand in hand with the efficiency engineers in business,” she said, “and what they were accomplishing in industry, I too was accomplishing in the home.”

By this time Europe was ready for American modernity in the household, as relations between the classes and sexes shifted radically in the wake of the First World War. Women were entering a wider range of occupations, which meant fewer wives at home and especially fewer servants. At the same time, the provision of housing for the working class demanded new thinking about the kitchen.

In the late-1920s one of Christine Frederick’s disciples, the Austrian architect Margarete Schütte-Lihotzky, designed perhaps the most celebrated kitchen in history. The Frankfurt kitchen, as it came to be known, was one of many efforts at this time to repurpose the insights of American industry for the cause of socialism, for Schütte-Lihotzky was an ardent radical. She would, during her remarkably long life, offer her skills to a succession of socialist regimes, from the Soviet Union to Fidel Castro’s Cuba, as well as spending four years in a concentration camp for her resistance to Nazism.

For the Modernist architects among whom Schütte-Lihotzky worked in the 1920s, the social and technical challenge of the moment was the design of low-cost public housing. Cash-strapped government agencies were struggling to provide accommodation for war widows, disabled veterans, pensioners and slum-dwelling workers. It was for a project like this in Frankfurt that Schütte-Lihotzky produced her masterpiece, a compact, meticulously organized galley kitchen, offering a maximum of amenities in a minimum of space.

By the end of the decade, different versions of the Frankfurt kitchen had been installed in 10,000 German apartments, and were inspiring imitations elsewhere. Its innovations included a suspended lamp that moved along a ceiling runner, a height-adjustable revolving stool, and a sliding door that allowed women to observe their children in the living area. It was not devoid of style either, with ultramarine blue cupboards and drawers, ochre wall tiles and a black floor. Schütte-Lihotzky would later claim she designed it for professional women, having never done much cooking herself.

The Frankfurt kitchen was essentially the prototype of the fitted kitchens we are familiar with today, but we shouldn’t overlook what a technological marvel it represented at the time. Across much of working-class Europe, a separate kitchen was unheard of (cooking and washing were done in the same rooms as working and sleeping), let alone a kitchen that combined water, gas and electricity in a single integrated system of appliances, workspaces and storage units.

But even as this template became a benchmark of modernity and social progress in Europe, the next frontier of domestic life was already appearing in the United States. During the 1920s and 30s, American manufacturers developed the design and marketing strategies for a full-fledged consumer culture, turning functional household items into objects of desire. This culture duly took off with the economic boom that followed the Second World War, as the kitchen became the symbol of a new domestic ideal.

With the growth of suburbia, community-based ways of life were replaced by the nuclear family and its neighbours, whose rituals centred on the kitchen as a place of social interaction and display. The role of women in the home, firmly asserted by various cultural authorities, served as a kind of traditional anchor in a world of change. Thanks to steel-beam construction and central heating, the kitchen could now become a large, open-plan space. It was, moreover, increasingly populated by colourful plastic-laminated surfaces, double cookers, washing machines and other novel technologies. Advertisers had learned to target housewives as masters of the family budget, so that huge lime green or salmon pink fridges became no less a status symbol than the cars whose streamlined forms they imitated.

Despite their own post-war boom, most Europeans could only dream of such domestic affluence, and dream they did, for the mass media filled their cinema and television screens with the comforts of American suburbia. This was after all the era of the Cold War, and the American kitchen was on the front line of the campaign to promote the wonders of capitalism. On the occasion of the 1959 American National Exhibition in Moscow, US vice-president Richard Nixon got the chance to lecture Soviet premier Nikita Khrushchev on the virtues of a lemon yellow kitchen designed by General Electric.

In this ideological competition, the technologies of the modern kitchen were still assumed to represent an important form of social progress; Nixon’s PR victory in the Moscow “kitchen debate” was significant because Khrushchev himself had promised to overtake the United States in the provision of domestic consumer goods. This battle for abundance was famously one that Communism would lose, but by the time the Soviet challenge had disappeared in the 1990s, it was increasingly unlikely that someone in the west would see their microwave as emblematic of a collective project of modernity.

Perhaps capitalism has been a victim of its own success in this regard; being able to buy a Chinese-manufactured oven for a single day’s wages, as many people now can, makes it difficult to view that commodity as a profound achievement. Yet there is also a sense in which progress, at least in this domain, has become a private experience, albeit one that tends to emerge from a comparison with others. The beautiful gadgets that occupy the contemporary home are tools of pleasure and convenience, but also milestones in the personal quest for happiness and perfection.

The open-plan kitchen descended from mid-century America has become a desired destination for that quest in much of the developed world, even if it is often disguised in a local vernacular. It is no coincidence that in 1999, such a kitchen featured in the first episode of Grand Designs, the show which embodies the British middle-class love affair with domestic improvement. But the conspicuous efficiency and functional aesthetics of today’s kitchen dream show that it is equally indebted to Margarete Schütte-Lihotzky’s utopian efforts of the 1920s. This is a cruel irony, given that for most people today, and most of them still women, working in the kitchen is not a form of mechanised leisure but a stressful necessity, if there is time for it at all.

Then again, Schütte-Lihotzky is part of a longer story about the modern world’s fascination with rational order. When Kathryn Kish Sklar writes about Catharine Beecher’s kitchen from the 1850s, she could equally be describing the satisfaction our own culture longs to find in the well-organised home: “It demonstrates the belief that for every space there is an object, for every question an answer. It speaks of interrelated certainties and completion.”


Napoleon’s Furniture

This essay was first published at The Pathos of Things newsletter. Subscribe here.

The design of chairs is not normally listed among the achievements of Napoleon Bonaparte, France’s famous post-revolutionary emperor, but the importance of furniture should never be underestimated. Besides redrawing the map of Europe, establishing institutions and writing law codes, Napoleon should be seen as a seminal figure in the development of modern design.

Napoleon embodies modernity in its heroic phase. He was celebrated as an icon of both Romanticism and the Enlightenment: a symbol of unstoppable willpower who crossed the Alps on his rearing, wild-eyed stallion (or at least was painted doing so by Jacques-Louis David), as well as the ultimate Enlightened despot, aspiring to replace feudal superstition with the universal principles of Reason. Between these two sides of the Napoleonic myth we can glimpse his remarkable understanding of modern authority, which rests on the active creation of order in a world of turbulent change.

Design was an integral part of that authority. With the assistance of designers Charles Percier and Pierre Fontaine, Napoleon implemented what came to be known as the Empire Style. This was a grand but sober form of neoclassicism, with rigid lines and a large repertoire of motifs drawn from the ancient world: acanthus, palm leaves, wreaths and eagles from Greece and Rome; obelisks, pyramids, winged lions and caryatids from Egypt. Through this official style, whose most famous example is the Arc de Triomphe in Paris, Napoleon linked his regime to the timeless values of reason associated with classical civilisation.

But the Empire Style also portrayed this order as dynamic and expanding, drawing attention to the epic agency of its central figure. The Egyptian iconography recalled Napoleon’s expedition to the Near East in 1798, which had sparked a fascination with Egypt in European fashion and intellectual life. More obvious still was the frequently used capital letter “N.” Blending the grandeur of the past with progress and celebrity, Napoleonic design showed a distinctly modern formula for authority, one that would be echoed by Mussolini, Hitler and Stalin more than a century later.

It also suggested the arrival of modernity in more concrete ways, as François Baudot has pointed out. The square proportions and functional character of its furniture, along with its catalogue of reproducible symbols, reflected the standardised methods used at France’s state workshops. As such, it anticipated the age of mass-production in the later 19th and 20th centuries. Large-scale production was needed, in part, to supply the burgeoning administration of the Napoleonic state: a state whose power derived not just from the court and army, but increasingly from bureaucracy too. “It proved a short step,” Baudot quips, “from the Empire desk to the empire of the desk.”

What feels especially familiar about the Empire Style is its ambition to create an aesthetic totality, a “brand identity” whose unity of style would encompass everything from the largest structure to the finest detail. It was, writes Baudot, “a style whose practitioners were equally adept at cutlery and facades, at the detailing of a frieze and of a chair, at the plan of a fortress and shape of a gown to be worn at court.” This concern for a fully designed environment brings to mind the fastidious approach of later styles like art nouveau and the moderne (when the Belgian designer Henry van de Velde conceived a house for himself in 1895, he produced not just matching cutlery and furnishings but a new wardrobe for his wife). It also anticipates the commercial designers of our time, hired to create an immersive aesthetic experience for a pop star or retail brand. 

Admittedly the principle that power spoke with a distinct voice was not new, especially not in France, where Louis XIV had already overseen an extensive system of state workshops and artisans in the 17th century. Neoclassicism had been in vogue since the mid-18th century, and Napoleon’s version of it can be seen as a careful attempt to position his regime in relation to its predecessors. Without returning to the full opulence of the royal ancien régime, whose excesses had been repudiated in the revolutionary decade of the 1790s, the Empire Style was notably more grandiose than the republican Directory Style which came before it. Subtly but unmistakably, Napoleon was recalling the majesty of the Bourbons. 

Nonetheless, the Empire Style did express real Enlightenment convictions. As Ruth Scurr details in her fascinating biography, Napoleon: A Life in Gardens and Shadows, Napoleon’s passion for neoclassical garden design reflected his deeply engrained rationalism and love of order. Right until his last days in exile on Saint Helena, where he diverted his frustration into horticulture, Napoleon liked gardens to display straight lines, precision and symmetry. These are the same characteristics that defined the Empire Style. In such apparently superficial details we see principles that would resonate through European history for centuries. Napoleon quarrelled with his first wife, Joséphine, over her preference for the more unruly and picturesque English style of garden. That English style was a portent of a very different response to modernity that would soon emerge in Britain, where aesthetic harmony was sought not in classical Reason but in the organic rootedness of the medieval Gothic.

Ultimately, what makes the Empire Style modern was the role it gave design in relation to society at large. Appropriately for an emperor who loved gardening, Napoleonic design reveals the emergence of what Zygmunt Bauman has called “the gardening state”: the modern regime that does not just aim to rule over its subjects, but seeks to transform society in pursuit of progress and even utopian perfection. The Empire Style communicated the ambition of the state – which, after the French Revolution, was meant to embody the nation and its citizens – to remake the world in the image of its ideals. But more than that, it showed a belief that design could be an active part of this project, its didactic powers helping to bring the state into being, and to instil it with an ideological purpose. Chairs and tables, buildings, interiors and monuments were not only intended to demonstrate reason and progress; they were intended to impart these values to the society where they appeared. 

This entanglement with the modern progressive state or movement would continue to haunt design up until the ruptures of the mid-20th century. In the process, the aims of representing abstract ideals, securing the commitment of the masses and showing the promise of the future would turn out to be rife with contradictions. But we will have to leave all of that until next week.


Designing Modernity

This essay was first published at The Pathos of Things newsletter. Subscribe here.

Somewhere in my room (I forget where exactly) there is a box containing four smartphones I’ve cycled through in the last decade or so. Each of these phones appeared shockingly new when I first removed it from its neat cuboid packaging, though now there is a clear progression of models, with the earliest almost looking fit for a museum. This effect is part of their design, of course: these objects were made to look original at first, and then, by contrast to newer models, out of date. That all have cracked screens only emphasises their consignment to the oblivion of the obsolete.

The point of this sketch is not just to make you reflect on your consumer habits. I think it represents something more profound. This series of phones is like an oblique record of the transformation of society, describing the emergence of a new paradigm for organising human existence. It captures a slice of time in which the smartphone has changed every dimension of our lives, from work and leisure to knowledge and personal relations. This small device has upended professions from taxi driving to journalism, and shaped global politics by bringing media from around the world to even the poorest countries. It has significantly altered language. It has enabled new forms of surveillance by private companies and government agencies alike. A growing number of services are inaccessible without it.

Yet with its sleek plastic shell and vibrant interfaces, the smartphone is nonetheless a formidable object of desire: a kind of gateway to the possibilities of the 21st century. Ultimately, what it represents is paradoxical. An exhilarating sense of novelty, progress and opportunity; but also the countless adaptations we have to make as technology reshapes our lives, the new systems into which we are forced to fit ourselves.

To understand how a designed object can have this kind of power, defining both the practical and imaginative horizons of our age, we have to look beyond the immediate circumstances in which it appeared. The smartphone is a truly modern artefact: modern not just in the sense that it represents something distinctive about this era, but modern in another, deeper sense too. It belongs to a longer chapter of history, modernity, which is composed of moments that feel “modern” in their own ways.

The story of modernity shows us the conditions that enable design to shape our lives today. But the reverse is also true: the growing power of design is crucial to understanding modernity itself.


The very idea of design, as we understand it now, points to what is fundamentally at stake in modernity. To say that something is designed implies that it is not natural; that it is artificial, conceived and constructed in a certain way for a human purpose. Something which is not designed might be some form of spontaneous order, like a path repeatedly trodden through a field; but we still view such order as in some sense natural. The other antonym of the designed is the disordered, the chaotic.  

These contrasts are deeply modern. If we wind the clock back a few centuries – and in many places, much less than that – a hard distinction between human order and nature or chaos becomes unfamiliar. In medieval Europe, for instance, design and its synonyms (form, plan, intention) came ultimately from a transcendent order, ordained by God, that was manifest in nature and society alike. Human designs, such as the ornamentation of Gothic cathedrals or the symbols and trappings of noble rank, drew their meaning from that transcendent order.

In practical terms though, the question of where order came from was really a question about the authority of the past. It was the continuity of customs, traditions, and social structures in general which provided evidence that order came from somewhere beyond society, that it was natural. This in turn meant the existing order, inherited from the past, placed constraints on what human ambition could fathom.

To be modern, by contrast, is to view the world without such limitations. It is to view the world as something human beings must shape, or design, according to their own goals.

This modern spirit, as it is sometimes called, had been bubbling up in European politics and philosophy for centuries. But it could only be fully realised after a dramatic rupture from the past, and this came around the turn of the 19th century. The French Revolution overturned the established order, with its ancient hierarchies, across large parts of Europe. It spread the idea that the legitimacy of rulers came from “the people” or “the nation,” a public whose desires and expectations made politics increasingly volatile. At the same time, the seismic changes known as the Industrial Revolution were underway. There emerged an unpredictable, dynamic form of capitalism, transforming society with its generation of new technologies, industries and markets.

These developments signalled a world that was unmistakably new and highly unstable. The notion of a transcendent order inherited from the past became absurd, because the past was clearly vanishing. What replaced it was the modern outlook that, in its basic assumptions, we still have today. This outlook assumes the world is constantly changing, and that human beings are responsible for giving it order, preventing it from sliding into chaos.

Modernity was and is most powerfully expressed in certain experiences of space and time. It is rooted in artificial landscapes, worlds built and managed by human beings, of which cities are still the best example. And since it involves constant change, modernity creates a sense of the present as a distinct moment with its own fashions, problems and ideas; a moment that is always slipping away into a redundant past, giving way to an uncertain future. “Modernity,” in the poet Charles Baudelaire’s famous expression, “is the transient, the fleeting, the contingent.”


Design was present at the primal scenes of modernity. The French Revolutionaries, having broken dramatically with the past, tried to reengineer various aspects of social life. They devised new ways of measuring space (the metric system) and time (the revolutionary calendar, beginning at Year One, and the decimal clock). They tried to establish a new religion called the Cult of the Supreme Being, for which the artist Jacques-Louis David designed sets and costumes.

Likewise, the Industrial Revolution emerged in part through the design activities of manufacturers. In textiles, furniture, ceramics and print, entrepreneurs fashioned their goods for the rising middle classes, encouraging a desire to display social status and taste. They devised more efficient production processes to increase profits, ushering in the age of factories and machines.

These early examples illustrate forces that have shaped design to this day. The French Revolution empowered later generations to believe that radical change could be conceived and implemented. In its more extreme phases, it also foreshadowed the attempts of some modern regimes to demolish an existing society and design a new one. This utopian impulse towards order and perfection is the ever-present dark side of design, in that it risks treating people as mere material to be moulded according to an abstract blueprint. Needless to say, design normally takes place on a much more granular level, and with somewhat less grandiose ambitions. 

Modern politics and commerce both require the persuasion of large groups of people, to engineer desire, enthusiasm, fear and trust. This is the realm of propaganda and advertising, a big part of what the aesthetic design of objects and spaces tries to achieve. But modern politics and commerce also require efficient, systematic organisation, to handle complexity and adapt to competition and change. Here design plays its more functional role of devising processes and tools.

Typically we find design practices connected in chains or webs, with functional and aesthetic components. Such is the connection between the machine humming in the factory and the commodity gleaming in the shop window, between urban infrastructure and the facades which project the glory of a regime, between software programs and the digital interface that keeps you scrolling.

But modernity also creates space for idealism. Modern people have an acute need of ideals, whether or not they can be articulated or made consistent, because modern people have an acute need to feel that change is meaningful.

The modern mind anticipates constant change, and understands order as human, but by themselves these principles are far from reassuring. Each generation experiences them through the loss of a familiar world to new ideas, new technologies, new social and cultural patterns. We therefore need a way to understand change as positive, or at least a sense of what positive change might look like (even if that means returning to the past). Modernity creates a need for horizons towards which we can orient ourselves: a vision of the future in relation to which we can define who we are.

Such horizons can take the form of a collective project, where people feel part of a movement aiming at a vision of the future. But for a project to get off the ground, it again needs design for persuasion and efficiency. From Napoleon Bonaparte’s Empire Style furniture, with which he fitted out a vast army of bureaucrats, to Barack Obama’s pioneering Internet campaigns, successful leaders have used a distinctive aesthetic style and careful planning to bring projects to life.

Indeed, the search for effective design is one of modernity’s common denominators, creating an overlap between very different visions of society. In the aftermath of the Russian Revolution of October 1917, the ideals of communist artists and designers diverged from those dominant in the capitalist west. But the similarities between Soviet and western design in the 1920s and 30s are as striking as the differences. Communist propaganda posters and innovative capitalist advertising mirrored one another. Soviet industrial centres used the same principles of efficiency as the factories of Ford Motor Company in the United States. There was even much in common between the 1935 General Plan for Moscow and the redevelopment of Paris in the 1850s, from the rationalisation of transport arteries to the preference for neoclassical architecture.

But horizons can also be personal. The basis of consumerism has long been to encourage individuals to see their own lives as a trajectory of self-improvement, which can be measured by having the latest products and moving towards the idealised versions of ourselves presented in advertising. At the very least, indulging in novelty can help us feel part of the fashions and trends that define “the now”: a kind of unspoken collective project with its own sense of forward movement that consumerism arranges for us.


Above all though, design has provided horizons for modern people through technology. Technological change is a curiously two-sided phenomenon, epitomising our relative helplessness in the face of complex processes governing the modern world, while also creating many of the opportunities and material improvements that make modern ways of life desirable. Technology embodies the darkest aspects of modernity – alienation, exploitation, the constant displacement of human beings – as well as the most miraculous and exhilarating.

Design gives technology its practical applications and its aesthetic character. A series of design processes are involved, for instance, in turning the theory of internal combustion into an engine, combining that engine with countless other forms of engineering to produce an aeroplane, and finally, making the aeroplane signify something in the imagination of consumers. In this way, design determines the forms that technology will take, but also shapes the course of technological change by influencing how we respond to it.

Technology can always draw on a deep well of imaginative power, despite its ambiguous nature, because it ties together the two core modern ideals: reason and progress. Reason essentially describes a faith that human beings have the intellectual resources to shape the world according to their goals. Progress, meanwhile, describes a faith that change is unfolding in a positive direction, or could be made to do so. By giving concrete evidence of what reason can achieve, technology makes it easier to believe in progress.

But a small number of artefacts achieve something much greater. They dominate the horizons of their era, defining what it means to be modern at that moment. These artefacts tend to represent technological changes that are, in a very practical sense, transforming society. More than that, they package revolutionary technology in a way that communicates empowerment, turning a disorientating process of change into a new paradigm of human potential.

One such artefact was the railway, the most compelling symbol of 19th-century industrial civilisation, its precise schedules and remorseless passage across continents transforming the meaning of time and space. Another was the factory, which in the first half of the 20th century became an aesthetic and political ideal, providing Modernist architects as well as dictators with a model of efficiency, mass participation and material progress. And probably the most iconic product ever to emerge from a factory was the automobile, which, especially in the United States, served for decades as an emblem of modern freedom and prosperity, its streamlined form copied in everything from kitchen appliances to radios.

Streamlining: the Zephyr electric clock, designed by Kem Weber in the 1930s, shows the influence of automobile forms in other design areas.

I will write in more detail about such era-defining artefacts in later instalments of this newsletter. For now, I only want to say that I believe the smartphone also belongs in this series.

Obviously the smartphone arrived in a world very different from that of the factory or car. The western experience is now just one among numerous distinct modernities, from East Asia to Latin America. For those of us in the west, social and cultural identity is no longer defined by ideas like nation or class, but increasingly by the relations between individuals and corporate business, mediated by an immersive media environment.

But the smartphone’s conquest of society implies that this fragmented form of modernity still sustains a collective imagination. What we have in common is precisely what defines the smartphone’s power: a vision of compact individual agency in a fluid, mobile, competitive age. The smartphone is like a Swiss army knife for the ambitious explorer of two worlds, the physical and the virtual; it offers self-sufficiency to the footloose traveller, and access to the infinite realms of online culture. It provides countless ways to structure and reflect on individual life, with its smorgasbord of maps, photographs, accounts and data. It allows us to seal ourselves in a personal enclave of headphones and media wherever we may be.

Yet the smartphone also communicates a social vision of sorts. One of its greatest achievements is to relieve the tension between personal desire and sociability, since we can be in contact with scores of others, friends and strangers alike, even as we pursue our own ends. It allows us to imagine collective life as flashes of connectivity between particles floating freely through distant reaches of the world.

It is not uniquely modern for a society to find its imagined centre in a singular technological and aesthetic achievement, as Roland Barthes suggested in the 1950s by comparing a new model Citroën to the cathedrals of medieval Europe. The difference is that, in modernity, such objects can never be felt to reflect a continuous, transcendent order. They must always point towards a future very different from the present, and as such, towards their own obsolescence.

The intriguing question raised by the smartphone is whether the next such artefact will have a physical existence at all, or will emerge on the other side of the door opened by the touch screen, in the virtual world. 


Tooze and the Tragedy of the Left

Adam Tooze is one of the most impressive public intellectuals of our time. No other writer has the Columbia historian’s skill for laying bare the political, economic and financial sinews that tie together the modern world.

Tooze’s new book, Shutdown: How Covid Shook the World’s Economy, provides everything his readers have come to expect: a densely woven, relentlessly analytical narrative that uncovers the inner workings of a great crisis – in this case, the global crisis sparked by the Covid pandemic in 2020.

But Shutdown provides something else, too. It shows with unusual clarity that, for all his dry detachment and attention to detail, Tooze’s view of history is rooted in a deep sense of tragedy.

Towards the end of the book, Tooze reflects on the escalating “polycrisis” of the 21st century – overlapping political, economic and environmental conflagrations:

In an earlier period of history this sort of diagnosis might have been coupled with a forecast of revolution. If anything is unrealistic today, that prediction surely is. Indeed, radical reform is a stretch. The year 2020 was not a moment of victory for the left. The chief countervailing force to the escalation of global tension in political, economic, and ecological realms is therefore crisis management on an ever-larger scale, crisis-driven and ad hoc. … It is the choice between the third- and fourth-best options.

This seems at first typical of Tooze’s hard-nosed realism. He has long presented readers with a world shaped by “crisis management on an ever-larger scale.” Most of his work focuses on what, in Shutdown, he calls “functional elites” – small networks of technocratic professionals wielding enormous levers of power, whether in the Chinese Communist Party or among the bureaucrats and bankers of the global financial system.

These authorities, Tooze emphasises, are unable or unwilling to reform the dynamics of “heedless global growth” which keep plunging the world into crisis. But their ability to act in moments of extreme danger – the ability of the US Federal Reserve, for instance, to calm financial markets by buying assets at a rate of $1 million per second, as it did in March last year – is increasingly our last line of defence against catastrophe. The success or failure of these crisis managers is the difference between our third- and fourth-best options.

But when Tooze notes that radical change would have been thinkable “in an earlier period of history,” it is not without pathos. It calls to mind a historical moment that looms large in Tooze’s work. 

That moment is the market revolution of the 1980s, the birth of neoliberalism. For Tooze, this did not just bring about an economic order based on privatisation, the free movement of goods and capital, the destruction of organised labour and the dramatic growth of finance.

More fundamentally, neoliberalism was about what Tooze calls “depoliticisation.” As the west’s governing elites were overtaken by dogmas about market efficiency, the threat of inflation and the dangers of government borrowing, they hard-wired these principles into the framework of globalisation. Consequently, an entire spectrum of possibilities concerning how wealth and power might be distributed was closed off to democratic politics.

And so the inequalities created by the neoliberal order became, as Tony Blair said of globalisation, as inevitable as the seasons. Or in Thatcher’s more famous formulation, There Is No Alternative.

Tooze’s view of the present exists in the shadow of this earlier failure; it is haunted by what might have been. As he bitterly observes in Shutdown, it might appear that governments have suddenly discovered the joys of limitless spending, but this is only because the political forces that once made them nervous about doing so – most notably, a labour movement driving inflation through wage demands – have long since been “eviscerated.”

But it seems to me that Tooze’s tragic worldview reveals a trap facing the left today. It raises the question: what does it mean to accept, or merely to suspect, that radical change is off the table? 

We glimpse an answer of sorts when Tooze writes about how 2020 vindicated his own political movement, the environmentalist left. The pandemic, he claims, showed that huge state intervention against climate change and inequality is not just necessary, but possible. With all the talk of “Building Back Better” and “Green Deals,” centrist governments appear to be getting the message. Even Wall Street is “learning to love green capitalism.”

Of course, as per the tragic formula, Tooze does not imagine this development will be as transformative as advertised. A green revolution from the centre will likely be directed towards a conservative goal: “Everything must change so that everything remains the same.” The climate agenda, in other words, is being co-opted by a mutating neoliberalism. 

But if we follow the thrust of Tooze’s analysis, it’s difficult to avoid the conclusion that realistic progressives should embrace this third-best option. Given the implausibility of a genuine “antisystemic challenge” – and in light of the fragile systems of global capitalism, geopolitics and ecology which are now in play – it seems the best we can hope for is enlightened leadership by “functional elites.”

This may well be true. But I think the price of this bargain will be higher than Tooze acknowledges.

Whether it be climate, state investment, or piecemeal commitments to social justice, the guardians of the status quo have not accepted the left’s diagnosis simply because they realise change is now unavoidable. Rather, these policies are appealing because, with all their moral and existential urgency, they can provide fresh justification for the unaccountable power that will continue to be wielded by corporate, financial and bureaucratic interests. 

In other words, now that the free-market nostrums of neoliberalism 1.0 are truly shot, it is the left’s narratives of crisis that will offer a new basis for depoliticisation – another way of saying There Is No Alternative.

And therein lies the really perverse tragedy for a thinker like Tooze. If he believes the choice is survival on these terms or not at all, then he will have to agree.