This article was originally published by The Critic Magazine in October 2022.
Approach the revamped Battersea Power Station on a sunny autumn morning and you will find the area around the building dotted with slack-jawed visitors, peering skywards in awe through the lenses of their smartphones. This masterpiece on the Thames, designed by Giles Gilbert Scott in the early 1930s, has been shrouded by cranes and scaffolding for years, its quartet of cream-coloured chimneys a familiar but unapproachable part of the London skyline. Now, thanks to a consortium of Malaysian property developers and architecture firm WilkinsonEyre, Scott’s building has been restored in all its monumental glory.
The result really is sublime, a vast block of rhythmic brick facades and stepped parapets, resembling something between a cathedral and a fortress. Scott was a versatile architect, adept at combining historicist and modern styles with his output ranging from Liverpool Cathedral to the red phone box (though he wanted it light blue). At Battersea Power Station, an industrial building somehow presents us with a synthesis of gothic, neoclassical and jazz-age rhetoric. For this we can thank the great British tradition of NIMBYism, since Scott was drafted in to beautify the power station after complaints from Westminster and Chelsea residents about their property values.
This Instagram-ready spectacle comes with a note of unease. For what purpose has this historic structure been so lavishly recreated? Bear in mind that, after being decommissioned in the early-1980s, it sat here for many years without a roof, in the course of failed proposals to turn it into (among other things) a rubbish incinerator, a theme park and a football stadium.
The short answer is that heritage is being honoured here, but we ought to ask what exactly this means. The redevelopment clearly has little in common with maypoles and William Morris wallpaper. It is, rather, an orgy of commodification. The former power station is now squeezed between woozy sculptural buildings designed by starchitects Frank Gehry and Norman Foster, which are packed with luxury apartments, hotels and retail. Stepping inside Scott’s restored structure, you find a banquet of historical elements — including art deco fluted pillars, steel beams and exposed brickwork — encompassing what otherwise feels a lot like a duty free shopping area, with brands like Rolex and Cartier taking the prime spots alongside Starbucks and Pret.
Elsewhere in the building, some 250 apartments, penthouses and rooftop villas are being sold for between £1 million and £18 million. Much of the 45,000 square metres of office space will be occupied by the new UK headquarters (sorry, “campus”) of tech giant Apple Inc.
The point of heritage is that people living in a particular place derive their shared identity, in part, from a connection with the history of that place. What we have here is more like zombie heritage, where the past is kept alive in the sterile form of a branded product. It even feels misleading to say the building has been repurposed. Really it has been resurrected as a kind of Madame Tussauds waxwork, to provide a themed backdrop for property investment and shopping.
This is most obvious in the architects’ obsessive attention to “authentic” details. Bricks were sourced from the original suppliers, who made them using traditional methods of hand-moulding and wire-cutting. In the former control rooms, retro panels of dials, buttons and levers have been meticulously restored as decoration for a cocktail bar and a private events space. Someone has even contrived a puff of smoke rising from one of the chimneys. The power station has become its own death mask.
None of this should be surprising though, since Battersea is just the latest instance of a trend for turning heritage into exclusive real estate. A similar fate has ironically befallen East London’s brutalist landmark Balfron Tower, originally designed by socialist architect Erno Goldfinger. Balfron’s council tenants have now been booted out to make room for expensive “heritage flats”, adorned with 1970s period fittings and décor. Likewise, in Manhattan, the latest “skinny-scraper” on 57th Street, also known as Billionaires’ Row, takes the restored 1920s Steinway Tower as its base, trading on its art deco profile and interiors.
Back in London, the recent redevelopment of King’s Cross began with the conversion of a Victorian granary into Central Saint Martins art college, and the reinvention of 19th century brick warehouses as the retail complex Coal Drops Yard. The surrounding area now hosts the offices of various cutting-edge multinationals, including Meta and Google. The Battersea architects were involved here too, constructing luxury apartments inside the cylindrical iron frames of what were once gas holders.
It’s easy to imagine why the trappings of heritage might appeal to rich urbanites and corporations. Besides making wealth and power appear more humane, it is just much nicer to feel rooted in the history and fabric of a city than to look down at it, literally and figuratively, from a glass box in the sky. At the same time, because heritage is precious and scarce, it can still serve as a marker of status. This trend has doubtless prevented some fine buildings from being destroyed, and for that much we should be grateful. Marcus Binney, the conservationist who has done more than anyone else to save Battersea Power Station, is thrilled by that site’s redevelopment. He says it will now be “buzzing for years to come”, providing “a giant boost for schemes aimed at bringing Britain’s many industrial landmarks back to life”.
Buzzing or not, what we are seeing today is quite different from the old arrangement, where a historic setting would provide an attractive venue for a restaurant or business park. Heritage can maintain its civic value whilst also being a commercial asset, but increasingly, its civic value is the thing being commercialised. Developers use a whole language of place names, typefaces and design details to emphasise the presence of history, whilst only letting you access it as a consumer or a spectator of the lifestyles of the wealthy. This is privatisation in its metaphysical stage.
These developments seem especially corrosive in a case like Battersea Power Station, which has a public character due to its monumental presence in the city. To see a better outcome you only need to follow the Thames to the former Bankside Power Station, also designed by Scott, which now serves as the Tate Modern gallery. Bankside has been repurposed in a much more imaginative way, respecting the original structure without treating it as sacred. This flows naturally from the fact that the Tate has a civic function to perform. The same could be said, for instance, about St Pancras International station, or the Camden Roundhouse, or any number of London’s historic museums.
The new Battersea Power Station, by contrast, recreates an “iconic landmark” (as the estate agents have it) only to infuse it with a luxury ethos that, for all its charms, is anything but unique. The grandeur of Scott’s building, so inspiring at first sight, ultimately becomes another gimmick.
This essay was first published at The Pathos of Things newsletter. Subscribe here.
Many British people will hear about Felixstowe for the first time this month, thanks to a planned workers’ strike that promises yet more economic pain. Located on the Suffolk coast, Felixstowe is the site of the UK’s biggest container port; almost half of the goods coming and going from our shores pass through here, stowed away in brightly coloured shipping containers that resemble enormous Lego bricks.
The absence of Felixstowe from the national vocabulary speaks volumes about the era we live in. Britain is an island after all, and its various port towns have been central to its history for centuries. Now we are more dependent on the sea than ever (around ninety percent of the world’s traded goods travel by ship), but we barely realise it.
So what happened? That is the question I want to consider today, with the help of David Abulafia’s The Boundless Sea, an epic history of human activity on the ocean. One of the themes in this book is the relationship between the intimate and the global: how our sense of what is valuable or important is tied up with our impressions of the world at large.
Container ports appear in the final, slim section of The Boundless Sea, where Abulafia describes the disappearance, since the 1950s, of the ancient maritime patterns he has detailed for some 900 pages. “By the beginning of the 21st century,” he writes, “the ocean world of the last four millennia had ceased to exist.”
Given the dramatic nature of this change – a mass extinction of seafaring cultures around the world – the treatment is strikingly brief. Then again, this is a useful reminder that modernity is a tiny slice of time containing enormous transformations.
Container ports symbolise this rupture from the past: mechanised coastal nodes where huge vessels, each bearing thousands of standardised containers, load and unload goods from around the world. In contrast to the lively port towns that litter The Boundless Sea, container ports “are not centres of trade inhabited by a colourful variety of people from many backgrounds, but processing plants in which machinery, not men, do the heavy work and no one sees the cargoes… sealed inside their big boxes.” Felixstowe, says Abulafia, is “a great machine.”
Who crossed the oceans before the container ships did? Polynesian navigators explored the vastness of the Pacific over millennia, with only the stars for a compass. Bronze Age Egyptians ventured down the Red Sea in search of frankincense and myrrh. Merchants in sewn-plank boats spread Buddhism and Islam in the southern Indian Ocean, even as Vikings set out from their Greenland farmsteads in search of narwhal tusks. In the early modern era, pirates, traders and profit-hungry explorers swarmed the coasts of Africa and the Americas. These examples are just a drop in the ocean of Abulafia’s sweeping narrative.
But despite its enormous scope, there is a golden thread running through this book, uniting different eras and pulling continents together: the human desire for rare, beautiful, and exceptionally useful things.
The main protagonists of maritime history are merchants, since buying and selling has been the most common reason to cross the seas. But what is difficult to grasp today, when even the most mundane products have supply lines spanning the oceans, is the special value which has often been attached to seaborne goods, especially before the 18th century. Some cities, most famously Rome, did rely on short-distance shipping for basic needs like food. And some products, like English wool or Chinese ceramics, were crossing the water in large volumes centuries ago. But generally the risks and expenses of taking to sea, especially over large distances, demanded that merchants focus on the most sought-after goods. And conversely, goods were particularly precious if they could only be delivered by ship.
So seaborne cargoes show us what was considered valuable in the places they docked, or at least among the elites of those places. The human history of the oceans is in large part a catalogue of highly prized things: ornate weapons and exotic animals, spices and textiles, materials like sandalwood and ivory, or foodstuffs like honey, oil and figs. Of course that catalogue also includes human beings reduced to the status of objects, such as eunuchs, performers and slaves.
If the value of such things was generally financial for merchants, it took many forms in the cultures where they arrived. Before the ocean could be reliably traversed with steamships and (eventually) aeroplanes, foreign products bore the mystery of unknown lands. They often became tokens of social status, symbols of spiritual significance, or preferred forms of sensual pleasure and beauty. Ivory from African elephants and north Atlantic walruses was a treasured material for religious sculpture in medieval Europe, just as red Portuguese cloth was prized by West African elites in the 17th century.
This traffic in desirable objects made the world we know today. The European expansion that began in the late-15th century was driven by the prospect of delivering expensive goods in ever-larger quantities, making them accessible to an ever-larger market. These included products only available in East Asia, like silk, spices and high-quality ceramics, and those that could only be produced with slave labour in tropical climates, such as sugar, coffee and tobacco.
Once the Spanish had established a Pacific route between the Americas and the Philippines, the first truly global networks appeared. The volume of maritime trade began to grow, and one of the foundations of modern capitalism was in place. Abulafia aptly describes Chinese junks arriving in Spanish Manila as “the 16th century equivalent of a floating department store.” Among the items in their holds were “linen and cotton cloth, hangings, coverlets, tapestries, metal goods including copper kettles, gunpowder, wheat flour, fresh and preserved fruits, decorated writing cases, gilded benches, live birds and pack animals.”
But no less dramatic than the growing movement of goods, people and ideas was the emergence, for the first time, of a global consciousness. This is strikingly visualised by the maps that accompany each of the fifty-one chapters of The Boundless Sea. In the first half of the book, these maps show the relatively small regions in which maritime connections existed, with the exception of the world’s oldest trans-oceanic network in the Indian Ocean. In the second half, the maps zoom dizzyingly outwards, eventually incorporating the entire world.
That world map is something we take for granted in an era of instant communication and accessible satellite imagery, but for most of history, huge swathes of the globe were completely unknown to any given group of people. To be fully aware of our species’ planetary parameters marks nothing less than a revolution in how human beings understand themselves. And one of the driving forces behind that revolution was the ambition to bring desirable (and profitable) things from across the ocean.
But if trade underpinned seafaring ways of life throughout history, it finally led to their extinction. More and more shipping did not just make formerly exotic goods commonplace, it eventually made most states integrate their economies into a global marketplace, so that seafaring became more like a conveyor belt than a culture. This culminated in the container ships that now have the oceans almost to themselves, their efficiencies of scale rendering other forms of seaborne trade obsolete.
In the age of the container, most products do not even come from a particular place. They are devised, extracted, processed, manufactured and assembled in many different places, so as to achieve the lowest cost. Even things that do come from distant lands no longer have the same aura of the unfamiliar, since the world is now almost entirely visible through imagery and media.
And that is where this story provides an important insight into the way we design, exchange and value objects today. In consumer societies, enormous resources are devoted to engineering desire, by making products appear uncommon and exclusive. We are used to thinking of this practice as peculiarly modern, and in many ways it is. But maybe we should also see it as an attempt to recreate something of the lost value that, for most of human history, belonged to things from across the ocean.
This essay was first published at The Pathos of Things newsletter. Subscribe here.
In Britain, where the saying goes that every man’s home is his castle, we like to see domestic space as something to be improved. Even if we have to save until middle age to own a decent home, we do so, in part, so that we can hand it over to builders for six months, after which there will be fewer carpets and more sunrooms.
But domestic space is also a medium through which external forces shape us, in what we mistakenly consider our private existence. Nothing illustrates this better than the evolution of the modern kitchen.
In one of my favourite essays, former Design Museum director Deyan Sudjic describes how the British middle-class kitchen was transformed over the course of a century, from the early 1900s until today. Beginning as a “no-man’s land” where suburban housewives maintained awkward relations with their working-class servants, it has become “a domestic shrine to the idea of family life and conviviality.” Whereas the kitchen’s association with work and working people once ensured that it was partitioned, physically and socially, from the rest of the home, today the image of domestic bliss tends to centre on a spacious open-plan kitchen, with its granite-topped islands, its ranks of cupboard doors in crisp colours, its barstools and dining tables.
And in the process of being transformed, the kitchen transformed us. The other thing we find in this space today is an assortment of appliances, from toasters and kettles to expensive blenders and coffee machines, reflecting a certain admiration for efficiency in domestic life. This does not seem so striking in a world where smartphones and laptops are ubiquitous, but as Sudjic points out, the kitchen was the Trojan horse through which the cult of functionality first penetrated the private sphere.
A hundred years ago, sewing machines and radios had to be disguised as antique furniture, lest they contaminate the home with the feeling of a factory. It was after the middle classes began to occupy the formerly menial world of the kitchen that everyday communion with machines became acceptable.
In its most idealised and affluent form, the contemporary kitchen has almost become a parody of the factory. Labour in conditions of mechanised order – the very thing the respectable home once defined itself against – is now a kind of luxury, a form of self-expression and appreciation for the finer things in life. We see the same tendency in the success of cooking shows like MasterChef, and in the design of fashionable restaurants, where the kitchen is made visible to diners like a theatre.
What paved the way for this strange marriage of the therapeutic and the functional was the design of the modern kitchen. During this process, the kitchen was a stage where history’s grand struggles played out on an intimate scale, often refracted through contests over women’s role in society. The central theme of this story is how the disenchanting forces of modern rationality have also produced enchanting visions of their own, visions long associated with social progress but eventually absorbed into the realm of private aspiration.
The principles underpinning the modern kitchen came from the northern United States, where the absence of servants demanded a more systematic approach to domestic work. That approach was defined in the mid-19th century by Catharine Beecher, sister of the novelist Harriet Beecher Stowe. In her hugely popular Treatise on Domestic Economy, addressed specifically to American women, Beecher gave detailed instructions on everything from building a house to raising a child, from cooking and cleaning to gardening and plumbing. Identifying the organised, self-contained space of the ship’s galley as the ideal model for the kitchen, she provided designs for various labour-saving devices, setting in motion the process of household automation.
Beecher promoted an ethic of hard work and self-denial that she derived from a stern Calvinist upbringing. Yet she was also a leading campaigner for educational equality, establishing numerous schools and seminaries for women. Her professional approach to household work was an attempt, within the parameters of her culture, to give women a central role in the national myth of progress, though its ultimate effect was to deepen the association of women with the domestic sphere.
Something similar could be said about Christine Frederick, a former teacher from Boston, who in the early-20th century took some of Beecher’s ideas much further. Frederick’s faith was not Calvinism but the Taylorist doctrines of scientific management being implemented in American factories. What she called “household engineering” involved an obsessive analysis and streamlining of tasks as mundane as dishwashing. “I felt I was working hand in hand with the efficiency engineers in business,” she said, “and what they were accomplishing in industry, I too was accomplishing in the home.”
By this time Europe was ready for American modernity in the household, as relations between the classes and sexes shifted radically in the wake of the First World War. Women were entering a wider range of occupations, which meant fewer wives at home and especially fewer servants. At the same time, the provision of housing for the working class demanded new thinking about the kitchen.
In the late-1920s one of Christine Frederick’s disciples, the Austrian architect Margarete Schütte-Lihotzky, designed perhaps the most celebrated kitchen in history. The Frankfurt kitchen, as it came to be known, was one of many efforts at this time to repurpose the insights of American industry for the cause of socialism, for Schütte-Lihotzky was an ardent radical. She would, during her remarkably long life, offer her skills to a succession of socialist regimes, from the Soviet Union to Fidel Castro’s Cuba, as well as spending four years in a concentration camp for her resistance to Nazism.
For the Modernist architects among whom Schütte-Lihotzky worked in the 1920s, the social and technical challenge of the moment was the design of low-cost public housing. Cash-strapped government agencies were struggling to provide accommodation for war widows, disabled veterans, pensioners and slum-dwelling workers. It was for a project like this in Frankfurt that Schütte-Lihotzky produced her masterpiece, a compact, meticulously organised galley kitchen, offering a maximum of amenities in a minimum of space.
By the end of the decade, different versions of the Frankfurt kitchen had been installed in 10,000 German apartments, and were inspiring imitations elsewhere. Its innovations included a suspended lamp that moved along a ceiling runner, a height-adjustable revolving stool, and a sliding door that allowed women to observe their children in the living area. It was not devoid of style either, with ultramarine blue cupboards and drawers, ochre wall tiles and a black floor. Schütte-Lihotzky would later claim she designed it for professional women, having never done much cooking herself.
The Frankfurt kitchen was essentially the prototype of the fitted kitchens we are familiar with today, but we shouldn’t overlook what a technological marvel it represented at the time. Across much of working class Europe, a separate kitchen was unheard of (cooking and washing were done in the same rooms as working and sleeping), let alone a kitchen that combined water, gas and electricity in a single integrated system of appliances, workspaces and storage units.
But even as this template became a benchmark of modernity and social progress in Europe, the next frontier of domestic life was already appearing in the United States. During the 1920s and 30s, American manufacturers developed the design and marketing strategies for a full-fledged consumer culture, turning functional household items into objects of desire. This culture duly took off with the economic boom that followed the Second World War, as the kitchen became the symbol of a new domestic ideal.
With the growth of suburbia, community-based ways of life were replaced by the nuclear family and its neighbours, whose rituals centred on the kitchen as a place of social interaction and display. The role of women in the home, firmly asserted by various cultural authorities, served as a kind of traditional anchor in a world of change. Thanks to steel-beam construction and central heating, the kitchen could now become a large, open-plan space. It was, moreover, increasingly populated by colourful plastic-laminated surfaces, double cookers, washing machines and other novel technologies. Advertisers had learned to target housewives as masters of the family budget, so that huge lime green or salmon pink fridges became no less a status symbol than the cars whose streamlined forms they imitated.
Despite their own post-war boom, most Europeans could only dream of such domestic affluence, and dream they did, for the mass media filled their cinema and television screens with the comforts of American suburbia. This was after all the era of the Cold War, and the American kitchen was on the front line of the campaign to promote the wonders of capitalism. On the occasion of the 1959 American National Exhibition in Moscow, US vice-president Richard Nixon got the chance to lecture Soviet premier Nikita Khrushchev on the virtues of a lemon yellow kitchen designed by General Electric.
In this ideological competition, the technologies of the modern kitchen were still assumed to represent an important form of social progress; Nixon’s PR victory in the Moscow “kitchen debate” was significant because Khrushchev himself had promised to overtake the United States in the provision of domestic consumer goods. This battle for abundance was famously one that Communism would lose, but by the time the Soviet challenge had disappeared in the 1990s, it was increasingly unlikely that someone in the west would see their microwave as emblematic of a collective project of modernity.
Perhaps capitalism has been a victim of its own success in this regard; being able to buy a Chinese manufactured oven for a single day’s wages, as many people now can, makes it difficult to view that commodity as a profound achievement. Yet there is also a sense in which progress, at least in this domain, has become a private experience, albeit one that tends to emerge from a comparison with others. The beautiful gadgets that occupy the contemporary home are tools of pleasure and convenience, but also milestones in the personal quest for happiness and perfection.
The open-plan kitchen descended from mid-century America has become a desired destination for that quest in much of the developed world, even if it is often disguised in a local vernacular. It is no coincidence that in 1999, such a kitchen featured in the first episode of Grand Designs, the show which embodies the British middle-class love affair with domestic improvement. But the conspicuous efficiency and functional aesthetics of today’s kitchen dream show that it is equally indebted to Margarete Schütte-Lihotzky’s utopian efforts of the 1920s. This is a cruel irony, given that for most people today, and most of them still women, working in the kitchen is not a form of mechanised leisure but a stressful necessity, if there is time for it at all.
Then again, Schütte-Lihotzky is part of a longer story about the modern world’s fascination with rational order. When Kathryn Kish Sklar writes about Catharine Beecher’s kitchen from the 1850s, she could equally be describing the satisfaction our own culture longs to find in the well-organised home: “It demonstrates the belief that for every space there is an object, for every question an answer. It speaks of interrelated certainties and completion.”
This essay was first published at The Pathos of Things newsletter. Subscribe here.
I recently found myself browsing a Financial Times feature about “great tech for greener living,” a selection of stylish items for the principled customer. They included an oak iPhone stand sustainably crafted by Polish artisans (holding your phone “at a perfect 25-degree tilt”); wireless earphones with a wood inlay by House of Marley, the eco-friendly studio founded by Bob Marley’s son Rohan; and an app, Ethy, that audits brands for their environmental credentials.
This is a good snapshot of the environmental consciousness that has emerged among upmarket consumers. They still want fashionable, functional and beautiful products, but these qualities are no longer enough. The casual pillaging of the planet that once lay concealed behind the shiny exterior of consumer goods is gradually coming into focus, so that every object now carries the risk of moral contamination. The devil, we have learned, is in the detail: “Even the Scandinavian-style minimalist interiors that seem so pure and clean,” writes sustainability consultant Edwin Datschefski, have a “hidden ugliness – formaldehyde in the plywood and mdf, hexavalent chromium pollution from tanning leather, and damage to communities and the landscape from mining the pigments used in white paint.”
And designers are more than happy to remove that taint of evil. Increasingly, the green ethos is providing design with a sense of mission not seen since the Modernist era of the 1920s-60s. With Modernism, the goal was to harness the power of mass-production to improve the material and aesthetic conditions of ordinary people. For green design, it is to minimise the environmental damage, as well as the human exploitation, caused by a product in each stage of its lifecycle: materials, supply, manufacturing, use and disposal. The two movements share a vision of design as a moral crusade, as well as a certain phobic quality; green designers tend to avoid any suggestion of industry and labour with the same fastidiousness that Modernists applied to cleanliness and hygiene.
This sense of purpose has delivered some notable achievements in the 21st century. Most obviously, green design has consistently generated ingenious new materials and methods, from timber skyscrapers and lampshades made of sugar to the use of mycelium, a fungal substrate, for 3D-printed architectural elements. This year’s winners of the Earthshot sustainable design prize include a seaweed-based alternative to plastic packaging, and a flat-pack greenhouse that will allow small-scale farmers to produce higher yields using much less water. Green designers have also shown an interest in humanising production, preferring to use less alienating forms of labour and trying to integrate aspects of local heritage from the regions where they work.
Last but not least, green design is good at artistic propaganda. Its back catalogue is full of works that communicate the ideals of environmentalism in evocative and inspiring ways, such as Stuart Haygarth’s chandelier made from recycled prescription glasses, or Tomas Gabzdil Libertiny’s extraordinary honeycomb vases, each of which is manufactured by bees inside a hive over the course of a week.
Yet there is often an air of unreality about green design, a not-quite-right feeling that starts to nag at you the more you think about it. The problem is most apparent in the grand philosophical ambitions that frequently emanate from the movement. According to its theorists, the mission of green design is nothing less than the transformation of the relationship between humanity and nature, rejecting the modern (and Modernist) project of shaping the world for our own ends and recognising ourselves as natural and ecologically limited beings. A few examples from the archives of Domus magazine will give a sense of this discourse. In 1997 one author demanded a “realisation that man will be able to sustain himself only if the self-regulating ecosystem of the universe continues and is not disrupted by man’s intervention.” More recently, former MoMA design director Emilio Ambasz told the magazine that “Building inevitably changes Nature… into a human-made nature. The goal should be to reduce and, if possible, to compensate for our intrusion in the Vegetal Kingdom.” Finally, consider the words of the eminent furniture design and research duo Formafantasma:
sustainability is a strong utopia because it goes beyond modernity. It’s remote from twentieth-century culture and fully inserted in our new way of understanding our relationship to nature. […] Contemporary civilisation has a growing awareness that we can continue to live only if we work together with other living beings. As designers, but above all as human beings, we have to take care not only of ourselves, but all the other species on the planet.
All of this sounds excellent, but there is a yawning gap between these lofty aspirations and what green design actually does for the most part, which is to develop marginal alternatives, communicate ideas, and as that Financial Times feature suggests, offer boutique products to those who can afford ethics as a lifestyle choice. What to make of this discrepancy? It raises the possibility that green design has become trapped in a comfortable role which is less about changing the world than legitimising a consumer culture which is really not very green. With eye-catching sustainable product lines and utopian language, big brands can trumpet their green ambitions even as they keep plying their destructive trade in garments, furniture and cars. Occasionally buying eco-friendly goods is an excellent way to feel better about all the other things you buy. It’s almost like the indulgences sold by the medieval church: pay a bit more, fear a bit less for your soul.
There is surely some truth in this cynical interpretation, although I wouldn’t pin the blame on the designers. Like all of us, they have to reconcile many conflicting desires in their lives, including the desire for financial security and for success in their craft. Developing a practice with integrity is admirable, even if it can only serve a small audience. In any case, there is a more generous and, I think, equally plausible way of understanding the role of green design.
The burden of living in a complex society is the knowledge of one’s powerlessness to change the systems in which one is trapped. Reducing the environmental impact of our material culture is perhaps the ultimate example of this, since it ultimately hinges on countless technical issues. At scale, improvements tend to come less from green design than from the greening of design, or techniques that do better than the alternatives without fully solving the problem; architecture that passively regulates temperature, for instance, or electric cars. Progress depends on questions such as: will the more sustainable fibres being developed by Scandinavian companies become a viable alternative to cotton? Will electricity ever be capable of replacing fossil fuels in the most energy-intensive manufacturing processes? How much can we reduce the CO2 emissions associated with cement? This trajectory is bound to be slow, messy, frustrating, tragic, and uncertain of success. But for the time being, it’s all we’ve got.
Against this background, green design can be seen as a kind of informal arrangement between designers and consumers that allows each party to express ideals reality cannot accommodate. These include hope, imagination, and above all responsibility. You could say this is a fiction, but as long as no one mistakes it for an answer to the world’s problems, it seems like a valuable fiction. Besides, it’s better than just making and buying more crap.
This essay was first published at The Pathos of Things newsletter. Subscribe here.
The design of chairs is not normally listed among the achievements of Napoleon Bonaparte, France’s famous post-revolutionary emperor, but the importance of furniture should never be underestimated. Besides redrawing the map of Europe, establishing institutions and writing law codes, Napoleon should be seen as a seminal figure in the development of modern design.
Napoleon embodies modernity in its heroic phase. He was celebrated as an icon of both Romanticism and the Enlightenment; a symbol of unstoppable willpower who crossed the Alps on his rearing wild-eyed stallion (or at least was painted doing so by Jacques-Louis David), as well as the ultimate Enlightened despot, aspiring to replace feudal superstition with the universal principles of Reason. Between these two sides of the Napoleonic myth we can glimpse his remarkable understanding of modern authority, which rests on the active creation of order in a world of turbulent change.
Design was an integral part of that authority. With the assistance of designers Charles Percier and Pierre Fontaine, Napoleon implemented what came to be known as the Empire Style. This was a grand but sober form of neoclassicism, with rigid lines and a large repertoire of motifs drawn from the ancient world: acanthus, palm leaves, wreaths and eagles from Greece and Rome; obelisks, pyramids, winged lions and caryatids from Egypt. Through this official style, whose most famous example is the Arc de Triomphe in Paris, Napoleon linked his regime to the timeless values of reason associated with classical civilisation.
But the Empire Style also portrayed this order as dynamic and expanding, drawing attention to the epic agency of its central figure. The Egyptian iconography recalled Napoleon’s expedition to the near east in 1798, which had sparked a fascination with Egypt in European fashion and intellectual life. More obvious still was the frequently used capital letter “N.” Blending the grandeur of the past with progress and celebrity, Napoleonic design showed a distinctly modern formula for authority, one that would be echoed by Mussolini, Hitler and Stalin more than a century later.
It also suggested the arrival of modernity in more concrete ways, as François Baudot has pointed out. The square proportions and functional character of its furniture, along with its catalogue of reproducible symbols, reflected the standardised methods used at France’s state workshop. As such, it anticipated the age of mass-production in the later 19th and 20th centuries. Large-scale production was needed, in part, to supply the burgeoning administration of the Napoleonic state: a state whose power derived not just from the court and army, but increasingly from bureaucracy too. “It proved a short step,” Baudot quips, “from the Empire desk to the empire of the desk.”
What feels especially familiar about the Empire Style is its ambition to create an aesthetic totality, a “brand identity” whose unity of style would encompass everything from the largest structure to the finest detail. It was, writes Baudot, “a style whose practitioners were equally adept at cutlery and facades, at the detailing of a frieze and of a chair, at the plan of a fortress and shape of a gown to be worn at court.” This concern for a fully designed environment brings to mind the fastidious approach of later styles like art nouveau and the moderne (when the Belgian designer Henry van de Velde conceived a house for himself in 1895, he produced not just matching cutlery and furnishings but a new wardrobe for his wife). It also anticipates the commercial designers of our time, hired to create an immersive aesthetic experience for a pop star or retail brand.
Admittedly the principle that power spoke with a distinct voice was not new, especially not in France, where Louis XIV had already overseen an extensive system of state workshops and artisans in the 17th century. Neoclassicism had been in vogue since the mid-18th century, and Napoleon’s version of it can be seen as a careful attempt to position his regime in relation to its predecessors. Without returning to the full opulence of the royal ancien régime, whose excesses had been repudiated in the revolutionary decade of the 1790s, the Empire Style was notably more grandiose than the republican Directory Style which came before it. Subtly but unmistakably, Napoleon was recalling the majesty of the Bourbons.
Nonetheless, the Empire Style did express real Enlightenment convictions. As Ruth Scurr details in her fascinating biography, A Life in Gardens and Shadows, Napoleon’s passion for neoclassical garden design reflected his deeply engrained rationalism and love of order. Right until his last days in exile on Saint Helena, where he diverted his frustration into horticulture, Napoleon liked gardens to display straight lines, precision and symmetry. These are the same characteristics that defined the Empire Style. In such apparently superficial details we see principles that would resonate through European history for centuries. Napoleon quarrelled with his first wife, Joséphine, over her preference for the more unruly and picturesque English style of garden. That English style was a portent of a very different response to modernity that would soon emerge in Britain, where aesthetic harmony was sought not in classical Reason but in the organic rootedness of the medieval Gothic.
Ultimately, what makes the Empire Style modern was the role it gave design in relation to society at large. Appropriately for an emperor who loved gardening, Napoleonic design reveals the emergence of what Zygmunt Bauman has called “the gardening state”: the modern regime that does not just aim to rule over its subjects, but seeks to transform society in pursuit of progress and even utopian perfection. The Empire Style communicated the ambition of the state – which, after the French Revolution, was meant to embody the nation and its citizens – to remake the world in the image of its ideals. But more than that, it showed a belief that design could be an active part of this project, its didactic powers helping to bring the state into being, and to instil it with an ideological purpose. Chairs and tables, buildings, interiors and monuments were not only intended to demonstrate reason and progress; they were intended to impart these values to the society where they appeared.
This entanglement with the modern progressive state or movement would continue to haunt design up until the ruptures of the mid-20th century. In the process, the aims of representing abstract ideals, securing the commitment of the masses and showing the promise of the future would turn out to be rife with contradictions. But we will have to leave all of that until next week.
This essay was first published at The Pathos of Things newsletter. Subscribe here.
Somewhere in my room (I forget where exactly) there is a box containing four smartphones I’ve cycled through in the last decade or so. Each of these phones appeared shockingly new when I first removed it from its neat cuboid packaging, though now there is a clear progression of models, with the earliest almost looking fit for a museum. This effect is part of their design, of course: these objects were made to look original at first, and then, by contrast to newer models, out of date. That all have cracked screens only emphasises their consignment to the oblivion of the obsolete.
The point of this sketch is not just to make you reflect on your consumer habits. I think it represents something more profound. This series of phones is like an oblique record of the transformation of society, describing the emergence of a new paradigm for organising human existence. It captures a slice of time in which the smartphone has changed every dimension of our lives, from work and leisure to knowledge and personal relations. This small device has upended professions from taxi driving to journalism, and shaped global politics by bringing media from around the world to even the poorest countries. It has significantly altered language. It has enabled new forms of surveillance by private companies and government agencies alike. A growing number of services are inaccessible without it.
Yet with its sleek plastic shell and vibrant interfaces, the smartphone is nonetheless a formidable object of desire: a kind of gateway to the possibilities of the 21st century. Ultimately, what it represents is paradoxical. An exhilarating sense of novelty, progress and opportunity; but also the countless adaptations we have to make as technology reshapes our lives, the new systems into which we are forced to fit ourselves.
To understand how a designed object can have this kind of power, defining both the practical and imaginative horizons of our age, we have to look beyond the immediate circumstances in which it appeared. The smartphone is a truly modern artefact: modern not just in the sense that it represents something distinctive about this era, but modern in another, deeper sense too. It belongs to a longer chapter of history, modernity, which is composed of moments that feel “modern” in their own ways.
The story of modernity shows us the conditions that enable design to shape our lives today. But the reverse is also true: the growing power of design is crucial to understanding modernity itself.
The very idea of design, as we understand it now, points to what is fundamentally at stake in modernity. To say that something is designed implies that it is not natural; that it is artificial, conceived and constructed in a certain way for a human purpose. Something which is not designed might be some form of spontaneous order, like a path repeatedly trodden through a field; but we still view such order as in some sense natural. The other antonym of the designed is the disordered, the chaotic.
These contrasts are deeply modern. If we wind the clock back a few centuries – and in many places, much less than that – a hard distinction between human order and nature or chaos becomes unfamiliar. In medieval Europe, for instance, design and its synonyms (form, plan, intention) came ultimately from a transcendent order, ordained by God, that was manifest in nature and society alike. Human designs, such as the ornamentation of Gothic cathedrals or the symbols and trappings of noble rank, drew their meaning from that transcendent order.
In practical terms though, the question of where order came from was really a question about the authority of the past. It was the continuity of customs, traditions, and social structures in general which provided evidence that order came from somewhere beyond society, that it was natural. This in turn meant the existing order, inherited from the past, placed constraints on what human ambition could fathom.
To be modern, by contrast, is to view the world without such limitations. It is to view the world as something human beings must shape, or design, according to their own goals.
This modern spirit, as it is sometimes called, was bubbling up in European politics and philosophy over centuries. But it could only be fully realised after a dramatic rupture from the past, and this came around the turn of the 19th century. The French Revolution overturned the established order with its ancient hierarchies across large parts of Europe. It spread the idea that the legitimacy of rulers came from “the people” or “the nation,” a public whose desires and expectations made politics increasingly volatile. At the same time, the seismic changes known as the Industrial Revolution were underway. There emerged an unpredictable, dynamic form of capitalism, transforming society with its generation of new technologies, industries and markets.
These developments signalled a world that was unmistakably new and highly unstable. The notion of a transcendent order inherited from the past became absurd, because the past was clearly vanishing. What replaced it was the modern outlook that, in its basic assumptions, we still have today. This outlook assumes the world is constantly changing, and that human beings are responsible for giving it order, preventing it from sliding into chaos.
Modernity was and is most powerfully expressed in certain experiences of space and time. It is rooted in artificial landscapes, worlds built and managed by human beings, of which cities are still the best example. And since it involves constant change, modernity creates a sense of the present as a distinct moment with its own fashions, problems and ideas; a moment that is always slipping away into a redundant past, giving way to an uncertain future. “Modernity,” in the poet Charles Baudelaire’s famous expression, “is the transient, the fleeting, the contingent.”
Design was present at the primal scenes of modernity. The French Revolutionaries, having broken dramatically with the past, tried to reengineer various aspects of social life. They devised new ways of measuring space (the metric system) and time (the revolutionary calendar, beginning at Year Zero, and the decimal clock). They tried to establish a new religion called the Cult of the Supreme Being, for which the artist Jacques-Louis David designed sets and costumes.
Likewise, the Industrial Revolution emerged in part through the design activities of manufacturers. In textiles, furniture, ceramics and print, entrepreneurs fashioned their goods for the rising middle-classes, encouraging a desire to display social status and taste. They devised more efficient production processes to increase profits, ushering in the age of factories and machines.
These early examples illustrate forces that have shaped design to this day. The French Revolution empowered later generations to believe that radical change could be conceived and implemented. In its more extreme phases, it also foreshadowed the attempts of some modern regimes to demolish an existing society and design a new one. This utopian impulse towards order and perfection is the ever-present dark side of design, in that it risks treating people as mere material to be moulded according to an abstract blueprint. Needless to say, design normally takes place on a much more granular level, and with somewhat less grandiose ambitions.
Modern politics and commerce both require the persuasion of large groups of people, to engineer desire, enthusiasm, fear and trust. This is the realm of propaganda and advertising, a big part of what the aesthetic design of objects and spaces tries to achieve. But modern politics and commerce also require efficient, systematic organisation, to handle complexity and adapt to competition and change. Here design plays its more functional role of devising processes and tools.
Typically we find design practices connected in chains or webs, with functional and aesthetic components. Such is the connection between the machine humming in the factory and the commodity gleaming in the shop window, between urban infrastructure and the facades which project the glory of a regime, between software programmes and the digital interface that keeps you scrolling.
But modernity also creates space for idealism. Modern people have an acute need of ideals, whether or not they can be articulated or made consistent, because modern people have an acute need to feel that change is meaningful.
The modern mind anticipates constant change, and understands order as human, but by themselves these principles are far from reassuring. Each generation experiences them through the loss of a familiar world to new ideas, new technologies, new social and cultural patterns. We therefore need a way to understand change as positive, or at least a sense of what positive change might look like (even if that means returning to the past). Modernity creates a need for horizons towards which we can orient ourselves: a vision of the future in relation to which we can define who we are.
Such horizons can take the form of a collective project, where people feel part of a movement aiming at a vision of the future. But for a project to get off the ground, it again needs design for persuasion and efficiency. From Napoleon Bonaparte’s Empire Style furniture, with which he fitted out a vast army of bureaucrats, to Barack Obama’s pioneering Internet campaigns, successful leaders have used a distinctive aesthetic style and careful planning to bring projects to life.
Indeed, the search for effective design is one of modernity’s common denominators, creating an overlap between very different visions of society. In the aftermath of the Russian Revolution of October 1917, the ideals of communist artists and designers diverged from those dominant in the capitalist west. But the similarities between Soviet and western design in the 1920s and 30s are as striking as the differences. Communist propaganda posters and innovative capitalist advertising mirrored one another. Soviet industrial centres used the same principles of efficiency as the factories of Ford Motor Company in the United States. There was even much in common between the 1935 General Plan for Moscow and the redevelopment of Paris in the 1850s, from the rationalisation of transport arteries to the preference for neoclassical architecture.
But horizons can also be personal. The basis of consumerism has long been to encourage individuals to see their own lives as a trajectory of self-improvement, which can be measured by having the latest products and moving towards the idealised versions of ourselves presented in advertising. At the very least, indulging in novelty can help us feel part of the fashions and trends that define “the now”: a kind of unspoken collective project with its own sense of forward movement that consumerism arranges for us.
Above all though, design has provided horizons for modern people through technology. Technological change is a curiously two-sided phenomenon, epitomising our relative helplessness in the face of complex processes governing the modern world, while also creating many of the opportunities and material improvements that make modern ways of life desirable. Technology embodies the darkest aspects of modernity – alienation, exploitation, the constant displacement of human beings – as well as the most miraculous and exhilarating.
Design gives technology its practical applications and its aesthetic character. A series of design processes are involved, for instance, in turning the theory of internal combustion into an engine, combining that engine with countless other forms of engineering to produce an aeroplane, and finally, making the aeroplane signify something in the imagination of consumers. In this way, design determines the forms that technology will take, but also shapes the course of technological change by influencing how we respond to it.
Technology can always draw on a deep well of imaginative power, despite its ambiguous nature, because it ties together the two core modern ideals: reason and progress. Reason essentially describes a faith that human beings have the intellectual resources to shape the world according to their goals. Progress, meanwhile, describes a faith that change is unfolding in a positive direction, or could be made to do so. By giving concrete evidence of what reason can achieve, technology makes it easier to believe in progress.
But a small number of artefacts achieve something much greater. They dominate the horizons of their era, defining what it means to be modern at that moment. These artefacts tend to represent technological changes that are, in a very practical sense, transforming society. More than that, they package revolutionary technology in a way that communicates empowerment, turning a disorientating process of change into a new paradigm of human potential.
One such artefact was the railway, the most compelling symbol of 19th century industrial civilisation, its precise schedules and remorseless passage across continents transforming the meaning of time and space. Another was the factory, which in the first half of the 20th century became an aesthetic and political ideal, providing Modernist architects as well as dictators with a model of efficiency, mass participation and material progress. And probably the most iconic product ever to emerge from a factory was the automobile, which, especially in the United States, served for decades as an emblem of modern freedom and prosperity, its streamlined form copied in everything from kitchen appliances to radios.
Streamlining: the Zephyr electric clock, designed by Kem Weber in the 1930s, shows the influence of automobile forms in other design areas.
I will write in more detail about such era-defining artefacts in later instalments of this newsletter. For now, I only want to say that I believe the smartphone also belongs in this series.
Obviously the smartphone arrived in a world very different from that of the factory or car. The western experience is now just one among numerous distinct modernities, from East Asia to Latin America. For those of us in the west, social and cultural identity is no longer defined by ideas like nation or class, but increasingly by the relations between individuals and corporate business, mediated by an immersive media environment.
But the smartphone’s conquest of society implies that this fragmented form of modernity still sustains a collective imagination. What we have in common is precisely what defines the smartphone’s power: a vision of compact individual agency in a fluid, mobile, competitive age. The smartphone is like a Swiss army knife for the ambitious explorer of two worlds, the physical and the virtual; it offers self-sufficiency to the footloose traveller, and access to the infinite realms of online culture. It provides countless ways to structure and reflect on individual life, with its smorgasbord of maps, photographs, accounts and data. It allows us to seal ourselves in a personal enclave of headphones and media wherever we may be.
Yet the smartphone also communicates a social vision of sorts. One of its greatest achievements is to relieve the tension between personal desire and sociability, since we can be in contact with scores of others, friends and strangers alike, even as we pursue our own ends. It allows us to imagine collective life as flashes of connectivity between particles floating freely through distant reaches of the world.
It is not uniquely modern for a society to find its imagined centre in a singular technological and aesthetic achievement, as Roland Barthes suggested in the 1950s by comparing a new model Citroën to the cathedrals of medieval Europe. The difference is that, in modernity, such objects can never be felt to reflect a continuous, transcendent order. They must always point towards a future very different from the present, and as such, towards their own obsolescence.
The intriguing question raised by the smartphone is whether the next such artefact will have a physical existence at all, or will emerge on the other side of the door opened by the touch screen, in the virtual world.
This essay was first published at The Pathos of Things newsletter. Subscribe here.
This essay was first published by IM1776 on 17th August 2021
A tumble-drier is dragged out into someone’s garden and filled with something heavy — a brick perhaps. After setting it spinning, a figure in a camouflage jacket and protective face visor retreats from the camera frame. Immediately the machine begins to shudder violently, and soon disintegrates as parts fly off onto the surrounding lawn.
This is the opening shot of Mainsqueeze, a 2014 video collage by the Canadian artist Jon Rafman. What comes after is no less unsettling: a young woman holds a small shellfish, stroking it affectionately, before placing it on the ground and crushing it slowly under her heel; an amateur bodybuilder, muscles straining grotesquely, splits a watermelon between his thighs.
Rafman, concerned about the social and existential impact of technology on contemporary life, discovered these and many other strange performances while obsessively trawling the subaltern corners of the internet — communities of trolls, pranksters and fetishists. The artist’s aim, however, isn’t to ridicule these characters as freaks: to the contrary, he maintains: “The more marginal, the more ephemeral the culture is, the more fleeting the object is… the more it can actually reflect and reveal ‘culture at large.’” What looks at first like a glimpse into the perverse fringes, is really meant to be a portrait of online culture in general: a fragmented world of niche identities and uneasy escapism, where humor and pleasure carry undercurrents of aggression and despair. With such an abundance of stimulation, it’s difficult to say where satisfaction ends and enslavement begins.
Even as we joke about the pathologies of online life, we often lose sight of the depressing arc the internet revolution has followed during the past decade. It’s impossible to know exactly what lies behind the playful tone of Twitter and the carefree images of Instagram, but judging by the personal stories we hear, there’s no shortage of addiction (to social media, porn, smartphones), identity crisis, and anxiety about being judged or exposed. It seems much of our online existence is now characterized by the same sense of hyper-alert boredom, claustrophobia and social estrangement that Rafman found at the margins of the internet years ago.
Indeed, the destructive impulses of Rafman’s trolls seem almost quaint by comparison to the shaming and malicious gossip we take for granted on social media. And whereas a plurality of outlooks and personalities was once the glory of the internet, today every conceivable subject, from art and sports to haircuts, food, and knitting, is reified as a divisive issue within a vast political metanarrative.
In somewhat of an ironic twist, last year, Rafman himself was dropped or suspended by numerous galleries following accusations of inappropriate sexual behavior, leveled through the anonymous Instagram account Surviving the Artworld (which publishes allegations of abusive behavior in the art industry). The accusers say they felt taken advantage of by the artist; Rafman insists that there was a misunderstanding. It’s always hard to know what to make of such cases, but that social media now serves as a mechanism for this kind of summary justice seems symptomatic of the social disintegration portrayed in works like Mainsqueeze.
Even if these accusations mark the end of Rafman’s career, his efforts to document online culture now seem more valuable than ever. His art gives us a way of thinking about the internet and its discontents that goes beyond manipulative social media algorithms, ideological debasement or the culture wars. The artist’s work shows the evolution of the virtual realm above all as a new chapter of human experience, seeking to represent the structures of feeling that made this world so enticing and, ultimately, troubled.
The first video by Rafman I came across reminded me of Swift’s Gulliver’s Travels. Begun in 2008, the visionary Kool-Aid Man in Second Life consists of a series of tours through the virtual world platform Second Life, where users have designed a phantasmagorical array of settings in which their avatars can lead, as the name suggests, another life. In the video, our guide is Rafman’s own avatar, the famous Kool-Aid advertising mascot (a jug of red liquid with the weird rictus grin) — a protagonist that reminds us we’ve entered an era where, as Rafman puts it, “different symbols float around equally and free from the weight of history.” For the entire duration, Kool-Aid Man wanders around aimlessly in a surreal, artificial universe, sauntering in magical forests and across empty plains, through run-down cityscapes and futuristic metropolises, placidly observing nightclub dance floors, ancient temples, and the endless stages where the denizens of Second Life perform their sexual fantasies.
Kool-Aid Man in Second Life is best viewed against the backdrop of the great migration onto the internet which started in the mid-2000s, facilitated by emerging tech giants like Amazon, Google and Facebook. For the great majority of people, this was when the internet ceased being merely a toolbox for particular tasks and became part of everyday life (the art world jargon for this was ‘post-internet’). The artwork can be seen as a celebration of the curiosity, fun, and boundless sense of possibility that accompanied this transition. Humanity was stepping en masse out of the limits of physical space, and what it found was both trivial and sublime: a kitsch world of selfies and cute animals as well as effortless new forms of association and access to knowledge. The euphoric smile of Kool-Aid Man speaks to the birth of online mass culture as an innocent adventure.
Similar themes appear also in Rafman’s more famous (and ongoing) early work The Nine Eyes of Google Street View, in which the artist collects peculiar images captured by Google Maps’ vehicles. Scenes include a magnificent stag bounding down a coastal highway, a clown stepping into a minibus, a lone woman breastfeeding her child in a desolate landscape of dilapidated buildings. As in Rafman’s treatment of Second Life, such eclectic scenes are juxtaposed to portray the internet as an emotional voyage of discovery, marked by novel combinations of empathy and detachment, sincerity and irony, humour and desire. But in hindsight, no less striking than the spirit of wonder in these works are the ways they seem to anticipate the unravelling of online culture.
If there’s something ominous about the ornate dream palaces of Second Life, it comes from our intuition that the stimulation and belonging offered by this virtual community is also a measure of alienation. The internet gives us relations with people and things that have the detached simplicity of a game, which only become more appealing as we find niches offering social participation and identity. But inevitably, these ersatz lives become a form of compulsive retreat from the difficulties of the wider world and a source of personal and social tension. Rafman’s Second Life is a vivid metaphor for how virtual experience tempts us with the prospect of a weightless existence, one that can’t possibly be realised and must, ultimately, lead to resentment.
Equally prescient was Rafman’s emphasis on the breakdown of meaning, as words, images, and symbols of all kinds become unmoored from any stable context. Today, all ‘content’ presents itself much like the serendipitous scenes in The Nine Eyes of Google Street View – an arbitrary jumble of trivial and profound, comic and tragic, impressions stripped of semantic coherence and flattened into passing flickers of stimulation. Symbols are no longer held firm in their meaning by clearly defined contexts where we might expect to find them, but can be endlessly mixed and refashioned in the course of online communication. This has been a great source of creativity, most obviously in the form of memes, but it has also produced neurosis. Today’s widespread sensitivity to the alleged violence concealed in language and representation, and the resulting desire to police expression, seems to reflect deep anxiety about a world where nothing has fixed significance.
These more ominous trends dominate the next phase of Rafman’s work, where we find pieces like Mainsqueeze. Here Rafman plunges us into the sordid underworld of the internet, a carnival of adolescent rebellion and perverse obsessions. A sequence of images showing a group of people passed-out drunk, one with the word “LOSER” scrawled on his forehead, captures the overall tone. In contrast to Rafman’s Second Life, where the diversity of the virtual realm could be encompassed by a single explorer, we now find insular and inaccessible communities, apparently basking in an angry sense of estrangement from the mainstream of culture. Their various transgressive gestures — swastikas, illicit porn, garish make-up — seem tinted with desperation, as though they’re more about finding boundaries than breaking them.
This portrayal of troll culture has some unsettling resonances with the boredom and anxiety of internet life today. According to Rafman himself, however, the wider relevance of these outcasts concerns their inability to confront the forces shaping their frustrated existence. Trapped in a numbing cycle of distraction, their subversive energy is channelled into escapist rituals rather than any kind of meaningful criticism of the society they seem to resent. Seen from this perspective, online life comes to resemble a form of unknowing servitude, a captive state unable to grasp the conditions of its own deprivation.
All of this points to the broader context which is always dimly present in Rafman’s work: the architecture of the virtual world itself, through which Silicon Valley facilitated the great migration onto the internet over the past fifteen-odd years. In this respect, Rafman’s documentation of Second Life becomes even more interesting, since that platform really belonged to the pre-social media cyberpunk era, which would make it a eulogy for the utopian ethos of the early internet, with its dreams of transcending the clutches of centralised authority. The power that would crush those dreams is represented, of course, by the Google Street View car — the outrider of big tech on its endless mission to capitalise on all the information it can gather.
But how does this looming corporate presence relate to the disintegration of online culture traced by Rafman? The artist’s comments about misdirected critical potential suggest one depressing possibility: the internet is a power structure which sustains itself through our distraction, addiction and alienation. We might think of Huxley’s Brave New World, but with shitposting and doom-scrolling instead of the pleasure-drug soma. Rafman’s most recent animation work, Disaster under the Sun, seems to underscore this dystopian picture. We are given a God’s-eye perspective over a featureless grey landscape, where crowds of faceless human forms attack and merge into one another, their activities as frantic and vicious as they are lacking any apparent purpose.
It’s certainly true that the internet giants have gained immense wealth and power while overseeing the profound social and political dislocations of the last decade. But it’s also true that there are limits to how far they can benefit from anarchy. This might explain why we are now seeing the emergence of something like a formal constitutional structure to govern the internet’s most popular platforms, such as with Facebook, whose Oversight Board now even provides a court of appeal for its users — but also Twitter, Google, and now PayPal. The consolidation of centralised authority over the internet resembles the closing of a frontier, as a once-lawless space of discovery, chaos and potential is settled and brought under official control.
Rafman’s work allows us to grasp how this process of closure has also been a cultural and psychological one. We have seen how, in his art, the boundlessness of the virtual realm, and our freedom within it, are portrayed not just as a source of wonder but also of disorientation and insecurity. There have been plenty of indications that these feelings of flux have made people anxious to impose order, whether in the imagined form of conspiracy theories or by trying to enforce new norms and moral codes.
This isn’t to say that growing regulation will relax the tensions that have overtaken online culture. Given the divergence of identities and worldviews illustrated by Rafman’s depiction of the marginal internet, it seems highly unlikely that official authority can be impartial; drawing boundaries will involve taking sides and identifying who must be considered subversive. But all of this just emphasises that the revolutionary first chapter of internet life is drawing to a close. For better or worse, the particular spirit of discovery that marked the crossing of this frontier will never return.
Adam Tooze is one of the most impressive public intellectuals of our time. No other writer has the Columbia historian’s skill for laying bare the political, economic and financial sinews that tie together the modern world.
Tooze’s new book, Shutdown: How Covid Shook the World’s Economy, provides everything his readers have come to expect: a densely woven, relentlessly analytical narrative that uncovers the inner workings of a great crisis – in this case, the global crisis sparked by the Covid pandemic in 2020.
But Shutdown provides something else, too. It shows with unusual clarity that, for all his dry detachment and attention to detail, Tooze’s view of history is rooted in a deep sense of tragedy.
Towards the end of the book, Tooze reflects on the escalating “polycrisis” of the 21st century – overlapping political, economic and environmental conflagrations:
In an earlier period of history this sort of diagnosis might have been coupled with a forecast of revolution. If anything is unrealistic today, that prediction surely is. Indeed, radical reform is a stretch. The year 2020 was not a moment of victory for the left. The chief countervailing force to the escalation of global tension in political, economic, and ecological realms is therefore crisis management on an ever-larger scale, crisis-driven and ad hoc. … It is the choice between the third- and fourth-best options.
This seems at first typical of Tooze’s hard-nosed realism. He has long presented readers with a world shaped by “crisis management on an ever-larger scale.” Most of his work focuses on what, in Shutdown, he calls “functional elites” – small networks of technocratic professionals wielding enormous levers of power, whether in the Chinese Communist Party or among the bureaucrats and bankers of the global financial system.
These authorities, Tooze emphasises, are unable or unwilling to reform the dynamics of “heedless global growth” which keep plunging the world into crisis. But their ability to act in moments of extreme danger – the ability of the US Federal Reserve, for instance, to calm financial markets by buying assets at a rate of $1 million per second, as it did in March last year – is increasingly our last line of defence against catastrophe. The success or failure of these crisis managers is the difference between our third- and fourth-best options.
But when Tooze notes that radical change would have been thinkable “in an earlier period of history,” it is not without pathos. It calls to mind a historical moment that looms large in Tooze’s work.
That moment is the market revolution of the 1980s, the birth of neoliberalism. For Tooze, this did not just bring about an economic order based on privatisation, the free movement of goods and capital, the destruction of organised labour and the dramatic growth of finance.
More fundamentally, neoliberalism was about what Tooze calls “depoliticisation.” As the west’s governing elites were overtaken by dogmas about market efficiency, the threat of inflation and the dangers of government borrowing, they hard-wired these principles into the framework of globalisation. Consequently, an entire spectrum of possibilities concerning how wealth and power might be distributed was closed off to democratic politics.
And so the inequalities created by the neoliberal order became, as Tony Blair said of globalisation, as inevitable as the seasons. Or in Thatcher’s more famous formulation, There Is No Alternative.
Tooze’s view of the present exists in the shadow of this earlier failure; it is haunted by what might have been. As he bitterly observes in Shutdown, it might appear that governments have suddenly discovered the joys of limitless spending, but this is only because the political forces that once made them nervous about doing so – most notably, a labour movement driving inflation through wage demands – have long since been “eviscerated.”
But it seems to me that Tooze’s tragic worldview reveals a trap facing the left today. It raises the question: what does it mean to accept, or merely to suspect, that radical change is off the table?
We glimpse an answer of sorts when Tooze writes about how 2020 vindicated his own political movement, the environmentalist left. The pandemic, he claims, showed that huge state intervention against climate change and inequality is not just necessary, but possible. With all the talk of “Building Back Better” and “Green Deals,” centrist governments appear to be getting the message. Even Wall Street is “learning to love green capitalism.”
Of course, as per the tragic formula, Tooze does not imagine this development will be as transformative as advertised. A green revolution from the centre will likely be directed towards a conservative goal: “Everything must change so that everything remains the same.” The climate agenda, in other words, is being co-opted by a mutating neoliberalism.
But if we follow the thrust of Tooze’s analysis, it’s difficult to avoid the conclusion that realistic progressives should embrace this third-best option. Given the implausibility of a genuine “antisystemic challenge” – and in light of the fragile systems of global capitalism, geopolitics and ecology which are now in play – it seems the best we can hope for is enlightened leadership by “functional elites.”
This may well be true. But I think the price of this bargain will be higher than Tooze acknowledges.
Whether it be climate, state investment, or piecemeal commitments to social justice, the guardians of the status quo have not accepted the left’s diagnosis simply because they realise change is now unavoidable. Rather, these policies are appealing because, with all their moral and existential urgency, they can provide fresh justification for the unaccountable power that will continue to be wielded by corporate, financial and bureaucratic interests.
In other words, now that the free-market nostrums of neoliberalism 1.0 are truly shot, it is the left’s narratives of crisis that will offer a new basis for depoliticisation – another way of saying There Is No Alternative.
And therein lies the really perverse tragedy for a thinker like Tooze. If he believes the choice is survival on these terms or not at all, then he will have to agree.
This article was originally published by UnHerd on 1st July 2021.
It was a moment South Africans thought would never come. On Tuesday the Constitutional Court sentenced former president Jacob Zuma to 15 months in prison, after he refused to testify at an inquiry into corruption during his time in office.
When that inquiry reaches its conclusion, Zuma could face a much longer sentence — an amazing prospect. For now though, the simple willingness of the court to punish such blatant recalcitrance offers tantalising hope that the rule of law is not dead in South Africa.
The verdict was surprising given that Zuma still commands a significant power base in the ruling African National Congress party. The eye-watering levels of graft that marked his 2009-18 presidency mean there are plenty of ANC figures at every level of government who want the anti-corruption drive of his successor, Cyril Ramaphosa, to fail.
And therein lies the more ominous question posed by Tuesday’s ruling. Even if Zuma hands himself over to the authorities as instructed, he won’t do it quietly. So could this lead to an escalation of the already murderous internal politics of the ANC – an all-out civil war within the party that drags the nation into the abyss?
The Zuma presidency was a waking nightmare for those of us who prayed that, after its miraculously peaceful transition from apartheid to democracy, South Africa’s governing elite would resist the slide into gangsterism which has squandered the potential of so many African nations. This was always a danger with the ANC because, being the party of Mandela and the heroic anti-apartheid struggle, it was destined to rule virtually unopposed during the first decades of democracy.
Zuma’s infamous Nkandla homestead in KwaZulu-Natal, for which he fleeced the public purse to the tune of £14 million, offers a flavour of his regime’s conspicuous venality. More serious was his gutting of the criminal justice system, paving the way for the kind of corruption that would make a hardened kleptocrat blush. At the current inquiry, witnesses have lined up to detail how Zuma effectively handed control of much of the state to a notorious trio of shady businessmen known as the Gupta brothers. Apparently these cronies installed government ministers, siphoned money from state-owned companies and cashed in on lucrative contracts. Prosecutors claim as much as £50 billion was swindled from state coffers.
With the ANC having lost ground in recent elections, Ramaphosa’s campaign to clean up the party might be a sign of democratic pressures finally kicking in. More cynically, we might note that the president needs to purge Zuma’s faction to consolidate his own leadership. At any rate, Ramaphosa knows corruption has to be addressed if South Africa is to attract the investors it sorely needs. Youth unemployment stands at a grim 75%, while millions of its citizens have only the most rudimentary housing and sanitation. Its tax base continues to shrink as wealthier citizens flee appalling levels of violent crime.
By insisting that Zuma be subject to the law, the Constitutional Court’s latest ruling suggests a positive outcome to this saga is still possible. But it remains far from clear what direction the ANC’s internal struggle will take — and ultimately, it’s this struggle that will determine the country’s future.
We live in an era where catastrophe looms large in the political imagination. On the one side, we find hellacious visions of climate crisis and ecological collapse; on the other, grim warnings of social disintegration through plummeting birth rates, mass immigration and crime. Popular culture’s vivid post-apocalyptic worlds, from Cormac McCarthy’s The Road to Margaret Atwood’s The Handmaid’s Tale, increasingly echo in political discourse – most memorably in Donald Trump’s 2016 inauguration speech on the theme of “American Carnage.” For more imaginative doom-mongers there are various technological dystopias to contemplate, whether AI run amok, a digital surveillance state, or simply the replacement of physical experience with virtual surrogates. Then in 2020, with the eruption of a global pandemic, catastrophe crossed from the silver screen to the news studio, as much of the world sat transfixed by a profusion of statistics, graphs and harrowing reports of sickness and death.
If you are anything like me, the role of catastrophe in politics and culture raises endless fascinating questions. How should we explain our visceral revulsion at fellow citizens dying en masse from an infectious disease, and our contrasting apathy to other forms of large-scale suffering and death? Why can we be terrified by climate change without necessarily feeling a commensurate urgency to do something about it? Why do certain political tribes obsess over certain disasters?
It was questions like these that led me to pick up Niall Ferguson’s new book, Doom: The Politics of Catastrophe. I did this somewhat nervously, it must be said. I found one of Ferguson’s previous books extremely boring, and tend to cringe at his use of intellectual gimmicks – like his idea that the past success of Western civilisation can be attributed to six “killer apps.” Then again, Ferguson’s contrarianism does occasionally produce an interesting perspective, such as his willingness to weigh the negative aspects of the British Empire against the positive, as historians do with most other empires. But as I say, it was really the subject of this latest book that drew me in.
I might as well say upfront that I found it very disappointing. This is going to be a bad review – though hopefully not a pointless one. The flaws of this book can, I think, point us towards a richer understanding of catastrophe than Ferguson himself offers.
Firstly, Doom is not really about “the politics of catastrophe” as I understand that phrase. A few promising questions posed in the introduction – “Why do some societies and states respond to catastrophe so much better than others? Why do some fall apart, most hold together, and a few emerge stronger? Why does politics sometimes cause catastrophe?” – are not addressed in any sustained way. What this book is really about is the difficulty of predicting and mitigating statistically irregular events which cause excess deaths. That sounds interesting enough, to be sure, but there’s just one fundamental problem: Ferguson never gets to grips with what actually makes such events catastrophic, leaving a rather large hole where the subject of the book should be.
The alarm bells start ringing when Ferguson introduces the book as “a general history of catastrophe” and, in case we didn’t grasp how capacious that sounds, tells us it will include:
not just pandemics but all kinds of disasters, from the geological (earthquakes) to the geopolitical (wars), from the biological (pandemics) to the technological (nuclear accidents). Asteroid strikes, volcanic eruptions, extreme weather events, famines, catastrophic accidents, depressions, revolutions, wars, and genocides: all life – and much death – is here.
You may be asking if there is really much of a relationship, throughout all the ages of history, between asteroid strikes, nuclear accidents and revolutions – and I’d say this gets to a pretty basic problem with tackling a subject like this. Writing about catastrophe (or disaster – the two are used as synonyms) requires finding a way to coherently group together the extremely diverse phenomena that might fall into this category. It requires, in other words, developing an understanding of what catastrophe actually means, in a way that allows for useful parallels between its different manifestations.
Ferguson seems to acknowledge this when he rounds off his list by asking “For how else are we to see our disaster [i.e. Covid] – any disaster – in proper perspective?” Yet his concept of catastrophe turns out to be circular, inconsistent and inadequate. Whatever aspect of catastrophe Ferguson happens to be discussing in a particular chapter becomes, temporarily, his definition of catastrophe as such. When he is talking about mortality, mortality becomes definitive of catastrophe (“disaster, in the sense of excess mortality, can take diverse forms and yet pose similar challenges”). Likewise when he is showing how infrequent and therefore hard to predict catastrophes are (“the rare, large scale disasters that are the subject of this book”). In Ferguson’s chapter seeking similarities between smaller and larger disasters, he seems happy to simply accept whatever is viewed as a disaster in the popular memory: the Titanic, Chernobyl, the explosion of NASA’s Space Shuttle Challenger.
This is not nitpicking. I’m not expecting the metaphysical rigour of Immanuel Kant. I like an ambitious, wide-ranging discussion, even if that means sacrificing some depth. But attempting this without any real thesis, or even a firm conceptual framework, risks descending into a series of aimless and confusing digressions which don’t add up to anything. And that is more or less what happens in this book.
Consider Ferguson’s chapter on “The Psychology of Political Incompetence.” After a plodding and not especially relevant summary of Tolstoy’s concluding essay in War and Peace, Ferguson briefly introduces the idea that political leaders’ power is curtailed by the bureaucratic structures they inhabit. He then cuts to a discussion of the role of ideology in creating disastrous food shortages, by way of supporting Amartya Sen’s argument that democratic regimes respond better to famines than non-democratic ones. It’s not clear how this relates to the theme of bureaucracy and leadership, but this is one of the few sections where Ferguson is actually addressing something like “the politics of catastrophe;” and when he poses the interesting question of “why Sen’s theory does not apply to all forms of disaster” it feels like we are finally getting somewhere.
Alas, as tends to be the case in this book, Ferguson doesn’t answer the question, but embarks on a series of impromptu arguments against straw men. A winding discussion of British ineptness during the two World Wars brings him to the conclusion that “Democracy may insure a country against famine; it clearly does not insure against military disaster.” Who said that it does? Then Ferguson has suddenly returned to the issue of individual leadership, arguing that “it makes little sense” to hold Churchill solely responsible for the fall of Singapore to the Japanese in 1942. Again, who said we should? Ferguson then rounds off the chapter with an almost insultingly cursory discussion of “How Empires Fall,” cramming eight empires into less than five pages, to make the highly speculative argument that imperial collapse is as unpredictable as various other kinds of disaster.
Insofar as anything holds this book together, it is the thin sinews of statistical probability models and network science. These do furnish a few worthwhile insights. Many of the events Ferguson classes as disasters follow power-law distributions, which is to say they have no characteristic scale: events far larger than anything in the historical record remain possible, just increasingly rare. So big disasters are essentially impossible to predict. In many cases, this is because they emerge from complex systems – natural, economic and social – which can unexpectedly amplify small events into enormous ones. In hindsight, these often seem to have been entirely predictable, and the Cassandras who warned of them are vindicated. But a regime that listened to every Cassandra would incur significant political costs in preparing for disasters that usually won’t materialise.
I also liked Ferguson’s observation that the key factor determining the scale of a disaster, in terms of mortality, is “whether or not there is contagion – that is, some way of propagating the initial shock through the biological networks of life or the social networks of humanity.” But his other useful comments about networks come in a single paragraph, and can be quoted without much further explanation:
If Cassandras had higher centrality [in the network], they might be more often heeded. If erroneous doctrines [i.e. misinformation] spread virally through a large social network, effective mitigation of disaster becomes much harder. Finally… hierarchical structures such as states exist principally because, while inferior to distributed networks when it comes to innovation, they are superior when it comes to defence.
I’m not sure it was necessary to have slogged through an entire chapter on network science, recycled from Ferguson’s last book, The Square and the Tower, to understand these points.
But returning to my main criticism, statistical and network analysis doesn’t really allow for meaningful parallels between different kinds of catastrophe. This is already evident in the introduction, when Ferguson states that “disaster takes too many forms for us to process with conventional approaches to risk mitigation. No sooner have we focused our minds on the threat of Salafi jihad than we find ourselves in a financial crisis originating in subprime mortgages.” As this strange comment suggests, the implied perspective of the book is that of a single government agency tasked with predicting everything from financial crises and terrorist attacks to volcanic eruptions and genocides. But no such agency exists, of course, for the simple reason that when you zoom in from lines plotted on a graph, the illusion that these risks are similar dissolves into a range of totally different phenomena attached to various concrete situations. The problem is absurdly illustrated when, having cited a statistical analysis of 315 conflicts between 1820 and 1950, Ferguson declares that in terms of predictability, “wars do indeed resemble pandemics and earthquakes. We cannot know in advance when or where a specific event will strike, nor on what scale.” Which makes it sound like we simply have no way of knowing whether the next conflict is more likely to break out in Gaza or Switzerland.
In any case, there is something patently inadequate about measuring catastrophe in terms of mortality figures and QALYs (quality-adjusted life years), as though the only thing we have in common is a desire to live for as long as possible. Not once is the destruction of culture or ways of life mentioned in the book, despite the fact that throughout history these forms of loss have loomed large in people’s sense of catastrophe. Ferguson even mentions several times that the most prolific causes of mortality are often not recognised as catastrophes – but does not seem to grasp the corollary that catastrophe is about something more than large numbers of deaths.
Indeed, maybe the best thing that can be said about Doom is that its shortcomings help us to realise what does need to be included in an understanding of catastrophe. Throughout the book, we see such missing dimensions flicker briefly into view. In his discussion of the flu pandemic of the late 1950s, Ferguson notes in passing that the Soviet launch of the Sputnik satellite in October 1957 “may help to explain why the memory of the Asian flu has faded” in the United States. This chimes with various other hints that this pandemic was not really perceived as a catastrophe. But why? And in what sense was it competing with the Cold War in the popular imagination? Likewise, Ferguson mentions that during the 1930s the lawyer Basil O’Connor used “the latest techniques in advertising and fundraising” to turn the “horrific but relatively rare disease” of polio into “the most feared affliction of the age.” This episode is briefly contrasted to the virtual silence of the American media and political class over AIDS during the 1980s.
In fact, unacknowledged catastrophes are an unacknowledged theme of the book. It re-emerges in several intriguing mentions of the opioid epidemic in the United States, with its associated “deaths of despair.” At the same time as there was “obsessive discussion” of global warming among the American elite, Ferguson points out, “the chance of dying from an overdose was two hundred times greater than the chance of being killed by a cataclysmic storm.” He also describes the opioid crisis as “the biggest disaster of the Obama presidency,” and suggests that although “the media assigned almost no blame to Obama” for it, “such social trends did much to explain Donald J. Trump’s success.” Finally, Ferguson notes that during the current Covid crisis, the relative importance of protecting the vulnerable from the disease versus maintaining economic activity became an active front in the American culture war.
The obvious implication of all this is that, while Ferguson does not really engage with “the politics of catastrophe,” the concept and reality of catastrophe is inherently political. There isn’t really an objective measure of catastrophe: the concept implies judging the nature and consequences of an event to be tragic. Whether or not something meets this standard often depends on who it affects and whether it fits into the emotionally compelling narratives of the day. The AIDS and opioid epidemics initially went unrecognised because their victims were homosexuals and working class people respectively. To take another example, the 1921 pogrom against the affluent African American community in Tulsa, Oklahoma, was for the longest time barely known about, let alone mourned (except of course by African Americans themselves); yet a hundred years later it is being widely recognised as a tragedy. Last week’s volcanic eruption in the Democratic Republic of Congo, which may have left 20,000 people homeless, would probably be acknowledged as catastrophic by a Westerner who happened to read about it in the news. But we are much more likely to be aware of, and emotionally invested in, the disastrous Israeli-Palestinian conflict of recent weeks.
Catastrophe, in other words, is inextricably bound up with popular perception and imagination. It is rooted in the emotions of fear, anger, sadness, horror and titillation with which certain events are experienced, remembered or anticipated. This is how we can make sense of apathy to the late-1950s flu pandemic: such hazards, as Ferguson mentions, were still considered a normal part of life rather than an exceptional danger, and people’s minds were focused on the potential escalation of the Cold War. Hence also the importance of the media in determining whether and how disasters become embedded in public discourse. While every culture has its religious and mythical visions of catastrophe (a few are mentioned in a typically fleeting discussion near the start of Doom), today Netflix and the news media have turned us into disaster junkies, giving form and content to our apocalyptic impulses. The Covid pandemic has been a fully mediated experience, an epic rollercoaster of the imagination, its personal and social significance shaped by a constant drumbeat of new information. It is because climate change cannot be made to fit this urgent tempo that it has instead been cast as a source of fatalism and dread, always looming on the horizon and inspiring millions with a sense of terrified helplessness.
Overlooking the central role of such cultural and political narratives probably meant that Ferguson’s Doom was doomed from the start. For one thing, this missing perspective immediately shows the problem with trying to compare catastrophes across all human history. Yes, there are fascinating patterns even at this scale, like the tendency of extreme ideological movements to emerge in the midst of disasters – whether the flagellant orders that sprang from the 14th century Black Death, or the spread of Bolshevism in the latter part of the First World War. But to really understand any catastrophe, we have to know what it meant to the people living through it, and this means looking at the particulars of culture, politics and religion which vary enormously between epochs. This, I would argue, is why Ferguson’s attempt to compare the Athenian plague of the late 5th century BC to the Black Death in medieval England feels rather superficial.
And whatever the historical scope, statistics simply don’t get close to the imaginative essence of catastrophe. Whether or not a disaster actually happens is incidental to its significance in our lives; many go unnoticed, others transform culture through mere anticipation. Nor do we experience catastrophes as an aggregate of death-fearing individuals. We do so as social beings whose concerns are much more elaborate and interesting than mere life and death.