The Kitchen as a Theatre of History

This essay was first published at The Pathos of Things newsletter. Subscribe here.

In Britain, where the saying goes that every man’s home is his castle, we like to see domestic space as something to be improved. Even if we have to save until middle age to own a decent home, we do so, in part, so that we can hand it over to builders for six months, after which there will be fewer carpets and more sunrooms. 

But domestic space is also a medium through which external forces shape us, in what we mistakenly consider our private existence. Nothing illustrates this better than the evolution of the modern kitchen.

In one of my favourite essays, former Design Museum director Deyan Sudjic describes how the British middle-class kitchen was transformed over the course of a century, from the early 1900s until today. Beginning as a “no-man’s land” where suburban housewives maintained awkward relations with their working-class servants, it has become “a domestic shrine to the idea of family life and conviviality.” Whereas the kitchen’s association with work and working people once ensured that it was partitioned, physically and socially, from the rest of the home, today the image of domestic bliss tends to centre on a spacious open-plan kitchen, with its granite-topped islands, its ranks of cupboard doors in crisp colours, its barstools and dining tables.

And in the process of being transformed, the kitchen transformed us. The other thing we find in this space today is an assortment of appliances, from toasters and kettles to expensive blenders and coffee machines, reflecting a certain admiration for efficiency in domestic life. This does not seem so striking in a world where smartphones and laptops are ubiquitous, but as Sudjic points out, the kitchen was the Trojan horse through which the cult of functionality first penetrated the private sphere.

A hundred years ago, sewing machines and radios had to be disguised as antique furniture, lest they contaminate the home with the feeling of a factory. It was after the middle classes began to occupy the formerly menial world of the kitchen that everyday communion with machines became acceptable.

In its most idealised and affluent form, the contemporary kitchen has almost become a parody of the factory. Labour in conditions of mechanised order – the very thing the respectable home once defined itself against – is now a kind of luxury, a form of self-expression and appreciation for the finer things in life. We see the same tendency in the success of cooking shows like MasterChef, and in the design of fashionable restaurants, where the kitchen is made visible to diners like a theatre.

What paved the way for this strange marriage of the therapeutic and the functional was the design of the modern kitchen. During this process, the kitchen was a stage where history’s grand struggles played out on an intimate scale, often refracted through contests over women’s role in society. The central theme of this story is how the disenchanting forces of modern rationality have also produced enchanting visions of their own, visions long associated with social progress but eventually absorbed into the realm of private aspiration.

The principles underpinning the modern kitchen came from the northern United States, where the absence of servants demanded a more systematic approach to domestic work. That approach was defined in the mid-19th century by Catharine Beecher, sister of the novelist Harriet Beecher Stowe. In her hugely popular Treatise on Domestic Economy, addressed specifically to American women, Beecher gave detailed instructions on everything from building a house to raising a child, from cooking and cleaning to gardening and plumbing. Identifying the organised, self-contained space of the ship’s galley as the ideal model for the kitchen, she provided designs for various labour-saving devices, setting in motion the process of household automation.

Beecher promoted an ethic of hard work and self-denial that she derived from a stern Calvinist upbringing. Yet she was also a leading campaigner for educational equality, establishing numerous schools and seminaries for women. Her professional approach to household work was an attempt, within the parameters of her culture, to give women a central role in the national myth of progress, though its ultimate effect was to deepen the association of women with the domestic sphere.

Something similar could be said about Christine Frederick, a former teacher from Boston, who in the early 20th century took some of Beecher’s ideas much further. Frederick’s faith was not Calvinism but the Taylorist doctrines of scientific management being implemented in American factories. What she called “household engineering” involved an obsessive analysis and streamlining of tasks as mundane as dishwashing. “I felt I was working hand in hand with the efficiency engineers in business,” she said, “and what they were accomplishing in industry, I too was accomplishing in the home.”

By this time Europe was ready for American modernity in the household, as relations between the classes and sexes shifted radically in the wake of the First World War. Women were entering a wider range of occupations, which meant fewer wives at home and especially fewer servants. At the same time, the provision of housing for the working class demanded new thinking about the kitchen.

In the late 1920s one of Christine Frederick’s disciples, the Austrian architect Margarete Schütte-Lihotzky, designed perhaps the most celebrated kitchen in history. The Frankfurt kitchen, as it came to be known, was one of many efforts at this time to repurpose the insights of American industry for the cause of socialism, for Schütte-Lihotzky was an ardent radical. She would, during her remarkably long life, offer her skills to a succession of socialist regimes, from the Soviet Union to Fidel Castro’s Cuba, as well as spending four years in a concentration camp for her resistance to Nazism.

For the Modernist architects among whom Schütte-Lihotzky worked in the 1920s, the social and technical challenge of the moment was the design of low-cost public housing. Cash-strapped government agencies were struggling to provide accommodation for war widows, disabled veterans, pensioners and slum-dwelling workers. It was for a project like this in Frankfurt that Schütte-Lihotzky produced her masterpiece, a compact, meticulously organised galley kitchen, offering a maximum of amenities in a minimum of space.

By the end of the decade, different versions of the Frankfurt kitchen had been installed in 10,000 German apartments, and were inspiring imitations elsewhere. Its innovations included a suspended lamp that moved along a ceiling runner, a height-adjustable revolving stool, and a sliding door that allowed women to observe their children in the living area. It was not devoid of style either, with ultramarine blue cupboards and drawers, ochre wall tiles and a black floor. Schütte-Lihotzky would later claim she designed it for professional women, having never done much cooking herself.

The Frankfurt kitchen was essentially the prototype of the fitted kitchens we are familiar with today, but we shouldn’t overlook what a technological marvel it represented at the time. Across much of working-class Europe, a separate kitchen was unheard of (cooking and washing were done in the same rooms as working and sleeping), let alone a kitchen that combined water, gas and electricity in a single integrated system of appliances, workspaces and storage units.

But even as this template became a benchmark of modernity and social progress in Europe, the next frontier of domestic life was already appearing in the United States. During the 1920s and 30s, American manufacturers developed the design and marketing strategies for a full-fledged consumer culture, turning functional household items into objects of desire. This culture duly took off with the economic boom that followed the Second World War, as the kitchen became the symbol of a new domestic ideal.

With the growth of suburbia, community-based ways of life were replaced by the nuclear family and its neighbours, whose rituals centred on the kitchen as a place of social interaction and display. The role of women in the home, firmly asserted by various cultural authorities, served as a kind of traditional anchor in a world of change. Thanks to steel-beam construction and central heating, the kitchen could now become a large, open-plan space. It was, moreover, increasingly populated by colourful plastic-laminated surfaces, double cookers, washing machines and other novel technologies. Advertisers had learned to target housewives as masters of the family budget, so that huge lime green or salmon pink fridges became no less a status symbol than the cars whose streamlined forms they imitated.

Despite their own post-war boom, most Europeans could only dream of such domestic affluence, and dream they did, for the mass media filled their cinema and television screens with the comforts of American suburbia. This was after all the era of the Cold War, and the American kitchen was on the front line of the campaign to promote the wonders of capitalism. On the occasion of the 1959 American National Exhibition in Moscow, US vice-president Richard Nixon got the chance to lecture Soviet premier Nikita Khrushchev on the virtues of a lemon yellow kitchen designed by General Electric.

In this ideological competition, the technologies of the modern kitchen were still assumed to represent an important form of social progress; Nixon’s PR victory in the Moscow “kitchen debate” was significant because Khrushchev himself had promised to overtake the United States in the provision of domestic consumer goods. This battle for abundance was famously one that Communism would lose, but by the time the Soviet challenge had disappeared in the 1990s, it was increasingly unlikely that someone in the west would see their microwave as emblematic of a collective project of modernity.

Perhaps capitalism has been a victim of its own success in this regard; being able to buy a Chinese-manufactured oven for a single day’s wages, as many people now can, makes it difficult to view that commodity as a profound achievement. Yet there is also a sense in which progress, at least in this domain, has become a private experience, albeit one that tends to emerge from a comparison with others. The beautiful gadgets that occupy the contemporary home are tools of pleasure and convenience, but also milestones in the personal quest for happiness and perfection.

The open-plan kitchen descended from mid-century America has become a desired destination for that quest in much of the developed world, even if it is often disguised in a local vernacular. It is no coincidence that in 1999, such a kitchen featured in the first episode of Grand Designs, the show which embodies the British middle-class love affair with domestic improvement. But the conspicuous efficiency and functional aesthetics of today’s kitchen dream show that it is equally indebted to Margarete Schütte-Lihotzky’s utopian efforts of the 1920s. This is a cruel irony, given that for most people today, and most of them still women, working in the kitchen is not a form of mechanised leisure but a stressful necessity, if there is time for it at all.

Then again, Schütte-Lihotzky is part of a longer story about the modern world’s fascination with rational order. When Kathryn Kish Sklar writes about Catharine Beecher’s kitchen from the 1850s, she could equally be describing the satisfaction our own culture longs to find in the well-organised home: “It demonstrates the belief that for every space there is an object, for every question an answer. It speaks of interrelated certainties and completion.”

The Consolations of Green Design

I recently found myself browsing a Financial Times feature about “great tech for greener living,” a selection of stylish items for the principled customer. They included an oak iPhone stand sustainably crafted by Polish artisans (holding your phone “at a perfect 25-degree tilt”); wireless earphones with a wood inlay by House of Marley, the eco-friendly studio founded by Bob Marley’s son Rohan; and an app, Ethy, that audits brands for their environmental credentials.

This is a good snapshot of the environmental consciousness that has emerged among upmarket consumers. They still want fashionable, functional and beautiful products, but these qualities are no longer enough. The casual pillaging of the planet that once lay concealed behind the shiny exterior of consumer goods is gradually coming into focus, so that every object now carries the risk of moral contamination. The devil, we have learned, is in the detail: “Even the Scandinavian-style minimalist interiors that seem so pure and clean,” writes sustainability consultant Edwin Datschefski, have a “hidden ugliness – formaldehyde in the plywood and mdf, hexavalent chromium pollution from tanning leather, and damage to communities and the landscape from mining the pigments used in white paint.”

And designers are more than happy to remove that taint of evil. Increasingly, the green ethos is providing design with a sense of mission not seen since the Modernist era of the 1920s-60s. With Modernism, the goal was to harness the power of mass-production to improve the material and aesthetic conditions of ordinary people. For green design, it is to minimise the environmental damage, as well as the human exploitation, caused by a product in each stage of its lifecycle: materials, supply, manufacturing, use and disposal. The two movements share a vision of design as a moral crusade, as well as a certain phobic quality; green designers tend to avoid any suggestion of industry and labour with the same fastidiousness that Modernists applied to cleanliness and hygiene.

This sense of purpose has delivered some notable achievements in the 21st century. Most obviously, green design has consistently generated ingenious new materials and methods, from timber skyscrapers and lampshades made of sugar to the use of mycelium, a fungal substrate, for 3D-printed architectural elements. This year’s winners of the Earthshot sustainable design prize include a seaweed-based alternative to plastic packaging, and a flat-pack greenhouse that will allow small-scale farmers to produce higher yields using much less water. Green designers have also shown an interest in humanising production, preferring to use less alienating forms of labour and trying to integrate aspects of local heritage from the regions where they work.

Last but not least, green design is good at artistic propaganda. Its back catalogue is full of works that communicate the ideals of environmentalism in evocative and inspiring ways, such as Stuart Haygarth’s chandelier made from recycled prescription glasses, or Tomas Gabzdil Libertiny’s extraordinary honeycomb vases, each of which is manufactured by bees inside a hive over the course of a week.

Yet there is often an air of unreality about green design, a not-quite-right feeling that starts to nag at you the more you think about it. The problem is most apparent in the grand philosophical ambitions that frequently emanate from the movement. According to its theorists, the mission of green design is nothing less than the transformation of the relationship between humanity and nature, rejecting the modern (and Modernist) project of shaping the world for our own ends and recognising ourselves as natural and ecologically limited beings. A few examples from the archives of Domus magazine will give a sense of this discourse. In 1997 one author demanded a “realisation that man will be able to sustain himself only if the self-regulating ecosystem of the universe continues and is not disrupted by man’s intervention.” More recently, former MoMA design director Emilio Ambasz told the magazine that “Building inevitably changes Nature… into a human-made nature. The goal should be to reduce and, if possible, to compensate for our intrusion in the Vegetal Kingdom.” Finally, consider the words of the eminent furniture design and research duo Formafantasma:

sustainability is a strong utopia because it goes beyond modernity. It’s remote from twentieth-century culture and fully inserted in our new way of understanding our relationship to nature. […] Contemporary civilisation has a growing awareness that we can continue to live only if we work together with other living beings. As designers, but above all as human beings, we have to take care not only of ourselves, but all the other species on the planet. 

All of this sounds excellent, but there is a yawning gap between these lofty aspirations and what green design actually does for the most part, which is to develop marginal alternatives, communicate ideas, and as that Financial Times feature suggests, offer boutique products to those who can afford ethics as a lifestyle choice. What to make of this discrepancy? It raises the possibility that green design has become trapped in a comfortable role which is less about changing the world than legitimising a consumer culture which is really not very green. With eye-catching sustainable product lines and utopian language, big brands can trumpet their green ambitions even as they keep plying their destructive trade in garments, furniture and cars. Occasionally buying eco-friendly goods is an excellent way to feel better about all the other things you buy. It’s almost like the indulgences sold by the medieval church: pay a bit more, fear a bit less for your soul.

There is surely some truth in this cynical interpretation, although I wouldn’t pin the blame on the designers. Like all of us, they have to reconcile many conflicting desires in their lives, including the desire for financial security and for success in their craft. Developing a practice with integrity is admirable, even if it can only serve a small audience. In any case, there is a more generous and, I think, equally plausible way of understanding the role of green design.

The burden of living in a complex society is the knowledge of one’s powerlessness to change the systems in which one is trapped. Reducing the environmental impact of our material culture is perhaps the ultimate example of this, since it ultimately hinges on countless technical issues. At scale, improvements tend to come less from green design than from the greening of design, or techniques that do better than the alternatives without fully solving the problem; architecture that passively regulates temperature, for instance, or electric cars. Progress depends on questions such as: will the more sustainable fibres being developed by Scandinavian companies become a viable alternative to cotton? Will electricity ever be capable of replacing fossil fuels in the most energy-intensive manufacturing processes? How much can we reduce the CO2 emissions associated with cement? This trajectory is bound to be slow, messy, frustrating, tragic, and uncertain of success. But for the time being, it’s all we’ve got. 

Against this background, green design can be seen as a kind of informal arrangement between designers and consumers that allows each party to express ideals reality cannot accommodate. These include hope, imagination, and above all responsibility. You could say this is a fiction, but as long as no one mistakes it for an answer to the world’s problems, it seems like a valuable fiction. Besides, it’s better than just making and buying more crap.

Napoleon’s Furniture

The design of chairs is not normally listed among the achievements of Napoleon Bonaparte, France’s famous post-revolutionary emperor, but the importance of furniture should never be underestimated. Besides redrawing the map of Europe, establishing institutions and writing law codes, Napoleon should be seen as a seminal figure in the development of modern design.

Napoleon embodies modernity in its heroic phase. He was celebrated as an icon of both Romanticism and the Enlightenment; a symbol of unstoppable willpower who crossed the Alps on his rearing wild-eyed stallion (or at least was painted doing so by Jacques-Louis David), as well as the ultimate Enlightened despot, aspiring to replace feudal superstition with the universal principles of Reason. Between these two sides of the Napoleonic myth we can glimpse his remarkable understanding of modern authority, which rests on the active creation of order in a world of turbulent change.

Design was an integral part of that authority. With the assistance of designers Charles Percier and Pierre Fontaine, Napoleon implemented what came to be known as the Empire Style. This was a grand but sober form of neoclassicism, with rigid lines and a large repertoire of motifs drawn from the ancient world: acanthus, palm leaves, wreaths and eagles from Greece and Rome; obelisks, pyramids, winged lions and caryatids from Egypt. Through this official style, whose most famous example is the Arc de Triomphe in Paris, Napoleon linked his regime to the timeless values of reason associated with classical civilisation.

But the Empire Style also portrayed this order as dynamic and expanding, drawing attention to the epic agency of its central figure. The Egyptian iconography recalled Napoleon’s expedition to the near east in 1798, which had sparked a fascination with Egypt in European fashion and intellectual life. More obvious still was the frequently used capital letter “N.” Blending the grandeur of the past with progress and celebrity, Napoleonic design showed a distinctly modern formula for authority, one that would be echoed by Mussolini, Hitler and Stalin more than a century later.

It also suggested the arrival of modernity in more concrete ways, as François Baudot has pointed out. The square proportions and functional character of its furniture, along with its catalogue of reproducible symbols, reflected the standardised methods used at France’s state workshops. As such, it anticipated the age of mass-production in the later 19th and 20th centuries. Large-scale production was needed, in part, to supply the burgeoning administration of the Napoleonic state: a state whose power derived not just from the court and army, but increasingly from bureaucracy too. “It proved a short step,” Baudot quips, “from the Empire desk to the empire of the desk.”

What feels especially familiar about the Empire Style is its ambition to create an aesthetic totality, a “brand identity” whose unity of style would encompass everything from the largest structure to the finest detail. It was, writes Baudot, “a style whose practitioners were equally adept at cutlery and facades, at the detailing of a frieze and of a chair, at the plan of a fortress and shape of a gown to be worn at court.” This concern for a fully designed environment brings to mind the fastidious approach of later styles like art nouveau and the moderne (when the Belgian designer Henry van de Velde conceived a house for himself in 1895, he produced not just matching cutlery and furnishings but a new wardrobe for his wife). It also anticipates the commercial designers of our time, hired to create an immersive aesthetic experience for a pop star or retail brand. 

Admittedly the principle that power spoke with a distinct voice was not new, especially not in France, where Louis XIV had already overseen an extensive system of state workshops and artisans in the 17th century. Neoclassicism had been in vogue since the mid-18th century, and Napoleon’s version of it can be seen as a careful attempt to position his regime in relation to its predecessors. Without returning to the full opulence of the royal ancien régime, whose excesses had been repudiated in the revolutionary decade of the 1790s, the Empire Style was notably more grandiose than the republican Directory Style which came before it. Subtly but unmistakably, Napoleon was recalling the majesty of the Bourbons. 

Nonetheless, the Empire Style did express real Enlightenment convictions. As Ruth Scurr details in her fascinating biography, A Life in Gardens and Shadows, Napoleon’s passion for neoclassical garden design reflected his deeply engrained rationalism and love of order. Right until his last days in exile on Saint Helena, where he diverted his frustration into horticulture, Napoleon liked gardens to display straight lines, precision and symmetry. These are the same characteristics that defined the Empire Style. In such apparently superficial details we see principles that would resonate through European history for centuries. Napoleon quarrelled with his first wife, Joséphine, over her preference for the more unruly and picturesque English style of garden. That English style was a portent of a very different response to modernity that would soon emerge in Britain, where aesthetic harmony was sought not in classical Reason but in the organic rootedness of the medieval Gothic.     

Ultimately, what makes the Empire Style modern was the role it gave design in relation to society at large. Appropriately for an emperor who loved gardening, Napoleonic design reveals the emergence of what Zygmunt Bauman has called “the gardening state”: the modern regime that does not just aim to rule over its subjects, but seeks to transform society in pursuit of progress and even utopian perfection. The Empire Style communicated the ambition of the state – which, after the French Revolution, was meant to embody the nation and its citizens – to remake the world in the image of its ideals. But more than that, it showed a belief that design could be an active part of this project, its didactic powers helping to bring the state into being, and to imbue it with an ideological purpose. Chairs and tables, buildings, interiors and monuments were not only intended to demonstrate reason and progress; they were intended to impart these values to the society where they appeared.

This entanglement with the modern progressive state or movement would continue to haunt design up until the ruptures of the mid-20th century. In the process, the aims of representing abstract ideals, securing the commitment of the masses and showing the promise of the future would turn out to be rife with contradictions. But we will have to leave all of that until next week.

Designing Modernity

Somewhere in my room (I forget where exactly) there is a box containing four smartphones I’ve cycled through in the last decade or so. Each of these phones appeared shockingly new when I first removed it from its neat cuboid packaging, though now there is a clear progression of models, with the earliest almost looking fit for a museum. This effect is part of their design, of course: these objects were made to look original at first, and then, by contrast to newer models, out of date. That all have cracked screens only emphasises their consignment to the oblivion of the obsolete.

The point of this sketch is not just to make you reflect on your consumer habits. I think it represents something more profound. This series of phones is like an oblique record of the transformation of society, describing the emergence of a new paradigm for organising human existence. It captures a slice of time in which the smartphone has changed every dimension of our lives, from work and leisure to knowledge and personal relations. This small device has upended professions from taxi driving to journalism, and shaped global politics by bringing media from around the world to even the poorest countries. It has significantly altered language. It has enabled new forms of surveillance by private companies and government agencies alike. A growing number of services are inaccessible without it.

Yet with its sleek plastic shell and vibrant interfaces, the smartphone is nonetheless a formidable object of desire: a kind of gateway to the possibilities of the 21st century. Ultimately, what it represents is paradoxical. An exhilarating sense of novelty, progress and opportunity; but also the countless adaptations we have to make as technology reshapes our lives, the new systems into which we are forced to fit ourselves.

To understand how a designed object can have this kind of power, defining both the practical and imaginative horizons of our age, we have to look beyond the immediate circumstances in which it appeared. The smartphone is a truly modern artefact: modern not just in the sense that it represents something distinctive about this era, but modern in another, deeper sense too. It belongs to a longer chapter of history, modernity, which is composed of moments that feel “modern” in their own ways.

The story of modernity shows us the conditions that enable design to shape our lives today. But the reverse is also true: the growing power of design is crucial to understanding modernity itself.


The very idea of design, as we understand it now, points to what is fundamentally at stake in modernity. To say that something is designed implies that it is not natural; that it is artificial, conceived and constructed in a certain way for a human purpose. Something which is not designed might be some form of spontaneous order, like a path repeatedly trodden through a field; but we still view such order as in some sense natural. The other antonym of the designed is the disordered, the chaotic.  

These contrasts are deeply modern. If we wind the clock back a few centuries – and in many places, much less than that – a hard distinction between human order and nature or chaos becomes unfamiliar. In medieval Europe, for instance, design and its synonyms (form, plan, intention) came ultimately from a transcendent order, ordained by God, that was manifest in nature and society alike. Human designs, such as the ornamentation of Gothic cathedrals or the symbols and trappings of noble rank, drew their meaning from that transcendent order.

In practical terms though, the question of where order came from was really a question about the authority of the past. It was the continuity of customs, traditions, and social structures in general which provided evidence that order came from somewhere beyond society, that it was natural. This in turn meant the existing order, inherited from the past, placed constraints on what human ambition could fathom.

To be modern, by contrast, is to view the world without such limitations. It is to view the world as something human beings must shape, or design, according to their own goals.

This modern spirit, as it is sometimes called, was bubbling up in European politics and philosophy over centuries. But it could only be fully realised after a dramatic rupture from the past, and this came around the turn of the 19th century. The French Revolution overturned the established order with its ancient hierarchies across large parts of Europe. It spread the idea that the legitimacy of rulers came from “the people” or “the nation,” a public whose desires and expectations made politics increasingly volatile. At the same time, the seismic changes known as the Industrial Revolution were underway. There emerged an unpredictable, dynamic form of capitalism, transforming society with its generation of new technologies, industries and markets.

These developments signalled a world that was unmistakably new and highly unstable. The notion of a transcendent order inherited from the past became absurd, because the past was clearly vanishing. What replaced it was the modern outlook that, in its basic assumptions, we still have today. This outlook assumes the world is constantly changing, and that human beings are responsible for giving it order, preventing it from sliding into chaos.

Modernity was and is most powerfully expressed in certain experiences of space and time. It is rooted in artificial landscapes, worlds built and managed by human beings, of which cities are still the best example. And since it involves constant change, modernity creates a sense of the present as a distinct moment with its own fashions, problems and ideas; a moment that is always slipping away into a redundant past, giving way to an uncertain future. “Modernity,” in the poet Charles Baudelaire’s famous expression, “is the transient, the fleeting, the contingent.”


Design was present at the primal scenes of modernity. The French Revolutionaries, having broken dramatically with the past, tried to reengineer various aspects of social life. They devised new ways of measuring space (the metric system) and time (the revolutionary calendar, which restarted the count of years at Year I, and the decimal clock). They tried to establish a new religion called the Cult of the Supreme Being, for which the artist Jacques-Louis David designed sets and costumes.

Likewise, the Industrial Revolution emerged in part through the design activities of manufacturers. In textiles, furniture, ceramics and print, entrepreneurs fashioned their goods for the rising middle-classes, encouraging a desire to display social status and taste. They devised more efficient production processes to increase profits, ushering in the age of factories and machines.

These early examples illustrate forces that have shaped design to this day. The French Revolution empowered later generations to believe that radical change could be conceived and implemented. In its more extreme phases, it also foreshadowed the attempts of some modern regimes to demolish an existing society and design a new one. This utopian impulse towards order and perfection is the ever-present dark side of design, in that it risks treating people as mere material to be moulded according to an abstract blueprint. Needless to say, design normally takes place on a much more granular level, and with somewhat less grandiose ambitions. 

Modern politics and commerce both require the persuasion of large groups of people, to engineer desire, enthusiasm, fear and trust. This is the realm of propaganda and advertising, a big part of what the aesthetic design of objects and spaces tries to achieve. But modern politics and commerce also require efficient, systematic organisation, to handle complexity and adapt to competition and change. Here design plays its more functional role of devising processes and tools.

Typically we find design practices connected in chains or webs, with functional and aesthetic components. Such is the connection between the machine humming in the factory and the commodity gleaming in the shop window, between urban infrastructure and the facades which project the glory of a regime, between software programmes and the digital interface that keeps you scrolling.

But modernity also creates space for idealism. Modern people have an acute need of ideals, whether or not they can be articulated or made consistent, because modern people have an acute need to feel that change is meaningful.

The modern mind anticipates constant change, and understands order as human, but by themselves these principles are far from reassuring. Each generation experiences them through the loss of a familiar world to new ideas, new technologies, new social and cultural patterns. We therefore need a way to understand change as positive, or at least a sense of what positive change might look like (even if that means returning to the past). Modernity creates a need for horizons towards which we can orient ourselves: a vision of the future in relation to which we can define who we are.

Such horizons can take the form of a collective project, where people feel part of a movement aiming at a vision of the future. But for a project to get off the ground, it again needs design for persuasion and efficiency. From Napoleon Bonaparte’s Empire Style furniture, with which he fitted out a vast army of bureaucrats, to Barack Obama’s pioneering Internet campaigns, successful leaders have used a distinctive aesthetic style and careful planning to bring projects to life.

Indeed, the search for effective design is one of modernity’s common denominators, creating an overlap between very different visions of society. In the aftermath of the Russian Revolution of October 1917, the ideals of communist artists and designers diverged from those dominant in the capitalist west. But the similarities between Soviet and western design in the 1920s and 30s are as striking as the differences. Communist propaganda posters and innovative capitalist advertising mirrored one another. Soviet industrial centres used the same principles of efficiency as the factories of Ford Motor Company in the United States. There was even much in common between the 1935 General Plan for Moscow and the redevelopment of Paris in the 1850s, from the rationalisation of transport arteries to the preference for neoclassical architecture.

But horizons can also be personal. The basis of consumerism has long been to encourage individuals to see their own lives as a trajectory of self-improvement, which can be measured by having the latest products and moving towards the idealised versions of ourselves presented in advertising. At the very least, indulging in novelty can help us feel part of the fashions and trends that define “the now”: a kind of unspoken collective project with its own sense of forward movement that consumerism arranges for us.


Above all though, design has provided horizons for modern people through technology. Technological change is a curiously two-sided phenomenon, epitomising our relative helplessness in the face of complex processes governing the modern world, while also creating many of the opportunities and material improvements that make modern ways of life desirable. Technology embodies the darkest aspects of modernity – alienation, exploitation, the constant displacement of human beings – as well as the most miraculous and exhilarating.

Design gives technology its practical applications and its aesthetic character. A series of design processes are involved, for instance, in turning the theory of internal combustion into an engine, combining that engine with countless other forms of engineering to produce an aeroplane, and finally, making the aeroplane signify something in the imagination of consumers. In this way, design determines the forms that technology will take, but also shapes the course of technological change by influencing how we respond to it.

Technology can always draw on a deep well of imaginative power, despite its ambiguous nature, because it ties together the two core modern ideals: reason and progress. Reason essentially describes a faith that human beings have the intellectual resources to shape the world according to their goals. Progress, meanwhile, describes a faith that change is unfolding in a positive direction, or could be made to do so. By giving concrete evidence of what reason can achieve, technology makes it easier to believe in progress.

But a small number of artefacts achieve something much greater. They dominate the horizons of their era, defining what it means to be modern at that moment. These artefacts tend to represent technological changes that are, in a very practical sense, transforming society. More than that, they package revolutionary technology in a way that communicates empowerment, turning a disorientating process of change into a new paradigm of human potential.

One such artefact was the railway, the most compelling symbol of 19th century industrial civilisation, its precise schedules and remorseless passage across continents transforming the meaning of time and space. Another was the factory, which in the first half of the 20th century became an aesthetic and political ideal, providing Modernist architects as well as dictators with a model of efficiency, mass participation and material progress. And probably the most iconic product ever to emerge from a factory was the automobile, which, especially in the United States, served for decades as an emblem of modern freedom and prosperity, its streamlined form copied in everything from kitchen appliances to radios.   

Streamlining: the Zephyr electric clock, designed by Kem Weber in the 1930s, shows the influence of automobile forms in other design areas.

I will write in more detail about such era-defining artefacts in later instalments of this newsletter. For now, I only want to say that I believe the smartphone also belongs in this series.

Obviously the smartphone arrived in a world very different from that of the factory or car. The western experience is now just one among numerous distinct modernities, from East Asia to Latin America. For those of us in the west, social and cultural identity is no longer defined by ideas like nation or class, but increasingly by the relations between individuals and corporate business, mediated by an immersive media environment.

But the smartphone’s conquest of society implies that this fragmented form of modernity still sustains a collective imagination. What we have in common is precisely what defines the smartphone’s power: a vision of compact individual agency in a fluid, mobile, competitive age. The smartphone is like a Swiss army knife for the ambitious explorer of two worlds, the physical and the virtual; it offers self-sufficiency to the footloose traveller, and access to the infinite realms of online culture. It provides countless ways to structure and reflect on individual life, with its smorgasbord of maps, photographs, accounts and data. It allows us to seal ourselves in a personal enclave of headphones and media wherever we may be.

Yet the smartphone also communicates a social vision of sorts. One of its greatest achievements is to relieve the tension between personal desire and sociability, since we can be in contact with scores of others, friends and strangers alike, even as we pursue our own ends. It allows us to imagine collective life as flashes of connectivity between particles floating freely through distant reaches of the world.

It is not uniquely modern for a society to find its imagined centre in a singular technological and aesthetic achievement, as Roland Barthes suggested in the 1950s by comparing a new model Citroën to the cathedrals of medieval Europe. The difference is that, in modernity, such objects can never be felt to reflect a continuous, transcendent order. They must always point towards a future very different from the present, and as such, towards their own obsolescence.

The intriguing question raised by the smartphone is whether the next such artefact will have a physical existence at all, or will emerge on the other side of the door opened by the touch screen, in the virtual world. 


How the Internet Turned Sour: Jon Rafman and the Closing of the Digital Frontier

This essay was first published by IM1776 on 17th August 2021

A tumble-drier is dragged out into someone’s garden and filled with something heavy — a brick perhaps. After setting it spinning, a figure in a camouflage jacket and protective face visor retreats from the camera frame. Immediately the machine begins to shudder violently, and soon disintegrates as parts fly off onto the surrounding lawn. 

This is the opening shot of Mainsqueeze, a 2014 video collage by the Canadian artist Jon Rafman. What comes after is no less unsettling: a young woman holds a small shellfish, stroking it affectionately, before placing it on the ground and crushing it slowly under her heel; an amateur bodybuilder, muscles straining grotesquely, splits a watermelon between his thighs. 

Rafman, concerned about the social and existential impact of technology on contemporary life, discovered these and many other strange performances while obsessively trawling the subaltern corners of the internet — communities of trolls, pranksters and fetishists. The artist’s aim, however, isn’t to ridicule these characters as freaks; to the contrary, he maintains: “The more marginal, the more ephemeral the culture is, the more fleeting the object is… the more it can actually reflect and reveal ‘culture at large.’” What looks at first like a glimpse into the perverse fringes is really meant to be a portrait of online culture in general: a fragmented world of niche identities and uneasy escapism, where humor and pleasure carry undercurrents of aggression and despair. With such an abundance of stimulation, it’s difficult to say where satisfaction ends and enslavement begins.

Even as we joke about the pathologies of online life, we often lose sight of the depressing arc the internet revolution has followed during the past decade. It’s impossible to know exactly what lies behind the playful tone of Twitter and the carefree images of Instagram, but judging by the personal stories we hear, there’s no shortage of addiction (to social media, porn, smartphones), identity crisis, and anxiety about being judged or exposed. It seems much of our online existence is now characterized by the same sense of hyper-alert boredom, claustrophobia and social estrangement that Rafman found at the margins of the internet years ago.

Indeed, the destructive impulses of Rafman’s trolls seem almost quaint by comparison to the shaming and malicious gossip we take for granted on social media. And whereas a plurality of outlooks and personalities was once the glory of the internet, today every conceivable subject, from art and sports to haircuts, food, and knitting, is reified as a divisive issue within a vast political metanarrative.

In somewhat of an ironic twist, last year Rafman himself was dropped or suspended by numerous galleries following accusations of inappropriate sexual behavior, leveled through the anonymous Instagram account Surviving the Artworld (which publishes allegations of abusive behavior in the art industry). The accusers say they felt taken advantage of by the artist; Rafman insists that there was a misunderstanding. It’s always hard to know what to make of such cases, but that social media now serves as a mechanism for this kind of summary justice seems symptomatic of the social disintegration portrayed in works like Mainsqueeze.

Even if these accusations mark the end of Rafman’s career, his efforts to document online culture now seem more valuable than ever. His art gives us a way of thinking about the internet and its discontents that goes beyond manipulative social media algorithms, ideological debasement or the culture wars. The artist’s work shows the evolution of the virtual realm above all as a new chapter of human experience, seeking to represent the structures of feeling that made this world so enticing and, ultimately, troubled.

The first video by Rafman I came across reminded me of Swift’s Gulliver’s Travels. Begun in 2008, the visionary Kool-Aid Man in Second Life consists of a series of tours through the virtual world platform Second Life, where users have designed a phantasmagorical array of settings in which their avatars can lead, as the name suggests, another life. In the video, our guide is Rafman’s own avatar, the famous Kool-Aid advertising mascot (a jug of red liquid with a weird rictus grin) — a protagonist that reminds us we’ve entered an era where, as Rafman puts it, “different symbols float around equally and free from the weight of history.” For the entire duration, Kool-Aid Man wanders around aimlessly in a surreal, artificial universe, sauntering in magical forests and across empty plains, through run-down cityscapes and futuristic metropolises, placidly observing nightclub dance floors, ancient temples, and the endless stages where the denizens of Second Life perform their sexual fantasies.

Kool-Aid Man in Second Life is best viewed against the backdrop of the great migration onto the internet which started in the mid-2000s, facilitated by emerging tech giants like Amazon, Google and Facebook. For the great majority of people, this was when the internet ceased being merely a toolbox for particular tasks and became part of everyday life (the art world jargon for this was ‘post-internet’). The artwork can be seen as a celebration of the curiosity, fun, and boundless sense of possibility that accompanied this transition. Humanity was stepping en masse out of the limits of physical space, and what it found was both trivial and sublime: a kitsch world of selfies and cute animals as well as effortless new forms of association and access to knowledge. The euphoric smile of Kool-Aid Man speaks to the birth of online mass culture as an innocent adventure.

Similar themes also appear in Rafman’s more famous (and ongoing) early work The Nine Eyes of Google Street View, in which the artist collects peculiar images captured by Google Maps’ vehicles. Scenes include a magnificent stag bounding down a coastal highway, a clown stepping into a minibus, a lone woman breastfeeding her child in a desolate landscape of dilapidated buildings. As in Rafman’s treatment of Second Life, such eclectic scenes are juxtaposed to portray the internet as an emotional voyage of discovery, marked by novel combinations of empathy and detachment, sincerity and irony, humour and desire. But in hindsight, no less striking than the spirit of wonder in these works are the ways they seem to anticipate the unravelling of online culture.

If there’s something ominous about the ornate dream palaces of Second Life, it comes from our intuition that the stimulation and belonging offered by this virtual community is also a measure of alienation. The internet gives us relations with people and things that have the detached simplicity of a game, which only become more appealing as we find niches offering social participation and identity. But inevitably, these ersatz lives become a form of compulsive retreat from the difficulties of the wider world and a source of personal and social tension. Rafman’s Second Life is a vivid metaphor for how virtual experience tempts us with the prospect of a weightless existence, one that can’t possibly be realised and must, ultimately, lead to resentment. 

Equally prescient was Rafman’s emphasis on the breakdown of meaning, as words, images, and symbols of all kinds become unmoored from any stable context. Today, all ‘content’ presents itself much like the serendipitous scenes in The Nine Eyes of Google Street View – an arbitrary jumble of trivial and profound, comic and tragic, impressions stripped of semantic coherence and flattened into passing flickers of stimulation. Symbols are no longer held firm in their meaning by clearly defined contexts where we might expect to find them, but can be endlessly mixed and refashioned in the course of online communication. This has been a great source of creativity, most obviously in the form of memes, but it has also produced neurosis. Today’s widespread sensitivity to the alleged violence concealed in language and representation, and the resulting desire to police expression, seems to reflect deep anxiety about a world where nothing has fixed significance.

These more ominous trends dominate the next phase of Rafman’s work, where we find pieces like Mainsqueeze. Here Rafman plunges us into the sordid underworld of the internet, a carnival of adolescent rebellion and perverse obsessions. A sequence of images showing a group of people passed-out drunk, one with the word “LOSER” scrawled on his forehead, captures the overall tone. In contrast to Rafman’s Second Life, where the diversity of the virtual realm could be encompassed by a single explorer, we now find insular and inaccessible communities, apparently basking in an angry sense of estrangement from the mainstream of culture. Their various transgressive gestures — swastikas, illicit porn, garish make-up — seem tinted with desperation, as though they’re more about finding boundaries than breaking them.

This portrayal of troll culture has some unsettling resonances with the boredom and anxiety of internet life today. According to Rafman himself, however, the wider relevance of these outcasts concerns their inability to confront the forces shaping their frustrated existence. Trapped in a numbing cycle of distraction, their subversive energy is channelled into escapist rituals rather than any kind of meaningful criticism of the society they seem to resent. Seen from this perspective, online life comes to resemble a form of unknowing servitude, a captive state unable to grasp the conditions of its own deprivation.

All of this points to the broader context which is always dimly present in Rafman’s work: the architecture of the virtual world itself, through which Silicon Valley facilitated the great migration onto the internet over the past fifteen-odd years. In this respect, Rafman’s documentation of Second Life becomes even more interesting, since that platform really belonged to the pre-social media Cyberpunk era, which would make it a eulogy for the utopian ethos of the early internet, with its dreams of transcending the clutches of centralised authority. The power that would crush those dreams is represented, of course, by the Google Street View car — the outrider of big tech on its endless mission to capitalise on all the information it can gather.

But how does this looming corporate presence relate to the disintegration of online culture traced by Rafman? The artist’s comments about misdirected critical potential suggest one depressing possibility: the internet is a power structure which sustains itself through our distraction, addiction and alienation. We might think of Huxley’s Brave New World, but with shitposting and doom-scrolling instead of the pleasure-drug soma. Rafman’s most recent animation work, Disaster under the Sun, seems to underscore this dystopian picture. We are given a God’s-eye perspective over a featureless grey landscape, where crowds of faceless human forms attack and merge into one another, their activities as frantic and vicious as they are lacking any apparent purpose.

It’s certainly true that the internet giants have gained immense wealth and power while overseeing the profound social and political dislocations of the last decade. But it’s also true that there are limits to how far they can benefit from anarchy. This might explain why we are now seeing the emergence of something like a formal constitutional structure to govern the internet’s most popular platforms, such as with Facebook, whose Oversight Board now even provides a court of appeal for its users — but also Twitter, Google, and now PayPal. The consolidation of centralized authority over the internet resembles the closing of a frontier, as a once-lawless space of discovery, chaos and potential is settled and brought under official control.

Rafman’s work allows us to grasp how this process of closure has also been a cultural and psychological one. We have seen how, in his art, the boundlessness of the virtual realm, and our freedom within it, are portrayed not just as a source of wonder but also of disorientation and insecurity. There have been plenty of indications that these feelings of flux have made people anxious to impose order, whether in the imagined form of conspiracy theories or by trying to enforce new norms and moral codes.

This isn’t to say that growing regulation will relax the tensions that have overtaken online culture. Given the divergence of identities and worldviews illustrated by Rafman’s depiction of the marginal internet, it seems highly unlikely that official authority can be impartial; drawing boundaries will involve taking sides and identifying who must be considered subversive. But all of this just emphasises that the revolutionary first chapter of internet life is drawing to a close. For better or worse, the particular spirit of discovery that marked the crossing of this frontier will never return.

Tooze and the Tragedy of the Left

Adam Tooze is one of the most impressive public intellectuals of our time. No other writer has the Columbia historian’s skill for laying bare the political, economic and financial sinews that tie together the modern world.

Tooze’s new book, Shutdown: How Covid Shook the World’s Economy, provides everything his readers have come to expect: a densely woven, relentlessly analytical narrative that uncovers the inner workings of a great crisis – in this case, the global crisis sparked by the Covid pandemic in 2020.

But Shutdown provides something else, too. It shows with unusual clarity that, for all his dry detachment and attention to detail, Tooze’s view of history is rooted in a deep sense of tragedy.

Towards the end of the book, Tooze reflects on the escalating “polycrisis” of the 21st century – overlapping political, economic and environmental conflagrations:

In an earlier period of history this sort of diagnosis might have been coupled with a forecast of revolution. If anything is unrealistic today, that prediction surely is. Indeed, radical reform is a stretch. The year 2020 was not a moment of victory for the left. The chief countervailing force to the escalation of global tension in political, economic, and ecological realms is therefore crisis management on an ever-larger scale, crisis-driven and ad hoc. … It is the choice between the third- and fourth-best options.

This seems at first typical of Tooze’s hard-nosed realism. He has long presented readers with a world shaped by “crisis management on an ever-larger scale.” Most of his work focuses on what, in Shutdown, he calls “functional elites” – small networks of technocratic professionals wielding enormous levers of power, whether in the Chinese Communist Party or among the bureaucrats and bankers of the global financial system.

These authorities, Tooze emphasises, are unable or unwilling to reform the dynamics of “heedless global growth” which keep plunging the world into crisis. But their ability to act in moments of extreme danger – the ability of the US Federal Reserve, for instance, to calm financial markets by buying assets at a rate of $1 million per second, as it did in March last year – is increasingly our last line of defence against catastrophe. The success or failure of these crisis managers is the difference between our third- and fourth-best options.

But when Tooze notes that radical change would have been thinkable “in an earlier period of history,” it is not without pathos. It calls to mind a historical moment that looms large in Tooze’s work. 

That moment is the market revolution of the 1980s, the birth of neoliberalism. For Tooze, this did not just bring about an economic order based on privatisation, the free movement of goods and capital, the destruction of organised labour and the dramatic growth of finance.

More fundamentally, neoliberalism was about what Tooze calls “depoliticisation.” As the west’s governing elites were overtaken by dogmas about market efficiency, the threat of inflation and the dangers of government borrowing, they hard-wired these principles into the framework of globalisation. Consequently, an entire spectrum of possibilities concerning how wealth and power might be distributed was closed off to democratic politics.

And so the inequalities created by the neoliberal order became, as Tony Blair said of globalisation, as inevitable as the seasons. Or in Thatcher’s more famous formulation, There Is No Alternative.

Tooze’s view of the present exists in the shadow of this earlier failure; it is haunted by what might have been. As he bitterly observes in Shutdown, it might appear that governments have suddenly discovered the joys of limitless spending, but this is only because the political forces that once made them nervous about doing so – most notably, a labour movement driving inflation through wage demands – have long since been “eviscerated.”

But it seems to me that Tooze’s tragic worldview reveals a trap facing the left today. It raises the question: what does it mean to accept, or merely to suspect, that radical change is off the table? 

We glimpse an answer of sorts when Tooze writes about how 2020 vindicated his own political movement, the environmentalist left. The pandemic, he claims, showed that huge state intervention against climate change and inequality is not just necessary, but possible. With all the talk of “Building Back Better” and “Green Deals,” centrist governments appear to be getting the message. Even Wall Street is “learning to love green capitalism.”

Of course, as per the tragic formula, Tooze does not imagine this development will be as transformative as advertised. A green revolution from the centre will likely be directed towards a conservative goal: “Everything must change so that everything remains the same.” The climate agenda, in other words, is being co-opted by a mutating neoliberalism. 

But if we follow the thrust of Tooze’s analysis, it’s difficult to avoid the conclusion that realistic progressives should embrace this third-best option. Given the implausibility of a genuine “antisystemic challenge” – and in light of the fragile systems of global capitalism, geopolitics and ecology which are now in play – it seems the best we can hope for is enlightened leadership by “functional elites.”

This may well be true. But I think the price of this bargain will be higher than Tooze acknowledges.

Whether it be climate, state investment, or piecemeal commitments to social justice, the guardians of the status quo have not accepted the left’s diagnosis simply because they realise change is now unavoidable. Rather, these policies are appealing because, with all their moral and existential urgency, they can provide fresh justification for the unaccountable power that will continue to be wielded by corporate, financial and bureaucratic interests. 

In other words, now that the free-market nostrums of neoliberalism 1.0 are truly shot, it is the left’s narratives of crisis that will offer a new basis for depoliticisation – another way of saying There Is No Alternative.

And therein lies the really perverse tragedy for a thinker like Tooze. If he believes the choice is survival on these terms or not at all, then he will have to agree.

The Fall of Zuma Threatens More Chaos for South Africa

This article was originally published by Unherd on 1st July 2021

It was a moment South Africans thought would never come. On Tuesday the Constitutional Court sentenced former president Jacob Zuma to 15 months in prison, after he refused to testify at an inquiry into corruption during his time in office.

When that inquiry reaches its conclusion, Zuma could face a much longer sentence — an amazing prospect. For now though, the simple willingness of the court to punish such blatant recalcitrance offers tantalising hope that the rule of law is not dead in South Africa.

The verdict was surprising given that Zuma still commands a significant power base in the ruling African National Congress party. The eye-watering levels of graft that marked his 2009-18 presidency mean there are plenty of ANC figures at every level of government who want the anti-corruption drive of his successor, Cyril Ramaphosa, to fail.

And therein lies the more ominous question posed by Tuesday’s ruling. Even if Zuma hands himself over to the authorities as instructed, he won’t do it quietly. So could this lead to an escalation of the already murderous internal politics of the ANC – an all-out civil war within the party that drags the nation into the abyss?

The Zuma presidency was a waking nightmare for those of us who prayed that, after its miraculously peaceful transition from apartheid to democracy, South Africa’s governing elite would resist the slide into gangsterism which has squandered the potential of so many African nations. This was always a danger with the ANC because, being the party of Mandela and the heroic anti-apartheid struggle, it was destined to rule virtually unopposed during the first decades of democracy.

Zuma’s infamous Nkandla homestead in KwaZulu-Natal, for which he fleeced the public purse to the tune of £14 million, offers a flavour of his regime’s conspicuous venality. More serious was his gutting of the criminal justice system, paving the way for the kind of corruption that would make a hardened kleptocrat blush. At the current inquiry, witnesses have lined up to detail how Zuma effectively handed control of much of the state to a notorious trio of shady businessmen known as the Gupta brothers. Apparently these cronies installed government ministers, siphoned money from state-owned companies and cashed in on lucrative contracts. Prosecutors claim as much as £50 billion was swindled from state coffers.

With the ANC having lost ground in recent elections, Ramaphosa’s campaign to clean up the party might be a sign of democratic pressures finally kicking in. More cynically, we might note that the president needs to purge Zuma’s faction to consolidate his own leadership. At any rate, Ramaphosa knows corruption has to be addressed if South Africa is to attract the investors it sorely needs. Youth unemployment stands at a grim 75%, while millions of its citizens have only the most rudimentary housing and sanitation. Its tax base continues to shrink as wealthier citizens flee appalling levels of violent crime.

By insisting that Zuma be subject to the law, the Constitutional Court’s latest ruling suggests a positive outcome to this saga is still possible. But it remains far from clear what direction the ANC’s internal struggle will take —  and ultimately, it’s this struggle that will determine the country’s future.

Disaster Junkies

We live in an era where catastrophe looms large in the political imagination. On the one side, we find hellacious visions of climate crisis and ecological collapse; on the other, grim warnings of social disintegration through plummeting birth rates, mass immigration and crime. Popular culture’s vivid post-apocalyptic worlds, from Cormac McCarthy’s The Road to Margaret Atwood’s The Handmaid’s Tale, increasingly echo in political discourse – most memorably in Donald Trump’s 2017 inauguration speech on the theme of “American Carnage.” For more imaginative doom-mongers there are various technological dystopias to contemplate, whether AI run amok, a digital surveillance state, or simply the replacement of physical experience with virtual surrogates. Then in 2020, with the eruption of a global pandemic, catastrophe crossed from the silver screen to the news studio, as much of the world sat transfixed by a profusion of statistics, graphs and harrowing reports of sickness and death.

If you are anything like me, the role of catastrophe in politics and culture raises endless fascinating questions. How should we explain our visceral revulsion at fellow citizens dying en masse from an infectious disease, and our contrasting apathy to other forms of large-scale suffering and death? Why can we be terrified by climate change without necessarily feeling a commensurate urgency to do something about it? Why do certain political tribes obsess over certain disasters?

It was questions like these that led me to pick up Niall Ferguson’s new book, Doom: The Politics of Catastrophe. I did this somewhat nervously, it must be said. I found one of Ferguson’s previous books extremely boring, and tend to cringe at his use of intellectual gimmicks – like his idea that the past success of Western civilisation can be attributed to six “killer apps.” Then again, Ferguson’s contrarianism does occasionally produce an interesting perspective, such as his willingness to weigh the negative aspects of the British Empire against the positive, as historians do with most other empires. But as I say, it was really the subject of this latest book that drew me in.

I might as well say upfront that I found it very disappointing. This is going to be a bad review – though hopefully not a pointless one. The flaws of this book can, I think, point us towards a richer understanding of catastrophe than Ferguson himself offers.

Firstly, Doom is not really about “the politics of catastrophe” as I understand that phrase. A few promising questions posed in the introduction – “Why do some societies and states respond to catastrophe so much better than others? Why do some fall apart, most hold together, and a few emerge stronger? Why does politics sometimes cause catastrophe?” – are not addressed in any sustained way. What this book is really about is the difficulty of predicting and mitigating statistically irregular events which cause excess deaths. That sounds interesting enough, to be sure, but there’s just one fundamental problem: Ferguson never gets to grips with what actually makes such events catastrophic, leaving a rather large hole where the subject of the book should be. 

The alarm bells start ringing when Ferguson introduces the book as “a general history of catastrophe” and, in case we didn’t grasp how capacious that sounds, tells us it will include:

not just pandemics but all kinds of disasters, from the geological (earthquakes) to the geopolitical (wars), from the biological (pandemics) to the technological (nuclear accidents). Asteroid strikes, volcanic eruptions, extreme weather events, famines, catastrophic accidents, depressions, revolutions, wars, and genocides: all life – and much death – is here.

You may be asking if there is really much of a relationship, throughout all the ages of history, between asteroid strikes, nuclear accidents and revolutions – and I’d say this gets to a pretty basic problem with tackling a subject like this. Writing about catastrophe (or disaster – the two are used as synonyms) requires finding a way to coherently group together the extremely diverse phenomena that might fall into this category. It requires, in other words, developing an understanding of what catastrophe actually means, in a way that allows for useful parallels between its different manifestations.

Ferguson seems to acknowledge this when he rounds off his list by asking “For how else are we to see our disaster [i.e. Covid] – any disaster – in proper perspective?” Yet his concept of catastrophe turns out to be circular, inconsistent and inadequate. Whatever aspect of catastrophe Ferguson happens to be discussing in a particular chapter becomes, temporarily, his definition of catastrophe as such. When he is talking about mortality, mortality becomes definitive of catastrophe (“disaster, in the sense of excess mortality, can take diverse forms and yet pose similar challenges”). Likewise when he is showing how infrequent and therefore hard to predict catastrophes are (“the rare, large scale disasters that are the subject of this book”). In Ferguson’s chapter seeking similarities between smaller and larger disasters, he seems happy to simply accept whatever is viewed as a disaster in the popular memory: the Titanic, Chernobyl, the explosion of NASA’s space shuttle Challenger.

This is not nitpicking. I’m not expecting the metaphysical rigour of Immanuel Kant. I like an ambitious, wide-ranging discussion, even if that means sacrificing some depth. But attempting this without any real thesis, or even a firm conceptual framework, risks descending into a series of aimless and confusing digressions which don’t add up to anything. And that is more or less what happens in this book.

Consider Ferguson’s chapter on “The Psychology of Political Incompetence.” After a plodding and not especially relevant summary of Tolstoy’s concluding essay in War and Peace, Ferguson briefly introduces the idea that political leaders’ power is curtailed by the bureaucratic structures they inhabit. He then cuts to a discussion of the role of ideology in creating disastrous food shortages, by way of supporting Amartya Sen’s argument that democratic regimes respond better to famines than non-democratic ones. It’s not clear how this relates to the theme of bureaucracy and leadership, but this is one of the few sections where Ferguson is actually addressing something like “the politics of catastrophe;” and when he poses the interesting question of “why Sen’s theory does not apply to all forms of disaster” it feels like we are finally getting somewhere.

Alas, as tends to be the case in this book, Ferguson doesn’t answer the question, but embarks on a series of impromptu arguments against straw men. A winding discussion of British ineptness during the two World Wars brings him to the conclusion that “Democracy may insure a country against famine; it clearly does not insure against military disaster.” Who said that it does? Then Ferguson has suddenly returned to the issue of individual leadership, arguing that “it makes little sense” to hold Churchill solely responsible for the fall of Singapore to the Japanese in 1942. Again, who said we should? Ferguson then rounds off the chapter with an almost insultingly cursory discussion of “How Empires Fall,” cramming eight empires into less than five pages, to make the highly speculative argument that imperial collapse is as unpredictable as various other kinds of disaster.

Insofar as anything holds this book together, it is the thin sinews of statistical probability models and network science. These do furnish a few worthwhile insights. Many of the events Ferguson classes as disasters follow power-law distributions, which is to say there is no regular relationship between their scale and the frequency with which they occur. So big disasters are essentially impossible to predict. In many cases, this is because they emerge from complex systems – natural, economic and social – which can unexpectedly amplify small events into enormous ones. In hindsight, these often seem to have been entirely predictable, and the Cassandras who warned of them are vindicated. But a regime that listened to every Cassandra would incur significant political costs in preparing for disasters that usually won’t materialize.

I also liked Ferguson’s observation that the key factor determining the scale of a disaster, in terms of mortality, is “whether or not there is contagion – that is, some way of propagating the initial shock through the biological networks of life or the social networks of humanity.” But his other useful comments about networks come in a single paragraph, and can be quoted without much further explanation:

If Cassandras had higher centrality [in the network], they might be more often heeded. If erroneous doctrines [i.e. misinformation] spread virally through a large social network, effective mitigation of disaster becomes much harder. Finally… hierarchical structures such as states exist principally because, while inferior to distributed networks when it comes to innovation, they are superior when it comes to defence.

I’m not sure it was necessary to have slogged through an entire chapter on network science, recycled from Ferguson’s last book, The Square and the Tower, to understand these points.

But returning to my main criticism, statistical and network analysis doesn’t really allow for meaningful parallels between different kinds of catastrophe. This is already evident in the introduction, when Ferguson states that “disaster takes too many forms for us to process with conventional approaches to risk mitigation. No sooner have we focused our minds on the threat of Salafi jihad than we find ourselves in a financial crisis originating in subprime mortgages.” As this strange comment suggests, the implied perspective of the book is that of a single government agency tasked with predicting everything from financial crises and terrorist attacks to volcanic eruptions and genocides. But no such agency exists, of course, for the simple reason that when you zoom in from lines plotted on a graph, the illusion that these risks are similar dissolves into a range of totally different phenomena attached to various concrete situations. The problem is absurdly illustrated when, having cited a statistical analysis of 315 conflicts between 1820 and 1950, Ferguson declares that in terms of predictability, “wars do indeed resemble pandemics and earthquakes. We cannot know in advance when or where a specific event will strike, nor on what scale.” Which makes it sound like we simply have no way of knowing whether the next conflict is more likely to break out in Gaza or Switzerland.

In any case, there is something patently inadequate about measuring catastrophe in terms of mortality figures and QALYs (quality-adjusted life years), as though the only thing we have in common is a desire to live for as long as possible. Not once is the destruction of culture or ways of life mentioned in the book, despite the fact that throughout history these forms of loss have loomed large in people’s sense of catastrophe. Ferguson even mentions several times that the most prolific causes of mortality are often not recognised as catastrophes – but does not seem to grasp the corollary that catastrophe is about something more than large numbers of deaths. 

Indeed, maybe the best thing that can be said about Doom is that its shortcomings help us to realise what does need to be included in an understanding of catastrophe. Throughout the book, we see such missing dimensions flicker briefly into view. In his discussion of the flu pandemic of the late 1950s, Ferguson notes in passing that the Soviet launch of the Sputnik satellite in October 1957 “may help to explain why the memory of the Asian flu has faded” in the United States. This chimes with various other hints that this pandemic was not really perceived as a catastrophe. But why? And in what sense was it competing with the Cold War in the popular imagination? Likewise, Ferguson mentions that during the 1930s the lawyer Basil O’Connor used “the latest techniques in advertising and fundraising” to turn the “horrific but relatively rare disease” of polio into “the most feared affliction of the age.” This episode is briefly contrasted to the virtual silence of the American media and political class over AIDS during the 1980s.

In fact, unacknowledged catastrophes are an unacknowledged theme of the book. It re-emerges in several intriguing mentions of the opioid epidemic in the United States, with its associated “deaths of despair.” At the same time as there was “obsessive discussion” of global warming among the American elite, Ferguson points out, “the chance of dying from an overdose was two hundred times greater than the chance of being killed by a cataclysmic storm.” He also describes the opioid crisis as “the biggest disaster of the Obama presidency,” and suggests that although “the media assigned almost no blame to Obama” for it, “such social trends did much to explain Donald J. Trump’s success.” Finally, Ferguson notes that during the current Covid crisis, the relative importance of protecting the vulnerable from the disease versus maintaining economic activity became an active front in the American culture war. 

The obvious implication of all this is that, while Ferguson does not really engage with “the politics of catastrophe,” the concept and reality of catastrophe is inherently political. There isn’t really an objective measure of catastrophe: the concept implies judging the nature and consequences of an event to be tragic. Whether or not something meets this standard often depends on who it affects and whether it fits into the emotionally compelling narratives of the day. The AIDS and opioid epidemics initially went unrecognized because their victims were homosexuals and working class people respectively. To take another example, the 1921 pogrom against the affluent African American community in Tulsa, Oklahoma, was for the longest time barely known about, let alone mourned (except of course by African Americans themselves); yet a hundred years later it is being widely recognised as a travesty. Last week’s volcanic eruption in the Democratic Republic of Congo, which may have left 20,000 people homeless, would probably be acknowledged as catastrophic by a Westerner who happened to read about it in the news. But we are much more likely to be aware of, and emotionally invested in, the disastrous Israeli-Palestinian conflict of recent weeks. 

Catastrophe, in other words, is inextricably bound up with popular perception and imagination. It is rooted in the emotions of fear, anger, sadness, horror and titillation with which certain events are experienced, remembered or anticipated. This is how we can make sense of apathy to the late-1950s flu pandemic: such hazards, as Ferguson mentions, were still considered a normal part of life rather than an exceptional danger, and people’s minds were focused on the potential escalation of the Cold War. Hence also the importance of the media in determining whether and how disasters become embedded in public discourse. While every culture has its religious and mythical visions of catastrophe (a few are mentioned in a typically fleeting discussion near the start of Doom), today Netflix and the news media have turned us into disaster junkies, giving form and content to our apocalyptic impulses. The Covid pandemic has been a fully mediated experience, an epic rollercoaster of the imagination, its personal and social significance shaped by a constant drumbeat of new information. It is because climate change cannot be made to fit this urgent tempo that it has instead been cast as a source of fatalism and dread, always looming on the horizon and inspiring millions with a sense of terrified helplessness.

Overlooking the central role of such cultural and political narratives probably meant that Ferguson’s Doom was doomed from the start. For one thing, this missing perspective immediately shows the problem with trying to compare catastrophes across all human history. Yes, there are fascinating patterns even at this scale, like the tendency of extreme ideological movements to emerge in the midst of disasters – whether the flagellant orders that sprang from the 14th century Black Death, or the spread of Bolshevism in the latter part of the First World War. But to really understand any catastrophe, we have to know what it meant to the people living through it, and this means looking at the particulars of culture, politics and religion which vary enormously between epochs. This, I would argue, is why Ferguson’s attempt to compare the Athenian plague of the late 5th century BC to the Black Death in medieval England feels rather superficial. 

And whatever the historical scope, statistics simply don’t get close to the imaginative essence of catastrophe. Whether or not a disaster actually happens is incidental to its significance in our lives; many go unnoticed, others transform culture through mere anticipation. Nor do we experience catastrophes as an aggregate of death-fearing individuals. We do so as social beings whose concerns are much more elaborate and interesting than mere life and death.

How the Celebs Rule Us

Who should we call the first “Instagram billionaire”? It’s a mark of the new Gilded Age we’ve entered that both women vying for that title belong to the same family, the illustrious Kardashian-Jenner clan. In 2019, it looked like Kylie Jenner had passed the ten-figure mark, only for Forbes to revise its estimates, declaring that Jenner had juiced her net worth with “white lies, omissions and outright fabrications.” (Her real wealth, the magazine thought, was a paltry $900 million). So, as of April this year, the accolade belongs to Jenner’s no less enterprising sister, Kim Kardashian West.

Social media has ushered in a new fusion of celebrity worship and celebrity entrepreneurship, giving rise to an elite class of “influencers” like Jenner and Kardashian West. Reality TV stars who were, in that wonderful phrase, “famous for being famous,” they now rely on their vast social media followings to market advertising space and fashion and beauty products. As such, they are closely entwined with another freshly minted elite, the tech oligarchs whose platforms are the crucial instruments of celebrity today. Word has it the good people at Instagram are all too happy to offer special treatment to the likes of the Kardashians, Justin Bieber, Taylor Swift and Lady Gaga – not to mention His Holiness the Supreme Pontiff of the Universal Church (that’s @franciscus to you and me). And there’s every reason for social media companies to accommodate their glamorous accomplices: in 2018, Jenner managed to wipe $1.3 billion off the market value of Snapchat with a single tweet questioning the platform’s popularity. 

It’s perfectly obvious, of course, what hides behind the embarrassingly thin figleaf of “influence,” and that is power. Not just financial power but social status, cultural clout and, on the tech companies’ side of the bargain, access to the eyeballs and data of huge audiences. The interesting question is where this power ultimately stems from. The form of capital being harvested is human attention; but how does the tech/influencer elite monopolise this attention? One well-known answer is through the addictive algorithms and user interfaces that turn us into slaves of our own brain chemistry; another invokes those dynamics of social rivalry, identified by the philosopher René Girard, whereby we look to others to tell us what we should want. 

But I think there’s a further factor here which needs to be explored, and it begins with the idea of charisma. In a recent piece for Tablet magazine, I argued that social media had given rise to a new kind of charismatic political leader, examples of which include Donald Trump, Jeremy Corbyn, Jordan Peterson and Greta Thunberg. My contention was that the charisma of these individuals, so evident in the intense devotion of their followers, does not stem from any innate quality of their personalities. Instead, charisma is assigned to them by online communities which, in the process of rallying around a leader, galvanise themselves into political movements.

Here I was drawing on the great German sociologist Max Weber, whose concept of “charismatic authority” describes how groups of people find coherence and structure by recognising certain individuals as special. And yet, the political leaders I discussed in the Tablet piece are far from the only examples showing the relevance of Weber’s ideas today. If anything, they are interlopers: accidental beneficiaries of a media system that is calibrated for a different type of charismatic figure, pursuing a different kind of power. I’m referring, of course, to the Kardashians, Biebers, and countless lesser “influencers” of this world. It is the twin elite of celebrities and tech giants, not the leaders of political movements, who have designed the template of charismatic authority in the social media age. 


When Weber talks about charismatic authority, he is talking about the emotional and ideological inspiration we find in other people. We are compelled to emulate or follow those individuals who issue us with a “calling” – a desire to lead our lives a certain way or aspire towards a certain ideal. To take an obvious example, think about the way members of a cult are often transfixed by a leader, dropping everything in their lives to enter his or her service; some of you will recall the scarlet-clad followers of the guru Bhagwan Shree Rajneesh in the 2018 Netflix documentary Wild Wild Country. Weber’s key observation is that this intensely subjective experience is always part of a wider social process: the “calling” of charisma, though it feels like an intimate connection with an exceptional person, is really the calling of our own urge to fit in, to grasp an identity, to find purpose and belonging. There’s a reason charismatic figures attract followers, plural. They are charismatic because they represent a social phenomenon we want to be a part of, or an aspiration our social context has made appealing. Whatever Rajneesh’s personal qualities, his cult was only possible thanks to the appeal of New Age philosophy and collectivist ways of life to a certain kind of disillusioned Westerner during the 1960s and ’70s. 

Today there’s no shortage of Rajneesh-like figures preaching homespun doctrines to enraptured audiences on Youtube. But in modern societies, charismatic authority really belongs to the domain of celebrity culture; the domain, that is, of the passionate, irrational, mass-scale worship of stars. Since the youth movements of the 1950s and ’60s, when burgeoning media industries gave the baby-boomers icons like James Dean and The Beatles, the charismatic figures who inspire entire subcultures and generations have mostly come from cinema and television screens, from sports leagues, music videos and fashion magazines. Cast your mind back to your own teenage years – the time when our need for role models is most pressing – and recall where you and your chums turned for your wardrobe choices, haircuts and values. To the worlds of politics and business, perhaps? Not likely. We may not be so easily star-struck as adults, but I’d wager most of your transformative encounters with charisma still come, if not from Hollywood and Vogue, then from figures projected into your imagination via the media apparatus of mass culture. It’s no coincidence that when a politician does gain a following through personality and image, we borrow clichés from the entertainment industry, whether hailing Barack Obama’s “movie star charisma” or dubbing Yanis Varoufakis “Greece’s rock-star finance minister.”

Celebrity charisma relies on a peculiar suspension of disbelief. We can take profound inspiration from characters in films, and on some level we know that the stars presented to us in the media (or now presenting themselves through social media) are barely less fictional. They are personae designed to harness the binding force of charismatic authority – to embody movements and cultural trends that people want to be part of. In the context of the media and entertainment business, their role is essentially to commodify the uncommodifiable, to turn our search for meaning and identity into a source of profit. Indeed, the celebrity culture of recent decades grew from the bosom of huge media conglomerates, who found that the saturation of culture by new media technologies allowed them to turn a small number of stars into prodigious brands.

In the 1980s performers like Michael Jackson and Madonna, along with sports icons like Michael Jordan, joined Hollywood actors in a class of mega celebrities. By the ’90s, such ubiquitous figures were flanked by stars catering to all kinds of specific audiences: in the UK, for instance, lad culture had premiership footballers, popular feminism had Sex and the City, Britpoppers had the Gallagher brothers and grungers had Kurt Cobain. For their corporate handlers, high-profile celebrities ensured revenues from merchandise, management rights and advertising deals, as well as reliable consumer audiences that offset the risks of more speculative ventures.

Long before social media, in other words, celebrity culture had become a thoroughly commercialised form of charismatic authority. It still relied on the ability of stars to issue their followers with a “calling” – to embody popular ideals and galvanise movements – but these roles and relationships were reflected in various economic transactions. Most obviously, where a celebrity became a figurehead for a particular subculture, people might express their membership of that subculture by buying stuff the celebrity advertised. But no less important, in hindsight, was the commodification of celebrities’ private lives, as audiences were bonded to their stars through an endless stream of “just like us” paparazzi shots, advertising campaigns, exclusive interviews and documentaries, and so on. As show-business sought to maximise the value of star power, the personae of celebrities were increasingly constructed in the mould of “real” people with human, all-too-human lives.

Which brings us back to our influencer friends. For all its claims to have opened up arts and entertainment to the masses, social media really represents another step towards a celebrity culture dominated by an elite cluster of stars. Digital tech, as we know, has annihilated older business models in media-related industries. This has concentrated even more success in the hands of the few who can command attention and drive cultural trends – who can be “influencers” – through the commodification of their personal lives. And that, of course, is exactly what platforms like Instagram are designed for. A Bloomberg report describes how the Kardashians took over and ramped up the trends of earlier decades:

Back in the 1990s, when the paparazzi were in their pomp, pictures of celebrities going about their daily lives… could fetch $15,000 a pop from tabloids and magazines… The publications would in turn sell advertising space alongside those images and rake in a hefty profit.

Thanks to social media, the Kardashians were able to cut out the middle man. Instagram let the family post images that they controlled and allowed them to essentially sell their own advertising space to brands… The upshot is that Kardashian West can make $1 million per sponsored post, while paparazzi now earn just $5 to $10 apiece for “Just Like Us” snaps.

Obviously, Instagram does not “let” the Kardashians do this out of the kindness of its heart: as platforms compete for users, it’s in their interests to accommodate the individuals who secure the largest audiences. In fact, through their efforts to identify and promote such celebrities, the social media companies are increasingly important in actually making them celebrities, effectively deciding who among the aspiring masses gets a shot at fame. Thus another report details how TikTok “assigned individual managers to thousands of stars to help with everything, whether tech support or college tuition,” while carefully coordinating with said stars to make their content go viral.

But recall, again, that the power of celebrities ultimately rests on their followers’ feeling that they’re part of something – that is the essence of their charisma. And it’s here that social media really has been revolutionary. It has allowed followers to become active communities, fused by constant communication with each other and with the stars themselves. Instagram posts revealing what some celeb had for breakfast fuel a vast web of interactions, through which their fans sustain a lively sense of group identity. Naturally, this being social media, the clearest sign of such bonding is the willingness of fans to group together like a swarm of hornets and attack anyone who criticises their idols. Hence the notorious aggression of the “Beliebers,” or fanatical Justin Bieber fans (apparently not even controllable by the pop star himself); and hence Instagram rewriting an algorithm to protect Taylor Swift from a wave of snake emojis launched by Kim Kardashian followers. This, surely, is the sinister meaning behind an e-commerce executive bragging to Forbes magazine about Kylie Jenner’s following, “No other influencer has ever gotten to the volume or had the rabid fans” that she does.

In other words, the celebrity/tech elite’s power is rooted in new forms of association and identification made possible by the internet. It’s worth taking a closer look at one act which has revealed this in an especially vivid way: the K-Pop boy band BTS (the name stands for Bangtan Sonyeondan, or Beyond the Scene in English). Preppy outfits and feline good looks notwithstanding, these guys are no lightweights. Never mind the chart-topping singles, the stadium concerts and the collaborations with Ed Sheeran; their success registers on a macroeconomic scale. According to 2018 estimates from the Hyundai Research Institute, BTS contributes $3.6 billion annually to the South Korean economy, and is responsible for around 7% of tourism to the country. No less impressive are the band’s figures for online consumption: it has racked up the most YouTube views in a 24-hour period, and an unprecedented 750,000 paying viewers for a live-streamed concert.

Those last stats are the most suggestive, because BTS’s popularity rests on a fanatical online community of followers, the “Adorable Representative M.C. for Youth” (ARMY), literally numbering in the tens of millions. In certain respects, the ARMY doesn’t resemble a fan club so much as an uncontacted tribe in the rainforest: it has its own aesthetics, norms and rituals centred around worship of BTS. All that’s missing, perhaps, is a cosmology, but the band’s management is working on that. It orchestrates something called the “Bangtan Universe”: an ongoing fictional metanarrative about BTS, unfolding across multiple forms of media, which essentially encourages the ARMY to inhabit its own alternate reality. 

Indeed, such is the ARMY’s commitment that its members take personal responsibility for BTS’s commercial success. They are obsessive about boosting the band’s chart performance, streaming new content as frequently and on as many devices as possible. The Wall Street Journal describes one fan’s devotion:  

When [the BTS song] “Dynamite” launched, Michelle Tack, 47, a cosmetics store manager from Chicopee, Massachusetts, requested a day off work to stream the music video on YouTube. “I streamed all day,” Tack says. She made sure to watch other clips on the platform in between her streaming so that her views would count toward the grand total of views. […]

“It feels like I’m part of this family that wants BTS to succeed, and we want to do everything we can do to help them,” says Tack. She says BTS has made her life “more fulfilled” and brought her closer to her two daughters, 12 and 14. 

The pay-off came last October, when the band’s management company, Big Hit Entertainment, went public, making one of the most successful debuts in the history of the South Korean stock market. And so the sense of belonging which captivated that retail manager from Massachusetts now underpins the value of financial assets traded by banks, insurance companies and investment funds. Needless to say, members of the ARMY were clamouring to buy the band’s shares too. 
It is this paradigm of charismatic authority – the virtual community bound by devotion to a celebrity figurehead – which has been echoed in politics in recent years. Most conspicuously, Donald Trump’s political project shared many features with the new celebrity culture. The parallels between Trump and a figure like Kylie Jenner are obvious, from building a personal brand off the back of reality TV fame to exaggerating his wealth and recognising the innovative potential of social media. Meanwhile, the immersive fiction of the Bangtan Universe looks like a striking precedent for the wacky world of Deep State conspiracy theories inhabited by diehard Trump supporters, which spilled dramatically into view with the invasion of the U.S. Capitol on January 6th.

As I argued in my Tablet essay – and as the chaos and inefficacy of the Trump presidency demonstrates – this social media-based form of charismatic politics is not very well suited to wielding formal power. In part, this is because the model is better suited to the kinds of power sought by celebrities: financial enrichment and cultural influence. The immersive character of online communities, which tend to develop their own private languages and preoccupations, carries no real downside for the celebrity: it just means more strongly identified fans. It is, however, a major liability in politics. The leaders elevated by such movements aren’t necessarily effective politicians to begin with, and they struggle to broaden their appeal due to the uncompromising agendas their supporters foist on them. We saw these problems not just with the Trump movement but also with the Jeremy Corbyn phenomenon in the UK, and, to an extent, with the younger college-educated liberals who influenced Bernie Sanders after 2016. 

But this doesn’t mean online celebrity culture has had no political impact. Even if virtual communities aren’t much good at practical politics, they are extremely good at producing new narratives and norms, whether rightwing conspiracy theories in the QAnon mould, or the progressive ideas about gender and identity which Angela Nagle has aptly dubbed “Tumblr liberalism.” Celebrities are key to the process whereby such innovations are exported into the wider discourse as politically charged memes. Thus Moya Lothian-McLean has described how influencers popularise feminist narratives – first taking ideas from academics and activists, then simplifying them for mass consumption and “regurgitat[ing] them via an aesthetically pleasing Instagram tile.” Once such memes reach a certain level of popularity, the really big celebrities will pick them up as part of their efforts to present a compelling personality to their followers (which is not to say, of course, that they don’t also believe in them). The line from Tumblr liberalism through Instagram feminism eventually arrives at the various celebrities who have revealed non-binary gender identities to their followers in recent years. Celebs also play an important role in legitimising grassroots political movements: last year BTS joined countless other famous figures in publicly giving money to Black Lives Matter, their $1 million donation being matched by their fans in little more than a day.

No celebrity can single-handedly move the needle of public opinion, but discourse is increasingly shaped by activists borrowing the tools of the influencer, and by influencers borrowing the language of the activist. Such charismatic figures are the most important nodes in the sprawling network of online communities that constitutes popular culture today; and through their attempts to foster an intimate connection with their followers, they provide a channel through which the political can be made to feel personal. This doesn’t quite amount to a “celebocracy,” but nor can we fully understand the nature of power today without acknowledging the authority of stars.

The Charismatic Politics of Social Media

This essay was originally published by Tablet Magazine on 21st April 2021.

In the wake of Donald Trump’s presidency, the tone of politics has become much quieter, and not just in the United States. It’s amazing how much room this man’s personality took up in the public conversation. But we should remember that what silenced Trump was not losing an election in November 2020. It was being kicked off social media after his supporters stormed the Capitol on Jan. 6.

The decision to take away Trump’s megaphone was the natural outcome of a phenomenon that emerged around 2015, when politics was transformed by a new type of charismatic leader, unique to our own era, who emerged from a culture increasingly centered around social media platforms like Facebook, Twitter, Instagram, and YouTube. But Trump is just one example, albeit a dramatic one. On the left there is also Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez, as well as Jeremy Corbyn, the former leader of the Opposition in the United Kingdom. There is the teenage climate activist Greta Thunberg and the cult philosopher Jordan Peterson. These men and women “went viral,” their individual charisma spread by a new, decentralized media system, and they galvanized movements that defined themselves as fighting against the established order.

Some of these figures’ time in the limelight is already over. But others will take their place, because the forces that gave rise to them are still here. To understand their appeal, we only have to turn to the influential German sociologist of the early 20th century, Max Weber. It was Weber who popularized “charisma” as a political term. And it is Weber’s concept of charismatic leadership that seems more relevant now than ever before.

Born 157 years ago tomorrow, Weber lived at a time when Western societies, and Germany especially, were being transformed by industrialization at a frantic pace. The central aim of his work was to understand how modern societies evolved and functioned in contrast to those of the past. Hailed as a brilliant young intellectual, Weber suffered a nervous breakdown around the turn of the 20th century, and subsequently produced a gloomy account of the modern world that was to be his greatest legacy. In The Protestant Ethic and the Spirit of Capitalism, published in 1905, he argued that the foundation of modernity was an ultrarational approach to organizing our lives and institutions, especially in pursuit of profit—a culture he compared to an “iron cage.”

It is against this backdrop that we find Weber’s most famous ideas about charismatic leadership. There was, he observed, a weak point in the iron cage of rationality. The modern principle that the right to govern comes from the people created an opening for charismatic politicians to gain immense power by winning the adoration of the masses. In his influential 1919 lecture Politics as a Vocation, Weber suggested the best example of this was the 19th-century British politician William Gladstone. But after Weber’s death in 1920, his theory of charismatic leadership achieved new renown, as it seemed to predict the dictatorships of Mussolini, Hitler, and Stalin.

A century later, Weber’s vision of “dictatorship resting on the exploitation of mass emotionality” fits nicely into the current moment, and may even have fed the reflexive portrayal of Trump as some sort of proto-fascist ruler. But in fact, this understanding of political charisma as purely a tool of modern demagogues is a misreading of Weber’s ideas.

Weber believed that charismatic individuals shape the politics of every era. A charismatic leader, he wrote in the posthumously published Economy and Society, has “a certain quality of an individual personality, by virtue of which he is set apart from ordinary men and treated as endowed with supernatural, superhuman, or at least specifically exceptional powers or qualities.” For Weber, the crucial element is to understand that charisma has a social function. He didn’t see charisma merely as a character trait belonging solely to the leader. He saw the desire to follow charismatic individuals as a necessary ingredient that binds groups of people together. Hence, when he laid out the three forms of authority that organize all societies, he included “charismatic authority” alongside legal structures and tradition.

What’s more, this mutually binding power of charisma doesn’t only sustain societies, according to Weber—it also transforms them. He actually thought the purest example of charismatic authority came from religious movements led by prophets, of the kind that shaped the history of Judaism, Christianity, and Islam. Here Weber describes charisma as a “revolutionary force,” because of the way prophets unite their followers with a sense of confidence and conviction that can shatter existing structures of authority. Charisma is like a spark that ignites sweeping social and cultural change.

This is the Weberian insight that opens the door to understanding the charismatic leaders of our own time. To grasp what makes an individual charismatic, we shouldn’t just focus on their personality: We should look at the people who are brought together by their mutual recognition of a leader.

Today, the social basis for much political ideology and activism comes from online subcultures, where people develop common worldviews based on spontaneous and widely shared feelings, like the sense of being betrayed by corrupt elites. It is from these virtual communities that political movements emerge, often by discovering and adopting a charismatic figure that galvanizes them. Through the rapid circulation of video clips and social media posts, an individual can be turned into a leader almost overnight.

What is remarkable about this paradigm is how much the standard relationship between leaders and followers has been reversed: These new movements are not created by their leaders, even though the leaders may command tremendous devotion. The followers “choose” their leader. The movements exist first in an unrealized form, and conjure up leaders that allow them to fully manifest and mobilize themselves.

Weber spoke of charisma being “recognized,” emphasizing the way leaders inspire their followers with a sense of purpose or spiritual “calling.” People gravitate toward individuals who give them a language to express their shared feelings and an example to follow. But what matters most is that, through this collective recognition of a figurehead, the followers cement their own social bond.

When we look at the charismatic leaders who have emerged in recent years, we don’t in fact see authoritarian figures who control their movements and bend their followers to their own distinct political visions. What we see are leaders who rise suddenly and unexpectedly, and whose actual beliefs are less important than their ability to embody the emotions that unite their devotees. Today it is the leaders who are shaped by the attitudes of their movements rather than the other way around.

Thus, Trump’s followers were never all that interested in how effectively he turned campaign slogans into reality. What held the MAGA movement together was not the content of Trump’s rather inconsistent and half-hearted declarations about policy, but the irreverent drama of rebellion that he enacted through the political theater of his rallies and Twitter posts. His leadership gave birth to intense internet communities, where diehard supporters cooked up their own narratives about his struggle against the establishment.

The point isn’t that Trump had no real power over his followers, which of course he did. The point is that his power depended on—and was limited to—the role of culture war icon that his movement created for him. Trump was effective in this role because he had no apparent strategy apart from giving his audience what it wanted, whether photo-ops brandishing a Bible, or nods and winks at online conspiracy theories.

Likewise, Sanders and Corbyn were both old men who unexpectedly found themselves riding tidal waves of youthful support. But their sudden rise from relative obscurity led to some awkward moments when some of their more strongly held views did not align with the wishes of their followers. Sanders’ campaign for president changed significantly from 2016 to 2020, as the mass movement that chose him as its leader molded him into a champion of their immigration preferences, which he had previously opposed. Similarly, in his time as leader of the British Labour Party from 2015 to 2020, Corbyn had to abandon his lifelong opposition to the European Union because he was now leading a movement that cherished EU membership as one of its core values.

Finally, consider two cases from outside the realm of official politics. Greta Thunberg is treated as a modern saint who has inspired millions to march through the world’s cities demanding action against climate change. But Thunberg’s enormous presence in the environmental movement is not matched by a unique philosophy or any organizational power. She went viral on social media during her 2018 strike outside the Swedish parliament, and her fame now rests on being invited by political and religious leaders to shout at them on camera about how her generation has been betrayed. “I understand that people are impressed by this movement,” Thunberg told the Times in 2019, “and I am also very impressed with the young people, but I haven’t really done anything. I have just sat down.”

Then there’s Canadian psychologist Jordan Peterson. Thanks to a few viral videos about free speech in 2016 and a series of controversial media engagements thereafter, Peterson went from teaching Christological interpretations of Disney films to being hailed as the messiah of the anti-woke movement. Peterson has continually stressed that he’s interested in psychology, not politics, yet what followers find captivating are his filmed broadsides against social justice ideology, which have been viewed millions of times on YouTube.

All these figures have been imbued with a certain magical status, which deepens the shared identity of their followers. Movements have gathered around them as totems embodying a fight against injustice and a spirit of revolt. Consequently, they command strong emotional attachments, though their followers are only interested in them insofar as they stay within the limits of the movement they were chosen to lead. The power of their charisma depends, therefore, on conforming to parameters set by the imagination of their followers.

Obviously, individual personality is not irrelevant here. Charismatic figures are generally regarded as authentic, based on the perception that they are not trying to meet social expectations or simply advance their careers. Seen in this way, it makes sense that a generation accustomed to the shifting trends and constant self-promotion of social media would warm to old-timers like Sanders and Corbyn, who had been stoically banging the same drum for decades.

Interestingly, both Trump and Thunberg have often had their personalities pathologized by critics: Trump on account of his “narcissistic personality disorder,” Thunberg on account of her autism and single-minded commitment to her cause. But supporters see these same qualities as refreshingly direct. This kind of appeal is necessary for leaders who want to offer their followers the personal “calling” which Weber saw as key to charisma. No one is inspired to take on the establishment by people who look and sound like they belong to it.

Nonetheless, following Weber’s lead, we don’t need to think about charisma as something that’s simply inherent to these influential personalities. In the sudden explosion of hype surrounding certain figures on social media, we see how the conviction that an individual is special can be created through collective affirmation. This is the virtual equivalent of the electrifying rallies and demonstrations where followers have gathered to see figures like Trump, Corbyn, and Thunberg: The energy is focused on the leader, but it comes from the crowd.

So what does all this tell us about the future of the new charismatic movement politics? Weber insisted that to exercise real power, charismatic authority cannot keep relying on the spiritual calling of committed followers. It must establish its own structures of bureaucracy and tradition. According to Weber, this is how prophetic religious movements of the past created lasting regimes.

But the way that today’s charismatic leaders are chosen for their expressive qualities means they usually aren’t suited to consolidating power in this way. There is a remarkable contrast between the sweeping legislative program being enacted by the uncharismatic Biden presidency and Trump’s failure to deliver on most of his signature proposals.

This does not mean that the movements inspired by charismatic figures are irrelevant—far from it. They will continue to influence politics by reshaping the social and cultural context in which it unfolds. In fact, the potential for these movements is all the more dramatic because, as recent years have shown, they can appear almost out of thin air. We do not know who the next charismatic leaders will be until after they have been chosen.