Aby Warburg and the Memory of Images

This essay was originally published by Engelsberg Ideas in November 2023.

In June 2020, everyone was ‘taking the knee’. Protests had erupted in the United States and spread to dozens of countries around the world, after the killing of an African American man, George Floyd, by police in Minneapolis. The act of kneeling on one leg, en masse, quickly became one of the most resonant symbols of opposition to police brutality and racial discrimination. Crowds gathered to perform the gesture in London’s Hyde Park, outside the Palais de Justice in Paris, on the streets of Sydney and before the US consulate in Milan. In Washington, nine Democrat members of Congress donned West African scarves and kneeled before cameras in the Capitol Visitor Center. Canadian Prime Minister Justin Trudeau observed the ritual, as did members of the National Guard in Hollywood and Metropolitan Police in Westminster. In Britain and beyond, ‘taking the knee’ became part of pre-match proceedings in professional football.

Just three years later, it is difficult to recount how dramatic these events were. Without any planning or coordination – at a time, in fact, when many countries had social distancing measures in place due to the Covid pandemic – a global protest movement had sprung into existence virtually overnight. It was clear that iconography, striking visual gestures and symbols broadcast through social media, had been part of the mechanism that made it happen.

Where did ‘taking the knee’ come from? The American Football player Colin Kaepernick, who began using the pose as a protest during national anthems in 2016, is credited with inventing it. Yet the gesture is freighted with older meanings. Open a book of Christian iconography and you will see kneeling men and women across the ages, symbolising devotion, humility and respect. Even in relation to the cause of racial equality, the pose has – in the ambiguous phrase used by numerous press articles – ‘a long history’. At the bottom of the Wikipedia page for ‘Taking the Knee’, for instance, there is a curious section noting two precursors. In February 1965 the Reverend Martin Luther King Jr led a group of civil rights protestors in praying on one knee outside Dallas County courthouse. Meanwhile, more than a century earlier, the image of a kneeling African man had been a popular symbol of the Evangelical movement for the abolition of slavery.

What should we make of these precedents? None of the publications that noted them suggested any actual link with Kaepernick or the later protests. Yet I could not dispel the feeling that, in the summer of 2020, the past was speaking through those crowds of kneeling people. We are today so immersed in media that a sense of imitation, of the re-enactment of some earlier episode, hangs over every public act. Even the violent scenes that accompanied those protests, the pulling down of statues and the torching of buildings, seemed to function as a kind of familiar iconography, symbols of revolution that had somehow sprung from our cinematic imagination into reality.

My attempts to make sense of all this brought me to a rather unexpected place. They brought me to the European Renaissance of the fifteenth and sixteenth centuries; or rather, to a vision of the Renaissance put forward by a singular thinker, the German art historian Aby Warburg. Though he died almost a century ago, Warburg thought about images in a way that seems highly prescient today. He recognised that the power of iconography lay in its potential for transmission across space and time, its ability to accrue new meanings in different contexts. He warned, moreover, that this power could be destructive as well as creative.

Warburg is probably best known today as a collector of books, and for good reason. Born in 1866, he was the eldest son of an illustrious Jewish banking family based in Hamburg and New York. As legend has it, the thirteen-year-old Warburg gave up his claim on the family business to his brother Max, asking only that he would never be short of money for books. By 1919 he had collected enough to establish the Library of Cultural Studies in Hamburg, a setting famous for its esoteric approach to classifying knowledge (one section was titled ‘Religion, Magic and Science’). Together with the philosopher Ernst Cassirer, whose work on symbolic forms drew heavily from this library, Warburg helped to make Hamburg one of the most innovative sites in the intellectual landscape of the Weimar Republic.

Warburg died in 1929, and four years later, after the Nazis came to power in Germany, two steamers evacuated his collection across the North Sea, carrying some 60,000 books, along with 15,000 photographs and thousands of slides. These riches ended up at what is today the Warburg Institute and Library on London’s Woburn Square. Above its entrance, you can see engraved in Greek capitals the idea that Warburg pursued throughout his career: ΜΝΗΜΟΣΥΝΗ, Mnemosyne, memory.

The most extensive exposition of Warburg’s ideas comes not from his writings but from a remarkable project called The Image Atlas of Memory (Der Bilderatlas Mnemosyne), on which he was still working at the time of his death. It consists of nearly a thousand images, arranged on a series of 63 dark blue panels, tracing the evolution of visual expression from the ancient Greco-Roman world to the Renaissance and then into 1920s photojournalism. The principle behind this work, which would influence the philosopher Walter Benjamin among others, was that visual symbols and motifs are a form of cultural memory. As they are transmitted and adapted through time, they reveal a dialogue in which the past infuses the present and the present reinterprets the past.

Warburg’s inspiration for these ideas came from Renaissance Florence, a city he considered ‘the birthplace of modern, confident, urban, mercantile civilisation’. The Florentine elite – a milieu of great banking families, artists and private scholars, for which Warburg felt a strong affinity – had developed a new synthesis of medieval Catholicism and ideals adapted from classical Greece and Rome. Warburg was especially interested in the role of iconography in creating that synthesis. He observed that Renaissance artists had not simply been inspired by ancient sources; they had borrowed a distinct catalogue of expressive gestures, ways of representing the human body that communicated movement, energy and confidence. These borrowed gestures were like visual quotations, ‘migrant images’ drawn from the past to serve the expressive needs of the present. Warburg called such images ‘pathos formulas’, and he came to see them as the vehicles of cultural memory.

One such formula was a woman in swirling garments, ‘the Nymph’ as Warburg dubbed it, a figure he documented with more than fifty images in the Memory Atlas. A vivid example appears in Domenico Ghirlandaio’s fresco, The Birth of John the Baptist, where, on the furthest right of the painting, we see a female attendant carrying a plate of fruit with an almost absurd dynamism – an ancient figure literally bursting into the late-fifteenth century. Another formula was the chaotic melee of warriors, a scene portrayed in ancient sources, such as the Arch of Constantine, and imitated by numerous Renaissance artists down to Raphael’s Battle of the Milvian Bridge in the 1520s. Still another was the figure of the pagan goddess Fortuna, a female nude with a billowing sail, as seen in the family crest of the merchant Giovanni Rucellai, where it sat uneasily above a Christian coat of arms. Fortuna had come to symbolise an ethic of prudence, calculation, and trust in one’s own judgment; the Rucellai crest was echoed in a phrase that sometimes opened Medici business documents: Col nome di Dio e di Buonaventura, ‘In the name of God and of Good Fortune’.

The search for such patterns might seem out of place today, given the vast number of images created on a daily basis, but, as Warburg realised, mass culture is perfectly suited to the transmission of pathos formulas, since the most resonant images tend to be the most widely circulated and reproduced. In the Memory Atlas, photographs of antique coins and oil paintings sit alongside modern advertising imagery and magazine clippings. A quick glance at the images that populate our own media reveals countless expressive gestures that could be called pathos formulas: the ecstasy of footballers celebrating a goal, the stance of actors assembled on a film poster, the pose of models on the covers of men’s magazines, the wooden bearing of a politician shaking someone’s hand, or the conventions dictating how people present their bodies on Instagram.

Warburg’s notion of memory emphasises that, when we need to express or represent an idea, we draw from a stock of gestures and symbols whose meanings are kept alive in our culture by the presence of earlier representations. In this way, images can travel across centuries, being recalled repeatedly into new contexts and acquiring new layers of meaning. This is what we saw with the re-emergence of Christian body language as a way of galvanising protest. Only in light of its deeper history, its long passage through time, can we understand the various connotations of ‘taking the knee’ in 2020. When performed by figures of authority, for instance, it did not express defiance but contrition and repentance; an idea we could find in any number of Hollywood films, but which we could equally trace all the way back to images of medieval kings kneeling in penance to the church.

Warburg’s study of the Renaissance also brought him to reflect on darker aspects of iconography. Like his contemporary, the sociologist Max Weber, he understood the emergence of the modern world in terms of a ‘loss of magic’. (Weber’s term, Entzauberung, is normally translated as ‘disenchantment’). Warburg saw the Renaissance as a crucial moment in this process. It was a time when astrological beliefs and practices were still widespread, with the movements of planets and stars held to predict everything from natural disasters to the course of an illness and the qualities of a person’s character. Yet the era also saw a growing tendency to view the cosmos in terms of impersonal laws, which for Warburg signalled a movement towards greater creativity and reason.

Again, iconography had played an important role. Images had always been implicated in magic, mediating between human beings and the forces of the occult. They could serve as fetishes and idols – objects imbued with magical powers – or as superstitious forms of representation, as with astrological symbols and charts that projected the presence of terrifying deities onto the planets. According to Warburg, Renaissance artists undermined the psychology of magic by presenting mythology in a new way. Their style produced a ‘distance’ between the viewer and the image, a mental space that allowed astrological figures to appear not as active, daemonic forces, but as abstract ideas. Warburg writes of Albrecht Dürer’s portrayal of Saturn, for instance, that ‘the artist has taken a magical and mythical logic and made it spiritual and intellectual’, transforming a ‘malignant, child-devouring, planetary god’ into ‘the image of the thinking, working human being’.

The genius of the Renaissance, for Warburg, was that it could retrieve the energy of past eras, whether the pagan cults, which had created figures such as the Nymph, or the magical traditions associated with astrology, but could also interpret these influences in a way that was aesthetic and rational. Developments in Warburg’s own lifetime made him realise that the modern mind was not immune from relapsing into a primitive relationship with images. When the First World War broke out in 1914, he observed with horror the role of visual propaganda in fomenting hatred, and then suffered a lengthy nervous collapse punctuated by psychotic episodes. As he later wrote to his family, the war ‘confronted me with the devastating truth that unchained, elemental man is the unconquerable ruler of this world’. Warburg did not live to see the political spectacles of the Third Reich, though he did witness Italian fascism. He was at the Vatican in 1929 when Pope Pius XI signed an accord with Mussolini, quipping to a friend that he’d had ‘the chance of my life to be present at the re-paganisation of Rome’.

Likewise, many of the contemporary images Warburg chose for the Memory Atlas hint at the darkness underlying the sophistication of modernity, albeit in a somewhat ironic way. On the panel dedicated to Mars, the ancient god of war and astrological harbinger of violence, Warburg pinned images of the huge airships that captivated the German imagination during the 1920s. The mesmerising power of modern technology, he seems to be saying, threatens to collapse the critical space of reason, imbuing images with magic once again.

This loss of distance is another concept we could apply to our own visual culture. To a far greater degree than the photojournalism of Warburg’s time, internet media tends to diminish our ability to reflect on the feelings that images provoke. Designers are incentivised to make platforms immersive, not to mention addictive, through algorithms that manipulate the limbic system; we often now consume images in a state resembling a trance. More subtly, social media blurs the generic boundaries that, in older media like films and video games, distinguish between reality and unreality. Much of what passes across our screens now, regardless of whether it is genuine, curated or simply fake, appeals to our emotions with a sense of raw reality, of unmediated and authentic life. All of this encourages those impulsive responses – of anger, jealousy, fear or desire – that notoriously govern the online world.

We should not overstate the role of images by imagining they can somehow determine events in isolation from the conditions in which they appear. To suggest that a symbol or gesture exercises power over us regardless of context is itself a kind of magical thinking. Nonetheless, seen through a Warburgian lens, iconography is still a potent force. The events of the summer of 2020 seem especially significant because they illustrated two contrasting ways that emotion can be visually communicated and spread. It can be expressed through a rhetorical gesture like ‘taking the knee’, which is, whatever one thinks of the message, a meaningful political act. It can also – as we’ve seen on many other occasions in recent years – be channelled into contagious images of violence.

Space architecture: a moonage daydream?

This essay was originally published by Engelsberg Ideas in January 2024.

In January, the architecture studio Hassell published designs for a settlement to house 144 people on the moon. Commissioned by the European Space Agency (ESA), this ‘Lunar Habitat Master Plan’ shows a series of inflatable pods in which settlers will live, eat, exercise and cultivate plants. A protective shell of lunar soil, to be 3D-printed on site, will shield the structures from devastating levels of radiation on the moon’s surface. Nor will this life be without leisure and style. Hassell’s renderings include a cocktail bar with elegant coffee tables, atmospheric lighting, and moulded chairs that carefully echo the imagery of science fiction.

This proposal is just the latest in a growing genre of architectural projects for sites beyond Earth. The last decade has seen a flurry of eye-catching designs for both the moon and Mars. Hassell has previously imagined something even more swish than its lunar cocktail bar: a Martian abode with timber-effect flooring, houseplants and minimalist furniture. It is a vision fit for an IKEA advert, down to the young couple whose relaxing evening we can glimpse through the generous picture window. Meanwhile, NASA has enlisted architects from two studios, Bjarke Ingels Group and SEArch+, to work on lunar structures. Both firms have already been involved in space-related projects, with the latter proposing a Martian community living in igloos of sculpted ice.

Another idea for Mars comes from the high-profile Foster + Partners: mound-like dwellings of fused rubble, assembled by teams of robots arriving via parachute. Perhaps the most ambitious concept, courtesy of the architects at Abiboo, imagines a vast rabbit warren of tunnels embedded in a Martian cliff-face, containing a metropolis of 250,000 people.

I could go on, but it should already be apparent that this burgeoning field of space architecture involves considerably more fantasy than concrete planning. The problem is not necessarily a lack of detail: many projects indulge in technical discussion of materials, construction methods and service systems. But given that the longest habitation of the moon to date was the three days the Apollo 17 crew spent camping in their lunar module in 1972, while no person has ever set foot on Mars, it is clear these futuristic structures and fashionable interiors really belong to the realm of science fiction. We shouldn’t be surprised that the winners of one NASA competition for a Mars base also want to ‘harness the power of comets for interplanetary transportation’, or that Abiboo’s Martian city proposal requires steel-making technology ‘that will need to be developed’.

So what exactly is the point of these designs, and why are agencies such as NASA and the ESA bothering to commission them? Ultimately, speculative space projects tell us more about architecture as a practice and an industry here on Earth than they do about future settlements on distant celestial bodies. This is not to say that such schemes will never bear fruit, but such fruits are likely to emerge much closer to home, as part of architecture’s ongoing fascination with the idea of space.

The notion of lunar or Martian architecture is not necessarily absurd. We are on the cusp of a new Space Age, and this time the aim is not just to visit other parts of the solar system, but to establish a lasting presence there. The first destination is the south pole of the moon, where there are craters containing frozen water. Last August, India’s Chandrayaan-3 mission landed an unmanned spacecraft there for the first time. The main players, however, are the United States and China, who have both assembled international coalitions for space exploration. NASA’s Artemis program, in cooperation with more than 30 nations, hopes to have astronauts at the lunar pole by the end of the decade, and built structures in place by 2040. Unlike the earlier Apollo missions, Artemis can draw on a buoyant private-sector space industry, including rockets and spacecraft designed by Elon Musk’s SpaceX. Musk has claimed that he is amassing his vast personal wealth in order to make humanity a ‘multi-planetary species’.

Meanwhile, on a similar timetable, China’s Chang’e program is hoping to establish its own International Lunar Research Station, with partners including Russia, Egypt and Pakistan. Both the American- and Chinese-led efforts have ostensibly scientific aims. The moon promises a wealth of information about the distant past of the solar system, including the formation of Earth itself, but it would be naïve to imagine this new space race is about a disinterested search for knowledge. The moon is expected to furnish valuable supplies of Helium-3, potentially useful for nuclear fusion, as well as rare earth metals. More importantly, the hydrogen and oxygen at the moon’s south pole can be used for rocket propellant, allowing space travellers to continue on to further destinations. This is no longer just idle speculation: all parties now see the moon as a stepping-stone towards the future colonisation of Mars.

The problem is that building anything in these distant places, let alone living there, involves enormous challenges of engineering and logistics. Lunar pioneers will need to endure temperature swings of hundreds of degrees Celsius between day and night, along with constant bombardment by micrometeorites. On Mars, they can look forward to average temperatures of -60C and frequent deadly sandstorms. Both places are subject to intense cancer-causing radiation, and a total absence of soil suitable for growing food. Even breathable air needs to be manufactured.

Space agencies are investigating potential infrastructure for a moon base, including water extraction, satellites for communication, and electricity from solar and nuclear plants. The difficulty of transporting heavy materials such a long way, however, means that any construction will have to make use of the sparse elements already in situ, especially ice, water, and the rubble known as regolith. There will be no workforce, so structures will need to be built robotically or by the astronauts themselves. Before we worry about the aesthetics of the lunar hospitality industry, engineers face a more basic question: can we even make bricks in these conditions? The firm that NASA is hoping will develop its construction technology, the 3D-printing specialists ICON, has only been receiving funding since 2020.

Thinking ahead can be valuable. There is little use developing construction techniques if we don’t have some sense of how we want to employ them. By visualising the endpoint, architecture can help to specify the problems that the engineers need to solve. Besides, living on the moon for an extended period will require much more than engineering. According to architect Hugh Broughton, whose work on polar research bases is providing inspiration for NASA, the deeper problem is ‘how architecture can respond to the human condition’. When designing for extreme conditions, the architect has to consider ‘how you deal with isolation, how you create a sense of community… how you support people in the darkness’. This is worth bearing in mind when we see lunar proposals that resemble a leisure centre or cruise ship. Research bases in the Antarctic include amenities such as cinemas and themed bars. Their purpose is, above all, to provide a sense of familiarity. The German firm Duravit has even considered this psychological need in its design for a Martian toilet, which resembles an earthbound one despite working in a different way.

Nonetheless, there remains an indulgent quality to recent space designs. The Ukrainian studio Makhno has envisaged a Martian settlement along the lines of a luxury spa, complete with swimming pools and bucolic rock gardens. It even has a meditation capsule for ‘recovery, restart of consciousness, and immersion into the inner’. No doubt there is publicity value for architects in these imaginative projects – like the concept cars that automakers show at fairs – but this then raises the question of what virtues architects are trying to showcase, and why space is the appropriate medium.

Decades ago, the architect and critic Kenneth Frampton noted a tendency in the avant-garde to imagine impossible projects, which he diagnosed ‘as the return of a repressed creativity, as the implosion of utopia upon itself’. Frampton was pointing to the tension between the ideals and criteria of excellence that animate modern architecture and the highly constrained reality in which architects actually operate. Architecture aspires to engage with deep problems of social life, and also to achieve aesthetic or technical brilliance. Yet, across much of the developed world, innovative approaches to the built environment are stifled by all manner of restraints, from restrictive planning codes to the profit margins of property developers. There may be more scope for originality when it comes to designing fancy hotels, art museums and corporate office buildings, but such displays tend to make architecture the handmaiden of business or entertainment.

Following Frampton’s critique, we could see space architecture as a means to demonstrate, in the realm of imagery, the ambition and purpose that are so rarely possible in real buildings. Designing for the moon or Mars offers not just the grandeur of a new frontier in human history, but the creative freedom of a genre whose boundaries are yet to be established, and which is in any case largely speculative.

More particularly, these projects allow a kind of imaginative return to the heroic age of modern architecture, the 1920s and 30s. In the turmoil of Europe following the First World War, for a short while it appeared that the designers of buildings and cities could shape the future of humanity. They felt emboldened to break with the past and develop rational, efficient and functional answers to the problems of modern society. The results ranged from the visionary doctrines of Le Corbusier to the granular ingenuity of a figure such as Grete Schütte-Lihotzky, designer of the galley kitchen, whose basic template we still use today. Space architecture provides a similar opportunity to address fundamental questions of design, from materials and form to the arrangement of functions in the overall plan, without the weight of convention obstructing the most elegant solution.

If architects can use space projects as an outlet for repressed creativity, their work serves a rather different purpose for the organisations that commission them. In an era when imagery carries immense power, digital renderings have become a political medium, called on to visualise the imagined marvels of the future. And space exploration is deeply political. It embroils the relevant agencies in a constant struggle for government funds, forcing them to confront public misgivings about the necessity and cost of their activities. Since the Soviet Union launched Sputnik, the first satellite, in 1957, such doubts have been countered in part through the imaginative appeal of the final frontier; an appeal that has only grown with the rise of visual media. In 2021, when NASA’s Perseverance rover explored the surface of Mars, social media users were thrilled to hear audio of Martian winds, and to see a horizon with Earth sparkling in the distance; that this particular photograph turned out to be fake only underscored the role of fantasy in these matters. Architectural imagery gives form to dreams of colonising the solar system. It thereby helps to justify space exploration not just to politicians, but to a wider audience of media consumers.

Space design reveals a peculiar alignment of interests between architects and their clients. The former can apply themselves to a heroic paper architecture – or rather, digital architecture – for which there is little scope on Earth; the latter, meanwhile, can justify their budgets with images that invoke an exciting horizon of possibility. It would be short-sighted, however, to consider such projects only in terms of their immediate motives and plausibility.

The consequences of human engagement with space have always been dynamic and unpredictable. Technology provides the clearest examples: NASA laboratories have inadvertently contributed to all kinds of everyday products, from camera phones and baby formula to running shoes and wireless headsets. We can already see the potential for such transfers in the drive to build in other parts of the solar system. At the University of Nebraska, a team led by the engineer Congrui Jin has been developing organic construction materials for Mars. Jin thinks that certain kinds of fungi can assemble minerals from Martian sand and grow them into bricks. If successful, such techniques could find numerous applications on Earth, starting with the repair of damaged concrete and provision of refugee housing in remote areas.

Architecture has its own story to tell about the role of space exploration in the modern world. When Kenneth Frampton made his comments about the frivolity of avant-garde ideas, he had in mind a group of students and municipal architects that appeared in Britain during the 1960s, known as Archigram. In their magazine of the same title, the group explored fantastical and futuristic schemes like the ‘Walking City’ (that is, a city with legs) and the ‘Underwater City,’ while devising all manner of pods, capsules and bubbles in which people might one day dwell. These fantasies were informed by the technology and aesthetics of the Space Age. As Archigram 3 put it, architecture needed something ‘to stand alongside the space capsules, computers and throw-away packages of an atomic/electronic age.’ The first issue had invoked ‘the poetry of countdown… [and] orbital helmets,’ while the second took inspiration from ‘Lunar architecture and shredded wheat… the radiator grille and the launching pad.’

Archigram’s provocations were often more outlandish than the space habitats of recent years. And yet, the group’s influence has been profound. Their disciples include leading practitioners of high-tech architecture, such as Norman Foster, Renzo Piano and the now-deceased Richard Rogers, who have designed many of the world’s most notable buildings over the past half-century. While Archigram dreamed of structures that would capture the ethos of the space age, these architects have designed and built them, often using materials and methods borrowed from the aerospace industry. We can even detect Archigram’s spirit in the inflatable pods that feature in many recent proposals for the moon and Mars.

Strangely then, space architecture is not really the fresh departure it appears to be. When we look at new schemes for settlements beyond Earth, we are seeing a long-extant futuristic sensibility that is now straining closer towards material reality. By the same token, even if we don’t see cocktail bars on the moon anytime soon, the ideas stimulated by such projects may still prove influential for the future of design.

The Kitchen as a Theatre of History

This essay was first published at The Pathos of Things newsletter.

In Britain, where the saying goes that every man’s home is his castle, we like to see domestic space as something to be improved. Even if we have to save until middle age to own a decent home, we do so, in part, so that we can hand it over to builders for six months, after which there will be fewer carpets and more sunrooms. 

But domestic space is also a medium through which external forces shape us, in what we mistakenly consider our private existence. Nothing illustrates this better than the evolution of the modern kitchen.

In one of my favourite essays, former Design Museum director Deyan Sudjic describes how the British middle-class kitchen was transformed over the course of a century, from the early 1900s until today. Beginning as a “no-man’s land” where suburban housewives maintained awkward relations with their working-class servants, it has become “a domestic shrine to the idea of family life and conviviality.” Whereas the kitchen’s association with work and working people once ensured that it was partitioned, physically and socially, from the rest of the home, today the image of domestic bliss tends to centre on a spacious open-plan kitchen, with its granite-topped islands, its ranks of cupboard doors in crisp colours, its barstools and dining tables.

And in the process of being transformed, the kitchen transformed us. The other thing we find in this space today is an assortment of appliances, from toasters and kettles to expensive blenders and coffee machines, reflecting a certain admiration for efficiency in domestic life. This does not seem so striking in a world where smartphones and laptops are ubiquitous, but as Sudjic points out, the kitchen was the Trojan horse through which the cult of functionality first penetrated the private sphere.

A hundred years ago, sewing machines and radios had to be disguised as antique furniture, lest they contaminate the home with the feeling of a factory. It was after the middle-classes began to occupy the formerly menial world of the kitchen that everyday communion with machines became acceptable.

In its most idealised and affluent form, the contemporary kitchen has almost become a parody of the factory. Labour in conditions of mechanised order – the very thing the respectable home once defined itself against – is now a kind of luxury, a form of self-expression and appreciation for the finer things in life. We see the same tendency in the success of cooking shows like MasterChef, and in the design of fashionable restaurants, where the kitchen is made visible to diners like a theatre.

What paved the way for this strange marriage of the therapeutic and the functional was the design of the modern kitchen. During this process, the kitchen was a stage where history’s grand struggles played out on an intimate scale, often refracted through contests over women’s role in society. The central theme of this story is how the disenchanting forces of modern rationality have also produced enchanting visions of their own, visions long associated with social progress but eventually absorbed into the realm of private aspiration.

The principles underpinning the modern kitchen came from the northern United States, where the absence of servants demanded a more systematic approach to domestic work. That approach was defined in the mid-19th century by Catharine Beecher, sister of the novelist Harriet Beecher Stowe. In her hugely popular Treatise on Domestic Economy, addressed specifically to American women, Beecher gave detailed instructions on everything from building a house to raising a child, from cooking and cleaning to gardening and plumbing. Identifying the organised, self-contained space of the ship’s galley as the ideal model for the kitchen, she provided designs for various labour-saving devices, setting in motion the process of household automation.

Beecher promoted an ethic of hard work and self-denial that she derived from a stern Calvinist upbringing. Yet she was also a leading campaigner for educational equality, establishing numerous schools and seminaries for women. Her professional approach to household work was an attempt, within the parameters of her culture, to give women a central role in the national myth of progress, though its ultimate effect was to deepen the association of women with the domestic sphere.

Something similar could be said about Christine Frederick, a former teacher from Boston, who in the early-20th century took some of Beecher’s ideas much further. Frederick’s faith was not Calvinism but the Taylorist doctrines of scientific management being implemented in American factories. What she called “household engineering” involved an obsessive analysis and streamlining of tasks as mundane as dishwashing. “I felt I was working hand in hand with the efficiency engineers in business,” she said, “and what they were accomplishing in industry, I too was accomplishing in the home.”

By this time Europe was ready for American modernity in the household, as relations between the classes and sexes shifted radically in the wake of the First World War. Women were entering a wider range of occupations, which meant fewer wives at home and especially fewer servants. At the same time, the provision of housing for the working class demanded new thinking about the kitchen.

In the late-1920s one of Christine Frederick’s disciples, the Austrian architect Margarete Schütte-Lihotzky, designed perhaps the most celebrated kitchen in history. The Frankfurt kitchen, as it came to be known, was one of many efforts at this time to repurpose the insights of American industry for the cause of socialism, for Schütte-Lihotzky was an ardent radical. She would, during her remarkably long life, offer her skills to a succession of socialist regimes, from the Soviet Union to Fidel Castro’s Cuba, as well as spending four years in a concentration camp for her resistance to Nazism.

For the Modernist architects among whom Schütte-Lihotzky worked in the 1920s, the social and technical challenge of the moment was the design of low-cost public housing. Cash-strapped government agencies were struggling to provide accommodation for war widows, disabled veterans, pensioners and slum-dwelling workers. It was for a project like this in Frankfurt that Schütte-Lihotzky produced her masterpiece, a compact, meticulously organized galley kitchen, offering a maximum of amenities in a minimum of space.

By the end of the decade, different versions of the Frankfurt kitchen had been installed in 10,000 German apartments, and were inspiring imitations elsewhere. Its innovations included a suspended lamp that moved along a ceiling runner, a height-adjustable revolving stool, and a sliding door that allowed women to observe their children in the living area. It was not devoid of style either, with ultramarine blue cupboards and drawers, ochre wall tiles and a black floor. Schütte-Lihotzky would later claim she designed it for professional women, having never done much cooking herself.

The Frankfurt kitchen was essentially the prototype of the fitted kitchens we are familiar with today, but we shouldn’t overlook what a technological marvel it represented at the time. Across much of working class Europe, a separate kitchen was unheard of (cooking and washing were done in the same rooms as working and sleeping), let alone a kitchen that combined water, gas and electricity in a single integrated system of appliances, workspaces and storage units.

But even as this template became a benchmark of modernity and social progress in Europe, the next frontier of domestic life was already appearing in the United States. During the 1920s and 30s, American manufacturers developed the design and marketing strategies for a full-fledged consumer culture, turning functional household items into objects of desire. This culture duly took off with the economic boom that followed the Second World War, as the kitchen became the symbol of a new domestic ideal.

With the growth of suburbia, community-based ways of life were replaced by the nuclear family and its neighbours, whose rituals centred on the kitchen as a place of social interaction and display. The role of women in the home, firmly asserted by various cultural authorities, served as a kind of traditional anchor in a world of change. Thanks to steel-beam construction and central heating, the kitchen could now become a large, open-plan space. It was, moreover, increasingly populated by colourful plastic-laminated surfaces, double cookers, washing machines and other novel technologies. Advertisers had learned to target housewives as masters of the family budget, so that huge lime green or salmon pink fridges became no less a status symbol than the cars whose streamlined forms they imitated.

Despite their own post-war boom, most Europeans could only dream of such domestic affluence, and dream they did, for the mass media filled their cinema and television screens with the comforts of American suburbia. This was after all the era of the Cold War, and the American kitchen was on the front line of the campaign to promote the wonders of capitalism. On the occasion of the 1959 American National Exhibition in Moscow, US vice-president Richard Nixon got the chance to lecture Soviet premier Nikita Khrushchev on the virtues of a lemon yellow kitchen designed by General Electric.

In this ideological competition, the technologies of the modern kitchen were still assumed to represent an important form of social progress; Nixon’s PR victory in the Moscow “kitchen debate” was significant because Khrushchev himself had promised to overtake the United States in the provision of domestic consumer goods. This battle for abundance was famously one that Communism would lose, but by the time the Soviet challenge had disappeared in the 1990s, it was increasingly unlikely that someone in the west would see their microwave as emblematic of a collective project of modernity.

Perhaps capitalism has been a victim of its own success in this regard; being able to buy a Chinese manufactured oven for a single day’s wages, as many people now can, makes it difficult to view that commodity as a profound achievement. Yet there is also a sense in which progress, at least in this domain, has become a private experience, albeit one that tends to emerge from a comparison with others. The beautiful gadgets that occupy the contemporary home are tools of pleasure and convenience, but also milestones in the personal quest for happiness and perfection.

The open-plan kitchen descended from mid-century America has become a desired destination for that quest in much of the developed world, even if it is often disguised in a local vernacular. It is no coincidence that in 1999, such a kitchen featured in the first episode of Grand Designs, the show which embodies the British middle-class love affair with domestic improvement. But the conspicuous efficiency and functional aesthetics of today’s kitchen dream show that it is equally indebted to Margarete Schütte-Lihotzky’s utopian efforts of the 1920s. This is a cruel irony, given that for most people today, and most of them still women, working in the kitchen is not a form of mechanised leisure but a stressful necessity, if there is time for it at all.

Then again, Schütte-Lihotzky is part of a longer story about the modern world’s fascination with rational order. When Kathryn Kish Sklar writes about Catharine Beecher’s kitchen from the 1850s, she could equally be describing the satisfaction our own culture longs to find in the well-organised home: “It demonstrates the belief that for every space there is an object, for every question an answer. It speaks of interrelated certainties and completion.”


Designing Modernity

This essay was first published at The Pathos of Things newsletter.

Somewhere in my room (I forget where exactly) there is a box containing four smartphones I’ve cycled through in the last decade or so. Each of these phones appeared shockingly new when I first removed it from its neat cuboid packaging, though now there is a clear progression of models, with the earliest almost looking fit for a museum. This effect is part of their design, of course: these objects were made to look original at first, and then, by contrast to newer models, out of date. That all have cracked screens only emphasises their consignment to the oblivion of the obsolete.

The point of this sketch is not just to make you reflect on your consumer habits. I think it represents something more profound. This series of phones is like an oblique record of the transformation of society, describing the emergence of a new paradigm for organising human existence. It captures a slice of time in which the smartphone has changed every dimension of our lives, from work and leisure to knowledge and personal relations. This small device has upended professions from taxi driving to journalism, and shaped global politics by bringing media from around the world to even the poorest countries. It has significantly altered language. It has enabled new forms of surveillance by private companies and government agencies alike. A growing number of services are inaccessible without it.

Yet with its sleek plastic shell and vibrant interfaces, the smartphone is nonetheless a formidable object of desire: a kind of gateway to the possibilities of the 21st century. Ultimately, what it represents is paradoxical. An exhilarating sense of novelty, progress and opportunity; but also the countless adaptations we have to make as technology reshapes our lives, the new systems into which we are forced to fit ourselves.

To understand how a designed object can have this kind of power, defining both the practical and imaginative horizons of our age, we have to look beyond the immediate circumstances in which it appeared. The smartphone is a truly modern artefact: modern not just in the sense that it represents something distinctive about this era, but modern in another, deeper sense too. It belongs to a longer chapter of history, modernity, which is composed of moments that feel “modern” in their own ways.

The story of modernity shows us the conditions that enable design to shape our lives today. But the reverse is also true: the growing power of design is crucial to understanding modernity itself.


The very idea of design, as we understand it now, points to what is fundamentally at stake in modernity. To say that something is designed implies that it is not natural; that it is artificial, conceived and constructed in a certain way for a human purpose. Something which is not designed might be some form of spontaneous order, like a path repeatedly trodden through a field; but we still view such order as in some sense natural. The other antonym of the designed is the disordered, the chaotic.  

These contrasts are deeply modern. If we wind the clock back a few centuries – and in many places, much less than that – a hard distinction between human order and nature or chaos becomes unfamiliar. In medieval Europe, for instance, design and its synonyms (form, plan, intention) came ultimately from a transcendent order, ordained by God, that was manifest in nature and society alike. Human designs, such as the ornamentation of Gothic cathedrals or the symbols and trappings of noble rank, drew their meaning from that transcendent order.

In practical terms though, the question of where order came from was really a question about the authority of the past. It was the continuity of customs, traditions, and social structures in general which provided evidence that order came from somewhere beyond society, that it was natural. This in turn meant the existing order, inherited from the past, placed constraints on what human ambition could fathom.

To be modern, by contrast, is to view the world without such limitations. It is to view the world as something human beings must shape, or design, according to their own goals.

This modern spirit, as it is sometimes called, was bubbling up in European politics and philosophy over centuries. But it could only be fully realised after a dramatic rupture from the past, and this came around the turn of the 19th century. The French Revolution overturned the established order with its ancient hierarchies across large parts of Europe. It spread the idea that the legitimacy of rulers came from “the people” or “the nation,” a public whose desires and expectations made politics increasingly volatile. At the same time, the seismic changes known as the Industrial Revolution were underway. There emerged an unpredictable, dynamic form of capitalism, transforming society with its generation of new technologies, industries and markets.

These developments signalled a world that was unmistakably new and highly unstable. The notion of a transcendent order inherited from the past became absurd, because the past was clearly vanishing. What replaced it was the modern outlook that, in its basic assumptions, we still have today. This outlook assumes the world is constantly changing, and that human beings are responsible for giving it order, preventing it from sliding into chaos.

Modernity was and is most powerfully expressed in certain experiences of space and time. It is rooted in artificial landscapes, worlds built and managed by human beings, of which cities are still the best example. And since it involves constant change, modernity creates a sense of the present as a distinct moment with its own fashions, problems and ideas; a moment that is always slipping away into a redundant past, giving way to an uncertain future. “Modernity,” in the poet Charles Baudelaire’s famous expression, “is the transient, the fleeting, the contingent.”


Design was present at the primal scenes of modernity. The French Revolutionaries, having broken dramatically with the past, tried to reengineer various aspects of social life. They devised new ways of measuring space (the metric system) and time (the revolutionary calendar, beginning at Year Zero, and the decimal clock). They tried to establish a new religion called the Cult of the Supreme Being, for which the artist Jacques-Louis David designed sets and costumes.

Likewise, the Industrial Revolution emerged in part through the design activities of manufacturers. In textiles, furniture, ceramics and print, entrepreneurs fashioned their goods for the rising middle-classes, encouraging a desire to display social status and taste. They devised more efficient production processes to increase profits, ushering in the age of factories and machines.

These early examples illustrate forces that have shaped design to this day. The French Revolution empowered later generations to believe that radical change could be conceived and implemented. In its more extreme phases, it also foreshadowed the attempts of some modern regimes to demolish an existing society and design a new one. This utopian impulse towards order and perfection is the ever-present dark side of design, in that it risks treating people as mere material to be moulded according to an abstract blueprint. Needless to say, design normally takes place on a much more granular level, and with somewhat less grandiose ambitions. 

Modern politics and commerce both require the persuasion of large groups of people, to engineer desire, enthusiasm, fear and trust. This is the realm of propaganda and advertising, a big part of what the aesthetic design of objects and spaces tries to achieve. But modern politics and commerce also require efficient, systematic organisation, to handle complexity and adapt to competition and change. Here design plays its more functional role of devising processes and tools.

Typically we find design practices connected in chains or webs, with functional and aesthetic components. Such is the connection between the machine humming in the factory and the commodity gleaming in the shop window, between urban infrastructure and the facades which project the glory of a regime, between software programmes and the digital interface that keeps you scrolling.

But modernity also creates space for idealism. Modern people have an acute need of ideals, whether or not they can be articulated or made consistent, because modern people have an acute need to feel that change is meaningful.

The modern mind anticipates constant change, and understands order as human, but by themselves these principles are far from reassuring. Each generation experiences them through the loss of a familiar world to new ideas, new technologies, new social and cultural patterns. We therefore need a way to understand change as positive, or at least a sense of what positive change might look like (even if that means returning to the past). Modernity creates a need for horizons towards which we can orient ourselves: a vision of the future in relation to which we can define who we are.

Such horizons can take the form of a collective project, where people feel part of a movement aiming at a vision of the future. But for a project to get off the ground, it again needs design for persuasion and efficiency. From Napoleon Bonaparte’s Empire Style furniture, with which he fitted out a vast army of bureaucrats, to Barack Obama’s pioneering Internet campaigns, successful leaders have used a distinctive aesthetic style and careful planning to bring projects to life.

Indeed, the search for effective design is one of modernity’s common denominators, creating an overlap between very different visions of society. In the aftermath of the Russian Revolution of October 1917, the ideals of communist artists and designers diverged from those dominant in the capitalist west. But the similarities between Soviet and western design in the 1920s and 30s are as striking as the differences. Communist propaganda posters and innovative capitalist advertising mirrored one another. Soviet industrial centres used the same principles of efficiency as the factories of Ford Motor Company in the United States. There was even much in common between the 1935 General Plan for Moscow and the redevelopment of Paris in the 1850s, from the rationalisation of transport arteries to the preference for neoclassical architecture.

But horizons can also be personal. The basis of consumerism has long been to encourage individuals to see their own lives as a trajectory of self-improvement, which can be measured by having the latest products and moving towards the idealised versions of ourselves presented in advertising. At the very least, indulging in novelty can help us feel part of the fashions and trends that define “the now”: a kind of unspoken collective project with its own sense of forward movement that consumerism arranges for us.


Above all though, design has provided horizons for modern people through technology. Technological change is a curiously two-sided phenomenon, epitomising our relative helplessness in the face of complex processes governing the modern world, while also creating many of the opportunities and material improvements that make modern ways of life desirable. Technology embodies the darkest aspects of modernity – alienation, exploitation, the constant displacement of human beings – as well as the most miraculous and exhilarating.

Design gives technology its practical applications and its aesthetic character. A series of design processes are involved, for instance, in turning the theory of internal combustion into an engine, combining that engine with countless other forms of engineering to produce an aeroplane, and finally, making the aeroplane signify something in the imagination of consumers. In this way, design determines the forms that technology will take, but also shapes the course of technological change by influencing how we respond to it.

Technology can always draw on a deep well of imaginative power, despite its ambiguous nature, because it ties together the two core modern ideals: reason and progress. Reason essentially describes a faith that human beings have the intellectual resources to shape the world according to their goals. Progress, meanwhile, describes a faith that change is unfolding in a positive direction, or could be made to do so. By giving concrete evidence of what reason can achieve, technology makes it easier to believe in progress.

But a small number of artefacts achieve something much greater. They dominate the horizons of their era, defining what it means to be modern at that moment. These artefacts tend to represent technological changes that are, in a very practical sense, transforming society. More than that, they package revolutionary technology in a way that communicates empowerment, turning a disorientating process of change into a new paradigm of human potential.

One such artefact was the railway, the most compelling symbol of 19th century industrial civilisation, its precise schedules and remorseless passage across continents transforming the meaning of time and space. Another was the factory, which in the first half of the 20th century became an aesthetic and political ideal, providing Modernist architects as well as dictators with a model of efficiency, mass participation and material progress. And probably the most iconic product ever to emerge from a factory was the automobile, which, especially in the United States, served for decades as an emblem of modern freedom and prosperity, its streamlined form copied in everything from kitchen appliances to radios.   

Streamlining: the Zephyr electric clock, designed by Kem Weber in the 1930s, shows the influence of automobile forms in other design areas.

I will write in more detail about such era-defining artefacts in later instalments of this newsletter. For now, I only want to say that I believe the smartphone also belongs in this series.

Obviously the smartphone arrived in a world very different from the factory or car. The western experience is now just one among numerous distinct modernities, from East Asia to Latin America. For those of us who are in the west, social and cultural identity is no longer defined by ideas like nation or class, but increasingly by the relations between individuals and corporate business, mediated by an immersive media environment.

But the smartphone’s conquest of society implies that this fragmented form of modernity still sustains a collective imagination. What we have in common is precisely what defines the smartphone’s power: a vision of compact individual agency in a fluid, mobile, competitive age. The smartphone is like a Swiss army knife for the ambitious explorer of two worlds, the physical and the virtual; it offers self-sufficiency to the footloose traveller, and access to the infinite realms of online culture. It provides countless ways to structure and reflect on individual life, with its smorgasbord of maps, photographs, accounts and data. It allows us to seal ourselves in a personal enclave of headphones and media wherever we may be.

Yet the smartphone also communicates a social vision of sorts. One of its greatest achievements is to relieve the tension between personal desire and sociability, since we can be in contact with scores of others, friends and strangers alike, even as we pursue our own ends. It allows us to imagine collective life as flashes of connectivity between particles floating freely through distant reaches of the world.

It is not uniquely modern for a society to find its imagined centre in a singular technological and aesthetic achievement, as Roland Barthes suggested in the 1950s by comparing a new model Citroën to the cathedrals of medieval Europe. The difference is that, in modernity, such objects can never be felt to reflect a continuous, transcendent order. They must always point towards a future very different from the present, and as such, towards their own obsolescence.

The intriguing question raised by the smartphone is whether the next such artefact will have a physical existence at all, or will emerge on the other side of the door opened by the touch screen, in the virtual world. 

This essay was first published at The Pathos of Things newsletter.

How the Internet Turned Sour: Jon Rafman and the Closing of the Digital Frontier

This essay was first published by IM1776 on 17th August 2021.

A tumble-drier is dragged out into someone’s garden and filled with something heavy — a brick perhaps. After setting it spinning, a figure in a camouflage jacket and protective face visor retreats from the camera frame. Immediately the machine begins to shudder violently, and soon disintegrates as parts fly off onto the surrounding lawn. 

This is the opening shot of Mainsqueeze, a 2014 video collage by the Canadian artist Jon Rafman. What comes after is no less unsettling: a young woman holds a small shellfish, stroking it affectionately, before placing it on the ground and crushing it slowly under her heel; an amateur bodybuilder, muscles straining grotesquely, splits a watermelon between his thighs. 

Rafman, concerned about the social and existential impact of technology on contemporary life, discovered these and many other strange performances while obsessively trawling the subaltern corners of the internet — communities of trolls, pranksters and fetishists. The artist’s aim, however, isn’t to ridicule these characters as freaks: to the contrary, he maintains: “The more marginal, the more ephemeral the culture is, the more fleeting the object is… the more it can actually reflect and reveal ‘culture at large.’” What looks at first like a glimpse into the perverse fringes is really meant to be a portrait of online culture in general: a fragmented world of niche identities and uneasy escapism, where humor and pleasure carry undercurrents of aggression and despair. With such an abundance of stimulation, it’s difficult to say where satisfaction ends and enslavement begins.

Even as we joke about the pathologies of online life, we often lose sight of the depressing arc the internet revolution has followed during the past decade. It’s impossible to know exactly what lies behind the playful tone of Twitter and the carefree images of Instagram, but judging by the personal stories we hear, there’s no shortage of addiction (to social media, porn, smartphones), identity crisis, and anxiety about being judged or exposed. It seems much of our online existence is now characterized by the same sense of hyper-alert boredom, claustrophobia and social estrangement that Rafman found at the margins of the internet years ago.

Indeed, the destructive impulses of Rafman’s trolls seem almost quaint by comparison to the shaming and malicious gossip we take for granted on social media. And whereas a plurality of outlooks and personalities was once the glory of the internet, today every conceivable subject, from art and sports to haircuts, food, and knitting, is reified as a divisive issue within a vast political metanarrative.

In something of an ironic twist, last year Rafman himself was dropped or suspended by numerous galleries following accusations of inappropriate sexual behavior, leveled through the anonymous Instagram account Surviving the Artworld (which publishes allegations of abusive behavior in the art industry). The accusers say they felt taken advantage of by the artist; Rafman insists that there was a misunderstanding. It’s always hard to know what to make of such cases, but that social media now serves as a mechanism for this kind of summary justice seems symptomatic of the social disintegration portrayed in works like Mainsqueeze.

Even if these accusations mark the end of Rafman’s career, his efforts to document online culture now seem more valuable than ever. His art gives us a way of thinking about the internet and its discontents that goes beyond manipulative social media algorithms, ideological debasement or the culture wars. The artist’s work shows the evolution of the virtual realm above all as a new chapter of human experience, seeking to represent the structures of feeling that made this world so enticing and, ultimately, troubled.

The first video by Rafman I came across reminded me of Swift’s Gulliver’s Travels. Begun in 2008, the visionary Kool-Aid Man in Second Life consists of a series of tours through the virtual world platform Second Life, where users have designed a phantasmagorical array of settings in which their avatars can lead, as the name suggests, another life. In the video, our guide is Rafman’s own avatar, the famous Kool-Aid advertising mascot (a jug of red liquid with a weird rictus grin) — a protagonist that reminds us we’ve entered an era where, as Rafman puts it, “different symbols float around equally and free from the weight of history.” For the entire duration, Kool-Aid Man wanders aimlessly through a surreal, artificial universe, sauntering through magical forests and across empty plains, through run-down cityscapes and futuristic metropolises, placidly observing nightclub dance floors, ancient temples, and the endless stages where the denizens of Second Life perform their sexual fantasies.

Kool-Aid Man in Second Life is best viewed against the backdrop of the great migration onto the internet which started in the mid-2000s, facilitated by emerging tech giants like Amazon, Google and Facebook. For the great majority of people, this was when the internet ceased being merely a toolbox for particular tasks and became part of everyday life (the art world jargon for this was ‘post-internet’). The artwork can be seen as a celebration of the curiosity, fun, and boundless sense of possibility that accompanied this transition. Humanity was stepping en masse out of the limits of physical space, and what it found was both trivial and sublime: a kitsch world of selfies and cute animals as well as effortless new forms of association and access to knowledge. The euphoric smile of Kool-Aid Man speaks to the birth of online mass culture as an innocent adventure.

Similar themes also appear in Rafman’s more famous (and ongoing) early work The Nine Eyes of Google Street View, in which the artist collects peculiar images captured by Google Maps’ vehicles. Scenes include a magnificent stag bounding down a coastal highway, a clown stepping into a minibus, a lone woman breastfeeding her child in a desolate landscape of dilapidated buildings. As in Rafman’s treatment of Second Life, such eclectic scenes are juxtaposed to portray the internet as an emotional voyage of discovery, marked by novel combinations of empathy and detachment, sincerity and irony, humour and desire. But in hindsight, no less striking than the spirit of wonder in these works are the ways they seem to anticipate the unravelling of online culture.

If there’s something ominous about the ornate dream palaces of Second Life, it comes from our intuition that the stimulation and belonging offered by this virtual community are also a measure of alienation. The internet gives us relations with people and things that have the detached simplicity of a game, which only become more appealing as we find niches offering social participation and identity. But inevitably, these ersatz lives become a form of compulsive retreat from the difficulties of the wider world and a source of personal and social tension. Rafman’s Second Life is a vivid metaphor for how virtual experience tempts us with the prospect of a weightless existence, one that can’t possibly be realised and must, ultimately, lead to resentment.

Equally prescient was Rafman’s emphasis on the breakdown of meaning, as words, images, and symbols of all kinds become unmoored from any stable context. Today, all ‘content’ presents itself much like the serendipitous scenes in The Nine Eyes of Google Street View – an arbitrary jumble of the trivial and the profound, the comic and the tragic, impressions stripped of semantic coherence and flattened into passing flickers of stimulation. Symbols are no longer held firm in their meaning by clearly defined contexts where we might expect to find them, but can be endlessly mixed and refashioned in the course of online communication. This has been a great source of creativity, most obviously in the form of memes, but it has also produced neurosis. Today’s widespread sensitivity to the alleged violence concealed in language and representation, and the resulting desire to police expression, seems to reflect deep anxiety about a world where nothing has fixed significance.

These more ominous trends dominate the next phase of Rafman’s work, where we find pieces like Mainsqueeze. Here Rafman plunges us into the sordid underworld of the internet, a carnival of adolescent rebellion and perverse obsessions. A sequence of images showing a group of people passed out drunk, one with the word “LOSER” scrawled on his forehead, captures the overall tone. In contrast to Rafman’s Second Life, where the diversity of the virtual realm could be encompassed by a single explorer, we now find insular and inaccessible communities, apparently basking in an angry sense of estrangement from the mainstream of culture. Their various transgressive gestures — swastikas, illicit porn, garish make-up — seem tinged with desperation, as though they’re more about finding boundaries than breaking them.

This portrayal of troll culture has some unsettling resonances with the boredom and anxiety of internet life today. According to Rafman himself, however, the wider relevance of these outcasts concerns their inability to confront the forces shaping their frustrated existence. Trapped in a numbing cycle of distraction, their subversive energy is channelled into escapist rituals rather than any kind of meaningful criticism of the society they seem to resent. Seen from this perspective, online life comes to resemble a form of unknowing servitude, a captive state unable to grasp the conditions of its own deprivation.

All of this points to the broader context which is always dimly present in Rafman’s work: the architecture of the virtual world itself, through which Silicon Valley has facilitated the great migration onto the internet over the past fifteen-odd years. In this respect, Rafman’s documentation of Second Life becomes even more interesting, since that platform really belonged to the pre-social media cyberpunk era, which makes the work a eulogy for the utopian ethos of the early internet, with its dreams of transcending the clutches of centralised authority. The power that would crush those dreams is represented, of course, by the Google Street View car — the outrider of big tech on its endless mission to capitalise on all the information it can gather.

But how does this looming corporate presence relate to the disintegration of online culture traced by Rafman? The artist’s comments about misdirected critical potential suggest one depressing possibility: the internet is a power structure which sustains itself through our distraction, addiction and alienation. We might think of Huxley’s Brave New World, but with shitposting and doom-scrolling instead of the pleasure-drug soma. Rafman’s most recent animation work, Disaster under the Sun, seems to underscore this dystopian picture. We are given a God’s-eye perspective over a featureless grey landscape, where crowds of faceless human forms attack and merge into one another, their activities as frantic and vicious as they are purposeless.

It’s certainly true that the internet giants have gained immense wealth and power while overseeing the profound social and political dislocations of the last decade. But it’s also true that there are limits to how far they can benefit from anarchy. This might explain why we are now seeing the emergence of something like a formal constitutional structure to govern the internet’s most popular platforms: most obviously at Facebook, whose Oversight Board now even provides a court of appeal for its users, but also at Twitter, Google, and now PayPal. The consolidation of centralized authority over the internet resembles the closing of a frontier, as a once-lawless space of discovery, chaos and potential is settled and brought under official control.

Rafman’s work allows us to grasp how this process of closure has also been a cultural and psychological one. We have seen how, in his art, the boundlessness of the virtual realm, and our freedom within it, are portrayed not just as a source of wonder but also of disorientation and insecurity. There have been plenty of indications that these feelings of flux have made people anxious to impose order, whether in the imagined form of conspiracy theories or by trying to enforce new norms and moral codes.

This isn’t to say that growing regulation will relax the tensions that have overtaken online culture. Given the divergence of identities and worldviews illustrated by Rafman’s depiction of the marginal internet, it seems highly unlikely that official authority can be impartial; drawing boundaries will involve taking sides and identifying who must be considered subversive. But all of this just emphasises that the revolutionary first chapter of internet life is drawing to a close. For better or worse, the particular spirit of discovery that marked the crossing of this frontier will never return.

How the Celebs Rule Us

Who should we call the first “Instagram billionaire”? It’s a mark of the new Gilded Age we’ve entered that both women vying for that title belong to the same family, the illustrious Kardashian-Jenner clan. In 2019, it looked like Kylie Jenner had passed the ten-figure mark, only for Forbes to revise its estimates, declaring that Jenner had juiced her net worth with “white lies, omissions and outright fabrications.” (Her real wealth, the magazine thought, was a paltry $900 million). So, as of April this year, the accolade belongs to Jenner’s no less enterprising sister, Kim Kardashian West.

Social media has ushered in a new fusion of celebrity worship and celebrity entrepreneurship, giving rise to an elite class of “influencers” like Jenner and Kardashian West. Reality TV stars who were, in that wonderful phrase, “famous for being famous,” they now rely on their vast social media followings to market advertising space and fashion and beauty products. As such, they are closely entwined with another freshly minted elite, the tech oligarchs whose platforms are the crucial instruments of celebrity today. Word has it the good people at Instagram are all too happy to offer special treatment to the likes of the Kardashians, Justin Bieber, Taylor Swift and Lady Gaga – not to mention His Holiness the Supreme Pontiff of the Universal Church (that’s @franciscus to you and me). And there’s every reason for social media companies to accommodate their glamorous accomplices: in 2018, Jenner managed to wipe $1.3 billion off the market value of Snapchat with a single tweet questioning the platform’s popularity. 

It’s perfectly obvious, of course, what hides behind the embarrassingly thin figleaf of “influence,” and that is power. Not just financial power but social status, cultural clout and, on the tech companies’ side of the bargain, access to the eyeballs and data of huge audiences. The interesting question is where this power ultimately stems from. The form of capital being harvested is human attention; but how does the tech/influencer elite monopolise this attention? One well-known answer is through the addictive algorithms and user interfaces that turn us into slaves of our own brain chemistry; another invokes those dynamics of social rivalry, identified by the philosopher René Girard, whereby we look to others to tell us what we should want. 

But I think there’s a further factor here which needs to be explored, and it begins with the idea of charisma. In a recent piece for Tablet magazine, I argued that social media had given rise to a new kind of charismatic political leader, examples of which include Donald Trump, Jeremy Corbyn, Jordan Peterson and Greta Thunberg. My contention was that the charisma of these individuals, so evident in the intense devotion of their followers, does not stem from any innate quality of their personalities. Instead, charisma is assigned to them by online communities which, in the process of rallying around a leader, galvanise themselves into political movements.

Here I was drawing on the great German sociologist Max Weber, whose concept of “charismatic authority” describes how groups of people find coherence and structure by recognising certain individuals as special. And yet, the political leaders I discussed in the Tablet piece are far from the only examples showing the relevance of Weber’s ideas today. If anything, they are interlopers: accidental beneficiaries of a media system that is calibrated for a different type of charismatic figure, pursuing a different kind of power. I’m referring, of course, to the Kardashians, Biebers, and countless lesser “influencers” of this world. It is the twin elite of celebrities and tech giants, not the leaders of political movements, who have designed the template of charismatic authority in the social media age. 


When Weber talks about charismatic authority, he is talking about the emotional and ideological inspiration we find in other people. We are compelled to emulate or follow those individuals who issue us with a “calling” – a desire to lead our lives a certain way or aspire towards a certain ideal. To take an obvious example, think about the way members of a cult are often transfixed by a leader, dropping everything in their lives to enter his or her service; some of you will recall the scarlet-clad followers of the guru Bhagwan Shree Rajneesh in the 2018 Netflix documentary Wild Wild Country. Weber’s key observation is that this intensely subjective experience is always part of a wider social process: the “calling” of charisma, though it feels like an intimate connection with an exceptional person, is really the calling of our own urge to fit in, to grasp an identity, to find purpose and belonging. There’s a reason charismatic figures attract followers, plural. They are charismatic because they represent a social phenomenon we want to be a part of, or an aspiration our social context has made appealing. Whatever Rajneesh’s personal qualities, his cult was only possible thanks to the appeal of New Age philosophy and collectivist ways of life to a certain kind of disillusioned Westerner during the 1960s and ’70s. 

Today there’s no shortage of Rajneesh-like figures preaching homespun doctrines to enraptured audiences on YouTube. But in modern societies, charismatic authority really belongs to the domain of celebrity culture; the domain, that is, of the passionate, irrational, mass-scale worship of stars. Since the youth movements of the 1950s and ’60s, when burgeoning media industries gave the baby-boomers icons like James Dean and The Beatles, the charismatic figures who inspire entire subcultures and generations have mostly come from cinema and television screens, from sports leagues, music videos and fashion magazines. Cast your mind back to your own teenage years – the time when our need for role models is most pressing – and recall where you and your chums turned for your wardrobe choices, haircuts and values. To the worlds of politics and business, perhaps? Not likely. We may not be so easily star-struck as adults, but I’d wager most of your transformative encounters with charisma still come, if not from Hollywood and Vogue, then from figures projected into your imagination via the media apparatus of mass culture. It’s no coincidence that when a politician does gain a following through personality and image, we borrow clichés from the entertainment industry, whether hailing Barack Obama’s “movie star charisma” or dubbing Yanis Varoufakis “Greece’s rock-star finance minister.”

Celebrity charisma relies on a peculiar suspension of disbelief. We can take profound inspiration from characters in films, and on some level we know that the stars presented to us in the media (or now presenting themselves through social media) are barely less fictional. They are personae designed to harness the binding force of charismatic authority – to embody movements and cultural trends that people want to be part of. In the context of the media and entertainment business, their role is essentially to commodify the uncommodifiable, to turn our search for meaning and identity into a source of profit. Indeed, the celebrity culture of recent decades grew from the bosom of huge media conglomerates, who found that the saturation of culture by new media technologies allowed them to turn a small number of stars into prodigious brands.

In the 1980s performers like Michael Jackson and Madonna, along with sports icons like Michael Jordan, joined Hollywood actors in a class of mega celebrities. By the ’90s, such ubiquitous figures were flanked by stars catering to all kinds of specific audiences: in the UK, for instance, lad culture had premiership footballers, popular feminism had Sex and the City, Britpoppers had the Gallagher brothers and grungers had Kurt Cobain. For their corporate handlers, high-profile celebrities ensured revenues from merchandise, management rights and advertising deals, as well as reliable consumer audiences that offset the risks of more speculative ventures.

Long before social media, in other words, celebrity culture had become a thoroughly commercialised form of charismatic authority. It still relied on the ability of stars to issue their followers with a “calling” – to embody popular ideals and galvanise movements – but these roles and relationships were reflected in various economic transactions. Most obviously, where a celebrity became a figurehead for a particular subculture, people might express their membership of that subculture by buying stuff the celebrity advertised. But no less important, in hindsight, was the commodification of celebrities’ private lives, as audiences were bonded to their stars through an endless stream of “just like us” paparazzi shots, advertising campaigns, exclusive interviews and documentaries, and so on. As show business sought to maximise the value of star power, the personae of celebrities were increasingly constructed in the mould of “real” people with human, all-too-human lives.

Which brings us back to our influencer friends. For all its claims to have opened up arts and entertainment to the masses, social media really represents another step towards a celebrity culture dominated by an elite cluster of stars. Digital tech, as we know, has annihilated older business models in media-related industries. This has concentrated even more success in the hands of the few who can command attention and drive cultural trends – who can be “influencers” – through the commodification of their personal lives. And that, of course, is exactly what platforms like Instagram are designed for. A Bloomberg report describes how the Kardashians took over and ramped up the trends of earlier decades:

Back in the 1990s, when the paparazzi were in their pomp, pictures of celebrities going about their daily lives… could fetch $15,000 a pop from tabloids and magazines… The publications would in turn sell advertising space alongside those images and rake in a hefty profit.

Thanks to social media, the Kardashians were able to cut out the middle man. Instagram let the family post images that they controlled and allowed them to essentially sell their own advertising space to brands… The upshot is that Kardashian West can make $1 million per sponsored post, while paparazzi now earn just $5 to $10 apiece for “Just Like Us” snaps.

Obviously, Instagram does not “let” the Kardashians do this out of the kindness of its heart: as platforms compete for users, it’s in their interests to accommodate the individuals who secure the largest audiences. In fact, through their efforts to identify and promote such celebrities, the social media companies are increasingly important in actually making them celebrities, effectively deciding who among the aspiring masses gets a shot at fame. Thus another report details how TikTok “assigned individual managers to thousands of stars to help with everything, whether tech support or college tuition,” while carefully coordinating with said stars to make their content go viral.

But recall, again, that the power of celebrities ultimately rests on their followers’ feeling that they’re part of something – that is the essence of their charisma. And it’s here that social media really has been revolutionary. It has allowed followers to become active communities, fused by constant communication with each other and with the stars themselves. Instagram posts revealing what some celeb had for breakfast fuel a vast web of interactions, through which their fans sustain a lively sense of group identity. Naturally, this being social media, the clearest sign of such bonding is the willingness of fans to group together like a swarm of hornets and attack anyone who criticises their idols. Hence the notorious aggression of the “Beliebers,” or fanatical Justin Bieber fans (apparently not even controllable by the pop star himself); and hence Instagram rewriting an algorithm to protect Taylor Swift from a wave of snake emojis launched by Kim Kardashian followers. This, surely, is the sinister meaning behind an e-commerce executive bragging to Forbes magazine about Kylie Jenner’s following, “No other influencer has ever gotten to the volume or had the rabid fans” that she does.

In other words, the celebrity/tech elite’s power is rooted in new forms of association and identification made possible by the internet. It’s worth taking a closer look at one act which has revealed this in an especially vivid way: the K-Pop boy band BTS (the name stands for Bangtan Sonyeondan, or Beyond the Scene in English). Preppy outfits and feline good looks notwithstanding, these guys are no lightweights. Never mind the chart-topping singles, the stadium concerts and the collaborations with Ed Sheeran; their success registers on a macroeconomic scale. According to 2018 estimates from the Hyundai Research Institute, BTS contributes $3.6 billion annually to the South Korean economy, and is responsible for around 7% of tourism to the country. No less impressive are the band’s figures for online consumption: it has racked up the most YouTube views in a 24-hour period, and an unprecedented 750,000 paying viewers for a live-streamed concert.

Those last stats are the most suggestive, because BTS’s popularity rests on a fanatical online community of followers, the “Adorable Representative M.C. for Youth” (ARMY), literally numbering in the tens of millions. In certain respects, the ARMY doesn’t resemble a fan club so much as an uncontacted tribe in the rainforest: it has its own aesthetics, norms and rituals centred around worship of BTS. All that’s missing, perhaps, is a cosmology, but the band’s management is working on that. It orchestrates something called the “Bangtan Universe”: an ongoing fictional metanarrative about BTS, unfolding across multiple forms of media, which essentially encourages the ARMY to inhabit its own alternate reality. 

Consequently, such is the ARMY’s commitment that its members take personal responsibility for BTS’s commercial success. They are obsessive about boosting the band’s chart performance, streaming new content as frequently and on as many devices as possible. The Wall Street Journal describes one fan’s devotion:  

When [the BTS song] “Dynamite” launched, Michelle Tack, 47, a cosmetics stores manager from Chicopee, Massachusetts, requested a day off work to stream the music video on YouTube. “I streamed all day,” Tack says. She made sure to watch other clips on the platform in between her streaming so that her views would count toward the grand total of views. […]

“It feels like I’m part of this family that wants BTS to succeed, and we want to do everything we can do to help them,” says Tack. She says BTS has made her life “more fulfilled” and brought her closer to her two daughters, 12 and 14. 

The pay-off came last October, when the band’s management company, Big Hit Entertainment, went public, making one of the most successful debuts in the history of the South Korean stock market. And so the sense of belonging which captivated that retail manager from Massachusetts now underpins the value of financial assets traded by banks, insurance companies and investment funds. Needless to say, members of the ARMY were clamouring to buy the band’s shares too.


It is this paradigm of charismatic authority – the virtual community bound by devotion to a celebrity figurehead – which has been echoed in politics in recent years. Most conspicuously, Donald Trump’s political project shared many features with the new celebrity culture. The parallels between Trump and a figure like Kylie Jenner are obvious, from building a personal brand off the back of reality TV fame to exaggerating his wealth and recognising the innovative potential of social media. Meanwhile, the immersive fiction of the Bangtan Universe looks like a striking precedent for the wacky world of Deep State conspiracy theories inhabited by diehard Trump supporters, which spilled dramatically into view with the Washington Capitol invasion of January 6th.

As I argued in my Tablet essay – and as the chaos and inefficacy of the Trump presidency demonstrate – this social media-based form of charismatic politics is not very well suited to wielding formal power. In part, this is because the model is better suited to the kinds of power sought by celebrities: financial enrichment and cultural influence. The immersive character of online communities, which tend to develop their own private languages and preoccupations, carries no real downside for the celebrity: it just means more strongly identified fans. It is, however, a major liability in politics. The leaders elevated by such movements aren’t necessarily effective politicians to begin with, and they struggle to broaden their appeal due to the uncompromising agendas their supporters foist on them. We saw these problems not just with the Trump movement but also with the Jeremy Corbyn phenomenon in the UK, and, to an extent, with the younger college-educated liberals who influenced Bernie Sanders after 2016.

But this doesn’t mean online celebrity culture has had no political impact. Even if virtual communities aren’t much good at practical politics, they are extremely good at producing new narratives and norms, whether rightwing conspiracy theories in the QAnon mould, or the progressive ideas about gender and identity which Angela Nagle has aptly dubbed “Tumblr liberalism.” Celebrities are key to the process whereby such innovations are exported into the wider discourse as politically-charged memes. Thus Moya Lothian-McLean has described how influencers popularise feminist narratives – first taking ideas from academics and activists, then simplifying them for mass consumption and “regurgitat[ing] them via an aesthetically pleasing Instagram tile.” Once such memes reach a certain level of popularity, the really big celebrities will pick them up as part of their efforts to present a compelling personality to their followers (which is not to say, of course, that they don’t also believe in them). The line from Tumblr liberalism through Instagram feminism eventually arrives at the various celebrities who have revealed non-binary gender identities to their followers in recent years. Celebs also play an important role in legitimising grassroots political movements: last year BTS joined countless other famous figures in publicly giving money to Black Lives Matter, their $1 million donation being matched by their fans in little more than a day.

No celebrity can single-handedly move the needle of public opinion, but discourse is increasingly shaped by activists borrowing the tools of the influencer, and by influencers borrowing the language of the activist. Such charismatic figures are the most important nodes in the sprawling network of online communities that constitutes popular culture today; and through their attempts to foster an intimate connection with their followers, they provide a channel through which the political can be made to feel personal. This doesn’t quite amount to a “celebocracy,” but nor can we fully understand the nature of power today without acknowledging the authority of stars.

The Charismatic Politics of Social Media

This essay was originally published by Tablet Magazine on 21st April 2021.

In the wake of Donald Trump’s presidency, the tone of politics has become much quieter, and not just in the United States. It’s amazing how much room this man’s personality took up in the public conversation. But we should remember that what silenced Trump was not losing an election in November 2020. It was being kicked off social media after his supporters stormed the Capitol on Jan. 6.

The decision to take away Trump’s megaphone was the natural outcome of a phenomenon that emerged around 2015, when politics was transformed by a new type of charismatic leader, unique to our own era, who emerged from a culture increasingly centered around social media platforms like Facebook, Twitter, Instagram, and YouTube. But Trump is just one example, albeit a dramatic one. On the left there is also Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez, as well as Jeremy Corbyn, the former leader of the Opposition in the United Kingdom. There is the teenage climate activist Greta Thunberg and the cult philosopher Jordan Peterson. These men and women “went viral,” their individual charisma spread by a new, decentralized media system, and they galvanized movements that defined themselves as fighting against the established order.

Some of these figures’ time in the limelight is already over. But others will take their place, because the forces that gave rise to them are still here. To understand their appeal, we only have to turn to the influential German sociologist of the early 20th century, Max Weber. It was Weber who popularized “charisma” as a political term. And it is Weber’s concept of charismatic leadership that seems more relevant now than ever before.

Born 157 years ago tomorrow, Weber lived at a time when Western societies, and Germany especially, were being transformed by industrialization at a frantic pace. The central aim of his work was to understand how modern societies evolved and functioned in contrast to those of the past. Hailed as a brilliant young intellectual, Weber suffered a nervous breakdown around the turn of the 20th century, and subsequently produced a gloomy account of the modern world that was to be his greatest legacy. In The Protestant Ethic and the Spirit of Capitalism, published in 1905, he argued that the foundation of modernity was an ultrarational approach to organizing our lives and institutions, especially in pursuit of profit—a culture he compared to an “iron cage.”

It is against this backdrop that we find Weber’s most famous ideas about charismatic leadership. There was, he observed, a weak point in the iron cage of rationality. The modern principle that the right to govern comes from the people created an opening for charismatic politicians to gain immense power by winning the adoration of the masses. In his influential 1919 lecture Politics as a Vocation, Weber suggested the best example of this was the 19th-century British politician William Gladstone. But after Weber’s death in 1920, his theory of charismatic leadership achieved new renown, as it seemed to predict the dictatorships of Mussolini, Hitler, and Stalin.

A century later, Weber’s vision of “dictatorship resting on the exploitation of mass emotionality” fits nicely into the current moment, and may even have fed the reflexive portrayal of Trump as some sort of proto-fascist ruler. But in fact, this understanding of political charisma as purely a tool of modern demagogues is a misreading of Weber’s ideas.

Weber believed that charismatic individuals shape the politics of every era. A charismatic leader, he wrote in the posthumously published Economy and Society, has “a certain quality of an individual personality, by virtue of which he is set apart from ordinary men and treated as endowed with supernatural, superhuman, or at least specifically exceptional powers or qualities.” For Weber, the crucial element is to understand that charisma has a social function. He didn’t see charisma merely as a character trait belonging solely to the leader. He saw the desire to follow charismatic individuals as a necessary ingredient that binds groups of people together. Hence, when he laid out the three forms of authority that organize all societies, he included “charismatic authority” alongside legal structures and tradition.

What’s more, this mutually binding power of charisma doesn’t only sustain societies, according to Weber—it also transforms them. He actually thought the purest example of charismatic authority came from religious movements led by prophets, of the kind that shaped the history of Judaism, Christianity, and Islam. Here Weber describes charisma as a “revolutionary force,” because of the way prophets unite their followers with a sense of confidence and conviction that can shatter existing structures of authority. Charisma is like a spark that ignites sweeping social and cultural change.

This is the Weberian insight that opens the door to understanding the charismatic leaders of our own time. To grasp what makes an individual charismatic, we shouldn’t just focus on their personality: We should look at the people who are brought together by their mutual recognition of a leader.

Today, the social basis for much political ideology and activism comes from online subcultures, where people develop common worldviews based on spontaneous and widely shared feelings, like the sense of being betrayed by corrupt elites. It is from these virtual communities that political movements emerge, often by discovering and adopting a charismatic figure that galvanizes them. Through the rapid circulation of video clips and social media posts, an individual can be turned into a leader almost overnight.

What is remarkable about this paradigm is how much the standard relationship between leaders and followers has been reversed: These new movements are not created by their leaders, even though the leaders may command tremendous devotion. The followers “choose” their leader. The movements exist first in an unrealized form, and conjure up leaders that allow them to fully manifest and mobilize themselves.

Weber spoke of charisma being “recognized,” emphasizing the way leaders inspire their followers with a sense of purpose or spiritual “calling.” People gravitate toward individuals who give them a language to express their shared feelings and an example to follow. But what matters most is that, through this collective recognition of a figurehead, the followers cement their own social bond.

When we look at the charismatic leaders who have emerged in recent years, we don’t in fact see authoritarian figures who control their movements and bend their followers to their own distinct political visions. What we see are leaders who rise suddenly and unexpectedly, and whose actual beliefs are less important than their ability to embody the emotions that unite their devotees. Today it is the leaders who are shaped by the attitudes of their movements rather than the other way around.

Thus, Trump’s followers were never all that interested in how effectively he turned campaign slogans into reality. What held the MAGA movement together was not the content of Trump’s rather inconsistent and half-hearted declarations about policy, but the irreverent drama of rebellion that he enacted through the political theater of his rallies and Twitter posts. His leadership gave birth to intense internet communities, where diehard supporters cooked up their own narratives about his struggle against the establishment.

The point isn’t that Trump had no real power over his followers, which of course he did. The point is that his power depended on—and was limited to—the role of culture war icon that his movement created for him. Trump was effective in this role because he had no apparent strategy apart from giving his audience what it wanted, whether photo-ops brandishing a Bible, or nods and winks at online conspiracy theories.

Likewise, Sanders and Corbyn were both old men who unexpectedly found themselves riding tidal waves of youthful support. But their sudden rise from relative obscurity led to some awkward moments when some of their more strongly held views did not align with the wishes of their followers. Sanders’ campaign for president changed significantly from 2016 to 2020, as the mass movement that chose him as its leader molded him into a champion of their immigration preferences, which he had previously opposed. Similarly, in his time as leader of the British Labour Party from 2015 to 2020, Corbyn had to abandon his lifelong opposition to the European Union because he was now leading a movement that cherished EU membership as one of its core values.

Finally, consider two cases from outside the realm of official politics. Greta Thunberg is treated as a modern saint who has inspired millions to march through the world’s cities demanding action against climate change. But Thunberg’s enormous presence in the environmental movement is not matched by a unique philosophy or any organizational power. She went viral on social media during her 2018 strike outside the Swedish parliament, and her fame now rests on being invited by political and religious leaders to shout at them on camera about how her generation has been betrayed. “I understand that people are impressed by this movement,” Thunberg told the Times in 2019, “and I am also very impressed with the young people, but I haven’t really done anything. I have just sat down.”

Then there’s Canadian psychologist Jordan Peterson. Thanks to a few viral videos about free speech in 2016 and a series of controversial media engagements thereafter, Peterson went from teaching Christological interpretations of Disney films to being hailed as the messiah of the anti-woke movement. Peterson has continually stressed that he’s interested in psychology, not politics, yet what followers find captivating are his filmed broadsides against social justice ideology, which have been viewed millions of times on YouTube.

All these figures have been imbued with a certain magical status, which deepens the shared identity of their followers. Movements have gathered around them as totems embodying a fight against injustice and a spirit of revolt. Consequently, they command strong emotional attachments, though their followers are only interested in them insofar as they stay within the limits of the movement they were chosen to lead. The power of their charisma depends, therefore, on conforming to parameters set by the imagination of their followers.

Obviously, individual personality is not irrelevant here. Charismatic figures are generally regarded as authentic, based on the perception that they are not trying to meet social expectations or simply advance their careers. Seen in this way, it makes sense that a generation accustomed to the shifting trends and constant self-promotion of social media would warm to old-timers like Sanders and Corbyn, who had been stoically banging the same drum for decades.

Interestingly, both Trump and Thunberg have often had their personalities pathologized by critics: Trump on account of his “narcissistic personality disorder,” Thunberg on account of her autism and single-minded commitment to her cause. But supporters see these same qualities as refreshingly direct. This kind of appeal is necessary for leaders who want to offer their followers the personal “calling” which Weber saw as key to charisma. No one is inspired to take on the establishment by people who look and sound like they belong to it.

Nonetheless, following Weber’s lead, we don’t need to think about charisma as something that’s simply inherent to these influential personalities. In the sudden explosion of hype surrounding certain figures on social media, we see how the conviction that an individual is special can be created through collective affirmation. This is the virtual equivalent of the electrifying rallies and demonstrations where followers have gathered to see figures like Trump, Corbyn, and Thunberg: The energy is focused on the leader, but it comes from the crowd.

So what does all this tell us about the future of the new charismatic movement politics? Weber insisted that to exercise real power, charismatic authority cannot keep relying on the spiritual calling of committed followers. It must establish its own structures of bureaucracy and tradition. According to Weber, this is how prophetic religious movements of the past created lasting regimes.

But the way that today’s charismatic leaders are chosen for their expressive qualities means they usually aren’t suited to consolidating power in this way. There is a remarkable contrast between the sweeping legislative program being enacted by the uncharismatic Biden presidency and Trump’s failure to deliver on most of his signature proposals.

This does not mean that the movements inspired by charismatic figures are irrelevant—far from it. They will continue to influence politics by reshaping the social and cultural context in which it unfolds. In fact, the potential for these movements is all the more dramatic because, as recent years have shown, they can appear almost out of thin air. We do not know who the next charismatic leaders will be until after they have been chosen.

Tradition with a capital T: Dylan at 80

It’s December 1963, and a roomful of liberal luminaries are gathered at New York’s Americana Hotel. They are here for the presentation of the Emergency Civil Liberties Committee’s prestigious Tom Paine Award, an accolade which, a year earlier, had been accepted by esteemed philosopher and anti-nuclear campaigner Bertrand Russell. If any in the audience have reservations about this year’s recipient, a 22-year-old folk singer called Bob Dylan, their skepticism will soon be vindicated. 

In what must rank as one of the most cack-handed acceptance speeches in history, an evidently drunk Dylan begins with a surreal digression about the attendees’ lack of hair, his way of saying that maybe it’s time they made room for some younger voices in politics. “You people should be at the beach,” he informs them, “just relaxing in the time you have to relax. It is not an old people’s world.” Not that it really matters anyway, since, as Dylan goes on to say, “There’s no black and white, left and right to me anymore; there’s only up and down… And I’m trying to go up without thinking of anything trivial such as politics.” Strange way to thank an organisation which barely survived the McCarthyite witch-hunts, but Dylan isn’t finished. To a mounting chorus of boos, he takes the opportunity to express sympathy for Lee Harvey Oswald, the assassin who had shot President John F. Kennedy less than a month earlier. “I have to be honest, I just have to be… I got to admit honestly that I, too, saw some of myself in him… Not to go that far and shoot…”

Stories like this one have a special status in the world of Bobology, or whatever we want to call the strange community-cum-industry of critics, fans and vinyl-collecting professors who have turned Dylan into a unique cultural phenomenon. The unacceptable acceptance speech at the Americana is among a handful of anecdotes that dramatize the most iconic time in his career – the mid-’60s period when Dylan rejected/betrayed/transcended (delete as you see fit) the folk movement and its social justice-oriented vision of music.

For the benefit of the uninitiated, Dylan made his name in the early ’60s as a politically engaged troubadour, writing protest anthems that became the soundtrack of the Civil Rights movement. He even performed as a warm-up act for Martin Luther King Jr’s “I Have a Dream” speech at the 1963 March on Washington. Yet no sooner had Dylan been crowned “the conscience of a generation” than he started furiously trying to wriggle out of that role, most controversially through his embrace of rock music. In 1965, Dylan plugged in to play an electric set at the Newport Folk Festival (“the most written about performance in the history of rock,” writes biographer Clinton Heylin), leading to the wonderful though apocryphal story of folk stalwart Pete Seeger trying to cleave the sound cables with an axe. Another famous confrontation came at the Manchester Free Trade Hall in 1966, where angry folkies pelted Dylan with cries of “Judas!” (a moment whose magic really rests on Dylan’s response, as he turns around to his electric backing band and snarls “play it fuckin’ loud”).

In the coming days, as the Bobologists celebrate their master’s 80th birthday, we’ll see how Dylan’s vast and elaborate legend remains anchored in this original sin of abandoning the folk community. I like the Tom Paine Award anecdote because it makes us recall that, for all his prodigious gifts, Dylan was little more than an adolescent when these events took place – a chaotic, moody, often petulant young man. What has come to define Dylan, in a sense, is a commonplace bout of youthful rebellion which has been elevated into a symbolic narrative about a transformative moment in cultural history. 

Still, we can hardly deny its power as a symbolic narrative. Numerous writers have claimed that Dylan’s rejection of folk marks a decisive turning point in the counterculture politics of the ’60s, separating the collective purpose and idealism of the first half of the decade, as demonstrated in the March on Washington, from the bad acid trips, violent radicalism and disillusionment of the second. Hadn’t Dylan, through some uncanny intuition, sensed this descent into chaos? How else can we explain the radically different mood of his post-folk albums? The uplifting “Come gather ’round people/ Wherever you roam” is replaced by the sneering “How does it feel/ to be on your own,” and the hopeful “The answer, my friend, is blowin’ in the wind” by the cynical “You don’t need a weatherman to know which way the wind blows.” Or was Dylan, in fact, responsible for unleashing the furies of the late-’60s? That last lyric, after all, provided the name for the militant activist cell The Weathermen.

More profound still, Dylan’s mid-’60s transformation seemed to expose a deep fault line in the liberal worldview, a tension between two conceptions of freedom and authenticity. The folk movement saw itself in fundamentally egalitarian and collectivist terms, as a community of values whose progressive vision of the future was rooted in the shared inheritance of the folk tradition. Folkies were thus especially hostile to the rising tide of mass culture and consumerism in America. And clearly, had Dylan merely succumbed to the cringeworthy teenybopper rock ’n’ roll which was then topping the charts, he could have been written off as a sell-out. But Dylan’s first three rock records – the “Electric Trilogy” of Bringing It All Back Home, Highway 61 Revisited and Blonde on Blonde – are quite simply his best albums, and probably some of the best albums in the history of popular music. They didn’t just signal a move towards a wider market of consumers; they practically invented rock music as a sophisticated and artistically credible form. And the key to this was a seductive vision of the artist as an individual set apart, an anarchic fount of creativity without earthly commitments, beholden only to the sublime visions of his own interior world.

It was Dylan’s lyrical innovations, above all, that carried this vision. His new mode of social criticism, as heard in “Gates of Eden” and “It’s Alright, Ma (I’m Only Bleeding),” was savage and indiscriminate, condemning all alike and refusing to offer any answers. Redemption came instead from the imaginative power of the words and images themselves – the artist’s transcendent “thought dreams,” his spontaneous “skippin’ reels of rhyme” – his ability to laugh, cry, love and express himself in the face of a bleak and inscrutable world.

Yes, to dance beneath the diamond sky with one hand waving free
Silhouetted by the sea, circled by the circus sands
With all memory and fate driven deep beneath the waves

Here is the fantasy of artistic individualism with which Dylan countered the idealism of folk music, raising a dilemma whose acuteness can still be felt in writing on the subject today. 

But for a certain kind of Dylan fan, to read so much into the break with folk is to miss the magician’s hand in the crafting of his own legend. Throughout his career, Dylan has shown a flair for mystifying his public image (some would say a flair for dishonesty). His original folksinger persona was precisely that – a persona he copied from his adolescent hero Woody Guthrie, from the pitch of his voice and his workman’s cap to the very idea of writing “topical” songs about social injustice. From his first arrival on the New York folk scene, Dylan intrigued the press with fabrications about his past, mostly involving running away from home, travelling with a circus and riding on freight trains. (He also managed to persuade one of his biographers, Robert Shelton, that he had spent time working as a prostitute, but the less said about that yarn the better). Likewise, Dylan’s subsequent persona as the poet of anarchy drew much of its effect from the drama of his split with the folk movement, and so it’s no surprise to find him fanning that drama, both at the time and long afterwards, with an array of facetious, hyperbolic and self-pitying comments about what he was doing.

When the press tried to tap into Dylan’s motivations, he tended to swat them away with claims to the effect that he was just “a song and dance man,” a kind of false modesty (always delivered in a tone of preening arrogance) that fed his reputation for irreverence. He told the folksinger Joan Baez, among others, that his interest in protest songs had always been cynical – “You know me. I knew people would buy that kind of shit, right? I was never into that stuff” – despite numerous confidants from Dylan’s folk days insisting he had been obsessed with social justice. Later, in his book Chronicles: Volume One, Dylan made the opposite claim, insisting both his folk and post-folk phases reflected the same authentic calling: “All I’d ever done was sing songs that were dead straight and expressed powerful new realities. … My destiny lay down the road with whatever life invited, had nothing to do with representing any kind of civilisation.” He then complained (and note that modesty again): “It seems like the world has always needed a scapegoat – someone to lead the charge against the Roman Empire.” Incidentally, the “autobiographical” Chronicles is a masterpiece of self-mythologizing, where, among other sleights of hand, Dylan cuts back and forth between different stages of his career, neatly evading the question of how and why his worldview evolved.

Nor, of course, was Dylan’s break with folk his last act of reinvention. The rock phase lasted scarcely two years, after which he pivoted towards country music, first with the austere John Wesley Harding and then with the bittersweet Nashville Skyline. In the mid-1970s, Dylan recast himself as a travelling minstrel, complete with face paint and flower-decked hat, on the Rolling Thunder Revue tour. At the end of that decade he emerged as a born-again Christian playing gospel music, and shortly afterwards as an infidel (releasing the album Infidels). In the ’90s he appeared, among other guises, as a blues revivalist, while his more recent gestures include a kitsch Christmas album and a homage to Frank Sinatra. If there’s one line that manages to echo through the six decades of Dylan’s career, it must be “strike another match, go start anew.”

This restless drive to wrong-foot his audience makes it tempting to see Dylan as a kind of prototype for the shape-shifting pop idol, anticipating the likes of David Bowie and Kate Bush, not to mention the countless fading stars who refresh their wardrobes and their political causes in a desperate clinging to relevance. Like so many readings of Dylan, this one inevitably doubles back, concertina-like, to the original break with folk. That episode can now be made to appear as the sudden rupture with tradition that gave birth to the postmodern celebrity, a paragon of mercurial autonomy whose image can be endlessly refashioned through the media.

But trying to fit Dylan into this template reveals precisely what is so distinctive about him. Alongside his capacity for inventing and reinventing himself as a cultural figure, there has always been a sincere and passionate devotion to the forms and traditions of the past. Each of the personae in Dylan’s long and winding musical innings – from folk troubadour to country singer to roadshow performer to bluesman to roots rocker to jazz crooner – has involved a deliberate engagement with some aspect of the American musical heritage, as well as with countless other cultural influences from the U.S. and beyond. This became most obvious from the ’90s onwards, with albums such as Good As I Been to You and World Gone Wrong, composed entirely of covers and traditional folk songs – not to mention “Love and Theft”, a title whose quotation marks point to a book by historian Eric Lott, the subject of which, in turn, is the folklore of the American South. But these later works just made explicit what he had been doing all along.

“What I was into was traditional stuff with a capital T,” writes Dylan about his younger self in Chronicles. The unreliability of that book has already been mentioned, but the phrase is a neat way of describing his approach to borrowing from history. Dylan’s personae are never “traditional” in the sense of adhering devoutly to a moribund form; nor would it be quite right to say that he makes older styles his own. Rather, he treats tradition as an invitation to performance and pastiche, as though standing by the costume cupboard of history and trying on a series of eye-catching but not-quite-convincing disguises, always with a nod and a wink. I remember hearing Nashville Skyline for the first time and being slightly bemused at what sounded like an entirely artless imitation of country music; I was doubly bemused to learn this album had been recorded and released in 1969, the year of Woodstock and a year when Dylan was actually living in Woodstock. But it soon occurred to me that this was Dylan’s way of swimming against the tide. He may have lit the fuse of the high ’60s, but by the time the explosion came he had already moved on, not forward but back, recognising where his unique contribution as a musician really lay: in an ongoing dance with the spirits of the past, part eulogy and part pantomime. I then realised this same dance was happening in his earlier folk period, and in any number of his later chapters.

“The madly complicated modern world was something I took little interest in” – Chronicles again – “What was swinging, topical and up to date for me was stuff like the Titanic sinking, the Galveston flood, John Henry driving steel, John Hardy shooting a man on the West Virginia line.” We know this is at least partly true, because this overtly mythologized, larger-than-life history, this traditional stuff with a capital T, is never far away in Dylan’s music. The Titanic, great floods, folk heroes and wild-west outlaws all appear in his catalogue, usually with a few deliberate twists to imbue them with a more biblical grandeur, and to remind us not to take our narrator too seriously. It’s even plausible that he really did take time out from beatnik life in Greenwich Village to study 19th century newspapers at the New York Public Library, not “so much interested in the issues as intrigued by the language and rhetoric of the times.” Dylan is nothing if not a ventriloquist, using his various musical dummies to recall the languages of bygone eras. 

And if we look more closely at the Electric Trilogy, the infamous reinvention that sealed Dylan’s betrayal of folk, we find that much of the innovation on those albums rests on a twelve-bar blues structure, while their rhythms recall the R&B that Dylan had performed as a teenager in Hibbing, Minnesota. Likewise, it’s often been noted that their lyrical style, based on chains of loosely associated or juxtaposed images, shows the influence not just of the Beats, but also of the French symbolist poet Arthur Rimbaud, the German radical playwright Bertolt Brecht, and the bluesman Robert Johnson. This is to say nothing of the content of the lyrics, which feature an endless stream of allusions to history, literature, religion and myth. Songs like “Tombstone Blues” make an absurd parody of their own intertextuality (“The ghost of Belle Starr she hands down her wits/ To Jezebel the nun she violently knits/ A bald wig for Jack the Ripper who sits/ At the head of the chamber of commerce”). For all its iconoclasm, Dylan’s novel contribution to songwriting in this phase was to bring contemporary America into dialogue with a wider universe of cultural riches.

Now consider this. Could it be that even Dylan’s disposable approach to his own persona, far from heralding the arrival of the modern media star, is itself a tip of the hat to some older convention? The thought hadn’t occurred to me until I dipped into the latest round of Bobology marking Dylan’s 80th. There I found an intriguing lecture by the critic Greil Marcus about Dylan’s relationship to blues music (and it’s worth recalling that, by his own account, the young Dylan only arrived at folk music via the blues of Lead Belly and Odetta). “The blues,” says Marcus, “mandate that you present a story on the premise that it happened to you, so it has to be written [as] not autobiography but fiction.” He explains:

words first came from a common store of phrases, couplets, curses, blessings, jokes, greetings, and goodbyes that passed anonymously between blacks and whites after the Civil War. From that, the blues said, you craft a story, a philosophy lesson, that you present as your own: This happened to me. This is what I did. This is how it felt.

Is this where we find a synthesis of those two countervailing tendencies in Dylan’s career – on to the next character, back again to the “common store” of memories? Weaving a set of tropes into a fiction, which you then “present as your own,” certainly works as a description of how Dylan constructs his various artistic masks, not to mention many of his songs. It would be satisfying to imagine that this practice is itself a refashioned one – and as a way of understanding where Dylan is coming from, probably no less fictitious than all the others.

How Napoleon made the British

In 1803, the poet and philosopher Samuel Taylor Coleridge wrote to a friend about his relish at the prospect of being invaded by Napoleon Bonaparte. “As to me, I think, the Invasion must be a Blessing,” he said, “For if we do not repel it, & cut them to pieces, we are a vile sunken race… And if we do act as Men, Christians, Englishmen – down goes the Corsican Miscreant, & Europe may have peace.”

This was during the great invasion scare, when Napoleon’s Army of England could on clear days be seen across the Channel from Kent. Coleridge’s fighting talk captured the rash of patriotism that had broken out in Britain. The largest popular mobilisation of the entire Hanoverian era was set in motion, as some 400,000 men from Inverness to Cornwall entered volunteer militia units. London’s playhouses were overtaken by anti-French songs and plays, notably Shakespeare’s Henry V. Caricaturists such as James Gillray took a break from mocking King George III and focused on patriotic propaganda, contrasting the sturdy beef-eating Englishman John Bull with a puny, effete Napoleon.

These years were an important moment in the evolution of Britain’s identity, one that resonated through the 19th century and far beyond. The mission identified by Coleridge – to endure some ordeal as a vindication of national character, preferably without help from anyone else, and maybe benefit wider humanity as a by-product – anticipates a British exceptionalism that loomed throughout the Victorian era, reaching its final apotheosis in the Churchillian “if necessary alone” patriotism of the Second World War. Coleridge’s friend William Wordsworth expressed the same sentiment in 1806, after Napoleon had smashed the Prussian army at Jena, leaving the United Kingdom his only remaining opponent. “We are left, or shall be left, alone;/ The last that dare to struggle with the Foe,” Wordsworth wrote, “’Tis well! From this day forward we shall know/ That in ourselves our safety must be sought;/ That by our own right hands it must be wrought.”

As we mark the bicentennial of Napoleon’s death on St Helena in 1821, attention has naturally been focused on his legacy in France. But we shouldn’t forget that in his various guises – conquering general, founder of states and institutions, cultural icon – Napoleon transformed every part of Europe, and Britain was no exception. Yet the apparent national pride of the invasion scare was very far from the whole story. If the experience of fighting Napoleon left the British in important ways more cohesive, confident and powerful, it was largely because the country had previously looked like it was about to fall apart. 

Throughout the 1790s, as the French Revolution followed the twists and turns that eventually brought Napoleon to power, Britain was a tinder box. Ten years before he boasted of confronting Napoleon as “Men, Christians, Englishmen,” Coleridge had burned the words “Liberty” and “Equality” into the lawns of Cambridge University. Like Wordsworth, and like countless other radicals and republicans, he had embraced the Revolution as the dawn of a glorious new age in which the corrupt and oppressive ancien régime, including the Anglican establishment of Britain, would be swept away.

And the tide of history seemed to be on the radicals’ side. The storming of the Bastille came less than a decade after Britain had lost its American colonies, while in George III the country had an unpopular king, prone to bouts of debilitating madness, whose scandalous sons appeared destined to drag the monarchy into disgrace. 

Support for the Revolution was strongest among Nonconformist Protestant sects – especially Unitarians, the so-called “rational Dissenters” – who formed the intellectual and commercial elite of cities such as Norwich, Birmingham and Manchester, and among the radical wing of the Whig party. But for the first time, educated working men also entered the political sphere en masse. They joined the Corresponding Societies, so named for their contacts with Jacobin counterparts in France, which held public meetings and demonstrations across the country. Influential Unitarian ministers, such as the Welsh philosopher Richard Price and the chemist Joseph Priestley, interpreted the Revolution as the work of providence and possibly a sign of the imminent Apocalypse. In the circle of Whig aristocrats around Charles James Fox, implacable adversary of William Pitt’s Tory government, the radicals had sympathisers at the highest levels of power. Fox famously said of the Revolution “how much the greatest event it is that ever happened in the world, and how much the best.”

From 1793 Britain was at war with revolutionary France, and this mix of new ideals and longstanding religious divides boiled over into mass unrest and fears of insurrection. In 1795 protestors smashed the windows at 10 Downing Street, and at the opening of parliament a crowd of 200,000 jeered at Pitt and George III. The radicals were met by an equally volatile loyalist reaction in defence of church and king. In 1791, a dinner celebrating Bastille Day in Birmingham sparked three days of rioting, including attacks on Nonconformist chapels and Priestley’s home. Pitt’s government introduced draconian limitations on thought, speech and association, although his attempt to convict members of the London Corresponding Society of high treason was foiled by a jury.

Both sides drew inspiration from an intense pamphlet war that included some of the most iconic and controversial texts in British intellectual history. Conservatives were galvanised by Edmund Burke’s Reflections on the Revolution in France, a defence of England’s time-honoured social hierarchies, while radicals hailed Thomas Paine’s Rights of Man, calling for the abolition of Britain’s monarchy and aristocracy. When summoned on charges of seditious libel, Paine fled to Paris, where he sat in the National Convention and continued to support the revolutionary regime despite almost being executed during the Reign of Terror that began in 1793. Among his supporters were the pioneering feminist Mary Wollstonecraft and the utopian progressive William Godwin, who shared an intellectual circle with Coleridge and Wordsworth.

Britain seemed to be coming apart at the seams. Bad harvests at the turn of the century brought misery and renewed unrest, and the war effort failed to prevent France (under the leadership, from 1799, of First Consul Bonaparte) from dominating the continent. Paradoxically, nothing captures the paralysing divisions of the British state at this moment better than its expansion in 1801 to become the United Kingdom of Great Britain and Ireland. The annexation of Ireland was a symptom of weakness, not strength, since it reflected the threat posed by a bitterly divided and largely hostile satellite off Britain’s west coast. The only way to make it work, as Pitt insisted, was to grant political rights to Ireland’s Catholic majority – but George III refused. So Pitt resigned, and the Revolutionary Wars ended with the Treaty of Amiens in 1802, effectively acknowledging French victory.

Britain’s tensions and weaknesses certainly did not disappear during the ensuing epic conflict with Napoleon from 1803 to 1815. Violent social unrest continued to flare up, especially at times of harvest failure, financial crisis, and economic hardship resulting from restriction of trade with the continent. There were, at times, widespread demands for peace. The government continued to repress dissent with military force and legal measures; the radical poet and engraver William Blake (later rebranded as a patriotic figure when his words were used for the hymn Jerusalem) was charged with sedition in 1803, following an altercation with two soldiers. Many of those who volunteered for local military units probably did so out of peer pressure and to avoid being impressed into the navy. Ireland, of course, would prove to be a more intractable problem than even Pitt had imagined.

Nonetheless, Coleridge and Wordsworth’s transition from radicals to staunch patriots was emblematic. Whether the population at large was genuinely loyal or merely quiescent, Britain’s internal divisions lost much of their earlier ideological edge, and the threat of outright insurrection faded away. This process had already started in the 1790s, as many radicals shied away from the violence and militarism of revolutionary France, but it was galvanised by Napoleon. This was not just because he appeared determined and able to crush Britain, but also because of British perceptions of his regime. 

As Yale professor Stuart Semmel has observed, Napoleon did not fit neatly into the dichotomies with which Britain was accustomed to contrasting itself with France. For the longest time, the opposition had been (roughly) “free Protestant constitutional monarchy” vs “Popish absolutist despotism”; after the Revolution, it had flipped to “Christian peace and order” vs “bloodthirsty atheism and chaos.” Napoleon threw these categories into disarray. The British, says Semmel, had to ask “Was he a Jacobin or a king …; Italian or Frenchman; Catholic, atheist, or Muslim?” The religious uncertainty was especially unsettling, after Napoleon’s “declaration of kinship with Egyptian Muslims, his Concordat with the papacy, his tolerance for Protestants, and his convoking a Grand Sanhedrin of European Jews.”

This may have forced some soul-searching on the part of the British as they struggled to define Napoleonic France, but in some respects the novelty simplified matters. Former radicals could argue Napoleon represented a betrayal of the Revolution, and could agree with loyalists that he was a tyrant bent on personal domination of Europe, thus drawing a line under the ideological passions of the revolutionary period. In any case, loyalist propaganda had no difficulty transferring to Napoleon the template traditionally reserved for the Pope – that of the biblical Antichrist. The simple fact of having a single infamous figure on whom to focus patriotic feelings no doubt aided national unity. As the essayist William Hazlitt, an enduring supporter of Napoleon, later noted: “Everybody knows that it is only necessary to raise a bugbear before the English imagination in order to govern it at will.”

More subtly, conservatives introduced the concept of “legitimacy” to the political lexicon, to distinguish the hereditary power of British monarchs from Napoleon’s usurpation of the Bourbon throne. This was rank hypocrisy, given the British elite’s habit of importing a new dynasty whenever it suited them, but it played to an attitude which did help to unify the nation: during the conflict with Napoleon, people could feel that they were defending the British system in general, rather than supporting the current government or waging an ideological war against the Revolution. The resulting change of sentiment could be seen in 1809, when there were vast celebrations to mark the Golden Jubilee of the once unpopular George III. 

Undoubtedly British culture was also transformed by admiration for Napoleon, especially among artists, intellectuals and Whigs, yet even here the tendency was towards calming antagonisms rather than inflaming them. This period saw the ascendance of Romanticism in European culture and ways of thinking, and there was not and never would be a greater Romantic hero than Napoleon, who had turned the world upside down through force of will and what Victor Hugo later called “supernatural instinct.” But ultimately this meant aestheticizing Napoleon, removing him from the sphere of politics to that of sentiment, imagination and history. Thus when Napoleon abdicated his throne in 1814, the admiring poet Lord Byron was mostly disappointed he had not fulfilled his dramatic potential by committing suicide.

But Napoleon profoundly reshaped Britain in another way: the long and grueling conflict against him left a lasting stamp on every aspect of the British state. In short, while no-one could have reasonably predicted victory until Napoleon’s catastrophic invasion of Russia in 1812, the war was nonetheless crucial in forging Britain into the global superpower it would become after 1815. 

The British had long been in the habit of fighting wars with ships and money rather than armies, and for the most part this was true of the Napoleonic wars as well. But the unprecedented demands of this conflict led to an equally unprecedented development of Britain’s financial system. This started with the introduction of new property taxes and, in 1799, the first income tax, which were continually raised until by 1814 their yield had increased by a factor of ten. What mattered here was not so much the immediate revenue as the unparalleled fiscal base it gave Britain for the purpose of borrowing money – which it did, prodigiously. In 1804, the year Bonaparte was crowned Emperor, the “Napoleon of finance” Nathan Rothschild arrived in London from Frankfurt, helping to secure a century of British hegemony in the global financial system. 

No less significant were the effects of war in stimulating Britain’s nascent industrial revolution, and its accompanying commercial empire. The state relied on private contractors for most of its materiel, especially that required to build and maintain the vast Royal Navy, while creating immense demand for iron, coal and timber. In 1814, when rulers and representatives of Britain’s European allies came to Portsmouth, they were shown a startling vision of the future: enormous factories where pulley blocks for the rigging of warships were being mass-produced with steam-driven machine tools. Meanwhile Napoleon’s Continental System, by shutting British manufacturers and exporters out of Europe, forced them to develop markets in South Asia, Africa and Latin America. 

Even Britain’s fabled “liberal” constitution – the term was taken from Spanish opponents of Napoleon – did in fact do some of the organic adaptation that smug Victorians would later claim as its hallmark. The Nonconformist middle classes, so subversive during the revolutionary period, were courted in 1812-13 with greater political rights and the relaxation of various restrictions on trade. Meanwhile, Britain discovered what would become its greatest moral crusade of the 19th century. Napoleon’s reintroduction of slavery in France’s Caribbean colonies created the conditions for abolitionism to grow as a popular movement in Britain, since, as William Wilberforce argued, “we should not give advantages to our enemies.” Two bills in 1806-7 effectively ended Britain’s centuries-long participation in the trans-Atlantic slave trade.

Thus Napoleon was not just a hurdle to be cleared en route to the British century – he was, with all his charisma and ruthless determination, a formative element in the nation’s history. And his influence did not end with his death in 1821, of course. He would long haunt the Romantic Victorian imagination as, in Eric Hobsbawm’s words, “the figure every man who broke with tradition could identify himself with.”

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *

 

At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter two styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, with the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding, to ever greater degrees, the structure of the world they inhabited.

But nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *

 

It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1916 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly science was not just producing strange new ways of seeing the world, but, through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but as the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.