How the Celebs Rule Us

Who should we call the first “Instagram billionaire”? It’s a mark of the new Gilded Age we’ve entered that both women vying for that title belong to the same family, the illustrious Kardashian-Jenner clan. In 2019, it looked like Kylie Jenner had passed the ten-figure mark, only for Forbes to revise its estimates, declaring that Jenner had juiced her net worth with “white lies, omissions and outright fabrications.” (Her real wealth, the magazine thought, was a paltry $900 million.) So, as of April this year, the accolade belongs to Jenner’s no less enterprising sister, Kim Kardashian West.

Social media has ushered in a new fusion of celebrity worship and celebrity entrepreneurship, giving rise to an elite class of “influencers” like Jenner and Kardashian West. Reality TV stars who were, in that wonderful phrase, “famous for being famous,” they now rely on their vast social media followings to market advertising space and fashion and beauty products. As such, they are closely entwined with another freshly minted elite, the tech oligarchs whose platforms are the crucial instruments of celebrity today. Word has it the good people at Instagram are all too happy to offer special treatment to the likes of the Kardashians, Justin Bieber, Taylor Swift and Lady Gaga – not to mention His Holiness the Supreme Pontiff of the Universal Church (that’s @franciscus to you and me). And there’s every reason for social media companies to accommodate their glamorous accomplices: in 2018, Jenner managed to wipe $1.3 billion off the market value of Snapchat with a single tweet questioning the platform’s popularity. 

It’s perfectly obvious, of course, what hides behind the embarrassingly thin fig leaf of “influence,” and that is power. Not just financial power but social status, cultural clout and, on the tech companies’ side of the bargain, access to the eyeballs and data of huge audiences. The interesting question is where this power ultimately stems from. The form of capital being harvested is human attention; but how does the tech/influencer elite monopolise this attention? One well-known answer is through the addictive algorithms and user interfaces that turn us into slaves of our own brain chemistry; another invokes those dynamics of social rivalry, identified by the philosopher René Girard, whereby we look to others to tell us what we should want.

But I think there’s a further factor here which needs to be explored, and it begins with the idea of charisma. In a recent piece for Tablet magazine, I argued that social media had given rise to a new kind of charismatic political leader, examples of which include Donald Trump, Jeremy Corbyn, Jordan Peterson and Greta Thunberg. My contention was that the charisma of these individuals, so evident in the intense devotion of their followers, does not stem from any innate quality of their personalities. Instead, charisma is assigned to them by online communities which, in the process of rallying around a leader, galvanise themselves into political movements.

Here I was drawing on the great German sociologist Max Weber, whose concept of “charismatic authority” describes how groups of people find coherence and structure by recognising certain individuals as special. And yet, the political leaders I discussed in the Tablet piece are far from the only examples showing the relevance of Weber’s ideas today. If anything, they are interlopers: accidental beneficiaries of a media system that is calibrated for a different type of charismatic figure, pursuing a different kind of power. I’m referring, of course, to the Kardashians, Biebers, and countless lesser “influencers” of this world. It is the twin elite of celebrities and tech giants, not the leaders of political movements, who have designed the template of charismatic authority in the social media age. 

When Weber talks about charismatic authority, he is talking about the emotional and ideological inspiration we find in other people. We are compelled to emulate or follow those individuals who issue us with a “calling” – a desire to lead our lives a certain way or aspire towards a certain ideal. To take an obvious example, think about the way members of a cult are often transfixed by a leader, dropping everything in their lives to enter his or her service; some of you will recall the scarlet-clad followers of the guru Bhagwan Shree Rajneesh in the 2018 Netflix documentary Wild Wild Country. Weber’s key observation is that this intensely subjective experience is always part of a wider social process: the “calling” of charisma, though it feels like an intimate connection with an exceptional person, is really the calling of our own urge to fit in, to grasp an identity, to find purpose and belonging. There’s a reason charismatic figures attract followers, plural. They are charismatic because they represent a social phenomenon we want to be a part of, or an aspiration our social context has made appealing. Whatever Rajneesh’s personal qualities, his cult was only possible thanks to the appeal of New Age philosophy and collectivist ways of life to a certain kind of disillusioned Westerner during the 1960s and ’70s. 

Today there’s no shortage of Rajneesh-like figures preaching homespun doctrines to enraptured audiences on YouTube. But in modern societies, charismatic authority really belongs to the domain of celebrity culture; the domain, that is, of the passionate, irrational, mass-scale worship of stars. Since the youth movements of the 1950s and ’60s, when burgeoning media industries gave the baby-boomers icons like James Dean and The Beatles, the charismatic figures who inspire entire subcultures and generations have mostly come from cinema and television screens, from sports leagues, music videos and fashion magazines. Cast your mind back to your own teenage years – the time when our need for role models is most pressing – and recall where you and your chums turned for your wardrobe choices, haircuts and values. To the worlds of politics and business, perhaps? Not likely. We may not be so easily star-struck as adults, but I’d wager that most of our transformative encounters with charisma still come, if not from Hollywood and Vogue, then from figures projected into our imagination via the media apparatus of mass culture. It’s no coincidence that when a politician does gain a following through personality and image, we borrow clichés from the entertainment industry, whether hailing Barack Obama’s “movie star charisma” or dubbing Yanis Varoufakis “Greece’s rock-star finance minister.”

Celebrity charisma relies on a peculiar suspension of disbelief. We can take profound inspiration from characters in films, and on some level we know that the stars presented to us in the media (or now presenting themselves through social media) are barely less fictional. They are personae designed to harness the binding force of charismatic authority – to embody movements and cultural trends that people want to be part of. In the context of the media and entertainment business, their role is essentially to commodify the uncommodifiable, to turn our search for meaning and identity into a source of profit. Indeed, the celebrity culture of recent decades grew from the bosom of huge media conglomerates, who found that the saturation of culture by new media technologies allowed them to turn a small number of stars into prodigious brands.

In the 1980s performers like Michael Jackson and Madonna, along with sports icons like Michael Jordan, joined Hollywood actors in a class of mega celebrities. By the ’90s, such ubiquitous figures were flanked by stars catering to all kinds of specific audiences: in the UK, for instance, lad culture had premiership footballers, popular feminism had Sex and the City, Britpoppers had the Gallagher brothers and grungers had Kurt Cobain. For their corporate handlers, high-profile celebrities ensured revenues from merchandise, management rights and advertising deals, as well as reliable consumer audiences that offset the risks of more speculative ventures.

Long before social media, in other words, celebrity culture had become a thoroughly commercialised form of charismatic authority. It still relied on the ability of stars to issue their followers with a “calling” – to embody popular ideals and galvanise movements – but these roles and relationships were reflected in various economic transactions. Most obviously, where a celebrity became a figurehead for a particular subculture, people might express their membership of that subculture by buying stuff the celebrity advertised. But no less important, in hindsight, was the commodification of celebrities’ private lives, as audiences were bonded to their stars through an endless stream of “just like us” paparazzi shots, advertising campaigns, exclusive interviews and documentaries, and so on. As show-business sought to maximise the value of star power, the personae of celebrities were increasingly constructed in the mould of “real” people with human, all-too-human lives.

Which brings us back to our influencer friends. For all its claims to have opened up arts and entertainment to the masses, social media really represents another step towards a celebrity culture dominated by an elite cluster of stars. Digital tech, as we know, has annihilated older business models in media-related industries. This has concentrated even more success in the hands of the few who can command attention and drive cultural trends – who can be “influencers” – through the commodification of their personal lives. And that, of course, is exactly what platforms like Instagram are designed for. A Bloomberg report describes how the Kardashians took over and ramped up the trends of earlier decades:

Back in the 1990s, when the paparazzi were in their pomp, pictures of celebrities going about their daily lives… could fetch $15,000 a pop from tabloids and magazines… The publications would in turn sell advertising space alongside those images and rake in a hefty profit.

Thanks to social media, the Kardashians were able to cut out the middle man. Instagram let the family post images that they controlled and allowed them to essentially sell their own advertising space to brands… The upshot is that Kardashian West can make $1 million per sponsored post, while paparazzi now earn just $5 to $10 apiece for “Just Like Us” snaps.

Obviously, Instagram does not “let” the Kardashians do this out of the kindness of its heart: as platforms compete for users, it’s in their interests to accommodate the individuals who secure the largest audiences. In fact, through their efforts to identify and promote such celebrities, the social media companies are increasingly important in actually making them celebrities, effectively deciding who among the aspiring masses gets a shot at fame. Thus another report details how TikTok “assigned individual managers to thousands of stars to help with everything, whether tech support or college tuition,” while carefully coordinating with said stars to make their content go viral.

But recall, again, that the power of celebrities ultimately rests on their followers’ feeling that they’re part of something – that is the essence of their charisma. And it’s here that social media really has been revolutionary. It has allowed followers to become active communities, fused by constant communication with each other and with the stars themselves. Instagram posts revealing what some celeb had for breakfast fuel a vast web of interactions, through which their fans sustain a lively sense of group identity. Naturally, this being social media, the clearest sign of such bonding is the willingness of fans to group together like a swarm of hornets and attack anyone who criticises their idols. Hence the notorious aggression of the “Beliebers,” or fanatical Justin Bieber fans (apparently not even controllable by the pop star himself); and hence Instagram rewriting an algorithm to protect Taylor Swift from a wave of snake emojis launched by Kim Kardashian followers. This, surely, is the sinister meaning behind an e-commerce executive bragging to Forbes magazine about Kylie Jenner’s following, “No other influencer has ever gotten to the volume or had the rabid fans” that she does.

In other words, the celebrity/tech elite’s power is rooted in new forms of association and identification made possible by the internet. It’s worth taking a closer look at one act which has revealed this in an especially vivid way: the K-Pop boy band BTS (the name stands for Bangtan Sonyeondan, or Beyond the Scene in English). Preppy outfits and feline good looks notwithstanding, these guys are no lightweights. Never mind the chart-topping singles, the stadium concerts and the collaborations with Ed Sheeran; their success registers on a macroeconomic scale. According to 2018 estimates from the Hyundai Research Institute, BTS contributes $3.6 billion annually to the South Korean economy, and is responsible for around 7% of tourism to the country. No less impressive are the band’s figures for online consumption: it has racked up the most YouTube views in a 24-hour period, and an unprecedented 750,000 paying viewers for a live-streamed concert.

Those last stats are the most suggestive, because BTS’s popularity rests on a fanatical online community of followers, the “Adorable Representative M.C. for Youth” (ARMY), literally numbering in the tens of millions. In certain respects, the ARMY doesn’t resemble a fan club so much as an uncontacted tribe in the rainforest: it has its own aesthetics, norms and rituals centred around worship of BTS. All that’s missing, perhaps, is a cosmology, but the band’s management is working on that. It orchestrates something called the “Bangtan Universe”: an ongoing fictional metanarrative about BTS, unfolding across multiple forms of media, which essentially encourages the ARMY to inhabit its own alternate reality. 

Indeed, such is the ARMY’s commitment that its members take personal responsibility for BTS’s commercial success. They are obsessive about boosting the band’s chart performance, streaming new content as frequently and on as many devices as possible. The Wall Street Journal describes one fan’s devotion:

When [the BTS song] “Dynamite” launched, Michelle Tack, 47, a cosmetics store manager from Chicopee, Massachusetts, requested a day off work to stream the music video on YouTube. “I streamed all day,” Tack says. She made sure to watch other clips on the platform in between her streaming so that her views would count toward the grand total of views. […]

“It feels like I’m part of this family that wants BTS to succeed, and we want to do everything we can do to help them,” says Tack. She says BTS has made her life “more fulfilled” and brought her closer to her two daughters, 12 and 14. 

The pay-off came last October, when the band’s management company, Big Hit Entertainment, went public, making one of the most successful debuts in the history of the South Korean stock market. And so the sense of belonging which captivated that store manager from Massachusetts now underpins the value of financial assets traded by banks, insurance companies and investment funds. Needless to say, members of the ARMY were clamouring to buy the band’s shares too.

It is this paradigm of charismatic authority – the virtual community bound by devotion to a celebrity figurehead – which has been echoed in politics in recent years. Most conspicuously, Donald Trump’s political project shared many features with the new celebrity culture. The parallels between Trump and a figure like Kylie Jenner are obvious, from building a personal brand off the back of reality TV fame to exaggerating his wealth and recognising the innovative potential of social media. Meanwhile, the immersive fiction of the Bangtan Universe looks like a striking precedent for the wacky world of Deep State conspiracy theories inhabited by diehard Trump supporters, which spilled dramatically into view with the storming of the US Capitol on January 6th.

As I argued in my Tablet essay – and as the chaos and inefficacy of the Trump presidency demonstrate – this social media-based form of charismatic politics is not very well suited to wielding formal power. In part, this is because the model is better suited to the kinds of power sought by celebrities: financial enrichment and cultural influence. The immersive character of online communities, which tend to develop their own private languages and preoccupations, carries no real downside for the celebrity: it just means more strongly identified fans. It is, however, a major liability in politics. The leaders elevated by such movements aren’t necessarily effective politicians to begin with, and they struggle to broaden their appeal due to the uncompromising agendas their supporters foist on them. We saw these problems not just with the Trump movement but also with the Jeremy Corbyn phenomenon in the UK, and, to an extent, with the younger college-educated liberals who influenced Bernie Sanders after 2016.

But this doesn’t mean online celebrity culture has had no political impact. Even if virtual communities aren’t much good at practical politics, they are extremely good at producing new narratives and norms, whether rightwing conspiracy theories in the QAnon mould, or the progressive ideas about gender and identity which Angela Nagle has aptly dubbed “Tumblr liberalism.” Celebrities are key to the process whereby such innovations are exported into the wider discourse as politically charged memes. Thus Moya Lothian-McLean has described how influencers popularise feminist narratives – first taking ideas from academics and activists, then simplifying them for mass consumption and “regurgitat[ing] them via an aesthetically pleasing Instagram tile.” Once such memes reach a certain level of popularity, the really big celebrities will pick them up as part of their efforts to present a compelling personality to their followers (which is not to say, of course, that they don’t also believe in them). The line from Tumblr liberalism through Instagram feminism eventually arrives at the various celebrities who have revealed non-binary gender identities to their followers in recent years. Celebs also play an important role in legitimising grassroots political movements: last year BTS joined countless other famous figures in publicly giving money to Black Lives Matter, their $1 million donation being matched by their fans in little more than a day.

No celebrity can single-handedly move the needle of public opinion, but discourse is increasingly shaped by activists borrowing the tools of the influencer, and by influencers borrowing the language of the activist. Such charismatic figures are the most important nodes in the sprawling network of online communities that constitutes popular culture today; and through their attempts to foster an intimate connection with their followers, they provide a channel through which the political can be made to feel personal. This doesn’t quite amount to a “celebocracy,” but nor can we fully understand the nature of power today without acknowledging the authority of stars.

Tradition with a capital T: Dylan at 80

It’s December 1963, and a roomful of liberal luminaries are gathered at New York’s Americana Hotel. They are here for the presentation of the Emergency Civil Liberties Committee’s prestigious Tom Paine Award, an accolade which, a year earlier, had been accepted by esteemed philosopher and anti-nuclear campaigner Bertrand Russell. If any in the audience have reservations about this year’s recipient, a 22-year-old folk singer called Bob Dylan, their skepticism will soon be vindicated. 

In what must rank as one of the most cack-handed acceptance speeches in history, an evidently drunk Dylan begins with a surreal digression about the attendees’ lack of hair, his way of saying that maybe it’s time they made room for some younger voices in politics. “You people should be at the beach,” he informs them, “just relaxing in the time you have to relax. It is not an old people’s world.” Not that it really matters anyway, since, as Dylan goes on to say, “There’s no black and white, left and right to me anymore; there’s only up and down… And I’m trying to go up without thinking of anything trivial such as politics.” Strange way to thank an organisation which barely survived the McCarthyite witch-hunts, but Dylan isn’t finished. To a mounting chorus of boos, he takes the opportunity to express sympathy for Lee Harvey Oswald, the assassin who had shot president John F. Kennedy less than a month earlier. “I have to be honest, I just have to be… I got to admit honestly that I, too, saw some of myself in him… Not to go that far and shoot…”

Stories like this one have a special status in the world of Bobology, or whatever we want to call the strange community-cum-industry of critics, fans and vinyl-collecting professors who have turned Dylan into a unique cultural phenomenon. The unacceptable acceptance speech at the Americana is among a handful of anecdotes that dramatize the most iconic time in his career – the mid-’60s period when Dylan rejected/ betrayed/ transcended (delete as you see fit) the folk movement and its social justice oriented vision of music. 

For the benefit of the uninitiated, Dylan made his name in the early ’60s as a politically engaged troubadour, writing protest anthems that became the soundtrack of the Civil Rights movement. He even performed as a warm-up act for Martin Luther King Jnr’s “I Have a Dream” speech at the 1963 March on Washington. Yet no sooner had Dylan been crowned “the conscience of a generation” than he started furiously trying to wriggle out of that role, most controversially through his embrace of rock music. In 1965, Dylan plugged in to play an electric set at the Newport Folk Festival (“the most written about performance in the history of rock,” writes biographer Clinton Heylin), leading to the wonderful though apocryphal story of folk stalwart Pete Seeger trying to cleave the sound cables with an axe. Another famous confrontation came at the Manchester Free Trade Hall in 1966, where angry folkies pelted Dylan with cries of “Judas!” (a moment whose magic really rests on Dylan’s response, as he turns around to his electric backing band and snarls “play it fuckin’ loud”). 

In the coming days, as the Bobologists celebrate their master’s 80th birthday, we’ll see how Dylan’s vast and elaborate legend remains anchored in this original sin of abandoning the folk community. I like the Tom Paine Award anecdote because it makes us recall that, for all his prodigious gifts, Dylan was little more than an adolescent when these events took place – a chaotic, moody, often petulant young man. What has come to define Dylan, in a sense, is a commonplace bout of youthful rebellion which has been elevated into a symbolic narrative about a transformative moment in cultural history. 

Still, we can hardly deny its power as a symbolic narrative. Numerous writers have claimed that Dylan’s rejection of folk marks a decisive turning point in the counterculture politics of the ’60s, separating the collective purpose and idealism of the first half of the decade, as demonstrated in the March on Washington, from the bad acid trips, violent radicalism and disillusionment of the second. Hadn’t Dylan, through some uncanny intuition, sensed this descent into chaos? How else can we explain the radically different mood of his post-folk albums? The uplifting “Come gather ’round people/ Wherever you roam” is replaced by the sneering “How does it feel/ to be on your own,” and the hopeful “The answer, my friend, is blowin’ in the wind” by the cynical “You don’t need a weatherman to know which way the wind blows.” Or was Dylan, in fact, responsible for unleashing the furies of the late-’60s? That last lyric, after all, provided the name for the militant activist cell The Weathermen.

More profound still, Dylan’s mid-’60s transformation seemed to expose a deep fault line in the liberal worldview, a tension between two conceptions of freedom and authenticity. The folk movement saw itself in fundamentally egalitarian and collectivist terms, as a community of values whose progressive vision of the future was rooted in the shared inheritance of the folk tradition. Folkies were thus especially hostile to the rising tide of mass culture and consumerism in America. And clearly, had Dylan merely succumbed to the cringeworthy teenybopper rock ’n’ roll which was then topping the charts, he could have been written off as a sell-out. But Dylan’s first three rock records – the “Electric Trilogy” of Bringing It All Back Home, Highway 61 Revisited and Blonde on Blonde – are quite simply his best albums, and probably some of the best albums in the history of popular music. They didn’t just signal a move towards a wider market of consumers; they practically invented rock music as a sophisticated and artistically credible form. And the key to this was a seductive vision of the artist as an individual set apart, an anarchic fount of creativity without earthly commitments, beholden only to the sublime visions of his own interior world.

It was Dylan’s lyrical innovations, above all, that carried this vision. His new mode of social criticism, as heard in “Gates of Eden” and “It’s Alright, Ma (I’m Only Bleeding),” was savage and indiscriminate, condemning all alike and refusing to offer any answers. Redemption came instead from the imaginative power of the words and images themselves – the artist’s transcendent “thought dreams,” his spontaneous “skippin’ reels of rhyme” – his ability to laugh, cry, love and express himself in the face of a bleak and inscrutable world.

Yes, to dance beneath the diamond sky with one hand waving free
Silhouetted by the sea, circled by the circus sands
With all memory and fate driven deep beneath the waves

Here is the fantasy of artistic individualism with which Dylan countered the idealism of folk music, raising a dilemma whose acuteness can still be felt in writing on the subject today. 

But for a certain kind of Dylan fan, to read so much into the break with folk is to miss the magician’s hand in the crafting of his own legend. Throughout his career, Dylan has shown a flair for mystifying his public image (some would say a flair for dishonesty). His original folksinger persona was precisely that – a persona he copied from his adolescent hero Woody Guthrie, from the pitch of his voice and his workman’s cap to the very idea of writing “topical” songs about social injustice. From his first arrival on the New York folk scene, Dylan intrigued the press with fabrications about his past, mostly involving running away from home, travelling with a circus and riding on freight trains. (He also managed to persuade one of his biographers, Robert Shelton, that he had spent time working as a prostitute, but the less said about that yarn the better.) Likewise, Dylan’s subsequent persona as the poet of anarchy drew much of its effect from the drama of his split with the folk movement, and so it’s no surprise to find him fanning that drama, both at the time and long afterwards, with an array of facetious, hyperbolic and self-pitying comments about what he was doing.

When the press tried to tap into Dylan’s motivations, he tended to swat them away with claims to the effect that he was just “a song and dance man,” a kind of false modesty (always delivered in a tone of preening arrogance) that fed his reputation for irreverence. He told the folksinger Joan Baez, among others, that his interest in protest songs had always been cynical – “You know me. I knew people would buy that kind of shit, right? I was never into that stuff” – despite numerous confidants from Dylan’s folk days insisting he had been obsessed with social justice. Later, in his book Chronicles: Volume One, Dylan made the opposite claim, insisting both his folk and post-folk phases reflected the same authentic calling: “All I’d ever done was sing songs that were dead straight and expressed powerful new realities. … My destiny lay down the road with whatever life invited, had nothing to do with representing any kind of civilisation.” He then complained (and note that modesty again): “It seems like the world has always needed a scapegoat – someone to lead the charge against the Roman Empire.” Incidentally, the “autobiographical” Chronicles is a masterpiece of self-mythologizing, where, among other sleights of hand, Dylan cuts back and forth between different stages of his career, neatly evading the question of how and why his worldview evolved.

Nor, of course, was Dylan’s break with folk his last act of reinvention. The rock phase lasted scarcely two years, after which he pivoted towards country music, first with the austere John Wesley Harding and then with the bittersweet Nashville Skyline. In the mid-1970s, Dylan recast himself as a travelling minstrel, complete with face paint and flower-decked hat, on the Rolling Thunder Revue tour. At the end of that decade he emerged as a born-again Christian playing gospel music, and shortly afterwards as an infidel (releasing the album Infidels). In the ’90s he appeared, among other guises, as a blues revivalist, while his more recent gestures include a kitsch Christmas album and a homage to Frank Sinatra. If there’s one line that manages to echo through the six decades of Dylan’s career, it must be “strike another match, go start anew.”

This restless drive to wrong-foot his audience makes it tempting to see Dylan as a kind of prototype for the shape-shifting pop idol, anticipating the likes of David Bowie and Kate Bush, not to mention the countless fading stars who refresh their wardrobes and their political causes in a desperate clinging to relevance. Like so many readings of Dylan, this one inevitably doubles back, concertina-like, to the original break with folk. That episode can now be made to appear as the sudden rupture with tradition that gave birth to the postmodern celebrity, a paragon of mercurial autonomy whose image can be endlessly refashioned through the media.

But trying to fit Dylan into this template reveals precisely what is so distinctive about him. Alongside his capacity for inventing and reinventing himself as a cultural figure, there has always been a sincere and passionate devotion to the forms and traditions of the past. Each of the personae in Dylan’s long and winding musical innings – from folk troubadour to country singer to roadshow performer to bluesman to roots rocker to jazz crooner – has involved a deliberate engagement with some aspect of the American musical heritage, as well as with countless other cultural influences from the U.S. and beyond. This became most obvious from the ’90s onwards, with albums such as Good As I Been to You and World Gone Wrong, composed entirely of covers and traditional folk songs – not to mention “Love and Theft”, a title whose quotation marks point to a book by historian Eric Lott, the subject of which, in turn, is the folklore of the American South. But these later works just made explicit what he had been doing all along.

“What I was into was traditional stuff with a capital T,” writes Dylan about his younger self in Chronicles. The unreliability of that book has already been mentioned, but the phrase is a neat way of describing his approach to borrowing from history. Dylan’s personae are never “traditional” in the sense of adhering devoutly to a moribund form; nor would it be quite right to say that he makes older styles his own. Rather, he treats tradition as an invitation to performance and pastiche, as though standing by the costume cupboard of history and trying on a series of eye-catching but not-quite-convincing disguises, always with a nod and a wink. I remember hearing Nashville Skyline for the first time and being slightly bemused at what sounded like an entirely artless imitation of country music; I was doubly bemused to learn this album had been recorded and released in 1969, the year of Woodstock and a year when Dylan was actually living in Woodstock. But it soon occurred to me that this was Dylan’s way of swimming against the tide. He may have lit the fuse of the high ’60s, but by the time the explosion came he had already moved on, not forward but back, recognising where his unique contribution as a musician really lay: in an ongoing dance with the spirits of the past, part eulogy and part pantomime. I then realised this same dance was happening in his earlier folk period, and in any number of his later chapters.

“The madly complicated modern world was something I took little interest in” – Chronicles again – “What was swinging, topical and up to date for me was stuff like the Titanic sinking, the Galveston flood, John Henry driving steel, John Hardy shooting a man on the West Virginia line.” We know this is at least partly true, because this overtly mythologized, larger-than-life history, this traditional stuff with a capital T, is never far away in Dylan’s music. The Titanic, great floods, folk heroes and wild-west outlaws all appear in his catalogue, usually with a few deliberate twists to imbue them with a more biblical grandeur, and to remind us not to take our narrator too seriously. It’s even plausible that he really did take time out from beatnik life in Greenwich Village to study 19th century newspapers at the New York Public Library, not “so much interested in the issues as intrigued by the language and rhetoric of the times.” Dylan is nothing if not a ventriloquist, using his various musical dummies to recall the languages of bygone eras. 

And if we look more closely at the Electric Trilogy, the infamous reinvention that sealed Dylan’s betrayal of folk, we find that much of the innovation on those albums fits into a twelve-bar blues structure, while their rhythms recall the R&B that Dylan had performed as a teenager in Hibbing, Minnesota. Likewise, it’s often been noted that their lyrical style, based on chains of loosely associated or juxtaposed images, shows not just the influence of the Beats, but also French symbolist poet Arthur Rimbaud, German radical playwright Bertolt Brecht, and bluesman Robert Johnson. This is to say nothing of the content of the lyrics, which feature an endless stream of allusions to history, literature, religion and myth. Songs like “Tombstone Blues” make an absurd parody of their own intertextuality (“The ghost of Belle Starr she hands down her wits/ To Jezebel the nun she violently knits/ A bald wig for Jack the Ripper who sits/ At the head of the chamber of commerce”). For all its iconoclasm, Dylan’s novel contribution to songwriting in this phase was to bring contemporary America into dialogue with a wider universe of cultural riches. 

Now consider this. Could it be that even Dylan’s disposable approach to his own persona, far from heralding the arrival of the modern media star, is itself a tip of the hat to some older convention? The thought hadn’t occurred to me until I dipped into the latest round of Bobology marking Dylan’s 80th. There I found an intriguing lecture by the critic Greil Marcus about Dylan’s relationship to blues music (and it’s worth recalling that, by his own account, the young Dylan only arrived at folk music via the blues of Lead Belly and Odetta). “The blues,” says Marcus, “mandate that you present a story on the premise that it happened to you, so it has to be written [as] not autobiography but fiction.” He explains:

words first came from a common store of phrases, couplets, curses, blessings, jokes, greetings, and goodbyes that passed anonymously between blacks and whites after the Civil War. From that, the blues said, you craft a story, a philosophy lesson, that you present as your own: This happened to me. This is what I did. This is how it felt.

Is this where we find a synthesis of those two countervailing tendencies in Dylan’s career – on to the next character, back again to the “common store” of memories? Weaving a set of tropes into a fiction, which you then “present as your own,” certainly works as a description of how Dylan constructs his various artistic masks, not to mention many of his songs. It would be satisfying to imagine that this practice is itself a refashioned one – and as a way of understanding where Dylan is coming from, probably no less fictitious than all the others.

How Napoleon made the British

In 1803, the poet and philosopher Samuel Taylor Coleridge wrote to a friend about his relish at the prospect of being invaded by Napoleon Bonaparte. “As to me, I think, the Invasion must be a Blessing,” he said, “For if we do not repel it, & cut them to pieces, we are a vile sunken race… And if we do act as Men, Christians, Englishmen – down goes the Corsican Miscreant, & Europe may have peace.”

This was during the great invasion scare, when Napoleon’s Army of England could on clear days be seen across the channel from Kent. Coleridge’s fighting talk captured the rash of patriotism that had broken out in Britain. The largest popular mobilisation of the entire Hanoverian era was set in motion, as some 400,000 men from Inverness to Cornwall entered volunteer militia units. London’s playhouses were overtaken by anti-French songs and plays, notably Shakespeare’s Henry V. Caricaturists such as James Gillray took a break from mocking King George III and focused on patriotic propaganda, contrasting the sturdy beef-eating Englishman John Bull with a puny, effete Napoleon.

These years were an important moment in the evolution of Britain’s identity, one that resonated through the 19th century and far beyond. The mission identified by Coleridge – to endure some ordeal as a vindication of national character, preferably without help from anyone else, and maybe benefit wider humanity as a by-product – anticipates a British exceptionalism that loomed throughout the Victorian era, reaching its final apotheosis in the Churchillian “if necessary alone” patriotism of the Second World War. Coleridge’s friend William Wordsworth expressed the same sentiment in 1806, after Napoleon had smashed the Prussian army at Jena, leaving the United Kingdom his only remaining opponent. “We are left, or shall be left, alone;/ The last that dare to struggle with the Foe,” Wordsworth wrote, “’Tis well! From this day forward we shall know/ That in ourselves our safety must be sought;/ That by our own right hands it must be wrought.”

As we mark the bicentennial of Napoleon’s death on St Helena in 1821, attention has naturally been focused on his legacy in France. But we shouldn’t forget that in his various guises – conquering general, founder of states and institutions, cultural icon – Napoleon transformed every part of Europe, and Britain was no exception. Yet the apparent national pride of the invasion scare was very far from the whole story. If the experience of fighting Napoleon left the British in important ways more cohesive, confident and powerful, it was largely because the country had previously looked like it was about to fall apart. 

Throughout the 1790s, as the French Revolution followed the twists and turns that eventually brought Napoleon to power, Britain was a tinder box. Ten years before he boasted of confronting Napoleon as “Men, Christians, Englishmen,” Coleridge had burned the words “Liberty” and “Equality” into the lawns of Cambridge University. Like Wordsworth, and like countless other radicals and republicans, he had embraced the Revolution as the dawn of a glorious new age in which the corrupt and oppressive ancien régime, including the Anglican establishment of Britain, would be swept away. 

And the tide of history seemed to be on the radicals’ side. The storming of the Bastille came less than a decade after Britain had lost its American colonies, while in George III the country had an unpopular king, prone to bouts of debilitating madness, whose scandalous sons appeared destined to drag the monarchy into disgrace. 

Support for the Revolution was strongest among Nonconformist Protestant sects – especially Unitarians, the so-called “rational Dissenters” – who formed the intellectual and commercial elite of cities such as Norwich, Birmingham and Manchester, and among the radical wing of the Whig party. But for the first time, educated working men also entered the political sphere en masse. They joined the Corresponding Societies, so named because of their contacts with Jacobin counterparts in France, which held public meetings and demonstrations across the country. Influential Unitarian ministers, such as the Welsh philosopher Richard Price and the chemist Joseph Priestley, interpreted the Revolution as the work of providence and possibly a sign of the imminent Apocalypse. In the circle of Whig aristocrats around Charles James Fox, implacable adversary of William Pitt’s Tory government, the radicals had sympathisers at the highest levels of power. Fox famously said of the Revolution “how much the greatest event it is that ever happened in the world, and how much the best.”

From 1793 Britain was at war with revolutionary France, and this mix of new ideals and longstanding religious divides boiled over into mass unrest and fears of insurrection. In 1795 protestors smashed the windows at 10 Downing Street, and at the opening of parliament a crowd of 200,000 jeered at Pitt and George III. The radicals were met by an equally volatile loyalist reaction in defence of church and king. In 1791, a dinner celebrating Bastille Day in Birmingham had sparked three days of rioting, including attacks on Nonconformist chapels and Priestley’s home. Pitt’s government introduced draconian limitations on thought, speech and association, although his attempt to convict members of the London Corresponding Society of high treason was foiled by a jury. 

Both sides drew inspiration from an intense pamphlet war that included some of the most iconic and controversial texts in British intellectual history. Conservatives were galvanised by Edmund Burke’s Reflections on the Revolution in France, a defence of England’s time-honoured social hierarchies, while radicals hailed Thomas Paine’s Rights of Man, calling for the abolition of Britain’s monarchy and aristocracy. When summoned on charges of seditious libel, Paine fled to Paris, where he sat in the National Convention and continued to support the revolutionary regime despite almost being executed during the Reign of Terror that began in 1793. Among his supporters were the pioneering feminist Mary Wollstonecraft and the utopian progressive William Godwin, who shared an intellectual circle with Coleridge and Wordsworth. 

Britain seemed to be coming apart at the seams. Bad harvests at the turn of the century brought misery and renewed unrest, and the war effort failed to prevent France (under the leadership, from 1799, of First Consul Bonaparte) from dominating the continent. Paradoxically, nothing captures the paralysing divisions of the British state at this moment better than its expansion in 1801 to become the United Kingdom of Great Britain and Ireland. The annexation of Ireland was a symptom of weakness, not strength, since it reflected the threat posed by a bitterly divided and largely hostile satellite off Britain’s west coast. The only way to make it work, as Pitt insisted, was to grant political rights to Ireland’s Catholic majority – but George III refused. So Pitt resigned, and the Revolutionary Wars ended with the Treaty of Amiens in 1802, effectively acknowledging French victory.

Britain’s tensions and weaknesses certainly did not disappear during the ensuing, epic conflict with Napoleon from 1803 to 1815. Violent social unrest continued to flare up, especially at times of harvest failure, financial crisis, and economic hardship resulting from restriction of trade with the continent. There were, at times, widespread demands for peace. The government continued to repress dissent with military force and legal measures; the radical poet and engraver William Blake (later rebranded as a patriotic figure when his words were used for the hymn Jerusalem) stood trial for sedition in 1803, following an altercation with two soldiers. Many of those who volunteered for local military units probably did so out of peer pressure and to avoid being impressed into the navy. Ireland, of course, would prove to be a more intractable problem than even Pitt had imagined.  

Nonetheless, Coleridge and Wordsworth’s transition from radicals to staunch patriots was emblematic. Whether the population at large was genuinely loyal or merely quiescent, Britain’s internal divisions lost much of their earlier ideological edge, and the threat of outright insurrection faded away. This process had already started in the 1790s, as many radicals shied away from the violence and militarism of revolutionary France, but it was galvanised by Napoleon. This was not just because he appeared determined and able to crush Britain, but also because of British perceptions of his regime. 

As Yale professor Stuart Semmel has observed, Napoleon did not fit neatly into the dichotomies through which Britain was used to defining itself against France. For the longest time, the opposition had been (roughly) “free Protestant constitutional monarchy” vs “Popish absolutist despotism”; after the Revolution, it had flipped to “Christian peace and order” vs “bloodthirsty atheism and chaos.” Napoleon threw these categories into disarray. The British, says Semmel, had to ask “Was he a Jacobin or a king …; Italian or Frenchman; Catholic, atheist, or Muslim?” The religious uncertainty was especially unsettling, after Napoleon’s “declaration of kinship with Egyptian Muslims, his Concordat with the papacy, his tolerance for Protestants, and his convoking a Grand Sanhedrin of European Jews.” 

This may have forced some soul-searching on the part of the British as they struggled to define Napoleonic France, but in some respects the novelty simplified matters. Former radicals could argue Napoleon represented a betrayal of the Revolution, and could agree with loyalists that he was a tyrant bent on personal domination of Europe, thus drawing a line under the ideological passions of the revolutionary period. In any case, loyalist propaganda had no difficulty transferring to Napoleon the template traditionally reserved for the Pope – that of the biblical Antichrist. This simple fact of having a single infamous figure on which to focus patriotic feelings no doubt aided national unity. As the essayist William Hazlitt, an enduring supporter of Napoleon, later noted: “Everybody knows that it is only necessary to raise a bugbear before the English imagination in order to govern it at will.”

More subtly, conservatives introduced the concept of “legitimacy” to the political lexicon, to distinguish the hereditary power of British monarchs from Napoleon’s usurpation of the Bourbon throne. This was rank hypocrisy, given the British elite’s habit of importing a new dynasty whenever it suited them, but it played to an attitude which did help to unify the nation: during the conflict with Napoleon, people could feel that they were defending the British system in general, rather than supporting the current government or waging an ideological war against the Revolution. The resulting change of sentiment could be seen in 1809, when there were vast celebrations to mark the Golden Jubilee of the once unpopular George III. 

Undoubtedly British culture was also transformed by admiration for Napoleon, especially among artists, intellectuals and Whigs, yet even here the tendency was towards calming antagonisms rather than inflaming them. This period saw the ascendance of Romanticism in European culture and ways of thinking, and there was not and never would be a greater Romantic hero than Napoleon, who had turned the world upside down through force of will and what Victor Hugo later called “supernatural instinct.” But ultimately this meant aestheticizing Napoleon, removing him from the sphere of politics to that of sentiment, imagination and history. Thus when Napoleon abdicated his throne in 1814, the admiring poet Lord Byron was mostly disappointed he had not fulfilled his dramatic potential by committing suicide. 

But Napoleon profoundly reshaped Britain in another way: the long and gruelling conflict against him left a lasting stamp on every aspect of the British state. In short, while no-one could have reasonably predicted victory until Napoleon’s catastrophic invasion of Russia in 1812, the war was nonetheless crucial in forging Britain into the global superpower it would become after 1815. 

The British had long been in the habit of fighting wars with ships and money rather than armies, and for the most part this was true of the Napoleonic wars as well. But the unprecedented demands of this conflict led to an equally unprecedented development of Britain’s financial system. This started with the introduction of new property taxes and, in 1799, the first income tax, which were continually raised until by 1814 their yield had increased by a factor of ten. What mattered here was not so much the immediate revenue as the unparalleled fiscal base it gave Britain for the purpose of borrowing money – which it did, prodigiously. In 1804, the year Bonaparte was crowned Emperor, the “Napoleon of finance” Nathan Rothschild arrived in London from Frankfurt, helping to secure a century of British hegemony in the global financial system. 

No less significant were the effects of war in stimulating Britain’s nascent industrial revolution, and its accompanying commercial empire. The state relied on private contractors for most of its materiel, especially that required to build and maintain the vast Royal Navy, while creating immense demand for iron, coal and timber. In 1814, when rulers and representatives of Britain’s European allies came to Portsmouth, they were shown a startling vision of the future: enormous factories where pulley blocks for the rigging of warships were being mass-produced with steam-driven machine tools. Meanwhile Napoleon’s Continental System, by shutting British manufacturers and exporters out of Europe, forced them to develop markets in South Asia, Africa and Latin America. 

Even Britain’s fabled “liberal” constitution – the term was taken from Spanish opponents to Napoleon – did in fact do some of the organic adaptation that smug Victorians would later claim as its hallmark. The Nonconformist middle classes, so subversive during the revolutionary period, were courted in 1812-13 with greater political rights and by the relaxation of various restrictions on trade. Meanwhile, Britain discovered what would become its greatest moral crusade of the 19th century. Napoleon’s reintroduction of slavery in France’s Caribbean colonies created the conditions for abolitionism to grow as a popular movement in Britain, since, as William Wilberforce argued, “we should not give advantages to our enemies.” Two bills in 1806-7 effectively ended Britain’s centuries-long participation in the trans-Atlantic slave trade.

Thus Napoleon was not just a hurdle to be cleared en route to the British century – he was, with all his charisma and ruthless determination, a formative element in the nation’s history. And his influence did not end with his death in 1821, of course. He would long haunt the Romantic Victorian imagination as, in Eric Hobsbawm’s words, “the figure every man who broke with tradition could identify himself with.”

The Philosophy of Rupture: How the 1920s Gave Rise to Intellectual Magicians

This essay was originally published by Areo magazine on 4th November 2020.

When it comes to intellectual history, Central Europe in the decade of the 1920s presents a paradox. It was an era when revolutionary thought – original and iconoclastic ideas and modes of thinking – was not in fact revolutionary, but almost the norm. And the results are all around us today. The 1920s were the final flourish in a remarkable period of path-breaking activity in German-speaking Europe, one that laid many of the foundations for both analytic and continental philosophy, for psychology and sociology, and for several branches of legal philosophy and of theoretical science.

This creative ferment is partly what people grasp at when they refer to the “spirit” of the ’20s, especially in Germany’s Weimar Republic. But this doesn’t help us understand where that spirit came from, or how it draws together the various thinkers who, in hindsight, seem to be bursting out of their historical context rather than sharing it.

Wolfram Eilenberger attempts one solution to that problem in his new book, Time of the Magicians: The Invention of Modern Thought, 1919-1929. He manages to weave together the ideas of four philosophers – Ludwig Wittgenstein, Martin Heidegger, Walter Benjamin and Ernst Cassirer – by showing how they emerged from those thinkers’ personal lives. We get colourful accounts of money troubles, love affairs, career struggles and mental breakdowns, each giving way to a discussion of the philosophical material. In this way, the personal and intellectual journeys of the four protagonists are linked in an expanding web of experiences and ideas.

This is a satisfying format. There’s just no denying the voyeuristic pleasure of peering into these characters’ private lives, whether it be Heidegger’s and Benjamin’s attempts to rationalise their adulterous tendencies, or the series of car crashes that was Wittgenstein’s social life. Besides, it’s always useful to be reminded that, with the exception of the genuinely upstanding Cassirer, these great thinkers were frequently selfish, delusional, hypocritical and insecure. Just like the rest of us then.

But entertaining as it is, Eilenberger’s biographical approach does not really cast much light on that riddle of the age: why was this such a propitious time for magicians? If anything, his portraits play into the romantic myth of the intellectual window-breaker as a congenital outsider and unusual genius – an ideal that was in no small part erected by this very generation. This is a shame because, as I’ll try to show later, these figures become still more engaging when considered not just as brilliant individuals, but also as products of their time.

First, it’s worth looking at how Eilenberger manages to draw parallels between the four philosophers’ ideas, for that is no mean feat. Inevitably this challenge makes his presentation selective and occasionally tendentious, but it also produces some imaginative insights.

*          *          *


At first sight, Wittgenstein seems an awkward fit for this book, seeing as he did not produce any philosophy during the decade in question. His famous early work, the Tractatus Logico-Philosophicus, claimed to have solved the problems of philosophy “on all essential points.” So we are left with the (admittedly fascinating) account of how he signed away his vast inheritance, trained as a primary school teacher, and moved through a series of remote Austrian towns becoming increasingly isolated and depressed.

But this does leave Eilenberger plenty of space to discuss the puzzling Tractatus. He points out, rightly, that Wittgenstein’s mission to establish once and for all what can meaningfully be said – that is, what kinds of statements actually make sense – was far more than an attempt to rid philosophy of metaphysical hokum (even if that was how his logical-empiricist fans in Cambridge and the Vienna Circle wanted to read the work).

Wittgenstein did declare that the only valid propositions were those of natural science, since these alone shared the same logical structure as empirical reality, and so could capture an existing or possible “state of affairs” in the world. But as Wittgenstein freely admitted, this meant the Tractatus itself was nonsense. Therefore its reader was encouraged to disregard the very claims which had established how to judge claims, to “throw away the ladder after he has climbed up it.” Besides, it remained the case that “even if all possible scientific questions be answered, the problems of life have still not been touched at all.”

According to Eilenberger, who belongs to the “existentialist Wittgenstein” school, the Tractatus’ real goals were twofold. First, to save humanity from pointless conflict by clarifying what could be communicated with certainty. And second, to emphasise the degree to which our lives will always be plagued by ambiguity – by that which can only be “shown,” not said – and hence by decisions that must be taken on the basis of faith.

This reading allows Eilenberger to place Wittgenstein in dialogue with Heidegger and Benjamin. The latter two styled themselves as abrasive outsiders: Heidegger as the Black Forest peasant seeking to subvert academic philosophy from within, Benjamin as the struggling journalist and flaneur who, thanks to his erratic behaviour and idiosyncratic methods, never found an academic post. By the end of the ’20s, they had gravitated towards the political extremes, with Heidegger eventually joining the Nazi party and Benjamin flirting with Communism.

Like many intellectuals at this time, Heidegger and Benjamin were interested in the consequences of the scientific and philosophical revolutions of the 17th century, the revolutions of Galileo and Descartes, which had produced the characteristic dualism of modernity: the separation of the autonomous, thinking subject from a scientific reality governed by natural laws. Both presented this as an illusory and fallen state, in which the world had been stripped of authentic human purpose and significance.

Granted, Heidegger did not think such fine things were available to most of humanity anyway. As he argued in his masterpiece Being and Time, people tend to seek distraction in mundane tasks, social conventions and gossip. But it did bother him that philosophers had forgotten about “the question of the meaning of Being.” To ask this question was to realise that, before we come to do science or anything else, we are always already “thrown” into an existence we have neither chosen nor designed, and which we can only access through the meanings made available by language and by the looming horizon of our own mortality.

Likewise, Benjamin insisted language was not a means of communication or rational thought, but an aesthetic medium through which the world was revealed to us. In his work on German baroque theatre, he identified the arrival of modernity with a tragic distortion in that medium. Rather than a holistic existence in which everything had its proper name and meaning – an existence that, for Benjamin, was intimately connected with the religious temporality of awaiting salvation – the very process of understanding had become arbitrary and reified, so that any given symbol might as well stand for any given thing.

As Eilenberger details, both Heidegger and Benjamin found some redemption in the idea of decision – a fleeting moment when the superficial autonomy of everyday choices gave way to an all-embracing realisation of purpose and fate. Benjamin identified such potential in love and, on a collective and political level, in the “profane illuminations” of the metropolis, where the alienation of the modern subject was most profound. For Heidegger, only a stark confrontation with death could produce a truly “authentic” decision. (This too had political implications, which Eilenberger avoids: Heidegger saw the “possibilities” glimpsed in these moments as handed down by tradition to each generation, leaving the door open to a reactionary idea of authenticity as something a community discovers in its past).

If Wittgenstein, Heidegger and Benjamin were outsiders and “conceptual wrecking balls,” Ernst Cassirer cuts a very different figure. His inclusion in this book is the latest sign of an extraordinary revival in his reputation over the past fifteen years or so. That said, some of Eilenberger’s remarks suggest Cassirer has not entirely shaken off the earlier judgment, that he was merely “an intellectual bureaucrat,” “a thoroughly decent man and thinker, but not a great one.”

Cassirer was the last major figure in the Neo-Kantian tradition, which had dominated German academic philosophy from the mid-19th century until around 1910. At this point, it grew unfashionable for its associations with scientific positivism and naïve notions of rationality and progress (not to mention the presence of prominent Jewish scholars like Cassirer within its ranks). The coup de grâce was delivered by Heidegger himself at the famous 1929 “Davos debate” with Cassirer, the event which opens and closes Eilenberger’s book. Here contemporaries portrayed Cassirer as an embodiment of “the old thinking” that was being swept away.

That judgment was not entirely accurate. It’s true that Cassirer was an intellectual in the mould of 19th century Central European liberalism, committed to human progress and individual freedom, devoted to science, culture and the achievements of German classicism. Not incidentally, he was the only one of our four thinkers to wholeheartedly defend Germany’s Weimar democracy. But he was also an imaginative, versatile and unbelievably prolific philosopher.

Cassirer’s three-volume project of the 1920s, The Philosophy of Symbolic Forms, showed that he, too, understood language and meaning as largely constitutive of reality. But for Cassirer, the modern scientific worldview was not a debasement of the subject’s relationship to the world, but a development of the same faculty which underlay language, myth and culture – that of representing phenomena through symbolic forms. It was, moreover, an advance. The logical coherence of theoretical science, and the impersonal detachment from nature it afforded, was the supreme example of how human beings achieved freedom: by understanding the structure of the world they inhabited to ever greater degrees.

Nor was Cassirer dogmatic in his admiration for science. His key principle was the plurality of representation and understanding, allowing the same phenomenon to be grasped in different ways. The scientist and artist are capable of different insights. More to the point, the creative process through which human minds devised new forms of representation was open-ended. The very history of science, as of culture, showed that there were always new symbolic forms to be invented, transforming our perception of the world in the process.

*          *          *


It would be unfair to say Eilenberger gives us no sense of how these ideas relate to the context in which they were formed; his biographical vignettes do offer vivid glimpses of life in 1920s Europe. But that context is largely personal, and rarely social, cultural or intellectual. As a result, the most striking parallel of all – the determination of Wittgenstein, Heidegger and Benjamin to upend the premises of the philosophical discipline, and that of Cassirer to protect them – can only be explained in terms of personality. This is misleading.

A time-traveller visiting Central Europe in the years after 1918 could not help but notice that all things intellectual were in a state of profound flux. Not only was Neo-Kantianism succumbing to a generation of students obsessed with metaphysics, existence and (in the strict sense) nihilism. Every certainty was being forcefully undermined: the superiority of European culture in Oswald Spengler’s bestselling Decline of the West (1918); the purpose and progress of history in Ernst Troeltsch’s “Crisis of Historicism” (1922); the Protestant worldview in Karl Barth’s Epistle to the Romans (1919); and the structure of nature itself in Albert Einstein’s article “On the Present Crisis in Theoretical Physics” (1922).

In these years, even the concept of revolution was undergoing a revolution, as seen in the influence of unorthodox Marxist works like György Lukács’ History and Class Consciousness (1923). And this is to say nothing of what our time-traveller would discover in the arts. Dada, a movement dedicated to the destruction of bourgeois norms and sensibilities, had broken out in Zurich in 1916 and quickly spread to Berlin. Here it infused the works of brilliant but scandalous artists such as George Grosz and Otto Dix.

German intellectuals, in other words, were conscious of living in an age of immense disruption. More particularly, they saw themselves as responding to a world defined by rupture; or to borrow a term from Heidegger and Benjamin, by “caesura” – a decisive and irreversible break from the past.

It’s not difficult to imagine where that impression came from. This generation experienced the cataclysm of the First World War, an unprecedented bloodbath that discredited assumptions of progress even as it toppled ancient regimes (though among Eilenberger’s quartet, only Wittgenstein served on the front lines). In its wake came the febrile economic and political atmosphere of the Weimar Republic, which has invited so many comparisons to our own time. Less noticed is that the ’20s were also, like our era, a time of destabilising technological revolution, witnessing the arrival of radio, the expansion of the telephone, cinema and aviation, and a bevy of new capitalist practices extending from factory to billboard.

Nonetheless, in philosophy and culture, we should not imagine that an awareness of rupture emerged suddenly in 1918, or even in 1914. The war is best seen as an explosive catalyst which propelled and distorted changes already underway. The problems that occupied Eilenberger’s four philosophers, and the intellectual currents that drove them, stem from a deeper set of dislocations.

Anxiety over the scientific worldview, and over philosophy’s relationship to science, was an inheritance from the 19th century. In Neo-Kantianism, Germany had produced a philosophy at ease with the advances of modern science. But paradoxically, this grew to be a problem when it became clear how momentous those advances really were. Increasingly, science was not just producing strange new ways of seeing the world but, through technology and industry, reshaping it. Ultimately the Neo-Kantian holding pattern, which had tried to reconcile science with the humanistic traditions of the intellectual class, gave way. Philosophy became the site of a backlash against both.

But critics of philosophy’s subordination to science had their own predecessors to call on, not least with respect to the problem of language. Those who, like Heidegger and Benjamin, saw language not as a potential tool for representing empirical reality, but as the medium which disclosed that reality to us (and who thus began to draw the dividing line between continental and Anglo-American philosophy), were sharpening a conflict that had simmered since the Enlightenment. They took inspiration from the 18th century mystic and scourge of scientific rationality, Johann Georg Hamann.

Meanwhile, the 1890s saw widespread recognition of the three figures most responsible for the post-war generation’s ideal of the radical outsider: Søren Kierkegaard, Friedrich Nietzsche and Karl Marx. That generation would also be taught by the great pioneers of sociology in Germany, Max Weber and Georg Simmel, whose work recognised what many could feel around them: that modern society was impersonal, fragmented and beset by irresolvable conflicts of value.

In light of all this, it’s not surprising that the concept of rupture appears on several levels in Wittgenstein, Heidegger and Benjamin. They presented their works as breaks in and with the philosophical tradition. They reinterpreted history in terms of rupture, going back and seeking the junctures when pathologies had appeared and possibilities had been foreclosed. They emphasised the leaps of faith and moments of decision that punctuated the course of life.

Even the personal qualities that attract Eilenberger to these individuals – their eccentric behaviour, their search for authenticity – were not theirs alone. They were part of a generational desire to break with the old bourgeois ways, which no doubt seemed the only way to take ownership of such a rapidly changing world.


The politics of crisis is not going away any time soon

This essay was originally published by Palladium magazine on June 10th 2020

A pattern emerges when surveying the vast commentary on the COVID-19 pandemic. At its center is a distinctive image of crisis: the image of a cruel but instructive spotlight laying bare the flaws of contemporary society. Crisis, we read, has “revealed,” “illuminated,” “clarified,” and above all, “exposed” our collective failures and weaknesses. It has unveiled the corruption of institutions, the decadence of culture, and the fragility of a material way of life. It has sounded the death-knell for countless projects and ideals.

“The pernicious coronavirus tore off an American scab and revealed suppurating wounds beneath,” announces one commentator, after noting “these calamities can be tragically instructional…Fundamental but forgotten truths, easily masked in times of calm, reemerge.”

Says another: “Invasion and occupation expose a society’s fault lines, exaggerating what goes unnoticed or accepted in peacetime, clarifying essential truths, raising the smell of buried rot.”

You may not be surprised to learn that these two near-identical comments come from very different interpretations of the crisis. The first, from Trump-supporting historian Victor Davis Hanson of the Hoover Institution, claims that the “suppurating wounds” of American society are an effete liberal elite compromised by their reliance on a malignant China and determined to undermine the president at any cost. According to the second, by The Atlantic’s George Packer, the “smell of buried rot” comes from the Trump administration itself, the product of an oligarchic ascendency whose power stems from the division of society and hollowing-out of the state.

Nothing, it seems, has evaded the extraordinary powers of diagnosis made available by crisis: merciless globalism, backwards nationalism, the ignorance of populists, the naivety of liberals, the feral market, the authoritarian state. We are awash in diagnoses, but diagnosis is only the first step. It is customary to sharpen the reality exposed by the virus into a binary, existential decision: address the weakness identified, or succumb to it. “We’re faced with a choice that the crisis makes inescapably clear,” writes Packer, “the alternative to solidarity is death.” No less ominous is Hanson’s invocation of Pearl Harbor: “Whether China has woken a sleeping giant in the manner of the earlier Japanese, or just a purring kitten, remains to be seen.”

The crisis mindset is not just limited to journalistic sensationalism. Politicians, too, have appealed to a now-or-never, sink-or-swim framing of the COVID-19 emergency. French President Emmanuel Macron has been among those using such terms to pressure Eurozone leaders into finally establishing a collective means of financing debt. “If we can’t do this today, I tell you the populists will win,” Macron told The Financial Times. Across the Atlantic, U.S. Congresswoman Alexandria Ocasio-Cortez has claimed that the pandemic “has just exposed us, the fragility of our system,” and has adopted the language of “life or death” in her efforts to bring together the progressive and centrist wings of the Democratic Party before the presidential election in November.

And yet, in surveying this rhetoric of diagnosis and decision, what is most surprising is how familiar it sounds. Apart from the pathogen itself, there are few narratives of crisis now being aired which were not already well-established during the last decade. Much as the coronavirus outbreak has felt like a sudden rupture from the past, we have already been long accustomed to the politics of crisis.

It was under the mantra of “tough decisions,” with the shadow of the financial crisis still looming, that sharp reductions in public spending were justified across much of the Western world after 2010. Since then, the European Union has been crippled by conflicts over sovereign debt and migration. It was the rhetoric of the Chinese menace and of terminal decline—of “rusted-out factories scattered like tombstones across the landscape of our nation,” to quote the 2017 inaugural address—that brought President Trump to power. Meanwhile, progressives had already mobilized themselves around the language of emergency with respect to inequality and climate change.

There is something deeply paradoxical about all of this. The concept of crisis is supposed to denote a need for exceptional attention and decisive focus. In its original Greek, the term krisis often referred to a decision between two possible futures, but the ubiquity of “crisis” in our politics today has produced only deepening chaos. The sense of emergency is stoked continuously, but the accompanying promises of clarity, agency, and action are never delivered. Far from a revealing spotlight, the crises of the past decade have left us with a lingering fog which now threatens to cloud our vision at a moment when we really do need judicious action.


Crises are a perennial feature of modern history. For half a millennium, human life has been shaped by impersonal forces of increasing complexity and abstraction, from global trade and finance to technological development and geopolitical competition. These forces are inherently unstable and frequently produce moments of crisis, not least due to an exogenous shock like a deadly plague. Though rarely openly acknowledged, the legitimacy of modern regimes has largely depended on a perceived ability to keep that instability at bay.

This is the case even at times of apparent calm, such as the period of U.S. global hegemony immediately following the Cold War. The market revolution of the 1980s and globalization of the 1990s were predicated on a conception of capitalism as an unpredictable, dynamic system which could nonetheless be harnessed and governed by technocratic expertise. Such were the hopes of “the great moderation.” A series of emerging market financial crises—in Mexico, Korea, Thailand, Indonesia, Russia, and Argentina—provided opportunities for the IMF and World Bank to demand compliance with the Washington Consensus in economic policy. Meanwhile, there were frequent occasions for the U.S. to coordinate global police actions in war-torn states.

Despite the façade of independent institutions and international bodies, it was in no small part through such crisis-fighting economic and military interventions that a generation of U.S. leaders projected power abroad and secured legitimacy at home. This model of competence and progress, which seems so distant now, was not based on a sense of inevitability so much as confidence in the capacity to manage one crisis after another: to “stabilize” the most recent eruption of chaos and instability.

A still more striking example comes from the European Union, another product of the post-Cold War era. The project’s main purpose was to maintain stability in a trading bloc soon to be dominated by a reunified Germany. Nonetheless, many of its proponents envisaged that the development of a fully federal Europe would occur through a series of crises, with the supra-national structures of the EU achieving more power and legitimacy at each step. When the Euro currency was launched in 1999, Romano Prodi, then president of the European Commission, spoke of how the EU would extend its control over economic policy: “It is politically impossible to propose that now. But some day there will be a crisis and new instruments will be created.”

It is not difficult to see why Prodi took this stance. Since the rise of the rationalized state two centuries ago, managerial competence has been central to notions of successful governance. In the late 19th century, French sociologist Emile Durkheim compared the modern statesman to a physician: “he prevents the outbreak of illnesses by good hygiene, and seeks to cure them when they have appeared.” Indeed, the bureaucratic structures which govern modern societies have been forged in the furnaces of crisis. Social security programs, income tax, business regulation, and a host of other state functions now taken for granted are a product of upheavals of the 19th and early 20th centuries: total war, breakneck industrialization, famine, and financial panic. If necessity is the mother of invention, crisis is the midwife of administrative capacity.

By the same token, the major political ideologies of the modern era have always claimed to offer some mastery over uncertainty. The locus of agency has variously been situated in the state, the nation, individuals, businesses, or some particular class or group; the stated objectives have been progress, emancipation, greatness, or simply order and stability. But in every instance, the message has been that the chaos endemic to modern history must be tamed or overcome by some paradigmatic form of human action. The curious development of Western modernity, where the management of complex, crisis-prone systems has come to be legitimated through secular mass politics, appears amenable to no other template.

It is against this backdrop that we can understand the period of crisis we have endured since 2008. The narratives of diagnosis and decision which have overtaken politics during this time are variations on a much older theme—one that is present even in what are retrospectively called “times of calm.” The difference is that, where established regimes have failed to protect citizens from instability, the logic of crisis management has burst its technocratic and ideological bounds and entered the wider political sphere. The greatest of these ruptures was captured by a famous statement attributed to Federal Reserve Chairman Ben Bernanke in September 2008. Pleading with Congress to pass a $700 billion bailout, Bernanke claimed: “If we don’t do this now, we won’t have an economy on Monday.”

This remark set the tone for the either/or, act-or-perish politics of the last decade. It points to a loss of control which, in the United States and beyond, opened the way for competing accounts not just of how order could be restored, but also what that order should look like. Danger and disruption have become a kind of opportunity, as political insurgents across the West have captured established parties, upended traditional power-sharing arrangements, and produced the electoral shocks suggested by the ubiquitous phrase “the age of Trump and Brexit.” These campaigns sought to give the mood of crisis a definite shape, directing it towards the need for urgent decision or transformative action, thereby giving supporters a compelling sense of their own agency.


Typically though, such movements do not merely offer a choice between existing chaos and redemption to come. In diagnoses of crisis, there is always an opposing agent who is responsible for and threatening to deepen the problem. We saw this already in Hanson’s and Packer’s association of the COVID-19 crisis with their political opponents. But it was there, too, among Trump’s original supporters, for whom the agents of crisis were not just immigrants and elites but, more potently, the threat posed by the progressive vision for America. This was most vividly laid out in Michael Anton’s infamous “Flight 93 Election” essay, an archetypal crisis narrative which warned fellow conservatives that only Trump could stem the tide of “wholesale cultural and political change,” claiming “if you don’t try, death is certain.”

Yet Trump’s victory only galvanized the radical elements of the left, as it gave them a villain to point to as a way of further raising the consciousness of crisis among their own supporters. The reviled figure of Trump has done more for progressive stances on immigration, healthcare, and climate action than anyone else, for he is the ever-present foil in these narratives of emergency. Then again, such progressive ambitions, relayed on Fox News and social media, have also proved invaluable in further stoking conservatives’ fears.

To simply call this polarization is to miss the point. The dynamic taking shape here is rooted in a shared understanding of crisis, one that treats the present as a time in which the future of society is being decided. There is no middle path, no going back: each party claims that if they do not take this opportunity to reshape society, their opponents will. In this way, narratives of crisis feed off one another, and become the basis for a highly ideological politics—a politics that de-emphasizes compromise with opponents and with the practical constraints of the situation at hand, prioritizing instead the fulfillment of a goal or vision for the future.

Liberal politics is ill-equipped to deal with, or even to properly recognize, such degeneration of discourse. In the liberal imagination, the danger of crisis is typically that the insecurity of the masses will be exploited by a demagogue, who will then transfigure the system into an illiberal one. In many cases, though, it is the system which loses legitimacy first, as the frustrating business of deliberative, transactional politics cannot meet the expectations of transformative change which are raised in the public sphere.

Consider the most iconic and, in recent years, most frequently analogized period of crisis in modern history: Germany’s Weimar Republic of 1918-33. These were the tempestuous years between World War I and Hitler’s dictatorship, during which a fledgling democracy was rocked by armed insurrection, hyperinflation, foreign occupation, and the onset of the Great Depression, all against a backdrop of rapid social, economic, and technological upheaval.

Over the past decade or so, there has been no end of suggestions that ours is a “Weimar moment.” Though echoes have been found in all sorts of social and cultural trends, the overriding tendency has been to view the crises of the Weimar period backwards through their end result, the establishment of Nazi dictatorship in 1933. In various liberal democracies, the most assertive Weimar parallels have referred to the rise of populist and nationalist politics, and in particular, the erosion of constitutional norms by leaders of this stripe. The implication is that history has warned us how the path of crisis can lead towards an authoritarian ending.

What this overlooks, however, is that Weimar society was not just a victim of crisis that stumbled blindly towards authoritarianism, but was active in interpreting what crises revealed and how they should be addressed. In particular, the notion of crisis served the ideological narratives of the day as evidence of the need to refashion the social settlement. Long before the National Socialists began their rise in the early 1930s, these conflicting visions, pointing to one another as evidence of the stakes, sapped the republic’s legitimacy by making it appear impermanent and fungible.

The First World War had left German thought with a pronounced sense of the importance of human agency in shaping history. On the one hand, the scale and brutality of the conflict left survivors adrift in a world of unprecedented chaos, seeming to confirm a suspicion of some 19th century German intellectuals that history had no inherent meaning. But at the same time, the war had shown the extraordinary feats of organization and ingenuity that an industrialized society, unified and mobilized around a single purpose, was capable of. Consequently, the prevailing mood of Weimar was best captured by the popular term Zeitenwende, the turning of the times. Its implication was that the past was irretrievably lost, the present was chaotic and dangerous, but the future was there to be claimed by those with the conviction and technical skill to do so.

Throughout the 1920s, this historical self-consciousness was expressed in the concept of Krisis or Krise, crisis. Intellectual buzzwords referred to a crisis of learning, a crisis of European culture, a crisis of historicism, crisis theology, and numerous crises of science and mathematics. The implication was that these fields were in a state of flux which called for resolution. A similar dynamic could be seen in the political polemics which filled the Weimar press, where discussions of crisis tended to portray the present as a moment of decision or opportunity. According to Rüdiger Graf’s study of more than 370 Weimar-era books and still more journal articles with the term “crisis” in their titles, the concept generally functioned as “a call to action” by “narrow[ing] the complex political world to two exclusive alternatives.”

Although the republic was most popular among workers and social democrats, the Weimar left contained an influential strain of utopian thought which saw itself as working beyond the bounds of formal politics. Here, too, crisis was considered a source of potential. Consider the sentiments expressed by Walter Gropius, founder of the Bauhaus school of architecture and design, in 1919:

Capitalism and power politics have made our generation creatively sluggish, and our vital art is mired in a broad bourgeois philistinism. The intellectual bourgeois of the old Empire…has proven his incapacity to be the bearer of German culture. The benumbed world is now toppled, its spirit is overthrown, and is in the midst of being recast in a new mold.

Gropius was among those intellectuals, artists, and administrators who, often taking inspiration from an idealized image of the Soviet Union, subscribed to the idea of the “new man”—a post-capitalist individual whose self-fulfillment would come from social duty. Urban planning, social policy, and the arts were all seen as means to create the environment in which this new man could emerge.

The “bourgeois of the old Empire,” as Gropius called them, had indeed been overthrown; but in their place came a reactionary modernist movement, often referred to as the “conservative revolution,” whose own ideas of political transformation used socialism both as inspiration and as ideological counterpoint. In the works of Ernst Jünger, technology and militarist willpower were romanticized as dynamic forces which could pull society out of decadence. Meanwhile, the political theorist Carl Schmitt emphasized the need for a democratic polity to achieve a shared identity in opposition to a common enemy, a need sometimes better accomplished by the decisive judgments of a sovereign dictator than by a fractious parliamentary system.

Even some steadfast supporters of the republic, like the novelist Heinrich Mann, seized on the theme of crisis as a call to transformative action. In a 1923 speech, against a backdrop of hyperinflation and the occupation of the Ruhr by French forces, Mann insisted that the republic should resist the temptation of nationalism, and instead fulfill its promise as a “free people’s state” by dethroning the “blood-gorging” capitalists who still controlled society in their own interests.

These trends were not confined to rhetoric and intellectual discussion. They were reflected in practical politics by the tendency of even trivial issues to be treated as crises that raised fundamental conflicts of worldview. So it was that, in 1926, a government was toppled by a dispute over the regulations for the display of the republican flag. Meanwhile, representatives were harangued by voters who expected them to embody the uncompromising ideological clashes taking place in the wider political sphere. In towns and cities across the country, rival marches and processions signaled the antagonism of socialists and their conservative counterparts—the burghers, professionals and petite bourgeoisie who would later form the National Socialist coalition, and who by mid-decade had already coalesced around President Paul von Hindenburg.


We are not Weimar. The ideologies of that era, and the politics that flowed from them, were products of their time, and there were numerous contingent reasons why the republic faced an uphill battle for acceptance. Still, there are lessons. The conflict between opposing visions of society may seem integral to the spirit of democratic politics, but at times of crisis, it can be corrosive to democratic institutions. The either/or mindset can add a whole new dimension to whatever emergency is at hand, forcing what is already a time of disorientating change into a zero-sum competition between grand projects and convictions that leave ordinary, procedural politics looking at best insignificant, and at worst an obstacle.

But sometimes this kind of escalation is simply unavoidable. Crisis ideologies amplify, but do not create, a desire for change. The always-evolving material realities of capitalist societies frequently create circumstances that are untenable, and which cannot be sufficiently addressed by political systems prone to inertia and capture by vested interests. When such a situation erupts into crisis, incremental change and a moderate tone may already be off the table. If your political opponent is electrifying voters with the rhetoric of emergency, the only option might be to fight fire with fire.

There is also a hypocrisy innate to democratic politics which makes the reality of how severe crises are managed something of a dirty secret. Politicians like to invite comparisons with past leaders who acted decisively during crises, whether it be French president Macron’s idolization of Charles de Gaulle, the progressive movement in the U.S. and elsewhere taking Franklin D. Roosevelt as their inspiration, or virtually every British leader’s wish to be likened to Winston Churchill. What is not acknowledged is the shameful compromises that accompanied these leaders’ triumphs. De Gaulle’s opportunity to found the French Fifth Republic came amid threats of a military coup. Roosevelt’s New Deal could only be enacted with the backing of Southern Democratic politicians, and as such, effectively excluded African Americans from its most important programs. Allied victory in the Second World War, the final fruit of Churchill’s resistance, came at the price of ceding Eastern and Central Europe to Soviet tyranny.

Such realities are especially difficult to bear because the crises of the past are a uniquely unifying force in liberal democracies. It was often through crises, after all, that rights were won, new institutions forged, and loyalty and sacrifice demonstrated. We tend to imagine those achievements as acts of principled agency which can be attributed to society as a whole, whereas they were just as often the result of improvisation, reluctant concession, and tragic compromise.

Obviously, we cannot expect a willingness to bend principles to be treated as a virtue, and nor, perhaps, should we want it to. But we can acknowledge the basic degree of pragmatism which crises demand. This is the most worrying aspect of the narratives of decision surrounding the current COVID-19 crisis: still rooted in the projects and preoccupations of the past, they threaten to render us inflexible at a moment when we are entering uncharted territory.

Away from the discussions about what the emergency has revealed and the action it demands, a new era is being forged by governments and other institutions acting on a more pressing set of motives—in particular, maintaining legitimacy in the face of sweeping political pressures and staving off the risk of financial and public health catastrophes. It is also being shaped from the ground up, as countless individuals have changed their behavior in response to an endless stream of graphs, tables, and reports in the media.

Political narratives simply fail to grasp the contingency of this situation. Commentators talk about the need to reduce global interdependence, even as the architecture of global finance has been further built up by the decision of the Federal Reserve, in March, to support it with unprecedented amounts of dollar liquidity. They continue to argue within a binary of free market and big government, even as staunchly neoliberal parties endorse state intervention in their economies on a previously unimaginable scale. Likewise with discussions about climate policy or Western relations with China—the parameters within which these strategies will have to operate are simply unknown.

To reduce such complex circumstances to simple, momentous decisions is to offer us more clarity and agency than we actually possess. Nonetheless, that is how this crisis will continue to be framed, as political actors strive to capture the mood of emergency. It will only make matters worse, though, if our judgment remains colored by ambitions and resentments which were formed in earlier crises. If we continue those old struggles on this new terrain, we will swiftly lose our purchase on reality. We will be incapable of a realistic appraisal of the constraints now facing us, and without such realistic appraisal, no solution can be effectively pursued.

What was Romanticism? Putting the “counter-Enlightenment” in context

In his latest book Enlightenment Now: The Case for Reason, Science, Humanism and Progress, Steven Pinker heaps a fair amount of scorn on Romanticism, the movement in art and philosophy which spread across Europe during the late-18th and 19th centuries. In Pinker’s Manichean reading of history, Romanticism was the malign counterstroke to the Enlightenment: its goal was to quash those values listed in his subtitle. Thus, the movement’s immense diversity and ambiguity are reduced to a handful of ideas, which show that the Romantics favored “the heart over the head, the limbic system over the cortex.” This provides the basis for Pinker to label “Romantic” various irrational tendencies that are still with us, such as nationalism and reverence for nature.

In the debates following Enlightenment Now, many have continued to use Romanticism simply as a suitcase term for “counter-Enlightenment” modes of thought. Defending Pinker in Areo, Bo Winegard and Benjamin Winegard do produce a concise list of Romantic propositions. But again, their version of Romanticism is deliberately anachronistic, providing a historical lineage for the “modern romantics” who resist Enlightenment principles today.

As it happens, this dichotomy does not appeal only to defenders of the Enlightenment. In his book Age of Anger, published last year, Pankaj Mishra explains various 21st century phenomena — including right-wing populism and Islamism — as reactions to an acquisitive, competitive capitalism that he traces directly back to the 18th century Enlightenment. This, says Mishra, is when “the unlimited growth of production . . . steadily replaced all other ideas of the human good.” And who provided the template for resisting this development? The German Romantics, who rejected the Enlightenment’s “materialist, individualistic and imperialistic civilization in the name of local religious and cultural truth and spiritual virtue.”

Since the Second World War, it has suited liberals, Marxists, and postmodernists alike to portray Romanticism as the mortal enemy of Western rationalism. This can convey the impression that history has long consisted of the same struggle we are engaged in today, with the same teams fighting over the same ideas. But even a brief glance at the Romantic era suggests that such narratives are too tidy. These were chaotic times. Populations were rising, people were moving into cities, the industrial revolution was occurring, and the first mass culture emerging. Europe was wracked by war and revolution, nations won and lost their independence, and modern politics was being born.

So I’m going to try to explain Romanticism and its relationship with the Enlightenment in a bit more depth. And let me say this up front: Romanticism was not a coherent doctrine, much less a concerted attack on or rejection of anything. Put simply, the Romantics were a disparate constellation of individuals and groups who arrived at similar motifs and tendencies, partly by inspiration from one another, partly due to underlying trends in European culture. In many instances, their ideas were incompatible with, or indeed hostile towards, the Enlightenment and its legacy. On the other hand, there was also a good deal of mutual inspiration between the two.


Sour grapes

The narrative of Romanticism as a “counter-Enlightenment” often begins in the mid-18th century, when several forerunners of the movement appeared. The first was Jean-Jacques Rousseau, whose Social Contract famously asserts “Man is born free, but everywhere he is in chains.” Rousseau portrayed civilization as decadent and morally compromised, proposing instead a society of minimal interdependence where humanity would recover its natural virtue. Elsewhere in his work he also idealized childhood, and celebrated the outpouring of subjective emotion.

In fact various Enlightenment thinkers, Immanuel Kant in particular, admired Rousseau’s ideas; Rousseau was arguing that, left to their own devices, ordinary people would use reason to discover virtue. Nonetheless, he was clearly attacking the principle of progress, and his apparent motivations for doing so were portentous. Rousseau had been associated with the French philosophes — men such as Thiry d’Holbach, Denis Diderot, Claude Helvétius and Jean d’Alembert — who were developing the most radical strands of Enlightenment thought, including materialist philosophy and atheism. But crucially, they were doing so within a rather glamorous, cosmopolitan milieu. Though they were monitored and harassed by the French ancien régime, many of the philosophes were nonetheless wealthy and well-connected figures, their Parisian salons frequented by intellectuals, ambassadors and aristocrats from across Europe.

Rousseau decided the Enlightenment belonged to a superficial, hedonistic elite, and essentially styled himself as a God-fearing voice of the people. This turned out to be an important precedent. In Prussia, where a prolific Romantic movement would emerge, such antipathy towards the effete culture of the French was widespread. For much to the frustration of Prussian intellectuals and artists — many of whom were Pietist Christians from lowly backgrounds — their ruler Frederick the Great was an “Enlightened despot” and dedicated Francophile. He subscribed to Melchior Grimm’s Correspondence Littéraire, which brought the latest ideas from Paris; he hosted Voltaire at his court as an Enlightenment mascot; he conducted affairs in French, his first language.

This is the background against which we find Johann Gottfried Herder, whose ideas about language and culture were deeply influential to Romanticism. He argued that one can only understand the world via the linguistic concepts that one inherits, and that these reflect the contingent evolution of one’s culture. Hence in moral terms, different cultures occupy significantly different worlds, so their values should not be compared to one another. Nor should they be replaced with rational schemes dreamed up elsewhere, even if this means that societies are bound to come into conflict.

Rousseau and Herder anticipated an important cluster of Romantic themes. Among them are the sanctity of the inner life, of folkways and corporate social structures, of belonging, of independence, and of things that cannot be quantified. And given the apparent bitterness of Herder and some of his contemporaries, one can see why Isaiah Berlin declared that all this amounted to “a very grand form of sour grapes.” Berlin takes this line too far, but there is an important insight here. During the 19th century, with the rise of the bourgeoisie and of government by utilitarian principles, many Romantics would show a similar resentment towards “sophisters, economists, and calculators,” as Edmund Burke famously called them. Thus Romanticism must be seen in part as coming from people denied status in a changing society.

Then again, Romantic critiques of excessive uniformity and rationality were often made in the context of developments that were quite dramatic. During the 1790s, it was the French Revolution’s degeneration into tyranny that led first-generation Romantics in Germany and England to fear the so-called “machine state,” or government by rational blueprint. Similarly, the appalling conditions that marked the first phase of the industrial revolution lay behind some later Romantics’ revulsion at industrialism itself. John Ruskin celebrated medieval production methods because “men were not made to work with the accuracy of tools,” with “all the energy of their spirits . . . given to make cogs and compasses of themselves.”

And ultimately, it must be asked if opposition to such social and political changes was opposition to the Enlightenment itself. The answer, of course, depends on how you define the Enlightenment, but with regard to Romanticism we can only make the following generalization. Romantics believed that ideals such as reason, science, and progress had been elevated at the expense of values like beauty, expression, or belonging. In other words, they thought the Enlightenment paradigm established in the 18th century was limited. This is well captured by Percy Shelley’s comment in 1821 that although humanity owed enormous gratitude to philosophers such as John Locke and Voltaire, only Rousseau had been more than a “mere reasoner.”

And yet, in perhaps the majority of cases, this did not make Romantics hostile to science, reason, or progress as such. For it did not seem to them, as it can seem to us in hindsight, that these ideals must inevitably produce arrangements such as industrial capitalism or technocratic government. And for all their sour grapes, they often had reason to suspect those whose ascent to wealth and power rested on this particular vision of human improvement.


“The world must be romanticized”

One reason Romanticism is often characterized as against something — against the Enlightenment, against capitalism, against modernity as such — is that it seems like the only way to tie the movement together. In the florescence of 19th century art and thought, Romantic motifs were arrived at from a bewildering array of perspectives. In England during the 1810s, for instance, radical, progressive liberals such as Shelley and Lord Byron celebrated the crumbling of empires and of religion, and glamorized outcasts and oppressed peoples in their poetry. They were followed by arch-Tories like Thomas Carlyle and Ruskin, whose outlook was fundamentally paternalistic. Other Romantics migrated across the political spectrum during their lifetimes, bringing their themes with them.

All this is easier to understand if we note that a new sensibility appeared in European culture during this period, remarkable for its idealism and commitment to principle. Disparaged in England as “enthusiasm,” and in Germany as Schwärmerei or fanaticism, this sensibility is best glimpsed in some of the era’s celebrities. There was Beethoven, celebrated as a model of the passionate and impoverished genius; there was Byron, the rebellious outsider who received locks of hair from female fans; and there was Napoleon, seen as an embodiment of untrammeled willpower.

Curiously, though, while this Romantic sensibility was a far cry from the formality and refinement which had characterized the preceding age of Enlightenment, it was inspired by many of the same ideals. To illustrate this, and to expand on some key Romantic concepts, I’m going to focus briefly on a group that came together in Prussia at the turn of the 19th century, known as the Jena Romantics.

The Jena circle — centred around Ludwig Tieck, Friedrich and August Schlegel, Friedrich Hölderlin, and the writer known as Novalis — have often been portrayed as scruffy bohemians, a conservative framing that seems to rest largely on their liberal attitudes to sex. But this does give us an indication of the group’s aims: they were interested in questioning convention, and pursuing social progress (their journal Das Athenäum was among the few to publish female writers). They were children of the Enlightenment in other respects, too. They accepted that rational skepticism had ruled out traditional religion and superstition, and that science was a tool for understanding reality. Their philosophy, however, shows an overriding desire to reconcile these capacities with an inspiring picture of culture, creativity, and individual fulfillment. And so they began by adapting the ideas of two major Enlightenment figures: Immanuel Kant and Benedict Spinoza.

Kant, who spent his entire life in Prussia, had impressed on the Romantics the importance of one dilemma in particular: how was human freedom possible given that nature was determined? But rather than follow Kant down the route of transcendental freedom, the Jena school tried to update the universe Spinoza had described a century earlier, which was a single deterministic entity governed by a mechanical sequence of cause and effect. Conveniently, this mechanistic model had been called into doubt by contemporary physics. So they kept the integrated, holistic quality of Spinoza’s nature, but now suggested that it was suffused with another Kantian idea — that of organic force or purpose.

Consequently, the Jena Romantics arrived at an organic conception of the universe, in which nature expressed the same omnipresent purpose in all its manifestations, up to and including human consciousness. Thus there was no discrepancy between mental activity and matter, and the Romantic notion of freedom as a channelling of some greater will was born. After all, nature must be free because, as Spinoza had argued, there is nothing outside nature. Therefore, in Friedrich Schlegel’s words, “Man is free because he is the highest expression of nature.”

Various concepts flowed from this, the most consequential being a revolutionary theory of art. Whereas the existing neo-classical paradigm had assumed that art should hold a mirror up to nature, reflecting its perfection, the Romantics now stated that the artist should express nature, since he is part of its creative flow. What this entails, moreover, is something like a primitive notion of the unconscious. For this natural force comes to us through the profound depths of language and myth; it cannot be definitively articulated, only grasped at through symbolism and allegory.

Such longing for the inexpressible, the infinite, the unfathomable depth thought to lie beneath the surface of ordinary reality, is absolutely central to Romanticism. And via the Jena school, it produces an ideal which could almost serve as a Romantic program: being-through-art. The modern condition, August Schlegel says, is the sensation of being adrift between two idealized figments of our imagination: a lost past and an uncertain future. So ultimately, we must embrace our frustrated existence by making everything we do a kind of artistic expression, allowing us to move forward despite knowing that we will never reach what we are aiming for. This notion that you can turn just about anything into a mystery, and thus into a field for action, is what Novalis alludes to in his famous statement that “the world must be romanticized.”

It appears there’s been something of a detour here: we began with Spinoza and have ended with obscurantism and myth. But as Frederick Beiser has argued, this baroque enterprise was in many ways an attempt to radicalize the 18th century Enlightenment. Indeed, the central thesis that our grip on reality is not certain, but we must embrace things as they seem to us and continue towards our aims, was almost a parody of the skepticism advanced by David Hume and by Kant. Moreover, and more ominously, the Romantics amplified the Enlightenment principle of self-determination, producing the imperative that individuals and societies must pursue their own values.


The Romantic legacy

It is beyond doubt that some Romantic ideas had pernicious consequences, the most demonstrable being a contribution to German nationalism. By the end of the 19th century, when Prussia had become the dominant force in a unified Germany and Richard Wagner’s feverish operas were being performed, the Romantic fascination with national identity, myth, and the active will had evolved into something altogether menacing. Many have taken the additional step, which is not a very large one, of implicating Romanticism in the fascism of the 1930s.

A more tenuous claim is that Romanticism (and German Romanticism especially) contains the origins of the postmodern critique of the Enlightenment, and of Western civilization itself, which is so current among leftist intellectuals today. As we have seen, there was in Romanticism a strong strain of cultural relativism — which is to say, relativism about values. But postmodernism has at its core a relativism about facts, a denial of the possibility of reaching objective truth by reason or observation. This nihilistic stance is far from the skepticism of the Jena school, which was fundamentally a means for creative engagement with the world.

But whatever we make of these genealogies, remember that we are talking about developments, progressions over time. We are not saying that Romanticism was in any meaningful sense fascistic, postmodernist, or whichever other adjective appears downstream. I emphasize this because if we identify Romanticism with these contentious subjects, we will overlook its myriad more subtle contributions to the history of thought.

Many of these contributions come from what I described earlier as the Romantic sensibility: a variety of intuitions that seem to have taken root in Western culture during this era. For instance, that one should remain true to one’s own principles at any cost; that there is something tragic about the replacement of the old and unusual with the uniform and standardized; that different cultures should be appreciated on their own terms, not on a scale of development; that artistic production involves the expression of something within oneself. Whether these intuitions are desirable is open to debate, but the point is that the legacy of Romanticism cannot be compartmentalized, for it has colored many of our basic assumptions.

This is true even of ideas that we claim to have inherited from the Enlightenment. For some of these were modified, and arguably enriched, as they passed through the Romantic era. An explicit example comes from John Stuart Mill, the founding figure of classical liberalism. Mill inherited from his father and from Jeremy Bentham a very austere version of utilitarian ethics. This posited as its goal the greatest good for the greatest number of people; but its notion of the good did not account for the value of culture, spirituality, and a great many other things we now see as intrinsic to human flourishing. As Mill recounts in his autobiography, he realized these shortcomings by reading England’s first-generation Romantics, William Wordsworth and Samuel Taylor Coleridge.

This is why, in 1840, Mill bemoaned the fact that his fellow progressives thought they had nothing to learn from Coleridge’s philosophy, warning them that “the besetting danger is not so much of embracing falsehood for truth, as of mistaking part of the truth for the whole.” We are committing a similar error today when we treat Romanticism simply as a “counter-Enlightenment.” Ultimately this limits our understanding not just of Romanticism but of the Enlightenment as well.


This essay was first published in Areo Magazine on June 10 2018.

Social media’s turn towards the grotesque

This essay was first published by Little Atoms on 09 August 2018. The image accompanying it is a detail from an original illustration by Jacob Stead.

Until recently it seemed safe to assume that what most people wanted on social media was to appear attractive. Over the last decade, the major concerns about self-presentation online have been focused on narcissism and, for women especially, unrealistic standards of beauty. But just as it is becoming apparent that some behaviours previously interpreted as narcissistic – selfies, for instance – are simply new forms of communication, it is also no longer obvious that the rules of this game will remain those of the beauty contest. In fact, as people derive an ever-larger proportion of their social interaction here, the aesthetics of social media are moving distinctly towards the grotesque.

When I use the term grotesque, I do so in a technical sense. I am referring to a manner of representing things – the human form especially – which is not just bizarre or unsettling, but which creates a sense of indeterminacy. Familiar features are distorted, and conventional boundaries dissolved.

Instagram, notably, has become the site of countless bizarre makeup trends among its large demographic of young women and girls. These transformations range from the merely dramatic to the carnivalesque, including enormous lips, nose-hair extensions, eyebrows sculpted into every shape imaginable, and glitter coated onto everything from scalps to breasts. Likewise, the popularity of Snapchat has led to a proliferation of face-changing apps which revel in cartoonish distortions of appearance. Eyes are expanded into enormous saucers, faces are ghoulishly elongated or squashed, and animal features are tacked onto heads. These images, interestingly, are also making their way onto dating app profiles.

Of course for many people such tools are simply a way, as one reviewer puts it, “to make your face more fun.” There is something singularly playful in embracing such plasticity: see for instance the creative craze “#slime”, which features videos of people playing with colourful gooey substances, and has over eight million entries on Instagram. But if you follow the threads of garishness and indeterminacy through the image-oriented realms of the internet, deeper resonances emerge.

The pop culture embraced by Millennials and the so-called Generation C (born after 2000) reflects a fascination with brightly adorned, shape-shifting and sexually ambiguous personae. If performers like Miley Cyrus and Lady Gaga were forerunners of this tendency, they are now joined by darker, more refined figures such as Sophie and Arca from the dance music scene. Meanwhile fashion, photography and video abound with kitsch, quasi-surreal imagery of the kind popularised by Dazed magazine. Celebrated subcultures such as Japan’s “genderless Kei,” who are characterised by bright hairstyles and makeup, are also part of this picture.

But the most striking examples of this turn towards the grotesque come from art forms emerging within digital culture itself. It is especially well illustrated by Porpentine, a game designer working with the platform Twine, whose disturbing interactive poems have achieved something of a cult status. They typically place readers in the perspective of psychologically and socially insecure characters, leading them through violent urban futurescapes reminiscent of William Burroughs’s Naked Lunch. The New York Times aptly describes her games as “dystopian landscapes peopled by cyborgs, intersectional empresses and deadly angels,” teeming with “garbage, slime and sludge.”

These are all manifestations both of a particular sensibility which is emerging in parts of the internet, and more generally of a new way of projecting oneself into public space. To spend any significant time in the networks where such trends appear is to become aware of a certain model of identity being enacted, one that is mercurial, effervescent, and boldly expressive. And while the attitudes expressed vary from anxious subjectivity to humorous posturing – as well as, at times, both simultaneously – in most instances one senses that the online persona has become explicitly artificial, plastic, or even disposable.

*   *   *

Why, though, would a paradigm of identity such as this invite expression as the grotesque? Interpreting these developments is not easy given that digital culture is so diffuse and rapidly evolving. One approach that seems natural enough is to view them as social phenomena, arising from the nature of online interaction. Yet to take this approach is immediately to encounter a paradox of sorts. If “the fluid self” represents “identity as a vast and ever-changing range of ideas that should all be celebrated” (according to trend forecaster Brenda Milis), then why does it seem to conform to generic forms at all? It is a contradiction that might, in fact, prove enlightening.

One frame which has been widely applied to social media is sociologist Erving Goffman’s “dramaturgical model,” as outlined in his 1959 book The Presentation of Self in Everyday Life. According to Goffman, identity can be understood in terms of a basic dichotomy, which he explains in terms of “Front Stage” and “Back Stage.” Our “Front Stage” identity, when we are interacting with others, is highly responsive to context. It is preoccupied with managing impressions and assessing expectations so as to present what we consider a positive view of ourselves. In other words, we are malleable in the degree to which we are willing to tailor our self-presentation.

The first thing to note about this model is that it allows for dramatic transformations. If you consider the degree of detachment enabled by projecting ourselves into different contexts through words and imagery, and empathising with others on the same basis, then the stage is set for more or less anything becoming normative within a given peer group. As for why people would want to take this expressive potential to unusual places, it seems reasonable to speculate that in many cases, the role we want to perform is precisely that of someone who doesn’t care what anyone thinks. But since most of us do in fact care, we might end up, ironically enough, expressing this within certain established parameters.

But focusing too much on social dynamics risks underplaying the undoubted sense of freedom associated with the detachment from self in online interaction. Yes, there is peer pressure here, but within these bounds there is also a palpable euphoria in escaping mundane reality. The neuroscientist Susan Greenfield has made this point while commenting on the “alternative identity” embraced by young social media users. The ability to depart from the confines of stable identity, whether by altering your appearance or enacting a performative ritual, essentially opens the door to a world of fantasy.

With this in mind, we could see the digital grotesque as part of a cultural tradition that offers us many precedents. Indeed, this year marks the 200th anniversary of perhaps the greatest precedent of all: Mary Shelley’s iconic novel Frankenstein. The great anti-hero of that story, the monster who is assembled and brought to life by the scientist Victor Frankenstein, was regarded by later generations as an embodiment of all the passions that society requires the individual to suppress – passions that the artist, in the act of creation, has special access to. The uncanny appearance and emotional crises of Frankenstein’s monster thus signify the potential for unknown depths of expression, strange, sentimental, and macabre.

That notion of the grotesque as something uniquely expressive and transformative was and has remained prominent in all of the genres with which Frankenstein is associated – romanticism, science fiction, and the gothic. It frequently aligns itself with the irrational and surreal landscapes of the unconscious, and with eroticism and sexual deviancy; the films of David Lynch are emblematic of this crossover. In modern pop culture a certain glamourised version of the grotesque, which subverts rigid identity with makeup and fashion, appeared in the likes of David Bowie and Marilyn Manson.

Are today’s online avatars potentially incarnations of Frankenstein’s monster, tempting us with unfettered creativity? The idea has been explored by numerous artists over the last decade. Ed Atkins is renowned for his humanoid characters, their bodies defaced by crude drawings, who deliver streams of consciousness fluctuating between the poetic and the absurd. Jon Rafman, meanwhile, uses video and animation to piece together entire composite worlds, mapping out what he calls “the anarchic psyche of the internet.” Reflecting on his years spent exploring cyberspace, Rafman concludes: “We’ve reached a point where we’re enjoying our own nightmares.”

*   *   *

It is possible that the changing aesthetics of the Internet reflect both the social pressures and the imaginative freedoms I’ve tried to describe, or perhaps even the tension between them. One thing that seems clear, though, is that the new notions of identity emerging here will have consequences beyond the digital world. Even if we accept in some sense Goffman’s idea of a “Back Stage” self, which resumes its existence when we are not interacting with others, the distinction is ultimately illusory. The roles and contexts we occupy inevitably feed back into how we think of ourselves, as well as our views on a range of social questions. Some surveys already suggest a generational shift in attitudes to gender, for instance.

That paradigms of identity shift in relation to technological and social changes is scarcely surprising. The first half of the 20th century witnessed the rise of a conformist culture, enabled by mass production, communication, and ideology, and often directed by the state. This then gave way to the era of the unique individual promoted by consumerism. As for the balance of psychological benefits and problems that will arise as online interaction grows, that is a notoriously contentious question requiring more research.

There is, however, a bigger picture here that deserves attention. The willingness of people to assume different identities online is really part of a much broader current being borne along by technology and design — one whose general direction is to enable individuals to modify and customise themselves in a wide range of ways. Whereas throughout the 20th century designers and advertisers were instrumental in shaping how we interpreted and expressed our social identity — through clothing, consumer products, and so on — this function is now increasingly being assumed by individuals within social networks.

Indeed, designers and producers are surrendering control of both the practical and the prescriptive aspects of their trade. 3D printing is just one example of how, in the future, tools and not products will be marketed. In many areas, the traditional hierarchy of ideas has been reversed, as those who used to call the tune are now trying to keep up with and capitalise on trends that emerge from their audiences. One can see this loss of influence in an aesthetic trend that seems to run counter to those I’ve been observing here, but which ultimately reflects the same reality. From fashion to furniture, designers are making neutral products which can be customised by an increasingly identity-conscious, changeable audience.

Currently, the personal transformations taking place online rely for the most part on software; the body itself is not seriously altered. But with scientific fields such as bioengineering expanding in scope, this may not be the case for long. Alice Rawsthorn has considered the implications: “As our personal identities become subtler and more singular, we will wish to make increasingly complex and nuanced choices about the design of many aspects of our lives… We will also have more of the technological tools required to do so.” If this does turn out to be the case, we will face considerable ethical dilemmas regarding the uses and more generally the purpose of science and technology.

When did death become so personal?


I have a slightly gloomy but, I think, not unreasonable view of birthdays, which is that they are really all about death. It rests on two simple observations. First, much as they pretend otherwise, people do generally find birthdays to be poignant occasions. And second, a milestone can have no poignancy which does not ultimately come from the knowledge that the journey in question must end. (Would an eternal being find poignancy in ageing, nostalgia, or anything else associated with the passing of time? Surely not in the sense that we use the word). In any case, I suspect most of us are aware that at these moments when our life is quantified, we are in some sense facing our own finitude. What I find interesting, though, is that to acknowledge this is verboten. In fact, we seem to have designed a whole edifice of niceties and diversions – cards, parties, superstitions about this or that age – to avoid saying it plainly.

Well it was my birthday recently, and it appears at least one of my friends got the memo. He gave me a copy of Hans Holbein’s Dance of Death, a sequence of woodcuts composed in 1523-5. They show various classes in society being escorted away by a Renaissance version of the grim reaper – a somewhat cheeky-looking skeleton who plays musical instruments and occasionally wears a hat. He stands behind The Emperor, hands poised to seize his crown; he sweeps away the coins from The Miser’s counting table; he finds The Astrologer lost in thought, and mocks him with a skull; he leads The Child away from his distraught parents.

Hans Holbein, “The Astrologer” and “The Child,” from “The Dance of Death” (1523-5)

It is striking for the modern viewer to see death out in the open like this. But the “dance of death” was a popular genre that, before the advent of the printing press, had adorned the walls of churches and graveyards. Needless to say, this reflects the fact that in Holbein’s time, death came frequently, often without warning, and was handled (both literally and psychologically) within the community. Historians speculate about what pre-modern societies really believed regarding death, but belief is a slippery concept when death is part of the warp and weft of culture, encountered daily through ritual and artistic representations. It would be a bit like asking the average person today what their “beliefs” are about sex – where to begin? Likewise in Holbein’s woodcuts, death is complex, simultaneously a bringer of humour, justice, grief, and consolation.

Now let me be clear, I am not trying to romanticise a world before antibiotics, germ theory, and basic sanitation. In such a world, with child mortality being what it was, you and I would most likely be dead already. Nonetheless, the contrast with our own time (or at least with certain cultures, and more about that later) is revealing. When death enters the public sphere today – which is to say, fictional and news media – it rarely signifies anything, for there is no framework in which it can do so. It is merely a dramatic device, injecting shock or tragedy into a particular set of circumstances. The best an artist can do now is to expose this vacuum, as the photographer Jo Spence did in her wonderful series The Final Project, turning her own death into a kitsch extravaganza of joke-shop masks and skeletons.

From Jo Spence, “The Final Project,” 1991-2, courtesy of The Jo Spence Memorial Archive and Richard Saltoun Gallery

And yet, to say that modern secular societies ignore or avoid death is, in my view, to miss the point. It is rather that we place the task of interpreting mortality squarely and exclusively upon the individual. In other words, if we lack a common means of understanding death – a language and a liturgy, if you like – it is first and foremost because we regard that as a private affair. This convention is hinted at by euphemisms like “life is short” and “you only live once,” which acknowledge that our mortality has a bearing on our decisions, but also imply that what we make of that is down to us. It is also apparent, I think, in our farcical approach to birthdays.

Could it be that, thanks to this arrangement, we have actually come to feel our mortality more keenly? I’m not sure. But it does seem to produce some distinctive experiences, such as the one described in Philip Larkin’s famous poem “Aubade” (first published in 1977):

Waking at four to soundless dark, I stare.
In time the curtain-edges will grow light.
Till then I see what’s really always there:
Unresting death, a whole day nearer now,
Making all thought impossible but how
And where and when I shall myself die.

Larkin’s sleepless narrator tries to persuade himself that humanity has always struggled with this “special way of being afraid.” He dismisses as futile the comforts of religion (“That vast moth-eaten musical brocade / Created to pretend we never die”), as well as the “specious stuff” peddled by philosophy over the centuries. Yet in the final stanza, as he turns to the outside world, he nonetheless acknowledges what does make his fear special:

telephones crouch, getting ready to ring
In locked-up offices, and all the uncaring
Intricate rented world begins to rouse.

Work has to be done.
Postmen like doctors go from house to house.

There is a dichotomy here, between a personal world of introspection, and a public world of routine and action. The modern negotiation with death is confined to the former: each in our own house.


*     *     *


When did this internalisation of death occur, and why? Many reasons spring to mind: the decline of religion, the rise of Freudian psychology in the 20th century, the discrediting of a socially meaningful death by the bloodletting of the two world wars, and the rise of liberal consumer societies, which assign death to the “personal beliefs” category and would rather people focused on their desires in the here and now. No doubt all of these have had some part to play. But there is also another way of approaching this question, which is to ask whether there isn’t some sense in which we actually savour this private relationship with our mortality that I’ve outlined, whatever the burden we incur as a result. Seen from this angle, there is perhaps an interesting story about how these attitudes evolved.

I direct you again to Holbein’s Dance of Death woodcuts. As I’ve said, what is notable from our perspective is that they picture death within a traditional social context. But as it turns out, these images also reflect profound changes that were taking place in Northern Europe during the early modern era. Most notably, Martin Luther’s Protestant Reformation had erupted less than a decade before Holbein composed them. And among the many factors which led to that Reformation was a tendency that had begun emerging within Christianity during the preceding century, and which would be enormously influential in the future. This tendency was piety, which stressed the importance of the individual’s emotional relationship to God.

As Ulinka Rublack notes in her commentary on The Dance of Death, one of the early contributions of piety was the convention of representing death as a grisly skeleton. This figure, writes Rublack, “tested its onlooker’s immunity to spiritual anxiety,” since those who were firm in their convictions “could laugh back at Death.” In other words, buried within Holbein’s rich and varied portrayal of mortality was already, in embryonic form, an emotionally charged, personal confrontation with death. Nor was piety the only sign of this development in early modern Europe.

Hans Holbein, The Ambassadors (1533)

In 1533, Holbein produced another, much more famous work dealing with death: his painting The Ambassadors. Here we see two young members of Europe’s courtly elite standing either side of a table, on which are arrayed various objects that symbolise a certain Renaissance ideal: a life of politics, art, and learning. There are globes, scientific instruments, a lute, and references to the ongoing feud within the church. The most striking feature of the painting, however, is the enormous skull which hovers inexplicably in the foreground, fully perceptible only from a sidelong angle. This remarkable and playful item signals the arrival of another way of confronting death, which I describe as decadent. It is not serving any moral or doctrinal message, but illuminating what is most precious to the individual: status, ambition, accomplishment.

The basis of this decadent stance is as follows: death renders meaningless our worldly pursuits, yet at the same time makes them seem all the more urgent and compelling. It would be expounded in a still more iconic Renaissance artwork: Shakespeare’s Hamlet (1599). It is no coincidence that the two most famous moments in this play are both direct confrontations with death. One is, of course, the “To be or not to be” soliloquy; the other is the graveyard scene, in which Hamlet holds a jester’s skull and asks: “Where be your gibes now, your gambols, your songs, your flashes of merriment, that were wont to set the table on a roar?” These moments are indeed crucial, for they suggest why the tragic hero, famously, cannot commit to action. As he weighs up various decisions from the perspective of mortality, he becomes intoxicated by the nuances of meaning and meaninglessness. He dithers because ultimately, such contemplation itself is what makes him feel, as it were, most alive.

All of this is happening, of course, within the larger development that historians like to call “the birth of the modern individual.” But as the modern era progresses, I think there are grounds to say that these two approaches – the pious and the decadent – will be especially influential in shaping how certain cultures view the question of mortality. And although there is an important difference between them insofar as one addresses itself to God, they also share something significant: a mystification of the inner life, of the agony and ecstasy of the individual soul, at the expense of religious orthodoxy and other socially articulated ideas about life’s purpose and meaning.

During the 17th century, piety became the basis of Pietism, a Lutheran movement that enshrined an emotional connection with God as the most important aspect of faith. Just as pre-Reformation piety may have been a response, in part, to the ravages of the Black Death, Pietism emerged from the utter devastation wreaked in Germany by the Thirty Years War. Its worship was based on private study of the Bible, alone or in small groups (sometimes called “churches within a church”), and on evangelism in the wider community. In Pietistic sermons, the problem of our finitude – of our time in this world – is often bound up with a sense of mystery regarding how we ought to lead our lives. Everything points towards introspection, a search for duty. We can judge how important these ideas were to the consciousness of Northern Europe and the United States simply by naming two individuals who came strongly under their influence: Immanuel Kant and John Wesley.

It was also from the Central German heartlands of Pietism that, in the late 18th century, Romanticism was born – a movement which took the decadent fascination with death far beyond what we find in Hamlet. Goethe’s novel The Sorrows of Young Werther, in which the eponymous artist shoots himself from lovesickness, led to a wave of copycat suicides by men dressed in dandyish clothing. As Romanticism spread across Europe and into the 19th century, flirting with death, using its proximity as a kind of emotional aphrodisiac, became a prominent theme in the arts. As Byron describes one of his typical heroes: “With pleasure drugged, he almost longed for woe, / And e’en for change of scene would seek the shades below.” Similarly, Keats: “Many a time / I have been half in love with easeful Death.”


*     *     *


This is a very cursory account, and I am certainly not claiming there is any direct or inevitable progression between these developments and our own attitudes to death. Indeed, with Pietism and Romanticism, we have now come to the brink of the Great Awakenings and Evangelicalism, of Wagner and mystic nationalism – of an age, in other words, where spirituality enters the public sphere in a dramatic and sometimes apocalyptic way. Nonetheless, I think all of this points to a crucial idea which has been passed on to some modern cultures, perhaps those with a northern European, Protestant heritage: the idea that mortality is an emotional and psychological burden which the individual should willingly assume.

And I think we can now discern a larger principle being cultivated here – one that has come to define our understanding of individualism perhaps more than any other. That is the principle of freedom. To take responsibility for one’s mortality – to face up to it and, in a manner of speaking, to own it – is to reflect on life itself and ask: for what purpose, for what meaning? Whether framed as a search for duty or, in the extreme decadent case, as the basis of an aesthetic experience, such questions seem to arise from a personal confrontation with death; and they are central to our notions of freedom. This is partly, I think, what underlies our convention that what you make of death is your own business.

The philosophy that has explored these ideas most comprehensively is, of course, existentialism. In the 20th century, Martin Heidegger and Jean-Paul Sartre argued that the individual can only lead an authentic life – a life guided by the values they deem important – by accepting that they are free in the fullest, most terrifying sense. And this in turn requires that the individual honestly accept, or even embrace, their finitude. For the way we see ourselves, these thinkers claim, is future-oriented: it consists not so much in what we have already done, but in the possibility of assigning new meaning to those past actions through what we might do in the future. Thus, in order to discover what our most essential values really are – the values we wish to guide our choices as free beings – we should consider our lives from their real endpoint, which is death.

Sartre and Heidegger were eager to portray these dilemmas, and their solutions, as brute facts of existence which they had uncovered. But it is perhaps truer to say that they were signing off on a deal which had been much longer in the making – a deal whereby individuals accept the burden of understanding themselves as doomed beings, with all the nausea that entails, in exchange for the very expansive sense of freedom we now consider so important. Indeed, there is very little that Sartre and Heidegger posited in this regard which cannot be found in the work of the 19th-century Danish philosopher Søren Kierkegaard; and Kierkegaard, it so happens, can also be placed squarely within the traditions of both Pietism and Romanticism.

To grasp how deeply engrained these ideas have become, consider again Larkin’s poem “Aubade”:

Most things may never happen: this one will,
And realisation of it rages out
In furnace-fear when we are caught without
People or drink. Courage is no good:
It means not scaring others. Being brave
Lets no one off the grave.
Death is no different whined at than withstood.

Here is the private confrontation with death framed in the most neurotic and desperate way. Yet alongside all the negative emotions, there is undoubtedly a certain lugubrious relish in that confrontation. There is, in particular, something titillating in the rejection of all illusions and consolations, clearing the way for chastisement by death’s uncertainty. This, in other words, is the embrace of freedom taken to its most masochistic limit. And if you find something strangely uplifting about this bleak poem, it may be that you share some of those intuitions.




How The Past Became A Battlefield


In recent years, a great deal has been written on the subject of group identity in politics, much of it aiming to understand how people in Western countries have become more likely to adopt a “tribal” or “us-versus-them” perspective. Naturally, the most scrutiny has fallen on the furthest ends of the spectrum: populist nationalism on one side, and certain forms of radical progressivism on the other. We are by now familiar with various economic, technological, and psychological accounts of these group-based belief systems, which are to some extent analogous throughout Europe and in North America. Something that remains little discussed, though, is the role of ideas and attitudes regarding the past.

When I refer to the past here, I am not talking about the study of history – though as a source of information and opinion, it is not irrelevant either. Rather, I’m talking about the past as a dimension of social identity; a locus of narratives and values that individuals and groups refer to as a means of understanding who they are, and with whom they belong. This strikes me as a vexed issue in Western societies generally, and one which has had a considerable bearing on politics of late. I can only provide a generic overview here, but I think it’s notable that movements and tendencies which emphasise group identity do so partly through a particular, emotionally salient conception of the past.

First consider populism, in particular the nationalist, culturally conservative kind associated with the Trump presidency and various anti-establishment movements in Europe. Common to this form of politics is a notion that Paul Taggart has termed “heartland” – an ill-defined earlier time in which “a virtuous and unified population resides.” It is through this temporal construct that individuals can identify with said virtuous population and, crucially, seek culprits for its loss: corrupt elites and, often, minorities. We see populist leaders invoking “heartland” by brandishing passports, or promising to make America great again; France’s Marine Le Pen has even sought comparison to Joan of Arc.

Meanwhile, parts of the left have embraced an outlook well expressed by Faulkner’s adage that the past is never dead – it isn’t even past. Historic episodes of oppression and liberating struggle are treated as continuous with, and sometimes identical to, the present. While there is often an element of truth in this view, its practical effect has been to spur on a new protest movement. A rhetorical fixation with slavery, colonialism, and patriarchy not only implies urgency, but adds moral force to certain forms of identification such as race, gender, or general antinomianism.

Nor are these tendencies entirely confined to the fringes. Being opposed to identity politics has itself become a basis for identification, albeit less distinct, and so we see purposeful conceptions of the past emerging among professed rationalists, humanists, centrists, classical liberals and so on. In their own ways, figures as disparate as Jordan Peterson and Steven Pinker define the terra firma of reasonable discourse by a cultural narrative of Western values or Enlightened liberal ideals, while everything outside these bounds invites comparison to one or another dark episode from history.

I am not implying any moral or intellectual equivalence between these different outlooks and belief systems, and nor am I saying their views are just figments of ideology. I am suggesting, though, that in all these instances, what could plausibly be seen as looking to history for understanding or guidance tends to shade into something more essential: the sense that a given conception of the past can underpin a collective identity, and serve as a basis for the demarcation of the political landscape into friends and foes.


*     *     *


These observations appear to be supported by recent findings in social psychology, where “collective nostalgia” is now being viewed as a catalyst for inter-group conflict. In various contexts, including populism and liberal activism, studies suggest that self-identifying groups can respond to perceived deprivation or threat by evoking a specific, value-laden conception of the past. This appears to bolster solidarity within the group and, ultimately, to motivate action against out-groups. We might think of the past here as becoming a kind of sacred territory to be defended; consequently, it serves as yet another mechanism whereby polarisation drives further polarisation.

This should not, I think, come as a surprise. After all, nation states, religious movements and even international socialism have always found narratives of provenance and tradition essential to extracting sacrifices from their members (sometimes against the grain of their professed beliefs). Likewise, as David Potter noted, separatist movements often succeed or fail on the basis of whether they can establish a more compelling claim to historical identity than that of the larger entity from which they are trying to secede.

In our present context, though, politicised conceptions of the past have emerged from cultures where this source of meaning or identity has largely disappeared from the public sphere. Generally speaking, modern Western societies make far less room for the institutional transmission of stories which has, throughout history, brought an element of continuity to religious, civic, and family life. People associate with one another on the basis of individual preference, and institutions which emerge in this way usually have no traditions to refer to. In popular culture, the lingering sense that the past withholds some profound quality is largely confined to historical epics on the screen, and to consumer fads recycling vintage or antiquated aesthetics. And most people, it should be said, seem perfectly happy with this state of affairs.

Nonetheless, if we want to understand how the past is involved with the politics of identity today, it is precisely this detachment that we should scrutinise more closely. For ironically enough, we tend to forget that our sense of temporality – or indeed lack thereof – is itself historically contingent. As Francis O’Gorman details in his recent book Forgetfulness: Making the Modern Culture of Amnesia, Western modernity is the product of centuries’ worth of philosophical, economic, and cultural paradigms that have fixated on the future, driving us towards “unknown material and ideological prosperities to come.” Indeed, from capitalism to Marxism, from the Christian doctrine of salvation to the liberal doctrine of progress, it is remarkable how many of the Western world’s apparently diverse strands of thought regard the future as the site of universal redemption.

But more to the point, and as the intellectual historian Isaiah Berlin never tired of pointing out, this impulse towards transcending the particulars of time and space has frequently provoked, or at times merged with, its opposite: ethnic, cultural, and national particularism. Berlin made several important observations by way of explaining this. One is that universal and future-oriented ideals tend to be imposed by political and cultural elites, and are thus resented as an attack on common customs. Another is that many people find something superficial and alienating about being cut off from the past; consequently, notions like heritage or historical destiny become especially potent, since they offer both belonging and a form of spiritual superiority.

I will hardly be the first to point out that the most recent apotheosis of progressive and universalist thought came in the era immediately following the Cold War (not for nothing has Francis Fukuyama’s The End of History become its most iconic text). In this moment, energetic voices in Western culture – including capitalists and Marxists, Christians and liberals – were preoccupied with cutting loose from existing norms. And so, from the post-national rhetoric of the EU to postmodern academia and the champions of the service economy and global trade, they all defined the past by outdated modes of thought, work, and indeed social identity.

I should say that I’m too young to remember this epoch before the war on terror and the financial crisis, but the more I’ve tried to learn about it, the more I am amazed by its teleological overreach. This modernising discourse, or so it appears to me, was not so much concerned with constructing a narrative of progress leading up to the present day as with portraying the past as inherently shameful and of no use whatsoever. To give just one example, consider that as late as 2005, Britain’s then Prime Minister Tony Blair did not even bother to clothe his vision of the future in the language of hope, simply stating: “Unless we ‘own’ the future, unless our values are matched by a completely honest understanding of the reality now upon us and the next about to hit us, we will fail.”

Did such ways of thinking store up the divisive attachments to the past we see in politics today? Arguably, yes. The populist impulse towards heartland has doubtless been galvanised by the perception that elites have abandoned provenance as a source of common values. Moreover, as the narrative of progress has become increasingly unconvincing in the twenty-first century, its latent view of history as a site of backwardness and trauma has been seized upon by a new cult of guilt. What were intended as reasons to dissociate from the past have become reasons to identify with it as victims or remorseful oppressors.


*     *     *


Even if you accept all of this, there remains a daunting question: namely, what is the appropriate relationship between a society and its past? Is there something to be gained from cultivating some sense of a common background, or should we simply refrain from undermining that which already exists? It’s important to state, firstly, that there is no perfect myth which every group in a polity can identify with equally. History is full of conflict and tension, as well as genuine injustice, and to suppress this fact is inevitably to sow the seeds of resentment. Such was the case, for instance, with the Confederate monuments which were the focus of last year’s protests in the United States: many of these were erected as part of a campaign for national unity in the early 20th century, one that denied the legacy of African American slavery.

Moreover, a strong sense of tradition is easily co-opted by rulers to sacralise their own authority and stifle dissent. The commemoration of heroes and the vilification of old enemies are today common motifs of state propaganda in Russia, India, China, Turkey, Poland and elsewhere. Indeed, many of the things we value about modern liberal society – free thought, scientific progress, political equality – have been won largely by intransigence towards the claims of the past. None of them sit comfortably in societies that afford significant moral authority to tradition. And this is to say nothing of the inevitable sacrificing of historical truth when the past is used as an agent of social cohesion.

But notwithstanding the partial resurgence of nationalism, it is not clear there exists in the West today any vehicle for such comprehensive, overarching myths. As with “tribal” politics in general, the politicisation of the past has been divergent rather than unifying, because social identity is no longer confined to traditional concepts and categories. A symptom of this, at least in Europe, is that people who bemoan the absence of shared historical identity – whether politicians such as Emmanuel Macron or critics like Douglas Murray – struggle to express what such a thing might actually consist in. Thus they resort to platitudes like “sovereignty, unity and democracy” (Macron), or a rarefied high culture of cathedrals and composers (Murray).

The reality which needs to be acknowledged, in my view, is that the past will never be an inert space reserved for mere curiosity or the measurement of progress. The human desire for group membership is such that it will always be seized upon as a buttress for identity. The problem we have encountered today is that, when society at large loses its sense of the relevance and meaning of the past, the field is left open to the most divisive interpretations; there is, moreover, no common ground from which to moderate between such conflicting narratives. How to broaden out this conversation, and restore some equanimity to it, might in the present circumstances be an insoluble question. It certainly bears thinking about, though.

Consumerism or idealism? Making sense of authenticity


One of my favourite moments in cinema comes from Paolo Sorrentino’s film The Great Beauty. The scene is a fashionable get-together on a summer evening, and as the guests gossip over aperitifs, we catch a woman uttering: “Everybody knows Ethiopian jazz is the only kind worth listening to.” The brilliance of this line is not just that it shows the speaker to be a pretentious fool. More than that, it manages to demonstrate the slipperiness of a particular ideal. For what this character is implying, with her reference to Ethiopian jazz, is that she and her tastes are authentic. She appreciates artistic integrity, meaningful expression, and maybe a certain virtuous naivety. And the irony, of course, is that by setting out to be authentic she has merely stumbled into cliché.

I find myself recalling this dilemma when I pass through the many parts of London that seem to be suffering an epidemic of authenticity today. Over the past decade or so, life here and in many other cities has become crammed with nostalgic, sentimental objects and experiences. We’ve seen retro décor in cocktail bars and diners, the return of analogue formats like vinyl and film photography, and a fetishism of the vintage and the hand-made in everything from fashion to crockery. Meanwhile restaurants, bookshops, and social media feeds offer a similarly quaint take on customs from around the globe.

Whether looking back to a 1920s Chicago of leather banquettes and Old Fashioned cocktails, or the wholesome cuisine of a traditional Balkan home, these are so many tokens of an idealised past – attempts to signify that simple integrity which, paradoxically, is the mark of cosmopolitan sophistication. These motifs have long since passed into cliché themselves. Yet the generic bars and coffee shops keep appearing, the LPs are still being reissued, and urban neighbourhoods continue being regenerated to look like snapshots of times and places that never quite existed.

The Discount Suit Company, one of London’s many “Prohibition-style cocktail dens” according to TimeOut

There is something jarring about this marriage of the authentic with the commercial and trendy, just as there is when someone announces their love of Ethiopian jazz to burnish their social credentials. We understand there is more to authenticity than just an aura of uniqueness, a vague sense of being true to something, which a product or experience might successfully capture. Authenticity is also defined by what it isn’t: shallow conformity. Whether we find it in the charmingly traditional or in the unusual and eccentric, authenticity implies a defiance of those aspects of our culture that strike us as superficial or contrived.

Unsurprisingly then, most commentators have concluded that what surrounds us today is not authenticity at all. Rather, in these “ready-made generic spaces,” what we see is no less than “the triumph of hive mind aesthetics to the expense of spirit and of soul.” The authentic has become a mere pretence, a façade behind which a homogenised, soulless modernity has consolidated its hold. And this says something about us, of course. To partake in such a fake culture suggests we are either unfortunate dupes or, perhaps, something worse. As one critic rather dramatically puts it: “In cultural markets that are all too disappointingly accessible to the masses, the authenticity fetish disguises and renders socially acceptable a raw hunger for hierarchy and power.”

These responses echo a line of criticism going back to the 1970s, which sees the twin ideals of the authentic self and the authentic product as mere euphemisms for the narcissistic consumer and the passing fad. And who can doubt that the prerogative of realising our unique selves has proved susceptible to less-than-unique commercial formulas? This cosmetic notion of authenticity is also applied easily to cultures as a whole. As such, it is well suited to an age of sentimental relativism, when all are encouraged to be tourists superficially sampling the delights of the world.

And yet, if we are too sceptical, we risk accepting the same anaemic understanding of authenticity that the advertisers and trendsetters foist on us. Is there really no value in authenticity beyond the affirmation it gives us as consumers? Is there no sense in which we can live up to this ideal? Does modern culture offer us nothing apart from illusions? If we try to grasp where our understanding of authenticity comes from, and how it governs our relationship with culture, we might find that for all its fallibility it remains something that is worth aiming for. More importantly perhaps, we’ll see that for better or for worse, it’s not a concept we can be rid of any time soon.



Authenticity vs. mass culture

In the narrowest sense of the word, authenticity applies to things like banknotes and paintings by Van Gogh: it describes whether they are genuine or fake. What do we mean, though, when we say that an outfit, a meal, or a way of life is authentic? Maybe it’s still a question of provenance and veracity – where they originate and whether they are what they claim – but now these properties have taken on a quasi-spiritual character. Our aesthetic intuitions have lured us into much deeper waters, where we grope at values like integrity, humility, and self-expression.

Clearly authenticity in this wider sense cannot be determined by an expert with a magnifying glass. In fact, if we want to grasp how such values can seem to be embodied in our cultural environment – and how this relates to the notion of being an authentic person – we should take a step back. The most basic answers can be found in the context from which the ideal of authenticity emerged, and in which it continues to operate today: Western mass culture.

That phrase – mass culture – might strike you as modern-sounding, recalling as it does a world of consumerism, Hollywood and TV ads. But it simply means a culture in which beliefs and habits are shaped by exposure to the same products and media, rather than by person-to-person interaction. In Europe and elsewhere, this was clearly emerging in the 18th and 19th centuries, in the form of mass media (journals and novels), mass-produced goods, and a middle class seeking novelties and entertainments. During the industrial revolution especially, information and commodities began to circulate at a distinctly modern tempo and scale.

Gradually, these changes heralded a new and somewhat paradoxical experience. On the one hand, the content of this culture – whether business periodicals, novels and plays, or department store window displays – inspired people to see themselves as individuals with their own ambitions and desires. Yet those individuals also felt compelled to keep up with the latest news, fashions and opinions. In a technologically driven, commercially minded society, culture became the site of constant change, behind which loomed an inscrutable mass of people. The result was an anxiety which has remained a feature of art and literature ever since: that of the unique subject being pulled along, puppet-like, by social expectations, or caught up in the gears of an anonymous system.

And one product of that anxiety was the ideal of authenticity. Philosophers like Jean-Jacques Rousseau in the 18th century, Søren Kierkegaard in the 19th, and Martin Heidegger in the 20th, developed ideas of what it meant to be an authentic individual. Very broadly speaking, they were interested in the distinction between the person who conforms unthinkingly, and the person who approaches life on his or her own terms. This was never a question of satisfying the desire for uniqueness vis-à-vis the crowd, but an insistence that there were higher concepts and goals in relation to which individuals, and perhaps societies, could realise themselves.

John Ruskin’s illustrations of Gothic architecture, published in The Stones of Venice (1851)

Others, though, approached the problem from the opposite angle. The way to achieve an authentic way of being, they thought, was collectively, through culture. They emphasised the need for shared values that are not merely instrumental – values more meaningful than making money, saving time, or seeking social status. The most famous figures to attempt this in the 19th century were John Ruskin and William Morris, and the way they went about it was very telling indeed. They turned to the past and, drawing a direct link between aesthetics and morality, sought forms of creativity and production that seemed to embody a more harmonious existence among individuals.

For Morris, the answer was a return to small-scale, pre-industrial crafts. For Ruskin, medieval Gothic architecture was the model to be emulated. Although their visions of the ideal society differed greatly, both men praised loving craftsmanship, poetic expressiveness, imperfection and integrity – and viewed them as social as well as artistic virtues. The contrast with the identical commodities coming off factory production lines could hardly be more emphatic. In Ruskin’s words, whereas cheap wholesale goods forced workers “to make cogs and compasses of themselves,” the contours of the Gothic cathedral showed “the life and liberty of every workman who struck the stone.”



The authentic dilemma

In Ruskin and Morris we can see the outlines of our own understanding of authenticity today. Few of us share their moral and social vision (Morris was a utopian socialist, Ruskin a paternalist Christian), but they were among the first to articulate a particular intuition that arises from the experience of mass culture – one that leads us to idealise certain products and pastimes as embodiments of a more free-spirited and nourishing, often bygone world. Our basic sense of what it means to be an authentic individual is rooted in this same ground: a defiance of the superficial and materialistic considerations that the world seems to impose on us.

Thanks to ongoing technological change, mass culture has impressed each new generation with these same tensions. The latest installment, of course, has been the digital revolution. Many of us find something impersonal in cultural products that exist only as binary code and appear only on a screen – a coldness somehow worsened by their convenience. The innocuous branding of digital publishing companies, with cuddly names like Spotify and Kindle, struggles to hide the bloodless efficiency of the algorithm. This is stereotypically contrasted with the soulful pleasures of, say, the authentic music fan, poring over the sleeve notes of his vinyl record on the top deck of the bus.

But this hackneyed image immediately recalls the dilemma we started with, whereby authenticity itself gets caught up in the web of fashion and consumerist desire. So when did ideals become marketing tools? The prevailing narrative emphasises the commodification of leisure in the early 20th century, the expansion of mass media into radio and cinema, and the development of modern advertising techniques. Yet, on a far more basic level, authenticity was vulnerable to this contradiction from the very beginning.

Ideals are less clear-cut in practice than they are on the page. For Ruskin and Morris, the authenticity of certain products and aesthetics stemmed from their association with a whole other system of values and beliefs. To appreciate them was effectively to discard the imperatives of mass culture and commit yourself to a different way of being. But no such clear separation exists in reality. We are quite capable of recognizing and appreciating authenticity when it is served to us by mass culture itself – and we can do so without even questioning our less authentic motives and desires.

Hi-tech Victorian entertainment: the Panorama. (Source: Wikimedia Commons)

Thus, by the time Ruskin published “The Nature of Gothic” in 1853, Britain had long been in the grip of a mass phenomenon known as the Gothic Revival – a fascination with Europe’s Christian heritage manifest in everything from painting and poetry to fashion and architecture. Its most famous monument would be the building from which the new industrial society was managed and directed: the Houses of Parliament in Westminster. Likewise, nodding along to Ruskin’s noble sentiments did not prevent bourgeois readers from enjoying modern conveniences and entertainments, and merely justified their disdain for mass-produced goods as cheap and common.

From then until now, to be “cultured” has to some degree implied a mingling of nostalgia and novelty, efficiency and sentimentality. Today’s middle classes might resent their cultural pursuits becoming generic trends, but they also know that their own behavior mirrors this duplicity. The artisanal plate of food is shared on Facebook, a yoga session begins a day of materialistic ambition, and the MacBook-toting creative affects an air of old-fashioned simplicity in their dress. It’s little wonder boutique coffee shops the world over look depressingly similar, seeing as most of their customers happily share the same environments on their screens.

Given this tendency to pursue conflicting values simultaneously, there is really nothing to stop authentic products and ideas becoming fashionable in their own right. And once they do so, of course, they have started their inevitable descent into cliché. But crucially, this does not mean that authenticity is indistinguishable from conformity and status seeking itself. In fact, it can remain meaningful even alongside these tendencies.



Performing the authentic

A few years ago, I came across a new, elaborately designed series of Penguin books. With their ornate frontispieces and tactile covers, these “Clothbound Classics” seemed to be recalling the kind of volume that John Ruskin himself might have read. On closer inspection, though, these objects really reflected the desires of the present. The antique design elements were balanced with modern ones, so as to produce a carefully crafted simulacrum: a copy for which no original has ever existed. Deftly straddling the nostalgia market and the world of contemporary visuals, these were books for people who now did most of their reading from screens.

Volumes from Penguin’s “Clothbound Classics” series

As we’ve seen, to be authentic is to aspire to a value more profound than mere expediency – one that we often situate in the obsolete forms of the past. This same sentimental quality, however, also makes for a very good commodity. We often find that things are only old or useless insofar as this allows them to be used as novelties or fashion statements. And such appropriation is only too easy when the aura of authenticity can be summoned, almost magically, by the manipulation of symbols: the right typeface on a menu, the right degree of saturation in a photograph, the right pattern on a book cover.

This is where our self-deceiving relationship with culture comes into closer focus. How is it we can be fooled by what are clearly just token gestures towards authenticity, couched in ulterior motives like making money or grabbing our attention? The reason is that, in our everyday interactions with culture, we are not going around as judges but as imaginative social beings who appreciate such gestures. We recognise that they have a value simply as reminders of ideals that we hold in common, or that we identify with personally. Indeed, buying into hints and suggestions is how ideals remain alive amidst the disappointments and limitations of lived reality.

In his essay “A is for Authentic,” Design Museum curator Deyan Sudjic expands this idea by portraying culture as a series of choreographed rituals and routines, which demonstrate not so much authenticity as our aspirations towards it. From the homes we inhabit to the places we shop and the clothes we wear, Sudjic suggests, “we live much of our lives on a sequence of stage sets, modeled on dreamlike evocations of the world that we would like to live in rather than the world as it is.”

This role-play takes us away from the realities of profit and loss, necessity and compromise, and into a realm where those other notions like humility and integrity have the place they deserve. For Sudjic, the authentic charm of a period-themed restaurant, for instance, allows us to “toy with the idea that the rituals of everyday life have more significance than, in truth, we suspect that they really do.” We know we are not going to find anything like pure, undiluted authenticity, free from all pretense. But we can settle for something that acknowledges the value of authenticity in a compelling way – something “authentic in its artistic sincerity.” That is enough for us to play along.

Steven Poole makes a similar point about the ideal of being an authentic person, responding to the uncompromising stance that Jean-Paul Sartre takes on this issue. In Sartre’s Being and Nothingness, there is a humorous vignette in which he caricatures the mannerisms of a waiter in a café. In Sartre’s eyes, this man’s contrived behavior shows that he is performing a role rather than being his authentic self. But Poole suggests that, “far from being deluded that he really is a waiter,” maybe Sartre’s dupe is aware that he is acting, and is just enjoying it.

Social life is circumscribed by performance and gesture to the extent that, were we to dig down in an effort to find some authentic bedrock, we would simply be taking up another role. Our surroundings and possessions are part of that drama too – products like books and Gothic cathedrals are ultimately just props we use to signal towards a hypothetical ideal. So yes, authenticity is a fiction. But insofar as it allows us to express our appreciation of values we regard as important, it can be a useful one.



Between thought and expression

Regardless of the benefits, though, our willingness to relax judgment for the sake of gesture has obvious shortcomings. The recent craze for the authentic, with its countless generic trends, has demonstrated them clearly. Carried away by the rituals of consumerism, we can end up embracing little more than a pastiche of authenticity, apparently losing sight of the bigger picture of sterile conformity in which those interactions are taking place. Again, the suspicion arises that authenticity itself is a sham. For how can it be an effective moral standard if, when it comes to actually consuming culture, we simply accept whatever is served up to us?

I don’t think this picture is entirely right, though. Like most of our ideals, authenticity has no clear and permanent outline, but exists somewhere between critical thought and social conventions. Yet these two worlds are not cut off from each other. We do still possess some awareness when we are immersed in everyday life, and the distinctions we make from a more detached perspective can, gradually and unevenly, sharpen that awareness. Indeed, even the most aggressive criticism of authenticity today is, at least implicitly, grounded in this possibility.

One writer, for instance, describes the vernacular of “reclaimed wood, Edison bulbs, and refurbished industrial lighting” which has become so ubiquitous in modern cities, calling it “a hipster reduction obsessed with a superficial sense of history and the remnants of industrial machinery that once occupied the neighbourhoods they take over.” The pretense of authenticity has allowed the emergence of zombie-like cultural forms: deracinated, fake, and sinister in their social implications. “From Bangkok to Beijing, Seoul to San Francisco,” he writes, this “tired style” is catering to “a wealthy, mobile elite, who want to feel like they’re visiting somewhere ‘authentic’ while they travel.”

This is an effective line of attack because it clarifies a vague unease that many will already feel in these surroundings. But crucially, it can only do this by appealing to a higher standard of authenticity. Like most recent critiques of this kind, it combines aesthetic revulsion at a soulless, monotonous landscape with moral condemnation of the social forces responsible, and thus reads exactly like an updated version of John Ruskin’s arguments. In other words, the same intuitions that lead consumers, however erroneously, to find certain gestures and symbols appealing are being leveraged here to refine those very intuitions.

This is the fundamental thing to understand about authenticity: it is so deeply ingrained in our ways of thinking about culture, and in our worldview generally, that it is both highly corruptible and impossible to dispense with. Since our basic desire for authenticity doesn’t come from advertisers or philosophers, but from the experience of mass culture itself, we can manipulate and refine that desire but we can’t suppress it. And almost regardless of what we do, it will continue to find expression in any number of ways.

A portrait posted by socialite Kendall Jenner on Instagram in 2015, typical of the new mannerist, sentimental style

This has been vividly demonstrated, for instance, in the relatively new domain of social media. Here the tensions of mass culture have, in a sense, risen afresh, with person-to-person interaction taking place within the same apparatus that circulates mass media and social trends. Thus a paradigm of authentic expression has emerged which in some places verges on outright romanticism: consider the phenomenon of baring your soul to strangers on Facebook, or the mannerist yet sentimental style of portrait that is so popular on Instagram. Yet this paradigm still functions precisely along the lines we identified earlier. Everybody knows it is ultimately a performance, but everyone is willing to go along with it.

Authenticity has also become “the stardust of this political age.” The sprouting of a whole crop of unorthodox, anti-establishment politicians on both sides of the Atlantic is taken to mean that people crave conviction and a human touch. Yet even here it seems we are dealing not so much with authentic personas as with authentic products. For their followers, such leaders are an ideal standard against which culture can be judged, as well as symbolic objects that embody an ideology – much as handcrafted goods were for William Morris’ socialism, or Gothic architecture was for Ruskin’s Christianity.

Moreover, where these figures have broadened their appeal beyond their immediate factions, it is again because mass culture has allowed them to circulate as recognisable and indeed fashionable symbols of authenticity. One of the most intriguing objects I’ve come across recently is a “bootlegged” Nike t-shirt, made by the anonymous group Bristol Street Wear in support of the politician Jeremy Corbyn. Deliberately or not, their use of one of the most iconic commercial designs in history is an interesting comment on that trade-off between popularity and integrity which is such a feature of authenticity in general.

The bootleg t-shirt produced by Bristol Street Wear during the 2017 General Election campaign. Photograph: Victoria & Albert Museum, London

These are just cursory observations; my point is that the ideal of authenticity is pervasive, and that for this very reason, any expression of it risks being caught up in the same system of superficial motives and ephemeral trends that it seeks to oppose. This does not make authenticity an empty concept. But it does mean that, ultimately, it should be seen as a form of aspiration, rather than a goal which can be fully realised.