Addressing the crisis of work

This article was first published by Arc Digital on December 10th 2018.

There are few ideals as central to the life of liberal democracies as that of stable and rewarding work. Political parties of every stripe make promises and boasts about job creation; even Donald Trump is not so eccentric that he does not brag about falling rates of unemployment. Preparing individuals for the job market is seen as the main purpose of education, and a major responsibility of parents too.

But all of this is starting to ring hollow. Today it is an open secret that, whatever the headline employment figures say, the future of work is beset by uncertainty.

Since the 1980s, the share of national income going to wages has declined in almost every advanced economy (the social democratic Nordic countries are the exception). The decade since the financial crisis of 2007–8 has seen a stubborn rise in youth unemployment, and an increase in “alternative arrangements” characteristic of the gig economy: short-term contracts, freelancing and part-time work. Graduates struggle to find jobs to match their expectations. In many places the salaried middle class is shrinking, leaving a workforce increasingly polarized between low- and high-earners.

Nor do we particularly enjoy our work. A 2013 Gallup survey found that in Western countries only a fifth of people say they are “engaged” at work, with the rest “not engaged” or “actively disengaged.”

The net result is an uptick in resentment, apathy, and despair. Various studies suggest that younger generations are less likely to identify with their career, or profess loyalty to their employer. In the United States, a worrying number of young men have dropped out of work altogether, with many apparently devoting their time to video games or taking prescription medication. And that’s without mentioning the ongoing automation revolution, which will exacerbate these trends. Robotics and artificial intelligence will likely wipe out whole echelons of the current employment structure.

So what to do? Given the complexity of these problems — social, cultural, and economic — we should not expect any single, perfect solution. Yet it would be reckless to hope that, as the economy changes, it will reinvent a model of employment resembling what we have known in the past.

We should be thinking in broad terms about two related questions: in the short term, how could we reduce the strains of precarious or unfulfilling employment? And in the long term, what will we do if work grows increasingly scarce?

One answer involves a limited intervention by the state, aimed at revitalizing the habits of a free-market society — encouraging individuals to be independent, mobile, and entrepreneurial. American entrepreneur Andrew Yang proposes a Universal Basic Income (UBI) paid to all citizens, a policy he dubs “the freedom dividend.” Alternatively, Harvard economist Lawrence Katz suggests improving labor rights for part-time and contracted workers, while encouraging a middle-class “artisan economy” of creative entrepreneurs, whose greatest asset is their “personal flair.”

There are valid intuitions here about what many of us desire from work — namely, autonomy and useful productivity. We want some control over how our labor is employed, and ideally to derive some personal fulfillment from its results. These values are captured in what political scientist Ian Shapiro has termed “the workmanship ideal”: the tendency, remarkably persistent in Western thought since the Enlightenment, to recognize “the sense of subjective satisfaction that attaches to the idea of making something that one can subsequently call one’s own.”

But if technology becomes as disruptive as many foresee, then independence may come at a steep price in terms of unpredictability and stress. For your labor — or, for that matter, your artisan products — to be worth anything in a constantly evolving market, you will need to dedicate huge amounts of time and energy to retraining. According to some upbeat advice from the World Economic Forum, individuals should now be aiming to “skill, reskill, and reskill again,” perhaps as often as every 2–3 years.

Is it time, then, for more radical solutions? There is a strand of thinking on the left which sees the demise of stable employment very differently. It argues that by harnessing technological efficiency in an egalitarian way, we could all work much less and still have the means to lead more fulfilling lives.

This “post-work” vision, as it is now called, has been gaining traction in the United Kingdom especially. Its advocates — a motley group of Marx-inspired journalists and academics — found an unexpected political platform in Jeremy Corbyn’s Labour Party, which has recently proposed cutting the working week to four days. It has also established a presence in mainstream progressive publications such as The Guardian and New Statesman.

To be sure, there is no coherent, long-term program here. Rather, there is a great deal of blind faith in the prospects of automation, common ownership and cultural revolution. Many in the post-work camp see liberation from employment, usually accompanied by UBI, as the first step in an ill-defined plan to transcend capitalism. Typical in that respect are Alex Williams and Nick Srnicek, authors of Inventing the Future: Postcapitalism and a World Without Work. Their blueprint includes open borders and a pervasive propaganda network, and flirts with the possibility of “synthetic forms of biological reproduction” to enable “a newfound equality between the sexes.”

We don’t need to buy into any of this, though, to appreciate the appeal of enabling people to work less. Various thinkers, including Bertrand Russell and John Maynard Keynes, took this to be an obvious goal of technological development. And since employment does not provide many of us with the promised goods of autonomy, fulfillment, productive satisfaction and so on, why shouldn’t we make the time to pursue them elsewhere?

Now, one could say that even this proposition is based on an unrealistic view of human nature. Arguably the real value of work is not enjoyment or even wealth, but purpose: people need routine, structure, a reason to get up in the morning, otherwise they would be adrift in a sea of aimlessness. Or at least some of them would – for another thing employment currently provides is a relatively civilized way for ambitious individuals to compete for resources and social status. Nothing in human history suggests that this competition would stop, even in conditions of superabundance.

According to this pessimistic view, freedom and fulfillment are secondary concerns. The real question is, in the absence of employment, what belief systems, political mechanisms, and social institutions would make work for all of those idle thumbs?

But the way things are headed, it looks like we are going to need to face that question anyway, in which case our work-centric culture is a profound obstacle to generating good solutions. With so much energy committed to long hours and career success (the former being increasingly necessary for the latter), there is no space for other sources of purpose, recognition, or indeed fulfillment to emerge in an organic way.

The same goes for the economic side of the problem. I am no supporter of UBI – a policy whose potential benefits are dwarfed by the implications of a society where every individual is a client of the state. But if we want to avoid that future, it would be better to explore other arrangements now than to cling to our current habits until we end up there by default. Thus, if for no other reason than to create room for such experiments, the idea of working less is worth rescuing from the margins of the debate.

More to the point, there needs to be a proper debate. Given how deeply rooted our current ideas about employment are, politicians will continue appealing to them. We shouldn’t accept such sedatives. Addressing this problem will likely be a messy and imperfect process however we go about it, and the sooner we acknowledge that the better.

Social media’s turn towards the grotesque

This essay was first published by Little Atoms on 9 August 2018. The image on my homepage is a detail from an original illustration by Jacob Stead.

Until recently it seemed safe to assume that what most people wanted on social media was to appear attractive. Over the last decade, the major concerns about self-presentation online have focused on narcissism and, for women especially, unrealistic standards of beauty. But just as it is becoming apparent that some behaviours previously interpreted as narcissistic – selfies, for instance – are simply new forms of communication, it is also no longer obvious that the rules of this game will remain those of the beauty contest. In fact, as people conduct an ever-larger share of their social interaction online, the aesthetics of social media are moving distinctly towards the grotesque.

When I use the term grotesque, I do so in a technical sense. I am referring to a manner of representing things – the human form especially – which is not just bizarre or unsettling, but which creates a sense of indeterminacy. Familiar features are distorted, and conventional boundaries dissolved.

Instagram, notably, has become the site of countless bizarre makeup trends among its large demographic of young women and girls. These transformations range from the merely dramatic to the carnivalesque, including enormous lips, nose-hair extensions, eyebrows sculpted into every shape imaginable, and glitter coated onto everything from scalps to breasts. Likewise, the popularity of Snapchat has led to a proliferation of face-changing apps which revel in cartoonish distortions of appearance. Eyes are expanded into enormous saucers, faces are ghoulishly elongated or squashed, and animal features are tacked onto heads. These images, interestingly, are also making their way onto dating app profiles.

Of course for many people such tools are simply a way, as one reviewer puts it, “to make your face more fun.” There is something singularly playful in embracing such plasticity: see for instance the creative craze “#slime”, which features videos of people playing with colourful gooey substances, and has over eight million entries on Instagram. But if you follow the threads of garishness and indeterminacy through the image-oriented realms of the internet, deeper resonances emerge.

The pop culture embraced by Millennials and the so-called Generation C (born after 2000) reflects a fascination with brightly adorned, shape-shifting and sexually ambiguous personae. If performers like Miley Cyrus and Lady Gaga were forerunners of this tendency, they are now joined by darker, more refined figures such as Sophie and Arca from the dance music scene. Meanwhile fashion, photography and video abound with kitsch, quasi-surreal imagery of the kind popularised by Dazed magazine. Celebrated subcultures such as Japan’s “genderless Kei,” who are characterised by bright hairstyles and makeup, are also part of this picture.

But the most striking examples of this turn towards the grotesque come from art forms emerging within digital culture itself. It is especially well illustrated by Porpentine, a game designer working with the platform Twine, whose disturbing interactive poems have achieved something of a cult status. They typically place readers in the perspective of psychologically and socially insecure characters, leading them through violent urban futurescapes reminiscent of William Burroughs’ Naked Lunch. The New York Times aptly describes her games as “dystopian landscapes peopled by cyborgs, intersectional empresses and deadly angels,” teeming with “garbage, slime and sludge.”

These are all manifestations both of a particular sensibility which is emerging in parts of the internet, and more generally of a new way of projecting oneself into public space. To spend any significant time in the networks where such trends appear is to become aware of a certain model of identity being enacted, one that is mercurial, effervescent, and boldly expressive. And while the attitudes expressed vary from anxious subjectivity to humorous posturing – as well as, at times, both simultaneously – in most instances one senses that the online persona has become explicitly artificial, plastic, or even disposable.

*   *   *

Why, though, would a paradigm of identity such as this invite expression as the grotesque? Interpreting these developments is not easy given that digital culture is so diffuse and rapidly evolving. One approach that seems natural enough is to view them as social phenomena, arising from the nature of online interaction. Yet to take this approach is immediately to encounter a paradox of sorts. If “the fluid self” represents “identity as a vast and ever-changing range of ideas that should all be celebrated” (according to trend forecaster Brenda Milis), then why does it seem to conform to generic forms at all? It is a contradiction that might, in fact, prove enlightening.

One frame which has been widely applied to social media is sociologist Erving Goffman’s “dramaturgical model,” as outlined in his 1959 book The Presentation of Self in Everyday Life. According to Goffman, identity can be understood in terms of a basic dichotomy between “Front Stage” and “Back Stage.” Our “Front Stage” identity, when we are interacting with others, is highly responsive to context. It is preoccupied with managing impressions and assessing expectations so as to present what we consider a positive view of ourselves. In other words, we are malleable in the degree to which we are willing to tailor our self-presentation.

The first thing to note about this model is that it allows for dramatic transformations. If you consider the degree of detachment enabled by projecting ourselves into different contexts through words and imagery, and empathising with others on the same basis, then the stage is set for more or less anything becoming normative within a given peer group. As for why people would want to take this expressive potential to unusual places, it seems reasonable to speculate that in many cases, the role we want to perform is precisely that of someone who doesn’t care what anyone thinks. But since most of us do in fact care, we might end up, ironically enough, expressing this within certain established parameters.

But focusing too much on social dynamics risks underplaying the undoubted sense of freedom associated with the detachment from self in online interaction. Yes, there is peer pressure here, but within these bounds there is also a palpable euphoria in escaping mundane reality. The neuroscientist Susan Greenfield has made this point while commenting on the “alternative identity” embraced by young social media users. The ability to depart from the confines of stable identity, whether by altering your appearance or enacting a performative ritual, essentially opens the door to a world of fantasy.

With this in mind, we could see the digital grotesque as part of a cultural tradition that offers us many precedents. Indeed, this year marks the 200th anniversary of perhaps the greatest precedent of all: Mary Shelley’s iconic novel Frankenstein. The great anti-hero of that story, the monster who is assembled and brought to life by the scientist Victor Frankenstein, was regarded by later generations as an embodiment of all the passions that society requires the individual to suppress – passions that the artist, in the act of creation, has special access to. The uncanny appearance and emotional crises of Frankenstein’s monster thus signify the potential for unknown depths of expression, strange, sentimental, and macabre.

That notion of the grotesque as something uniquely expressive and transformative has remained prominent in all of the genres with which Frankenstein is associated – romanticism, science fiction, and the gothic. It frequently aligns itself with the irrational and surreal landscapes of the unconscious, and with eroticism and sexual deviancy; the films of David Lynch are emblematic of this crossover. In modern pop culture a certain glamourised version of the grotesque, which subverts rigid identity with makeup and fashion, appeared in the likes of David Bowie and Marilyn Manson.

Are today’s online avatars potentially incarnations of Frankenstein’s monster, tempting us with unfettered creativity? The idea has been explored by numerous artists over the last decade. Ed Atkins is renowned for his humanoid characters, their bodies defaced by crude drawings, who deliver streams of consciousness fluctuating between the poetic and the absurd. Jon Rafman, meanwhile, uses video and animation to piece together entire composite worlds, mapping out what he calls “the anarchic psyche of the internet.” Reflecting on his years spent exploring cyberspace, Rafman concludes: “We’ve reached a point where we’re enjoying our own nightmares.”

*   *   *

It is possible that the changing aesthetics of the internet reflect both the social pressures and the imaginative freedoms I’ve tried to describe, or perhaps even the tension between them. One thing that seems clear, though, is that the new notions of identity emerging here will have consequences beyond the digital world. Even if we accept in some sense Goffman’s idea of a “Back Stage” self, which resumes its existence when we are not interacting with others, the distinction is ultimately illusory. The roles and contexts we occupy inevitably feed back into how we think of ourselves, as well as our views on a range of social questions. Some surveys already suggest a generational shift in attitudes to gender, for instance.

That paradigms of identity shift in relation to technological and social changes is scarcely surprising. The first half of the 20th century witnessed the rise of a conformist culture, enabled by mass production, communication, and ideology, and often directed by the state. This then gave way to the era of the unique individual promoted by consumerism. As for the balance of psychological benefits and problems that will arise as online interaction grows, that is a notoriously contentious question requiring more research.

There is, however, a bigger picture here that deserves attention. The willingness of people to assume different identities online is really part of a much broader current being borne along by technology and design – one whose general direction is to enable individuals to modify and customise themselves in a wide range of ways. Whereas throughout the 20th century designers and advertisers were instrumental in shaping how we interpreted and expressed our social identity – through clothing, consumer products, and so on – this function is now increasingly being assumed by individuals within social networks.

Indeed, designers and producers are surrendering control of both the practical and the prescriptive aspects of their trade. 3D printing is just one example of how, in the future, tools and not products will be marketed. In many areas, the traditional hierarchy of ideas has been reversed, as those who used to call the tune are now trying to keep up with and capitalise on trends that emerge from their audiences. One can see this loss of influence in an aesthetic trend that seems to run counter to those I’ve been observing here, but which ultimately reflects the same reality. From fashion to furniture, designers are making neutral products which can be customised by an increasingly identity-conscious, changeable audience.

Currently, the personal transformations taking place online rely for the most part on software; the body itself is not seriously altered. But with scientific fields such as bioengineering expanding in scope, this may not be the case for long. Alice Rawsthorn has considered the implications: “As our personal identities become subtler and more singular, we will wish to make increasingly complex and nuanced choices about the design of many aspects of our lives… We will also have more of the technological tools required to do so.” If this does turn out to be the case, we will face considerable ethical dilemmas regarding the uses, and more generally the purpose, of science and technology.