Memes, Subcultures and Social Media

Education and the Internet — Part 3

Introduction

News items relating to the Internet are by now a daily occurrence: cyber war, cyber terrorism, ransomware attacks, election interference, news manipulation, cyber bullying, and so on. Although these are serious issues, and rightly subjects of concern, another online phenomenon that rarely makes the front page (in Australia, at any rate) is perhaps more directly relevant to many people’s everyday experience. I am referring to the ubiquitous ‘internet meme’. The reader may be tempted to scoff at the notion that memes could be compared to the more alarming topics listed above. After all, most people’s experience of memes is probably the occasional (and endlessly re-posted) humorous, cute or motivational image that appears in their Facebook newsfeed. What harm could there be, right?

Some may be familiar with the origin of the word ‘meme’ in a 1976 publication. I’ll hazard a guess, however, that few would be aware of the connection between the internet-meme phenomenon and several online subcultures with dubious reputations. Furthermore, although some media organizations (e.g. The Guardian) have recently started to shine a spotlight on the controversial content to be found on social-media sites like Facebook, analysis of the links between the above-mentioned subcultures, memes, and social-networking groups attracts little, if any, publicity.

I first became alert to Facebook ‘meme groups’ early in 2016, and in April of that year I posted a message indicating my concern about some of the groups that were using the social-media site as a platform. I wondered at that time whether the parents among my Facebook contacts were even aware of the nature of some of these groups and their associated pages. For a while the topic was off my radar, but recently it became a topic of discussion in the school where I teach, and I decided to investigate further. My research resulted in a better understanding of the background to the Facebook meme-group culture, which I present below.

First, however, a point about terminology. The word ‘meme’ itself is neutral, just like the word ‘joke’. There’s nothing inherently offensive about a joke. Many jokes are simply funny; some make a point about something, perhaps a political one (e.g. satire); some might be a bit ‘edgy’; yet others would be generally considered in poor taste or, worse, downright offensive. As with jokes, so also with memes. It’s a spectrum, and the dividing lines between acceptable, edgy, in poor taste, and downright offensive vary from person to person. I think it’s fair to say, however, that most people would recognize there is a spectrum, ranging from perfectly acceptable to downright objectionable. We might wonder at someone who collapses such distinctions and sees no difference between the two extremes.

Dawkins’ dodgy dogma

The word ‘meme’ pre-dates its internet incarnation by several decades. It was coined by the Oxford evolutionary biologist Richard Dawkins in his 1976 book, The Selfish Gene. Just as its biological counterpart, the gene, carries heritable traits between generations of an organism, Dawkins’ meme is a carrier of cultural information, such as an idea, a symbol, or a practice. Such cultural units are transferred from mind to mind, and in this sense either survive or die out. Successful memes, therefore, have survival value. Like genes, they are ‘selfish’, using minds as ‘hosts’, just as a virus uses an organism as a host. Dawkins derived his coinage from the ancient Greek concept of mimesis, from which we get words like ‘mime’, ‘mimicry’ and ‘imitate’.

We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends will forgive me if I abbreviate mimeme to meme. If it is any consolation, it could alternatively be thought of as being related to ‘memory’, or to the French word même. It should be pronounced to rhyme with ‘cream’. (The Selfish Gene)

Unsurprisingly, Dawkins’ theory of memes has been the subject of astute criticism, with one critic describing it as ‘pseudoscientific dogma’ (see ‘Memetics: A Dangerous Idea’). To me the concept of meme seems a crude version of ‘sign’, in the semiotic sense, a study of which was part of my postgraduate research in philosophy (see the section on ‘Semiotics’ in my earlier post: An Educational Autobiography). A useful summary of the development of the meme concept, including some inherent problems with it, is James Gleick’s ‘What Defines a Meme?’, which is an extract adapted from his 2011 book, The Information: A History, A Theory, A Flood.

My concern here, however, is not with meme theory as such, but rather the social phenomenon of internet memes, and in particular the online groups that employ them relentlessly. For this analysis, the coherence or otherwise of the theory is irrelevant, and it would make no difference if the word ‘meme’ had never been invented.

Digital deviation

In its transition to the Web, the term has undergone an evolutionary development of its own. The internet usage carries the more restricted sense of ‘an activity, concept, catchphrase or piece of media which spreads, often as mimicry or for comedic purposes, from person to person via the Internet’ (Wikipedia). Images, usually overlaid with some form of text, are probably the most common type (see image macro). Standard forms have developed, with associated protocols. Typical examples of internet-meme jokes are ‘Y U NO’ and ‘Condescending Wonka’.

Y U NO

‘Y U NO’ meme

One thing to note about these is that the link between the image and the text is usually tenuous: just about any text could be used, ranging from relatively benign (the examples given here) to strongly offensive.

Condescending Wonka

‘Condescending Wonka’ meme

Not all memes are jokes. Animals feature prominently, and, while sometimes amusing, they are more often ‘cute’ as in the following example (which also demonstrates the popular animated ‘gif’ format):

Cute

Bear cub playing with wolf cub: cute animals are common memes

Sticking with animals, we have the ‘pet shaming’ series:

Pet shaming

And it would be negligent not to include some reference to the most famous cat on the Internet, ‘Grumpy Cat’ (real name: Tardar Sauce), whose fame can be traced to a September 2012 Reddit post by her owner’s brother. The cat’s peculiar physiognomy, caused by feline dwarfism and an underbite, gives her a permanently scowling expression:

Grumpy Cat

‘Grumpy Cat’ meme

Yet other memes depend on incidental photographs, often amusing, whether staged or fortuitous:

So I turned into a toad last night

‘So I turned into a toad last night’

All that’s needed is a clever caption:

Instructions were unclear

‘Instructions were unclear’

One of the most famous photograph memes is indicative of a further development in this rapidly evolving social phenomenon. ‘Bad Luck Brian’ is a goofy photo from a 2005–2006 school yearbook. The picture is of Kyle Craven, a class clown and self-confessed prankster, who deliberately dressed and posed in order to create a joke image. The principal was unimpressed, forced Kyle to sit for the picture retakes, and confiscated the original photo – but not before Kyle and his friend, Ian Davies, had scanned and saved it. (Kyle later persuaded someone on the yearbook staff to include both images.) It was Davies who posted the image to Reddit in January 2012, naming it ‘Bad Luck Brian’ and adding the caption ‘Takes driving test – gets first DUI’ (Driving Under the Influence). It was an instant hit, and ‘Bad Luck Brian’ became a magnet for every conceivable bad-luck caption.

Bad Luck Brian

Kyle Craven as ‘Bad Luck Brian’

Reddit fame was only the beginning:

Before long, Bad Luck Brian was an Internet sensation. His face appeared on Facebook, blogs and advertisements. T-shirts with his photo were sold at Wal-Mart and Hot Topic. Companies made Bad Luck Brian paperweights and Bad Luck Brian stuffed animals. He was flown to Internet conventions across the country. People like me, who barely knew him in high school, bragged about his photo’s popularity. (‘Anatomy of a meme: The real story of Bad Luck Brian, his viral class portrait and the fleeting nature of online fame’, National Post, 6 January 2015)

As the case of ‘Bad Luck Brian’ indicates, it didn’t take long for commercial interests to realize the marketing opportunities presented by internet memes. ‘Grumpy Cat’ garnered similar attention, not to mention being featured in mainstream media (see ‘Grumpy Cat’ on Wikipedia).

Examples could be multiplied indefinitely, and many of them can be found on websites like ‘quickmeme’ and ‘knowyourmeme’. Those provided here demonstrate that there is nothing inherently objectionable about the content of internet memes. Other aspects of the practice, however, are legitimate topics for discussion: its potential for time wasting; an occasional tendency to misinformation (especially quotes attributed to celebrity geniuses like Einstein); and perhaps a general trivialization and dumbing-down of culture. These topics will re-emerge in future posts in this series.

Furthermore, I have chosen some particularly ‘tame’ examples, and there are definitely ‘edgier’ ones, depending on the viewer’s perspective. The phenomenon becomes problematic when it goes beyond what might be termed ‘common decency’. Here the internet meme seems to cross a line that only a minority are willing to traverse. Moreover, the memes themselves are only a part of the problem. At this point, the context in which they are being shared becomes just as salient.

‘Something Awful’, ‘4chan’ and ‘Encyclopedia Dramatica’

The story of the internet meme is inseparable from the online subcultures known as Something Awful, 4chan, and Encyclopedia Dramatica.

‘Something Awful’ (SA) was created by Richard ‘Lowtax’ Kyanka in 1999, and is the source for the Slender Man meme (regarding which, see the Waukesha stabbing and other incidents). Something Awful was described in a January 2008 Wired article as a collection of members-only message forums:

an online humor site dedicated to a brand of scorching irreverence and gross-out wit that, in its eight years of existence, has attracted a fanatical and almost all-male following. Strictly governed by its founder, Rich “Lowtax” Kyanka, the site boasts more than 100,000 registered Goons (as members proudly call themselves) and has spawned a small diaspora of spinoff sites. Most noticeable is the anime fan community 4chan, with its notorious /b/ forum and communities of ‘/b/tards.’ Flowing from this vast ecosystem are some of the Web’s most infectious memes and catchphrases (‘all your base are belong to us’ was popularized by Something Awful, for example; 4chan gave us lolcats) and online gaming’s most exasperating wiseasses. (‘Mutilated Furries, Flying Phalluses: Put the Blame on Griefers, the Sociopaths of the Virtual World’, Wired, 18 January 2008)

‘4chan’ is an ‘imageboard’ website that was launched in October 2003 by Christopher Poole, then a 15-year-old student from New York City, and a regular participant on the SA forums. Poole intended 4chan to be an American counterpart to the popular Japanese Futaba Channel (‘2chan’) imageboard, and a place to discuss Japanese ‘manga’ and ‘anime’. He encouraged users from the SA subforum, ‘Anime Death Tentacle Rape Whorehouse’, to discuss anime on his website. In its earliest days, 4chan had only two boards: ‘/a/ – Anime/General’ and ‘/b/ – Anime/Random’. The latter was the first board to be created, and is, according to Wikipedia, ‘by far 4chan’s most popular board, with 30% of site traffic’ (retrieved 4 July 2017). More boards were added over time, and /b/ was eventually renamed to ‘/b/ – Random’, or simply ‘random’. The ‘random’ board has minimal regulation and its notoriety is attested by numerous sources, including the Wired article cited above (for more, see the ‘/b/’ subsection of the Wikipedia ‘4chan’ article). A 2008 New York Times article (worth reading in its entirety) contains the following description of /b/:

Measured in terms of depravity, insularity and traffic-driven turnover, the culture of /b/ has little precedent. /b/ reads like the inside of a high-school bathroom stall, or an obscene telephone party line, or a blog with no posts and all comments filled with slang that you are too old to understand. (‘The Trolls Among Us’, New York Times, 3 August 2008)

According to Wikipedia, /b/ is the source of many internet memes, some of which are listed in the ‘Internet memes’ subsection.

‘Encyclopedia Dramatica’ (ED) was founded in 2004 by Sherrod DeGrippo. Wikipedia describes it as a ‘satirical website’ that ‘celebrates a subversive “trolling culture”, and documents Internet memes, culture, and events, such as mass organized pranks, trolling events, “raids”, large-scale failures of Internet security, and criticism of Internet communities which are accused of self-censorship in order to garner prestige or positive coverage from traditional and established media outlets’ (accessed 4 July 2017). Julian Dibbell, in a 2009 Wired article, situates ED in the context of ‘trolling’ (‘the most obnoxious innovation that architecture [i.e. the Internet] ever produced’): ‘Flamingly racist and misogynist content lurks throughout, all of it calculated to offend, along with links to eye-gougingly horrific images of mutilation, [and] sexual perversity’ (‘The Assclown Offensive: How to Enrage the Church of Scientology’, Wired, 21 September 2009).

As the Wired articles make clear, a paradoxical attitude pervades the subcultures of SA, 4chan and ED: a seriousness about not taking anything, including the Internet, seriously. Everything is for ‘the lulz’ (a corruption of ‘lols’, the plural form of ‘lol’ or ‘laugh out loud’). For those who haven’t come across the expression, ‘doing it for the lulz’ means doing something ‘for the laughs’, and the laughs are typically at someone else’s expense. This ambivalent stance persists whether the activity takes place online or in real-world events orchestrated by Anonymous, the activist group spawned by 4chan, with strong links to ED.

The association of the internet meme with these subcultures helps explain the attitudes and ‘banter’ encountered in the meme groups on social-networking sites like Facebook.

Social-media minefield

Like many others, my earliest encounters with internet memes were of the generally innocuous variety described above. These appeared in my Facebook newsfeed, posted by people within my own circle of contacts. They were frequently amusing and I habitually re-posted them, thereby facilitating their viral spread. Facebook procedures make sharing easy, and the default setting is to share with one’s entire circle, which for many individuals amounts to hundreds of people (some younger users number their contacts in the thousands). The process preserves a link to the originating poster, though for memes it’s unnecessary to follow that link since the typical composite of image and text is visible in its entirety.

Early in 2016, one of my contacts shared a meme that caught my attention for some reason. I followed the link to the source and encountered something unexpected. Here was a publicly visible Facebook page with posts that were frequently objectionable for one reason or another. This led to another discovery: there are thousands of such user pages on Facebook. A lot of them have ‘meme’ in their titles, such as ‘Dank Memeology’ and ‘Meme Extreme’, while others, like ‘Filthy Frank’, although dispensing with the defining noun, leave no doubt about the owner’s posting preferences. Postings on these pages range from the merely puerile to the explicitly racist, misogynistic, antisemitic, homophobic, pornographic, and ‘disturbing’. My contact was following about five hundred of them.

Later I discovered that, in addition to the openly visible ‘pages’, which require only a button-click to follow and whose historical postings are visible to anyone, there are also member-only groups based on meme sharing. Membership in these is by request. Entry prerequisites may vary, but the bar is likely to be low. Age requirements can easily be circumvented anyway, since Facebook doesn’t verify user age at the time of initial account set-up. In addition to the ‘closed’ groups that still show up in Facebook searches, there are also invisible groups that no one can see, membership in which is by invitation only.

Another term commonly applied to ‘pages’ and ‘groups’ alike is ‘banter’. A search for this key term on Facebook reveals pages and groups devoted to almost every imaginable topic. These pages and groups also tend to favour the meme-type post.

Apart from ‘pages’ and ‘groups’, there is another Facebook feature that has been adopted by the meme-based groups, and that is ‘chat’. Chat uses Facebook’s instant-messaging service, simply called ‘Messenger’, which comes built-in with the browser version but is also available as a separate app for mobile devices. Chat groups can have the same sort of titles as pages and groups (e.g. ‘Edgy Memes’), but this doesn’t mean that membership in the chat group is the same as the general group with that title. Individuals are ‘added’ to a chat group by an admin who selects them from his or her own list of contacts. Chat is more ephemeral than pages and groups. Chats don’t show up in Facebook searches, and you can’t tell from your own Facebook account whether any of your contacts is in a group chat, unless of course you are also in that chat (although identities can be disguised in chat through the use of ‘nicknames’).

What is true of Facebook is doubtless true of other social-media sites, although Facebook is one of the largest, with about two billion monthly active users. Many of the meme-based pages and groups, including ‘Filthy Frank’, have their own YouTube channels.

It is important to remember that there is nothing inherently objectionable about ‘meme’ or ‘banter’ pages and groups. Many are devoted to completely innocent interests. My extended family has a Facebook group, and it’s a great way to share photos and generally keep in touch. Some pages and groups are devoted to political causes; others are based on national, ethnic, or religious identity; yet others are concerned with special interests, such as sport, art, or philosophy; and the list goes on.

That notwithstanding, it remains troubling that there is so much objectionable content distributed across social-media pages and groups. What exactly is the nature of this content, and how does it relate to the well-known memes pictured above? In terms of process, there is no difference: the most extreme and objectionable memes are made in exactly the same way as the examples provided, generally involving the association of an image or short video with some text. Just how objectionable some of them are will become clear in what follows.

The offending categories listed above (racist, misogynistic, antisemitic, homophobic, pornographic, and ‘disturbing’) refer to ‘generic’ posts, i.e. not targeting any particular individual. It doesn’t stop there, however, as specific individuals are also liable to be victimized. If your photo is available online, then you are a potential target. Both generic and specific types can be extremely objectionable, as several years of investigative journalism have demonstrated.

As early as 2011, The Guardian newspaper reported that Facebook was refusing to remove pages containing rape jokes, on the grounds that a rude joke wouldn’t ‘get you thrown out of your local pub’ (‘Facebook refuses to take down rape joke pages’, The Guardian, 1 October 2011). This was followed three days later by another piece questioning the analogy with pub humour:

By refusing to take these pages down, and by resorting to such a ridiculous and quite frankly offensive ‘rude joke’ analogy to justify their decision, Facebook executives have made absolutely clear where they stand on the issue of gender hate crime. It’s fine to post hateful or threatening content on their site, just as it’s fine to post content that incites violence. Well, as long as it’s primarily aimed at women, that is. (‘Facebook is fine with hate speech, as long as it’s directed at women’, The Guardian, 4 October 2011)

The campaign against content endorsing rape and domestic violence continued, and in May 2013 The Huffington Post reported that high-profile companies were being urged to boycott advertising on the social-media site, in the face of its continued refusal to remove objectionable content. According to the article, Women, Action & the Media (WAM), one of the organizations calling for a boycott, was maintaining a cache of offensive material, including:

a photograph of singer Rihanna’s bloodied and beaten face, captioned with ‘Chris Brown’s Greatest Hits’. It also features an image of a woman lying in a pool of blood, with the words ‘I like her for her brains’ emblazoned across it … Further examples include a picture of a bruised and battered woman entitled ‘WHOREMOUTH – shut it when men are talking’ and one of a man holding a rag over a woman’s mouth, captioned ‘Does this smell like chloroform to you?’. (‘#FBrape: Will Facebook Heed Open Letter Protesting ‘Endorsement Of Rape & Domestic Violence’?’, The Huffington Post, 28 May 2013)

The WAM cache is maintained here (contains graphic content).

The following day, The Guardian reported that Facebook had been forced to take action against ‘hate speech’ on its pages, as a result of the campaign against ‘supposedly humorous content endorsing rape and domestic violence’:

The company said on Tuesday it would update its policies on hate speech, increase accountability of content creators and train staff to be more responsive to complaints, marking a victory for women’s rights activists. ‘We need to do better – and we will,’ it said in a statement. (‘Facebook gives way to campaign against hate speech on its pages’, The Guardian, 29 May 2013)

Fast forward to March 2017, when The Guardian reported that the British government was calling on social media companies ‘to do more to expunge extremist material from the internet’. The main target was ‘the easy availability of material promoting violent extremism online’, with Boris Johnson, the foreign secretary, claiming that ‘extremist material online was “corrupting and polluting” many people’ (‘Internet firms must do more to tackle online extremism, says No 10’, The Guardian, 25 March 2017).

Also in March, the tabloid press exposed some of the more shocking examples of meme-based trolling, where victims of terrorist attacks and their families were mocked. The exposé by The Sun Online refers to the members of so-called ‘ghost’ (i.e. invisible) groups:

Jokes are made about Madeleine McCann, terror victims and disabilities – with no topic out of bounds. Groups are moderated by ‘admins’, who can remove sick content – but instead act as ringleaders. An admin of ‘Pure Banter 18+’ last week shared a sick joke about PC Keith Palmer, who lost his life in the Westminster terror attack … Jokes about slavery and racist slurs are also common, with users requesting memes about dark topics. In the ‘Banter18+’ group this week, one member asked for ‘all your best rape memes’ and received scores of sickening posts. Other topics have included Robin Williams’ death, 9/11 and child abuse. (‘Antisocial Network’, The Sun, 29 March 2017)

On 30 March, the same source reported that groups that had been removed as a result of the previous day’s article had been set up again within ten minutes, with members mocking The Sun (see ‘Who Can Stop Them?’, The Sun, 30 March 2017). On 31 March, the Daily Mail carried a similar story.

On 2 May, The Sun reported on ‘The Bathroom’ banter group (180,490 members), in which cash was being offered for the ‘most f****d up memes and videos’, including ones mocking Harvey Price, the disabled son of Katie Price (‘Sick Facebook troll groups are offering MONEY to the nastiest bullies who taunt disabled kids including Harvey Price’, The Sun, 2 May 2017). Later that month, ‘Pure Banter’ was reported to be still in action, with some users making jokes about the bombing at the Manchester Arena. The report added that other members of the group refused to endorse the activities of the trolls. Apparently this was going too far for some. (‘Vile Facebook ‘banter’ groups have been mocking the Manchester bombing victims since last week’s atrocity’, The Sun, 31 May 2017).

Recognizing the scale of the problem, in May The Guardian announced The Facebook Files, a series drawing together the burgeoning investigation into the social-media giant. One focus is the burden experienced by Facebook’s ‘moderators’, who simply cannot cope with the volume of material being uploaded. Another concerns the company’s dilemma in trying to reconcile free speech with social responsibility.

These files raise legitimate questions about the content Facebook does not tolerate, and the speed with which it deals with it. But just as importantly they raise questions about the material it does allow – which some people may consider cruel, insulting, offensive, sexist and racist. (‘Has Facebook become a forum for misogyny and racism?’, The Guardian, 22 May 2017)

One of the articles included in ‘The Files’ goes into some detail about ‘Facebook’s secret rules and guidelines for deciding what its 2 billion users can post on the site’:

They illustrate difficulties faced by executives scrabbling to react to new challenges such as ‘revenge porn’ – and the challenges for moderators, who say they are overwhelmed by the volume of work, which means they often have ‘just 10 seconds’ to make a decision. (‘Revealed: Facebook’s internal rulebook on sex, terrorism and violence’, The Guardian, 22 May 2017)

Given such restrictions, it is hardly surprising that the focus is often on ‘credible violence’. Several of ‘The Files’ deal with the ‘mission impossible’ faced by moderators, as well as the specific threat posed by online extremists.

Before proceeding, there is a final point to make about language in the meme-based Facebook groups. Terms like ‘banter’ disguise the real nature of the discourse that predominates in many of these groups, and especially in chat. There is nothing playful or friendly in the interactions between people who, more often than not, have never even met. Nor will you find the sort of inspirational quotes that do the rounds on Facebook newsfeeds, whether accurately attributed or not. The language in meme chat-groups tends to be denigratory. An indication of this is evident from the abbreviations that are frequently employed, for example ‘kys’ (kill yourself), ‘smd’ (suck my d**k), ‘gfy’ (go f**k yourself) and ‘stfu’ (shut the f**k up) – see Net Lingo. Such discourse is more akin to the trolling mentality, which is facilitated by internet anonymity.

A revealing indication of the extent to which the dark side of meme culture has pervaded society was reported by The Washington Post a little over a month ago (see ‘Harvard withdraws 10 acceptances for “offensive” memes in private group chat’). It concerned a group of students who had been offered places at Harvard, and for whom an official Facebook group had been set up (The Harvard College Class of 2021), allowing admitted students ‘to meet classmates, ask questions and prepare for their first semester’. About a hundred of the incoming freshman class used the official group to create ‘a messaging group where students could share memes about popular culture — a growing trend on the Internet among students at elite colleges’.

But then, the exchanges took a dark turn, according to an article published in the Harvard Crimson … Some of the group’s members decided to form an offshoot group in which students could share obscene, ‘R-rated’ memes, a student told the Crimson. The founders of the messaging group demanded that students post provocative memes in the main group chat to gain admittance to the smaller group.

The students in the spinoff group exchanged memes and images ‘mocking sexual assault, the Holocaust and the deaths of children,’ sometimes directing jokes at specific ethnic or racial groups, the Crimson reported. One message ‘called the hypothetical hanging of a Mexican child “piñata time”’ while other messages quipped that ‘abusing children was sexually arousing,’ according to images of the chat described by the Crimson.

University officials got wind of the R-rated sub-group, and, following an investigation, the institution revoked its offers to ten of the offending students, on the basis that ‘the university reserves the right to withdraw an offer of admission if the admitted student “engages or has engaged in behavior that brings into question their honesty, maturity or moral character,” among other conditions’. The decision provoked mixed reactions, divided along free-speech-versus-social-responsibility lines. The newspaper pointedly observed:

The university’s decision to rescind the students’ acceptance to Harvard underscores the dangers of social media posts — public or private — among prospective college students. According to Kaplan Test Prep, which surveyed more than 350 college admissions officers, 35 percent of admissions officers said they check social media sites like Facebook, Twitter and Instagram to learn more about applicants. About 42 percent of those officials said what they found had a negative impact on prospective students.

The joke’s gone too far

According to Dawkins’ original conception, memes ‘selfishly’ use human brains to replicate themselves. In other words, we are not the actors in this evolutionary drama, but rather the passive victims of the meme imperative to survive and reproduce. Ultimately, therefore, we are not responsible for these cultural packages of meaning (symbols, theories, practices, and so on). Their existence is independent of individual human beings, since we are merely temporary ‘hosts’.

The analogy with the ‘selfish gene’ is ultimately unsatisfying, however, and leads to self-contradiction. Dawkins’ theory exhibits a modern variation of the ancient paradox of philosophical relativism, since its universal application undermines its own objectivity. In other words, Dawkins’ theory of memes, if true, is itself a meme; but if it is a meme, selfishly using our brains to replicate itself, then how could we ever know that it is true, that it corresponds to some objective state of affairs?

Like all paradoxes, this one points to an important philosophical question: to what extent do we control our ideas, and to what extent are we controlled by them? Dawkins was perhaps aware of the contradiction at the heart of his theory, since he appears to allow us some measure of control over the mental parasites. He concluded the original Selfish Gene with the following: ‘We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on Earth, can rebel against the tyranny of the selfish replicators’.

In this connection, it is instructive to examine the writings of another scientist, with credentials at least as impeccable as Dawkins’. I refer to neuropsychologist, neurobiologist and Nobel laureate, Roger Sperry (1913-1994). Sperry opposed the prevailing materialist reductionism of twentieth-century science, and propounded instead a ‘mentalist’ theory in which mind plays a causal role in brain processes, taking its place in a hierarchy extending from the subatomic, through intermediary levels, to the cultural. Causes necessarily vary from level to level and, therefore, an explanation appropriate at one level will not be appropriate at another.

In 1965, over ten years before Dawkins’ Selfish Gene, Sperry authored a paper entitled ‘Mind, Brain, and Humanist Values’ (later included in his 1983 book, Science and Moral Priority). In keeping with his mentalist outlook, Sperry argues for the potency of ‘ideas’ in the brain:

Near the apex of this command system in the brain – to return to more humanistic concerns – we find ideas. Man over the chimpanzee has ideas and ideals. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell. (Science and Moral Priority, p. 36)

At first glance, this almost seems like a proto-meme theory, with ‘ideas’ playing the same role as ‘memes’ in Dawkins’ later scheme. The comparison is superficial, however, because Sperry incorporates the exercise of individual agency in the process. Ideas are only part of the psychic furniture of the brain, along with other ‘mental forces’. In his opposition to a thoroughgoing physical determinism, Sperry does allow for a moderate psychic determinism: cerebral operations are not without antecedent causes, but those causes are not compelling. Potentially included among the causes are memories of previous episodes in an individual’s life, and also the repository of collective experience that is civilization. Such causes can perhaps be better described as ‘influences’. (See op. cit., pp. 39-41)

To my mind, Sperry’s theory more accurately reflects our ordinary human experience than does Dawkins’ dodgy dogma, and this has important implications for how we conceive of memes in general, and internet memes in particular. The mentalist approach implies a measure of responsibility for the memes we generate and disseminate, and this burden cannot be shirked.

In a 2013 Washington Post article (‘Have Internet memes lost their meaning?’), Dominic Basulto takes stock of Dawkins’ ‘extraordinarily clever idea’ as it has adapted to the Internet. Reflecting on things like ‘lolcat’, he makes no reference to the dark side of internet memes, and merely speculates that memes ‘no longer transmit intelligent ideas – they only transmit banality’.

I would venture further and say that the internet meme represents a dumbing-down of culture. After all, if you fill your mind (and your time) with banality, then there is simply no room for the great ideas that form our cultural inheritance. By this I don’t mean that we should ‘consume’ culture for its own sake. By exposing ourselves to the great cultural creations of the past, including literature, the visual arts and music, we rise above our biological nature and, ultimately, develop character.

Banality may be a problem in itself, but it is hardly the most pressing one. When we consider the darker side of internet memes, as outlined above, we are confronting a more serious issue. As I said at the beginning, when it comes to humour there may be varying degrees of acceptability, but most of us would agree that at some point a line is crossed and we have left ‘acceptable’ behind and entered ‘objectionable’ territory. Why does this matter? I would say that it is because the darkest memes pander to the lowest parts of our nature, where sensuality and aggression are to be found. These primitive impulses have always been with us, to different degrees, but the countervailing forces of civilization have served, as a minimum, to keep them in check, and ideally to transform them. Left unchecked, such tendencies can become magnified into gross sensuality, cruelty and sadism.

The propagation of such destructive tendencies via social media, in the guise of humour, is a cause for genuine concern. The whimsical nature of many familiar internet memes can downplay the toxicity of others: a process of trivialization. This toxicity then spreads virally through the entire medium, with an accompanying risk of normalization and desensitization, especially when the consumers are young and still forming an identity. The internet troll represents the nadir of this social phenomenon, and while most of us would not identify with such sociopathic traits, to varying degrees the discourse and practices of trolling have pervaded the culture of internet memes.

When I recently discussed the issue of ‘dark’ memes with Grade-11 Philosophy students at a boys’ school, there seemed to be a general dismissal of the problem, summed up by one boy when he said that ‘no one takes those things seriously, even the people who post them’. In other words, it’s all ‘for the lulz’. Another boy responded to the first by saying that you can’t excuse any offensive content simply by labelling it as a ‘meme’. I added to this objection by referring to a point made by Umberto Eco (1932-2016) in his 1967 article, ‘Towards a Semiological Guerrilla Warfare’ (published in the 1986 collection Faith in Fakes, later republished as Travels in Hyperreality). Eco pointed out that messages are interpreted at the destination, not at the source. Regardless of someone’s intentions in sending a message, its interpretation will depend on the frame of reference of the person receiving it. The interpretation of dark memes is therefore beyond the control of the poster, and ultimately unpredictable. Doing it ‘for the lulz’ amounts to a form of social irresponsibility, and those who provide a platform for the activity share in the accountability.

One suspects that those who post objectionable material ‘for the lulz’ would not be at all happy if the ‘lulz’ were at their expense – if, for instance, a mocking meme incorporating their photo were shared across the Internet, and hence visible to their peers; or if their family received phone calls from anonymous trolls making fun of a personal tragedy. Whether sociopaths possess an abundance or a scarcity of empathy, in their case its nature is perverted; yet an appropriate quality of feeling is required for a functional moral life. One role of civilization is to cultivate this feeling, and I suspect that the aggressive, mocking nature of the internet troll undermines it. The internet meme, so easily generated and disseminated, is a major vehicle for this undesirable tendency.

In his 2008 article, ‘The Trolls Among Us’ (cited above), Mattathias Schwartz suggested: ‘It may not be a bad thing that the least-mature users have built remote ghettos of anonymity where the malice is usually intramural.’ That may once have been true of the college fraternity, but the Internet does not work like that: technology and ineffective age and identity checks ensure that the malice is not contained within the virtual walls of an internet ghetto. It is all too clear that it spills over into real-life actions and harm, as attested by many of the examples given in the newspaper articles above.

In one of the Guardian articles from May (‘Revealed: Facebook’s internal rulebook on sex, terrorism and violence’), Nick Hopkins drew attention to the following astute observation by Sarah T Roberts, an expert on content moderation:

It’s one thing when you’re a small online community with a group of people who share principles and values, but when you have a large percentage of the world’s population and say ‘share yourself’, you are going to be in quite a muddle. Then when you monetise that practice you are entering a disaster situation.

The same point was made by Geoff White, the reporter on a recent episode of BBC Radio’s File on 4 programme (‘Online Grooming’, originally broadcast on 13 June), albeit in a different context:

Shouldn’t the social-media companies themselves be doing more to protect children? After all, Facebook alone has almost two billion users worldwide; many of them are young people, valuable targets for the advertisers who fill the tech company’s coffers. Social-media sites are happy to capitalize on youngsters’ likes, shares, and messages, but are they getting the message when it comes to online grooming? (2:35)

Although primarily concerned with the worrying problem of online grooming, it is perhaps worth noting one complaint made by young people that is mentioned in the programme: the use of personal content by third parties, in a manner unwanted by the original poster of the content. This was reported by Children’s Commissioner, Anne Longfield:

My starting point is that the Internet is a force for good, but it wasn’t built for children. And a third of the users of the Internet are children, so we need to make special accommodation, if you like, for them. Now what they told me was that they often found themselves coming across content that they didn’t expect and they thought was nasty or distasteful; they sometimes found their own postings being used in ways that they weren’t happy with … There were children who found photos of themselves that had been used in other ways, children who had found content that they found very disturbing, and they felt, in the main, nothing was done about it. (5:48)

The allegation that social-media companies have difficulty policing the content on their sites occurred in several of the journalistic sources cited above. It was also made in the File on 4 programme. The source of the problem is twofold: burgeoning subscriber numbers, on the one hand, and the exceptional legal circumstances applying to social-media companies, on the other. Jenny Wiltshire, a criminal defence solicitor from law firm Hickman and Rose, described the situation as follows:

Social networks come within the regime of hosting companies, which are covered by an EC directive, which gives them a lot of protection. The directive essentially says that if the social network doesn’t have actual knowledge of unlawful activity, then they can’t be liable either criminally or in civil damages. It’s only once they are made aware and they are provided information to say that unlawful activity has happened that they are under an obligation to act expeditiously to remove that material or disable the access to that information. So that’s resulted in social networks acting reactively rather than proactively to the problem. (34:32)

If that is the case with ‘unlawful’ activity, then we can only suppose that companies like Facebook will be even less proactive when it comes to the grey area of questionable content that we have been discussing in this post.

Conclusion

The law offers a certain level of protection against some extremes of behaviour on social-media sites. In addition, social-media companies have guidelines that discourage certain behaviour, although the bar may be set quite low, and in any case their ability to enforce the guidelines is in question. Beyond such formal provisions, it remains within the power of individuals to decide what is acceptable. No one is compelled to visit websites containing objectionable content. No one is compelled to join adult banter groups. And no one is compelled to pass on ‘edgy’ or ‘dark’ memes. In the case of children and teenagers, the responsibility lies with their guardians to become aware of their online activity and make decisions about what is acceptable.

The issues can seem complex. First, there is privacy, by which I mean the ability of an individual to control what is made public. This is a difficult one for celebrities, whose lives are often subject to media scrutiny. But everyone has a right to some degree of control over their information, including images. Some private individuals find themselves in the media spotlight through no choice of their own, such as when they are victims of tragedy. They are rightly enraged when that exposure is exploited by anonymous individuals with sociopathic tendencies. We should also be enraged on their behalf.

Then there is free speech, which is always counterbalanced by social responsibility. Getting the balance right is a perennial political, and legal, problem. There is a role for legitimate protest, for criticism, and for satire. But ‘hate speech’ and harassment infringe other liberties and should not be tolerated, and they cannot be excused on the grounds that they are ‘for the lulz’.

If there is any truth in the old adage that ‘you are what you eat’, then perhaps it is also true that ‘you are what you attend to’. If it is possible to become ill through an unhealthy diet, then might it also be possible to malnourish the mind by feeding it junk? I suggest an affirmative answer, and that the darker side of internet memes are creating a toxic environment from which we may need to protect ourselves and those for whom we care.

Just when I thought I had finished this post, I was listening to a podcast of an episode from ABC Radio National’s Saturday Extra programme (‘Democracy and trust’). Presenter Geraldine Doogue raised the topic of ‘civility’ with Bill Emmott (8:00), former editor of The Economist and author of The Fate of the West: The Battle to Save the World’s Most Successful Political Idea. It occurred to me that the concepts of ‘civility’ and ‘civil discourse’ are very relevant to the point I have been trying to make here. As the Wikipedia article makes clear, there is much more to civility than ‘politeness’ or ‘good manners’:

Community, choices, conscience, character are all elements directly related to civility. Civility is more than just having manners, because it involves developing a civil attitude and civil responsibility. Civility often forms more meaningful friendships and relationships, with an underlying tone of civic duty to help more than the sum of its whole. (Wikipedia, ‘Civility’, accessed 22 July 2017)

This is reflected in the etymology of the word, from the Latin civilis, ‘relating to citizens’: ‘In early use, the term denoted the state of being a citizen and hence good citizenship or orderly behavior. The sense “politeness” arose in the mid-16th century’ (Wikipedia, ‘Civility’, accessed 22 July 2017). The same word is, of course, at the root of ‘civilization’.

The article on civil discourse reminds us that it ‘neither diminishes the other’s moral worth, nor questions their good judgment; it avoids hostility, direct antagonism, or excessive persuasion; it requires modesty and an appreciation for the other participant’s experiences’ (Wikipedia, ‘Civil discourse’, accessed 22 July 2017).

The opposite of civility is incivility, and the following paragraph from Wikipedia could have been written for this blog post:

Incivility is the polar opposite of civility, or in other words a lack or completely without civility. Verbal or physical attacks on others, cyberbullying, rudeness, religious intolerance, lack of respect, discrimination, and vandalism are just some of the acts that are generally considered acts of incivility. Incivility is a negative part of society that has impacted many people in the United States, but as the world is becoming increasingly more transparent in social interactions, it has become more increasingly apparent that incivility has become an issue on the global stage. Social media and the web have given people the ability around the globe to freely exchange ideas, but it has not come without its consequences. (Wikipedia, ‘Civility’, accessed 22 July 2017)

It may be that the Internet, while not the sole cause, is playing a significant part in an ongoing decline of civility. One reason why this matters is that civility is a precondition of a good society. To use Aristotelian terminology, a good society encourages individual flourishing, or ‘living well’. By contrast, a bad society makes it more difficult to be a good person, and to flourish.

Perhaps it is appropriate to conclude this post with a meme:

Condescending Wonka - Lulz



