In times of social distancing and lockdown brought on by the global COVID-19 outbreak, digital media have become all the more central to our everyday lives. They enable us to communicate and stay connected with others without spatial or temporal constraints (as I am doing through this blog post), to learn, and to entertain ourselves. Despite these benefits, ample evidence shows that digital platforms have also facilitated the viral spread of misinformation, and even disinformation, about the pandemic. For example, the diagram below, taken from Wikipedia and illustrating the major symptoms of hypercapnia, was adapted to promote an ‘anti-mask’ agenda.
By slightly altering the diagram – adding a caption on top – the poster made it appear to be a convincing warning that wearing a mask leads to carbon dioxide toxicity. It does not. According to BBC News, this modified version of the diagram was uploaded to Facebook and shared many times.
A study conducted between April and May by a research team from King’s College London has alerted us to this potential problem arising from the consumption of false or inaccurate health information on digital media:
When used as an information source, unregulated social media may present a health risk that is partly but not wholly reducible to their role as disseminators of health-related conspiracy beliefs.
The consequences of believing such content can be dire. Consider the case of Iran, where, between February and April alone, thousands of people poisoned themselves by drinking methanol, a fake coronavirus ‘treatment’ that went viral in the country, and hundreds of them lost their lives. Cases of misinformation are numerous; there is now a Wikipedia page documenting and classifying a long list of examples from around the world.
In a bid to address the alarming proliferation of misinformation and its associated problems, various parties have made concerted efforts. Since March, the World Health Organization has partnered with Rakuten Viber, a VoIP and instant-messaging app, to launch an interactive chatbot available in multiple languages, through which users can obtain up-to-date information from “a trusted source”.
Mainstream media such as BBC News have been fact-checking and debunking claims. Social media platforms, bearing the brunt of this ‘infodemic’, have been scrutinizing potentially misleading content and taking action. For instance, Facebook reported that, with the help of both external fact-checking organizations and internal AI systems, it had removed misinformation that could cause harm and put warning labels on other misleading content to discourage users from viewing it. What is notable here is that Facebook’s ability to remove or label misinformation presupposes the ability to identify what is misinformation in the first place. In this connection, in addition to computer scientists, applied linguists may have something to offer.
The topic of mis/disinformation on digital media is nothing new. It has attracted much scholarly attention in recent years, in part owing to the proliferation of disinformation during the 2016 US presidential election and the subsequent appropriation of the label ‘fake news’ by President Donald Trump to describe any news critical of him. Over the past four years, a range of studies on media disinformation has appeared across various disciplines. In applied linguistics, there have been ongoing attempts to analyze the language of fake news, or what Silje Susanne Alvestad and her team from the University of Oslo call ‘fakespeak’. Researchers behind these corpus-based studies believe that linguistic features are key to the detection of misinformation. Their findings will presumably benefit not just those responsible for reviewing potentially misleading information, but also ordinary social media users, for it is this kind of awareness that they need to develop in order to become critical consumers of information in the digital age. Of interest to scholars in the field of digital literacies is perhaps not what linguistic features can be found, but how users engage with these texts and how that engagement is shaped by their participation in digital media and their relationships with those they interact with in these environments. The current pandemic will certainly prompt more research.
As I write this post, I cannot help but relate the topic to a phenomenon I have observed in Hong Kong – the spread of fake news through voice messages on WhatsApp. In a viral voice message that circulated in some groups in April, a speaker claiming to be Dr Ho Pak-leung, a local microbiologist who comments regularly on the government’s coronavirus prevention measures, advised recipients against buying certain bottled water. Dr Ho explained on his own social media and in public appearances that this message was fake and warned people not to trust its unfounded content. Three months later, a similar situation occurred, and this time Ho said he would consider reporting it to the police.
Of course, fake COVID-19 voice messages triggering fear and panic are not unique to Hong Kong. In the UK, a clip asserted that ambulance services would not be provided to patients with breathing difficulties. In Nigeria, an audio clip warned listeners that, according to the WHO, a huge number of locals could die of coronavirus. In India, a message claimed that the government had approved homeopathy as a treatment. So what makes WhatsApp such an ideal vehicle for the circulation of these fake messages? For one thing, WhatsApp’s end-to-end encryption makes it particularly difficult to spot and stop the spread of such content. The sociologist and political economist William Davies has suggested that it also has something to do with the high level of trust that people place in private WhatsApp groups:
It is a truism that nobody is as happy as they appear on Facebook, as attractive as they appear on Instagram or as angry as they appear on Twitter, which spawns a growing weariness with such endless performance. By contrast, closed groups are where people take off their public masks and let their critical guard down. Neither anonymity (a precondition of most trolling) nor celebrity are on offer. The speed with which rumours circulate on WhatsApp is partly a reflection of how altruistic and uncritical people can be in groups. Most of the time, people seem to share false theories about Covid-19 not with the intention of doing harm, but precisely out of concern for other group members.
What draws my attention is not just the use of WhatsApp in spreading fake voice messages in Hong Kong, but also how some people react to them. Back in February, when COVID-19 began to hit Hong Kong hard, a friend of mine forwarded a 35-second voice message to our shared WhatsApp group. Initially, I thought it would be just another message containing rumours or unverified information, of the sort I occasionally received from uncritical senders. It was not until I had almost finished listening that I burst out laughing and realized my assumption was wrong. The voice message turned out to be a parody of the viral audio clips spreading false claims about the coronavirus.
As a researcher interested in digital communication, I was immediately fascinated by the creative ways in which the speaker used language, possibly to perform a number of actions simultaneously: confronting misinformation, creating humour, reflecting social phenomena, and even promoting change. The voice message is humorous in that it has a serious start followed by an unexpected twist. On the surface, the speaker is informing recipients of an imminent shortage of “necessities”, asking them to share the news only with those they are close to and to place an order “as soon as possible”. The speaker frames this initial part as exclusive news, highlighting its seriousness and urgency. What is unexpected, but made clear towards the end of the message, is that the essentials are not the goods some citizens have been found to stock up on, such as food and toilet paper, but “conscience”, “rationality”, and “intelligence” – qualities the speaker accuses citizens of gradually losing.

Obviously, the message conveys humour, which may have its therapeutic value in times of epidemic. But it does more than that. To a certain extent, it is through the audio clip that the speaker expresses his attitude towards panic buying in society and invites listeners to think before they act. If one situates the clip in the wider socio-political context of Hong Kong, where the divide between the (pro-government) ‘blue’ and (anti-government) ‘yellow’ camps had been heightened by large-scale protests just before the outbreak, it is not unreasonable to speculate that the speaker is also commenting on the political situation, asking recipients to behave according to their conscience, rationality, and intelligence – words commonly found in recent political discourse. More research drawing on a corpus of such parodies might shed further light on the political function of memes and internet humour in different national contexts.