A recent report by Moonshot found a 300% rise in hashtags on Twitter that “encourage or incite violence against China and Chinese people”. The tech company analyzed 193,000 COVID-19-related tweets posted between February 21 and April 17, 2020, and generated a list of the top 10 English hashtags that potentially encourage violence or ‘hate’.
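The kind of hashtag analysis described in the report can be sketched in a few lines of code: extract hashtags from each tweet, normalize case, and count frequencies. The sample tweets below are invented purely for illustration; the actual Moonshot methodology is not public in this post.

```python
import re
from collections import Counter

def top_hashtags(tweets, n=10):
    """Count hashtag frequencies across a collection of tweets."""
    counts = Counter()
    for tweet in tweets:
        # A hashtag here is '#' followed by word characters; case-folded
        # so that #Covid19 and #covid19 are counted together.
        counts.update(tag.lower() for tag in re.findall(r"#\w+", tweet))
    return counts.most_common(n)

# Toy sample (invented tweets, for illustration only)
sample = [
    "Stay safe everyone #covid19",
    "This is outrageous #covid19 #blamechina",
    "#Covid19 updates for today",
]
print(top_hashtags(sample, n=2))  # → [('#covid19', 3), ('#blamechina', 1)]
```

A real study would of course also need to handle retweets, deduplication, and non-word hashtag characters, but the core operation is this simple frequency count.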
While many incidents of ‘hate’ target China, where the virus reportedly first spread, anti-Asian and anti-Semitic hashtags and social media posts are also on the rise. Although ‘hate speech’ has been around for decades in the physical world, the shareability of social media seems to amplify its spread. Verbal aggression around the pandemic has reportedly given rise to an increase in physical violence against people of Asian descent. As I was looking up recent incidents of ‘hate speech’, I found that a full Wikipedia page had already been created to document the long list of incidents of xenophobia and racism around COVID-19.
According to accessnow.org, an NGO dedicated to human rights on the internet, anti-Chinese ‘hate speech’ spiked after US President Donald Trump first referred to the coronavirus as the ‘China Virus’ and ‘Chinese Virus’ in March 2020 (terms he later walked back).
But just exactly how does a name become ‘hate’? Can a name alone incite racism and violence? I would like to begin my discussion by looking at some hashtags and expressions that have been perceived as ‘hate speech’ towards China during COVID-19. The examples, broadly classified into their linguistic devices and discourse strategies, are taken mainly from Twitter and Facebook.
- Names/Nicknames: Despite the official name, COVID-19, given by the World Health Organization (WHO), other names and labels have been used to link the pandemic to China, such as #chinavirus, #wuhanvirus, #chinesevirus.
- Dehumanization: Dehumanizing metaphors have been used to refer to Chinese people, such as 蝗蟲 (locust) and 支那狗 (Chi-na-dog).
- Stereotyping: Anti-Chinese sentiments are also expressed through stereotypes that associate bat-eating with all Chinese people, such as #ChineseEatBats and tweets like “Haha yeah asian people eat any animal RIGHT? Yeahhh so true for sure yeah definitely. Won’t be eating any Chinese food anymore” (Twitter).
- Blaming: #ChinaLiedPeopleDied, #blamechina
- Personification: 殘體字 (‘handicapped writing’, referring to simplified Chinese characters used by mainland Chinese netizens)
- Metaphors: The ‘virus’ metaphor has been used to refer to out-groups: #ChineseisVirus, #CCPIsVirus.
- Aggression: Overtly offensive and aggressive language, such as #fuckchina and #bombchina
When we situate these examples in the history of verbal aggression, online or offline, they are nothing new. First, expressions such as ‘locusts’ and ‘handicapped writing’ were already in use in anti-Chinese discourse in Hong Kong and elsewhere before COVID-19. Second, they share rather similar linguistic and discourse patterns with existing discourses of racism and discrimination (see the links in the categories above for relevant references).
My interest here is not so much in documenting examples of ‘hate speech’ around COVID-19. While the above expressions may have been perceived as verbal abuse, ‘hate speech’ as a concept has always been extremely contested and ambiguous (which is why I use quotation marks for the term ‘hate speech’ in this post). Nonetheless, COVID-19 has provided an opportunity for us to further reflect on and problematize the complexity of this concept, and to revisit the power of language in times of challenges.
Can ‘Hate Speech’ be Defined?
Hatred is one of the most common human emotions, and aggressive language has existed for centuries. The term ‘hate speech’, however, is a relatively contemporary notion. According to the Oxford English Dictionary, ‘hate speech’ is originally a U.S. concept, with its first recorded use in 1938:
hate speech n. originally U.S. (a) a speech or address inciting hatred or intolerance, esp. towards a particular social group on the basis of ethnicity, religious beliefs, sexuality, etc.; (b) (as a mass noun) speech (or sometimes written material) inciting such hatred or intolerance. (Oxford English Dictionary)
1938 Syracuse (N.Y.) Herald 29 Sept. 21/8 Hitler’s single hate speech did more to alienate the world from Germany than anything he has done.
There is to date no universal definition of ‘hate speech’. Among existing definitions, there seems to be a clear consensus that ‘hate speech’ is any linguistic content that incites violence and discrimination against specific social groups, and is often associated with the social categories of race, religion, gender, and ethnicity.
‘Hate speech’ is defined by the United Nations (2019) as “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.”
The term ‘hate crime’ can be used to describe a range of criminal behaviour where the perpetrator is motivated by hostility or demonstrates hostility towards the victim’s disability, race, religion, sexual orientation or transgender identity. (The Crown Prosecution Service, UK)
‘Hate speech’ is legislated against in some parts of the world. In Canada, “publication and public expression of messages intended to incite hatred towards members of particular groups” is illegal, as is also the case in many European countries. Other countries that do not have designated hate speech laws may prohibit language that incites discrimination against identifiable groups under existing laws against racism and discrimination.
Most arguments against hate speech legislation focus primarily on its potential threat to freedom of expression. In the US, ‘hate speech’ is not regulated and is protected as free speech under the First Amendment, regardless of how offensive the content may be. ‘Hate speech’ only becomes an issue when it is clearly coupled with directed action resulting in violence or other forms of danger to others, which is judged on a case-by-case basis.
“There is a fine line between free speech and hate speech. Free speech encourages debate whereas hate speech incites violence.” (Newton Lee, Counterterrorism and Cybersecurity: Total Information Awareness)
In reality, such a ‘fine line’ between free speech and hate speech is hard to draw. Even where legal definitions exist, they are extremely vague and subject to interpretation. For example, what counts as ‘hateful’ content, or as ‘incitement’ of violence and discrimination, may vary across contexts and cultures. What further complicates the matter is so-called ‘soft’ or covert hate speech, such as the use of metaphors and irony, which does not contain aggressive language but can cause an equal degree of harm to victims.
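The difficulty of operationalizing ‘hate speech’ can be seen even in a toy keyword filter, the usual first attempt at automated moderation: overtly aggressive wording is easy to flag, while a dehumanizing metaphor with the same targeting slips straight through. The word list and example strings below are invented for illustration only.

```python
# A naive keyword-based flagger. The blocklist is an invented toy example,
# not a real moderation resource.
OVERT_TERMS = {"bomb", "kill", "attack"}

def flags_as_hate(text):
    """Flag a post only if it contains an overtly aggressive keyword."""
    words = text.lower().split()
    return any(term in words for term in OVERT_TERMS)

overt = "attack them all"            # overt aggression
covert = "they spread like locusts"  # dehumanizing metaphor, no keyword

print(flags_as_hate(overt))   # the overt wording is flagged
print(flags_as_hate(covert))  # the covert metaphor passes the filter
```

This is precisely why covert ‘hate speech’ is so contested: detecting it requires interpreting metaphor, irony, and context, which no keyword list can capture.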
What’s in a Name? Labelling COVID-19
To further illustrate the complexity of ‘hate speech’, I now turn to what is perhaps one of the most debated topics around COVID-19 – the naming of the coronavirus.
On May 7, 2020, the San Antonio City Council passed a resolution against the use of terms such as ‘Chinese Virus’ and ‘Kung Fu Virus’ to describe COVID-19 and declared them ‘hate speech’, a decision which Ted Cruz called ‘nuts’.
The naming and labelling of COVID-19 has sparked controversies since President Trump’s repeated mentions of ‘Chinese Virus’ and ‘China Virus’ in March 2020. Some say these are just convenient terms; others believe this practice further promotes racism and discrimination against Asian-Americans in the US, as these terms can easily fix blame on specific ethnic groups.
“Hate is a Virus…“
Common discourse patterns can be identified from both sides of the argument. Those who are against the name ‘Chinese Virus’ tend to frame the issue in terms of:
- Incitement of violence towards Chinese/Asian people:
  - “Calling #Covid19 a Chinese Virus is incredibly insulting and incites violence.” (Twitter)
  - “Using xenophobia when talking about coronavirus creates stigma and incites violence toward Asian American communities.” (Facebook)
- Reference to authority: WHO is often cited as the authority that named the virus COVID-19, as in:
  - “do us all a favor and read this guidance from the @WHO on naming diseases, & stop calling it “Chinese Virus.”” (Twitter)
- Metaphors: The ‘virus’ metaphor has also been taken up by (supporters of) victims of hate and discrimination:
  - “Stop calling the illness the “Chinese virus.” We need to end the history of leaders who have … wonderful and interesting. Spread love. Not hate. Hate is a virus.” (Instagram)
  - “stop the racism to Asian, I AM NOT A VIRUS.” (Instagram)
“It’s not racist at all…“
At the same time, those who support the use of ‘Chinese Virus’ have legitimized and justified what others consider hate or racism.
- Attributing to origin: This is possibly the most commonly observed way of defending the term ‘Chinese Virus’, as in Trump’s response when asked why he repeatedly used the name at a press conference: “It’s not racist at all…It comes from China, that’s why.”
- Referring to historical antecedents: Another common strategy for ‘normalizing’ the use of ‘Chinese Virus’ is to draw on examples of diseases named after countries or places, such as the Spanish Flu and the Hong Kong Flu.
  - Others point out that ‘Wuhan Virus’ was already in use in China before the official name COVID-19 was announced: “FYI. The term of #WuhanVirus (#武汉肺炎 ）is quite common in Chinese media since 1/2020.” (Twitter)
- Appealing to common knowledge: Arguments of this type normalize the naming practice by framing it as common knowledge, as in the following tweet:
  - “We all know that the virus is from Wuhan. Donald trump didn’t said anything wrong. We also called it Wuhan Virus or Chinese Virus. It is not racism #WuhanCoronavirus #ChineseCoronavirus” (Twitter)
- Legitimation: In response to the banning of the use of ‘Chinese Virus’ in San Antonio, some argue that such a ban would seriously jeopardize freedom of speech, e.g. “They are violating the First Amendment.” (Twitter)
- ‘Just a joke’ defense: The naming practice is sometimes masked as a joke or mere humour, as in:
  - “I like “Kung Flu” better than kung fu virus. I think it’s funny. I don’t think of the Chinese as being lesser people but I’m sure not happy with their government. I don’t think that makes me racist.” (Twitter)
- Disclaimer: When making potentially offensive or racist remarks about a certain ethnicity, the hashtag #notracist may be added to the end of the comment to act as a ‘disclaimer’ guiding readers’ interpretation.
- Claiming insider status: Some align themselves with the alleged victims’ identity, as in the following Twitter post:
  - “I’m Chinese, this disgusting virus is called #Chinesevirus or #ccpvirus or #xijinpingvirus. The Chinese community here in England and across the world called it #wuhanpneumonia in Cantonese. Not racist.” (Twitter)
It becomes clear that what counts as hate or racism remains highly debatable. Questions remain as to the extent to which such repeated naming practices have any direct correlation with incidents of ‘offline’ hate or abuse, and what actual harm these names cause to the target groups. There is also no consensus as to whether intent, context, and perceptions should be taken into consideration when defining ‘hate speech’.
Hate Speech Will Not Stop
Expressing the feeling of hatred is not a crime. In pragmatics, there is a long tradition of recognizing ‘expressive speech acts’, which serve the function of expressing emotions, including anger and hatred. The advent of social media has meant the making of ‘affective publics’ (Papacharissi, 2015). Emotions and sentiments have become highly mediatized as we increasingly engage in public online debates and express dissatisfaction with all sorts of social issues.
However, society has long equated ‘hate speech’ with racism and discrimination, because hateful language does have the power to harm vulnerable people. During COVID-19, ‘hate speech’ has been framed as something as deadly as the pandemic itself, and both need to be ‘stopped’ or ‘defeated’. On May 8, 2020, the UN Secretary-General appealed to the world to address and counter hate speech related to COVID-19, concluding his speech by saying: “Let’s defeat hate speech – and COVID-19 – together.”
The truth is, hate speech will not just stop. As Whillock (2000) argues, “[m]ost rational people would prefer that hate speech not exist. The fact is that it does and will continue. Many people have advocated that we try to “stop” hate speech through legislative attempts or legal remedy. Society often mistakenly assumes that hate speech is checked, when it is relatively contained.”
The label ‘hate speech’ is itself not unproblematic. When speech is premodified by ‘hate’, it immediately presupposes something derogatory in nature that must therefore be banned, regardless of intent. The problem with prohibiting ‘hate’ is that it may also restrict radicalist discourse or discourse of resistance, which share similar discourse patterns with what can easily be classified as ‘hate speech’.
“If we become overzealous in our efforts to limit so-called hate speech, we run the risk of setting a trap for the very people we’re trying to defend.” (Eric Nielson, 2018)
“In radicalist discourse, they are mainly used to express hate, give warning and challenge authority; they are also used to show intent of purpose, and to attract media attention.” (Chiluwa, 2015)
Racism against Asians is not exclusive to COVID-19; it has been around for decades. Incidents of racism during the pandemic only echo this long-standing historical issue. Legislation alone will not stop ‘hate speech’. What is perhaps more important is to enhance people’s awareness of the potential harm of all forms of aggressive behaviour online. One possible measure is media literacy education. UNESCO has recently launched a media and information literacy campaign called ‘End Xenophobia Around COVID-19’, which includes a list of ‘DOs and DON’Ts’.
What Can Language Researchers Offer?
However, what we need is not just lists of Dos and Don’ts. Aggressive behaviour online, like any social action, largely manifests itself in language and discourse. We need first to understand the contexts and conditions that engender aggressive language and behaviour. Language researchers have a significant role to play in offering a better understanding of the nature of online aggression, informing the public and policy makers of the overt and covert discourse strategies through which online abuse is enacted. Definitions of ‘hate speech’ and other forms of online aggression should be problematized through critical linguistic analyses of authentic online communication.
“Language is powerful. Hateful language is particularly powerful. It works because it serves as a direct threat, a precursor to more explicit action.” (Whillock, 2000)
Ultimately, it is people’s attitudes that make a difference, not just laws and policies. What a discourse-analytic approach to online aggression can offer is to uncover the discourse strategies of online abuse that may be hidden in everyday discourse processes. Understanding what constitutes online aggression should be an on-going effort from multiple perspectives and disciplines. My discussion here has hopefully taken a small step in that direction.
Chiluwa, I. (2015). Radicalist discourse: a study of the stances of Nigeria’s Boko Haram and Somalia’s Al Shabaab on Twitter. Journal of Multicultural Discourses, 10(2), 214-235.
Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. Oxford University Press.
Whillock, R. K. (2000). Ethical considerations of civil discourse: The implications of the rise of ‘hate speech’. Political communication ethics: An oxymoron, 75-90.
About the author: Carmen Lee is Associate Professor in the Department of English at the Chinese University of Hong Kong. Twitter @LeeCarmenlee