Twitter Hate Speech Changes Explained: Here’s What You Can & Can’t Say

Twitter has expanded its rules regarding hateful conduct for the second time in less than a year. Social media services have faced heavy criticism recently regarding the way they regulate online discussions, as well as the suggestion that some services have a tendency to amplify extreme views over mainstream ones.

Services like Facebook and Twitter have tried to address sensitive issues like unethical data collection with varying degrees of success. However, attempts to curb hate speech remain limited, with little in the way of results to show. In fact, the one thing most social media platforms have in common is their struggle to deal with unwanted content, especially when that content goes viral. This has typically resulted in extreme, and often misleading, content (such as anti-vaccination ads) spreading further and wider than it should on social media. While Twitter has also failed to adequately restrict hateful content and personal attacks in the past, when it has introduced tools to combat these issues, those tools have often been criticized for being too restrictive.

Twitter’s latest hate-speech changes are part of a wider and ever-evolving process to find an effective solution. Twitter has now expanded its hateful conduct rules to include language that dehumanizes on the basis of age, disability, or disease. This follows the July 2019 update intended to take on language dehumanizing others on the basis of religion. Twitter explained that these changes are aimed at reducing the risk of offline harm, which research has suggested can increase due to dehumanizing language.

Twitter Defining Dehumanizing Language

The current update provides examples of the kind of tweets that will be removed going forward. Mostly, they fall into three categories, with the first relating to age groups. Anything that dehumanizes people from a certain age group by using obscene terms, or by drawing offensive comparisons with animals, will come under the expanded rules. Tweets about people with diseases form the second category, which looks to protect those suffering from certain diseases from being demeaned or defined by their circumstances. Needless to say, this one is extremely topical due to the ongoing Coronavirus outbreak. The third group involves attacks ostracizing people with disabilities. The examples given by Twitter are messages that deem people with disabilities to be “sub-human” or suggest that they “shouldn’t be seen in public.” Again, all of these are on top of the rules that were already in use to limit Tweets targeting religious groups.

Any Tweets that are reported and found to be crossing any of these lines will be removed and could result in an account suspension. However, Twitter has confirmed that Tweets made before this update won’t result in direct suspension, although they will be removed. Twitter also made it clear that these rules are subject to change again in the future, noting that it might expand these categories to include culture-specific tweets involving anything deemed dehumanizing. While these are welcome steps toward limiting hate speech, many could ask why Twitter hasn’t acted sooner. After all, hateful and dehumanizing conduct on social media, including Twitter, is nothing new.