Twitter Follows YouTube with Greater AI Use to Combat Coronavirus


Broad policy changes have cropped up across social media platforms, and Twitter has indicated it plans to make changes as well. Content moderation has taken on a more prominent role as it becomes clear that social media will be a primary channel for sharing and receiving news about the coronavirus.

Like most companies, Twitter has taken steps to protect public safety, including having its staff work from home. On a platform as established and large as Twitter, the day-to-day engineering and back-end workload is far less intensive than the moderation workload. Even before COVID-19 became a global topic, content moderators were under constant pressure to tamp down hate speech, quell overzealous political bickering, and enforce the site’s other policies. Now that workload has grown even heavier, and moderators must also work from home, where many have to juggle parenting and the other complications the transition brings.

For these and other reasons, Twitter is adjusting its approach to moderation. In a new blog post, the company outlined plans to rely more heavily on AI moderation, reducing the stress on its human moderators. Machine learning and automation aren’t perfect solutions, so the company has vowed to adapt its workflow so that a human always has the final say on whether to ban a user. The post assures users that its AI will not issue any permanent suspensions.

The Growth and Benefits of Automated Moderation


Moderation is inherently subjective, so it will always require human oversight. However, services on the scale of Twitter or Facebook simply don’t have the option of leaving moderation exclusively to human employees. There will naturally be cases where a detailed look into a user conflict is the only way to resolve it, but (somewhat fortunately) the biggest moderation threats come in the form of individual, popular, misleading posts. If one tweet is flagged as inappropriate or otherwise in violation of Twitter’s policies, it’s easy for a computer to locate every instance of that post and respond accordingly.
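To illustrate the general idea (not Twitter’s actual system, which is not public), matching copies of a flagged post can be as simple as normalizing the text and comparing fingerprints. The function and post names below are hypothetical:

```python
# Hypothetical sketch of exact-duplicate matching for flagged posts.
# Trivial edits (casing, extra whitespace, appended links) are normalized
# away so reposts of the same content still match.
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase the text, strip URLs, and collapse whitespace."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def fingerprint(text: str) -> str:
    """Stable hash of the normalized text."""
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

# Fingerprints of posts a human moderator has already flagged.
flagged = {fingerprint("Drink bleach to cure the virus! http://spam.example")}

def is_known_violation(tweet: str) -> bool:
    """True if this tweet is a copy of an already-flagged post."""
    return fingerprint(tweet) in flagged
```

In practice, platforms use far more robust techniques (fuzzy hashing, embeddings) to catch paraphrases, but the principle is the same: one human judgment can be applied mechanically to thousands of copies.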

The aforementioned blog post also outlined Twitter’s plans to combat misinformation about the coronavirus by extending its definition of “harm” to include posts advising users to act against the recommendations of health organizations. While this level of granular, deductive moderation can be coded into AI, weeding out such posts by hand would require dozens of person-hours. Shifting greater emphasis onto AI is beneficial here too, because it frees humans to tend to other issues more effectively.

As happened earlier this week, when Facebook’s AI mistakenly removed coronavirus-related posts, and as will continue to happen with YouTube and its policy adjustments, deploying AI for moderation will inevitably lead to mistakes. Some posts that shouldn’t be blocked will be, and punishments will go out when they shouldn’t. Moderation won’t be perfect. For their part, platforms like Twitter are trying to address these issues and have done a decent job of recognizing mistakes when they crop up. On the user side, this volatile time in social media history calls for patience and an understanding that there are humans trying to get this right.