Twitter’s New Bounty Challenge Pays Hackers For Finding AI Bias In Images

Twitter is launching a bounty program that aims to find AI bias and help fix its algorithms, touting it as the first bug bounty program of its kind in the industry. Twitter and other social media platforms are no strangers to the AI bias problem. Earlier this year, Twitter dropped its image cropping algorithm after it was discovered that the AI behind it exhibited “unequal treatment based on demographic differences.” Users found that the cropping algorithm favored faces with lighter skin tones and highlighted them in the frame, while faces with darker skin tones were cropped out.

In a notable display of transparency, Twitter shared the findings in a detailed analysis after performing an assessment on 10,000 images. However, Twitter is far from the only platform with algorithmic bias issues. Mass surveillance tools have proven inaccurate because the facial recognition technology at their heart is trained on skewed data sets, and as a result they disproportionately harm minorities. Just over a year ago, an AI algorithm used to generate de-pixelated faces from low-resolution pictures was found to produce the photo of a white person when given a pixelated image of Barack Obama.

Twitter, having experienced the ill effects of algorithmic bias firsthand, is now outsourcing the job of finding flaws in its own system through a bug bounty program, a common way of solving problems with community-driven input. Giants like Apple and Microsoft have paid out millions of dollars through bug bounty programs so far, and they continue to offer lucrative payouts for detecting serious vulnerabilities in their systems. What separates Twitter’s initiative, however, is that it is the first bug bounty program of its kind to task researchers with uncovering algorithmic bias and detecting flaws in the AI-driven system Twitter has created.

Twitter Is Setting A Precedent For Social Media Platforms

Specifically, Twitter’s algorithmic bias bounty challenge covers its saliency algorithm, the software behind its image cropping system. The goal is to enlist independent experts to identify potentially harmful aspects of the image cropping algorithm beyond what Twitter has already documented. The challenge, part of the AI Village at DEF CON 2021, aims to tackle a broader range of issues in machine learning ethics across the industry and to identify algorithmic bias on a much larger scale. Twitter is offering $3,500 as the top prize and will take submissions until August 6, 2021, via the HackerOne portal. Participants are asked to build their own assessment on top of the saliency model and the code that has been released publicly, along the lines of the sketch below.
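To make the idea of such an assessment concrete, here is a minimal Python sketch of one kind of probe a participant might build. It assumes a hypothetical predict_saliency function standing in for the publicly released saliency model, and the left/right face-pairing scheme is purely illustrative, not Twitter’s API or methodology.

```python
# Hypothetical bias probe against a saliency-based cropping model.
# Assumption: predict_saliency(image) returns a 2D numpy array of per-pixel
# saliency scores for the image. That function name is a stand-in, not
# Twitter's actual interface.
import numpy as np
from PIL import Image


def saliency_peak(saliency_map: np.ndarray) -> tuple:
    """Return the (row, col) of the highest-saliency pixel, i.e. the point
    a saliency-based crop would keep centered."""
    return np.unravel_index(np.argmax(saliency_map), saliency_map.shape)


def crop_preference(composite: Image.Image, predict_saliency) -> str:
    """Given a composite image with face A on the left half and face B on
    the right half, report which half contains the saliency peak."""
    saliency = predict_saliency(composite)
    _, col = saliency_peak(saliency)
    return "left" if col < saliency.shape[1] // 2 else "right"


def preference_rate(pairs, predict_saliency) -> float:
    """Fraction of paired composites where the left-hand face wins the crop.
    A rate far from 0.5 over many demographically paired images (with the
    left/right placement randomized) would suggest systematic bias."""
    wins = sum(crop_preference(img, predict_saliency) == "left" for img in pairs)
    return wins / len(pairs)
```

The design choice here is simply to turn a qualitative complaint ("the crop favors certain faces") into a measurable statistic over paired inputs, which is the general shape of the harm assessments the challenge asks for.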

While the challenge is a positive step in the right direction, Twitter still has a long way to go, and so do other social media companies. Facebook, for instance, announced plans last year to examine racial bias and boost the platform’s inclusiveness. Back in 2019, a Cornell study revealed that tweets by people from the African-American community were much more likely to be tagged as hate speech, pointing to systemic racial bias. A study conducted by University of Washington researchers in the same year found a similar set of issues with a tool made by Google.