Twitter pays $3,500 to Ukrainian who found flaw in image cropping algorithm
Ukrainian Bogdan Kulynych, a graduate student at Switzerland’s EPFL university, demonstrated racial bias in the Twitter algorithm used to create preview images. For this, the company paid him a reward of $3,500.
As The Guardian reports, Kulynych found flaws in the image cropping algorithm, which appears to have regularly focused on white faces while cropping out images of Black people. The algorithm even favored white dogs, cropping out black ones.
Twitter algorithm flaw
Twitter has been using automatic cropping of images for a long time so that they don’t take up too much space in the main feed and so that multiple images can be shown in a single tweet. The company uses several algorithmic tools to focus on the most important parts of an image, keeping faces and text.
But in 2020, users noticed that Twitter’s algorithms preferred white people and even white dogs, almost always cutting off Black people.
The company’s own research also confirmed a bias in favor of white and female faces. So Twitter launched a bounty program to find problems with the algorithm: participants who could demonstrate flaws were promised rewards.
Research by Bogdan Kulynych
Kulynych demonstrated the bias by first artificially generating faces with different characteristics and then running them through Twitter’s cropping algorithm to see what the software was focusing on.
Since the faces were artificial, it was possible to create faces that were nearly identical except for skin tone, face shape, gender, or age, and thus show that the algorithm favored younger, slimmer, and lighter faces.
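The experimental idea above can be sketched in a few lines. The snippet below is a toy illustration, not Twitter’s actual model: `toy_saliency` is a deliberately simple contrast-based proxy standing in for the trained saliency model, and the “faces” are flat brightness arrays standing in for generated images that differ only in one attribute.

```python
def make_face(brightness, size=8):
    """Return a flat toy 'image': a list of pixel intensities (0-255)."""
    return [brightness] * (size * size)

def toy_saliency(image, background=128):
    """Toy proxy for a saliency model: mean absolute contrast
    against a uniform background. Twitter's real pipeline used a
    trained neural saliency model instead."""
    return sum(abs(p - background) for p in image) / len(image)

def preferred_variant(variants):
    """Return the label of the variant a saliency-based cropper
    would keep, i.e. the one with the highest saliency score."""
    return max(variants, key=lambda label: toy_saliency(variants[label]))

# Two nearly identical variants that differ only in brightness,
# mimicking Kulynych's controlled comparison of generated faces.
variants = {
    "lighter": make_face(brightness=230),
    "darker": make_face(brightness=60),
}
print(preferred_variant(variants))  # the cropper's "winner"
```

Repeating such a comparison over many generated pairs, and varying one attribute at a time, is what allowed the bias to be measured systematically rather than anecdotally.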
“Algorithmic harms are not only ‘bugs.’ Crucially, a lot of harmful techs are harmful not because of accidents, unintended mistakes, but rather by design. This comes from maximization of engagement and, in general, profit externalizing the costs to others. As an example, amplifying gentrification, driving down wages, spreading clickbait and misinformation are not necessarily due to ‘biased’ algorithms,” Bogdan said.