An algorithm Twitter uses to decide how to crop photos in people’s timelines appears to automatically favor white people’s faces over those of people with darker skin. The trend was spotted in recent days by Twitter users posting photos on the social media platform. A Twitter spokesperson said the company plans to re-evaluate the algorithm and share the results so that others can review or replicate them.
JFC @jack https://t.co/Xm3D9qOgv5
– Marco Rogers (@polotek) September 19, 2020
Twitter scrapped its face detection algorithm in 2017 in favor of a saliency detection algorithm that predicts the most important part of an image. A Twitter spokesperson said today that an evaluation of the algorithm before it was deployed did not reveal any racial or gender bias, “but it is clear that we have more analysis to do.”
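The general idea of saliency-based cropping can be sketched as follows. This is a minimal illustration that uses local gradient magnitude as a stand-in for saliency; Twitter's actual system uses a trained neural network to predict saliency, and the function names here are hypothetical:

```python
import numpy as np

def saliency_map(image):
    """Crude saliency proxy: local gradient magnitude.
    (Production systems predict saliency with a trained model instead.)"""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def crop_around_most_salient(image, crop_h, crop_w):
    """Center a fixed-size crop on the most salient pixel, clamped to image bounds."""
    sal = saliency_map(image)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a bright square on a dark background draws the crop toward it.
img = np.zeros((100, 100), dtype=np.uint8)
img[70:80, 70:80] = 255
crop = crop_around_most_salient(img, 40, 40)
```

The bias question raised in the article arises precisely because the crop follows whatever the saliency model scores highest, so any skew in those scores is inherited directly by the cropping decision.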
Twitter engineer Zehan Wang tweeted that bias was detected in 2017 before the algorithm was deployed, but not at “significant” levels. A Twitter spokesperson declined to explain the discrepancy between these two accounts of the bias found in the initial evaluation, saying the company is still gathering details about the assessment that took place before the algorithm was released.
I wonder if Twitter does this for fictional characters as well.
Lenny Carl pic.twitter.com/fmJMWkkYEf
– Jordan Simonovski (@_jsimonovski) September 20, 2020
On Saturday, algorithmic bias researcher Vinay Prabhu, whose recent work led MIT to withdraw its 80 Million Tiny Images dataset, devised a method for evaluating the algorithm and planned to share the results through a recently created Twitter account dedicated to cropping bias. After speaking with colleagues and seeing the public reaction to the idea, Prabhu told VentureBeat he is reconsidering whether to proceed with the assessment and is questioning the ethics of using saliency algorithms at all.
“Unbiased algorithmic saliency cropping is a pipe dream, and a bad one. The way the cropping problem is framed dooms it from the start, and there is no downstream unbiased algorithm that can fix it,” Prabhu said in a Medium post.
Prabhu said he is also reconsidering the assessment because he fears some people might use the experimental results to claim there is no racial bias, which he said happened with the initial evaluation results.
“If at the end of the day I do these large-scale experiments … what if it just serves to embolden apologists and people who come up with pseudo-intellectual excuses and use the 40:52 ratio as evidence that it’s not racist? What if it further encourages that argument? That would be exactly the opposite of what I am aiming for. That’s my worst fear,” he said.
Twitter’s chief design officer, Dantley Davis, said in a tweet this weekend that Twitter should stop cropping images altogether. VentureBeat asked a Twitter spokesperson whether image cropping in Twitter timelines might be eliminated, what ethical issues are involved in using saliency algorithms, and what datasets were used to train the saliency algorithm. The spokesperson declined to answer these questions but said Twitter employees are aware that people want more control over how images are cropped and are considering a number of options.
Updated September 21 at 10:09 am with replies from Twitter and Vinay Prabhu.