Findings and the Proposed Changes
The paper explains how Twitter ran elaborate tests on its image-cropping algorithm to check for potential gender and racial bias, as well as a "male gaze" effect.
Dataset & Methodology
The dataset used is the WikiCeleb dataset, consisting of images and labels of 4073 celebrities as recorded on Wikidata. The data is divided into subgroups based on race and gender, resulting in four subgroups: Black-Female, Black-Male, White-Female, and White-Male.
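The subgroup comparison can be thought of as a pairwise tally: for pairs of images drawn from two subgroups, count how often the saliency model assigns a higher score to one side. The sketch below is only an illustration of that tally; `saliency_score` is a hypothetical stand-in (the real system uses a neural saliency model), and the toy "images" are just lists of pixel intensities.

```python
# Hypothetical stand-in for a saliency model: scores an "image"
# (here, a flat list of pixel intensities) with a single value.
# The real Twitter model is a neural network; this stub is illustrative only.
def saliency_score(image):
    return sum(image) / len(image)

def pairwise_favor(group_a, group_b):
    """Fraction of cross-group pairs in which the image from group_a
    receives a higher saliency score than the image from group_b."""
    wins = 0
    total = 0
    for img_a in group_a:
        for img_b in group_b:
            total += 1
            if saliency_score(img_a) > saliency_score(img_b):
                wins += 1
    return wins / total

# Toy data: two images per subgroup.
group_a = [[200, 210, 190], [180, 185, 175]]
group_b = [[100, 110, 90], [120, 125, 115]]
print(pairwise_favor(group_a, group_b))  # → 1.0 (group_a favored in every pair)
```

A favor rate near 0.5 would indicate no systematic preference between the two subgroups; the paper's reported disparities correspond to rates meaningfully different from 0.5.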
Results: gender and racial disparity
The results show that the algorithm strongly favors female over male individuals, and white over Black individuals.
Results: the male gaze
For the male-gaze issue, the results showed that, for every gender, roughly 3 out of 100 images were cropped at a location other than the head. These off-head crops usually landed not on bodies but on areas of the image displaying numbers, such as sports jerseys, and the behavior was similar across genders.
The paper also discusses additional factors that amplify disparate effects, such as dark backgrounds, dark eyes, and higher variability. Finally, here is what the authors conclude:
However, we demonstrate that formalized fairness metrics and quantitative analysis on their own are insufficient for capturing the risk of representational harm in automatic cropping. We suggest the removal of saliency-based cropping in favor of a solution that better preserves user agency. For developing a new solution that sufficiently addresses concerns related to representational harm, our critique motivates a combination of quantitative and qualitative methods that include human-centered design.
As a result, users will get an accurate preview of how their images will appear when they Tweet a photo. After testing this change, Twitter finally rolled out this updated feature in May 2021.
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot