Advanced AI models trained on ImageNet, a popular (but problematic) dataset of photos scraped from the Internet, automatically learn human-like biases about race, gender, weight, and more. That’s the finding of new research from scientists at Carnegie Mellon University and George Washington University, who developed a novel method for quantifying biased associations between representations of social concepts (e.g., race and gender) and attributes in images. When compared with statistical patterns in online image datasets, the results suggest that models automatically learn bias from the way people are stereotypically portrayed on the web.

Companies and researchers regularly use machine learning models trained on massive Internet image datasets. To reduce costs, many employ state-of-the-art models that have been pretrained on large corpora and adapt them to new goals, a powerful approach known as transfer learning. A growing number of computer vision methods are unsupervised, meaning they do not use labels during training. With fine-tuning, practitioners combine these general-purpose representations with labels from specific domains to perform tasks such as facial recognition, job-candidate screening, autonomous driving, and online ad targeting.
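To make the transfer learning workflow concrete, here is a minimal sketch in PyTorch. It uses a torchvision ResNet-50 pretrained on ImageNet as a stand-in backbone (the models in the study were iGPT and SimCLRv2), a hypothetical two-class downstream task, and synthetic tensors in place of a real labeled dataset; it is illustrative only, not the researchers’ setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet: these are the "general-purpose
# representations" that transfer learning reuses.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the pretrained weights so only the new task head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the ImageNet classifier with a head for the labeled downstream task
# (a hypothetical two-class problem here).
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Synthetic stand-in batch; in practice this would be a DataLoader over
# labeled images from the target domain.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))

# One fine-tuning step: only the new head's parameters are updated.
loss = criterion(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The point of the sketch is that whatever associations the frozen backbone has already learned are carried, unexamined, into the downstream task.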

Starting from the hypothesis that image representations contain biases corresponding to stereotypes about the groups depicted in training images, the researchers adapted bias tests originally developed for contextualized word embeddings to the image domain. (Word embeddings are language modeling techniques that map words from a vocabulary to vectors of real numbers so that models can learn from them.) The proposed benchmark, the Image Embedding Association Test (iEAT), modifies word embedding tests to compare pooled image-level embeddings (i.e., vectors representing images), with the aim of measuring the biases embedded during unsupervised pretraining by systematically comparing the associations between embeddings.
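As a rough sketch of the kind of association test the iEAT adapts from word embedding bias tests (such as the WEAT), the NumPy snippet below computes a standard differential-association effect size between two sets of target embeddings and two sets of attribute embeddings. The formula follows the common WEAT formulation; it is an illustration of the general technique, not the authors’ exact code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """s(w, A, B): how much closer embedding w is to attribute set A than to B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Differential association of target sets X, Y with attribute sets A, B."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

if __name__ == "__main__":
    # X, Y: pooled embeddings of images for two social groups;
    # A, B: embeddings of e.g. "pleasant" vs. "unpleasant" attribute stimuli.
    # Random vectors here, so the effect size should be near zero.
    rng = np.random.default_rng(0)
    X, Y, A, B = (rng.normal(size=(8, 128)) for _ in range(4))
    print(effect_size(X, Y, A, B))
```

A large positive effect size would indicate that the first target group is systematically closer to the first attribute set than the second group is, which is the signature of a biased association.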

To find out what kinds of biases can creep into image representations generated without class labels, the researchers focused on two computer vision models released last summer: OpenAI’s iGPT and Google’s SimCLRv2. Both were pretrained on ImageNet 2012, which contains 1.2 million annotated images from Flickr and other photo-sharing sites spanning 1,000 object classes. And, as the researchers explain, both learn to produce embeddings based on implicit patterns across the training set’s image features.
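For illustration, here is one way to obtain a pooled embedding vector from a pretrained vision backbone. A torchvision ResNet-50 again stands in for iGPT and SimCLRv2 (whose pooled features are what the study actually analyzes), and the image filename is hypothetical.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone; replacing the classifier with Identity exposes the
# pooled 2048-dimensional feature vector instead of class logits.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    # "stimulus.jpg" is a hypothetical test image from a stimulus set.
    image = preprocess(Image.open("stimulus.jpg")).unsqueeze(0)
    embedding = model(image)  # shape (1, 2048): one vector per image
```

Vectors extracted this way for different stimulus images are what an association test like the one sketched above would compare.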

The researchers compiled representative sets of image stimuli for categories including “Age”, “Gender”, “Religion”, “Sexuality”, “Weight”, “Disability”, “Skin Tone”, and “Race”, drawing images for each from Google Images, the open-source CIFAR-100 dataset, and other sources.

In experiments, the researchers said they found evidence that iGPT and SimCLRv2 contained “significant” biases, likely due to ImageNet’s data imbalance. Previous research has shown that ImageNet represents race and gender unequally. For example, the groom category shows mostly white people.

Both iGPT and SimCLRv2 showed racial bias in terms of both valence (i.e., positive and negative associations) and stereotyping. Embeddings from iGPT and SimCLRv2 exhibited bias on an Arab-Muslim iEAT benchmark measuring whether images of Arab Americans were considered “more pleasant” or “more unpleasant” than images of other people. iGPT was also biased in a skin tone test comparing perceptions of faces with lighter and darker tones (the model rated lighter tones as “more positive”). And both iGPT and SimCLRv2 associated white people with tools and Black people with weapons, a tendency similar to that of Google Cloud Vision, Google’s computer vision service, which was found to label images of dark-skinned people holding thermometers as “gun.”

Racial bias aside, the co-authors report that gender and weight biases plague the pretrained iGPT and SimCLRv2 models. In a gender-career iEAT test, which estimated the proximity of the category “male” to attributes such as “work” and “office” and of “female” to attributes such as “children” and “home,” the models’ embeddings proved stereotypical. In the case of iGPT, a gender-science benchmark designed to assess the association of “male” with “science” attributes such as math and engineering and of “female” with “liberal arts” attributes such as art showed a similar trend. And iGPT displayed a bias toward thinner people of all genders and races, associating thin people with pleasantness and overweight people with unpleasantness.

The researchers also report that iGPT’s image completion feature was biased against women in their tests. To demonstrate this, they cropped portraits of women and men, including Rep. Alexandria Ocasio-Cortez (D-NY), below the neck and had iGPT generate a variety of complete images. iGPT’s completions of ordinary, factual portraits of clothed women and men, indoors and outdoors, frequently featured large breasts and bathing suits. In six of the ten portraits tested, at least one of the eight completions showed a bikini or a low-cut top.

Unfortunately, the results aren’t surprising; countless studies have shown that facial recognition is prone to bias. In a paper last fall, University of Colorado Boulder researchers showed that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Independent benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) have likewise shown that facial recognition technology exhibits racial and gender bias, and have suggested that current facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time.

However, efforts are being made to make ImageNet more inclusive and less toxic. Last year, the team from Stanford, Princeton, and the University of North Carolina behind the dataset used crowdsourcing to identify and remove derogatory words and photos. They also assessed the demographic and geographic diversity of ImageNet photos and developed a tool to surface more diverse images in terms of gender, race, and age.

“Though models like these may be useful for quantifying contemporary social biases as they are represented in large collections of images on the Internet, our results suggest that unsupervised pretraining on large-scale image sets is likely to propagate harmful biases,” the Carnegie Mellon and George Washington University researchers wrote in a paper describing their work, which has not been peer-reviewed. “Given the high compute and carbon costs of training models at scale, transfer learning with pretrained models is an attractive option for practitioners. But our results indicate that patterns of stereotypical portrayal of social groups do affect unsupervised models, so careful study and analysis are needed before these models are used to make consequential decisions about individuals and society.”
