Do the media propagate stereotypes?
The media have a powerful influence on how people view and treat different groups based on gender and ethnicity, as news stories shape people's stereotypes, beliefs, and ultimately behaviors in areas such as education, family, and politics. It is, therefore, essential to understand how the media, and in particular news outlets, depict different groups and whether stereotypes drive these depictions.
This is challenging for researchers, as stereotypes are encoded in the media in multiple ways, not all of which are readily apparent or measurable. Recent research has tried to tackle this question with natural language processing techniques that make it possible to study the associations between the words journalists use in their texts.
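As a toy illustration of the idea (not the specific method used in any of those studies), word embeddings trained on news text can quantify how strongly an occupation word is associated with gendered terms; the corpus and word lists below are hypothetical placeholders.

```python
# Toy sketch: measure how strongly an occupation word associates with gendered
# terms in a word-embedding space trained on news text. The corpus below is a
# tiny placeholder; real studies train on millions of articles.
from gensim.models import Word2Vec

sentences = [
    ["the", "nurse", "said", "she", "would", "help"],
    ["the", "engineer", "said", "he", "fixed", "it"],
]  # hypothetical tokenized news sentences

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=0)

def gender_lean(word, female_term="she", male_term="he"):
    """Positive values suggest a female-leaning association, negative male-leaning."""
    return model.wv.similarity(word, female_term) - model.wv.similarity(word, male_term)

print(gender_lean("nurse"), gender_lean("engineer"))
```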
What has so far been overlooked is the role of images in media slant and stereotyping. In a recent research project, my co-authors Elliott Ash, Ruben Durante, and I study visual stereotypes in newspaper images using artificial intelligence techniques, and more specifically recent advances in computer vision.
In particular, we use a custom-trained deep-learning model that can recognize the identity features (such as gender, race, and ethnicity) of the people shown in the images. This allows us to automate the analysis rather than coding hundreds of thousands of images by hand. A further advantage of this approach is the consistency of the classification, which no human coder can guarantee.
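To give a sense of what building such a classifier involves, here is a minimal sketch of one common approach (not necessarily the exact pipeline we use): fine-tuning a pretrained convolutional network on labeled face crops. The directory layout and label set below are hypothetical placeholders.

```python
# Sketch: fine-tune a pretrained ResNet to predict perceived identity attributes
# of faces cropped from article images. Paths and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

LABELS = ["black_female", "black_male", "white_female", "white_male"]  # hypothetical label set

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Face crops organized into one folder per label (hypothetical directory).
train_data = ImageFolder("data/face_crops/train", transform=preprocess)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(LABELS))  # replace the classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```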
In our analysis, we focus on two major U.S. news outlets: the New York Times and Fox News. In total, we analyzed over two million articles published in the web editions of the two outlets between 2000 and 2020, of which 690,000 are accompanied by an image.
One important finding of our paper concerns occupational stereotypes in newspaper images. In other words, we analyze whether newspapers perpetuate common stereotypes about the occupational choices of specific groups, e.g., White men working as managers.
Using our computer vision approach in combination with text analysis techniques, we show that news article images display gender and racial stereotypes: jobs that are stereotypically "female" or "Black" are more likely to be represented by an image showing the respective identity group.
An occupation is considered stereotypical for an identity group if a relatively higher share of that group works in it than of other groups. For example, even though Blacks make up a comparatively small share of the overall U.S. population, an occupation like "mail processor" may nonetheless be stereotypically Black, because a higher share of Black workers is employed in it than of other identity groups. To give another example, the most stereotypically female job is "secretary".
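To make the definition concrete, here is a tiny numerical sketch with made-up numbers (not figures from our paper or from official statistics):

```python
# Hypothetical numbers for illustration only: the share of each group's workers
# employed as "mail processors".
share_in_occupation = {
    "Black": 0.010,   # 1.0% of Black workers (illustrative)
    "White": 0.003,   # 0.3% of White workers (illustrative)
}
# Even if Black workers are a minority of the overall workforce, the occupation
# counts as stereotypically Black because a larger share of Black workers than
# of White workers holds this job.
stereotypical_group = max(share_in_occupation, key=share_in_occupation.get)
print(stereotypical_group)  # -> "Black"
```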
Strikingly, our analysis controls for the true composition of each profession, e.g., the share of mail processors who are Black. This allows us to disentangle stereotypes from pure differences in the representation of identity groups across occupations in the U.S.
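A stylized sketch of this idea, with made-up data and assumed variable names (not our actual specification), is a regression of image content on a stereotype indicator while holding the occupation's true composition fixed:

```python
# Stylized example: does an occupation being stereotypically Black predict that
# the accompanying image shows a Black person, holding the occupation's true
# Black share fixed? All data and variable names below are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "image_shows_black":     [1, 0, 1, 0, 0, 1, 1, 0],
    "stereotypically_black": [1, 0, 1, 1, 0, 0, 1, 0],
    "true_black_share":      [0.30, 0.08, 0.25, 0.22, 0.05, 0.12, 0.28, 0.07],
})

result = smf.ols(
    "image_shows_black ~ stereotypically_black + true_black_share", data=df
).fit()
print(result.params)  # coefficient on stereotypically_black is the quantity of interest
```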
As such, our results highlight the potential of A.I. and computer vision tools for studying social science research questions. Furthermore, given the ubiquity of images in business, politics, and social media, modern computer vision tools significantly broaden the scope of analysis in the social sciences.