While artificial intelligence needs to be closely monitored so it doesn’t perpetuate bias, it can also be used as a powerful tool for detecting it.
Research released by Google and the Geena Davis Institute on Gender in Media used AI to watch the 100 top-grossing films in the United States from 2014 to 2016, analyzing how much time each gender spent speaking and visible on screen.
Looking at overall on-screen and speaking time, Google found that men were seen and heard nearly twice as often as women. Women appeared for only 36% of the time humans were visible on screen, and accounted for just 35% of speaking time. Broken down by movie rating, women were least represented in R-rated movies, with 34% of on-screen time, and most represented in PG-rated movies, with 42%.
Women's speaking time drops even lower, to 27%, in Academy Award-winning movies.
Disparity in screen time has been documented before. A 2016 study analyzing 30 years of top-grossing films found that a disproportionate number of films were dominated by male dialogue.
[Chart: Distribution of lines in US top-grossing movies, 1980s to 2010s]
Google’s tool is trained to infer gender from each actor’s face and voice. (Animated films and movies with masked characters were not included.) The tool comprises three algorithms: one that identifies and tracks faces throughout the movie; a second that classifies each face’s gender; and a third that classifies whether a voice is male or female.
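The article does not describe Google's implementation, but the aggregation step after those three algorithms run can be sketched as follows. This is a hypothetical illustration: the per-frame face labels and per-segment voice labels stand in for the (unpublished) outputs of the face-tracking, face-classification, and voice-classification stages.

```python
from collections import Counter

def screen_time_share(face_labels):
    """Turn per-frame gender labels into screen-time percentages.

    face_labels: one gender label ("male" or "female") per sampled
    frame in which a face was detected -- a stand-in for the output
    of the face-tracking and face-classification stages.
    """
    counts = Counter(face_labels)
    total = sum(counts.values())
    return {gender: 100 * n / total for gender, n in counts.items()}

def speaking_time_share(voice_segments):
    """Turn classified speech segments into speaking-time percentages.

    voice_segments: (duration_seconds, gender) tuples, one per speech
    segment -- a stand-in for the voice-classification stage's output.
    """
    totals = Counter()
    for duration, gender in voice_segments:
        totals[gender] += duration
    grand_total = sum(totals.values())
    return {gender: 100 * t / grand_total for gender, t in totals.items()}

# Example matching the study's headline numbers: if women account for
# 36 of every 100 sampled face-frames, their screen-time share is 36%.
print(screen_time_share(["male"] * 64 + ["female"] * 36))
print(speaking_time_share([(65.0, "male"), (35.0, "female")]))
```

The point of the sketch is that once the three classifiers have labeled every frame and speech segment, the headline statistics reduce to simple counting, so the hard part of the pipeline is the classification, not the aggregation.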
The Geena Davis Institute’s CEO, Madeline Di Nonno, says this software could likely be used to analyze television for the same biases in the future.