When AI Tries to Rewrite History: Bias and Diverse Representation in AI Image Generation
SEPTEMBER 12, 2025
STEPHANIE McCULLY, ETSB
I had a teacher friend who used the AI image generation tool in Canva with students as a way to get them to think deeply about a character in a novel. She challenged them to describe the character in such detail that AI could generate a remarkably accurate image of the character. She had them enter physical characteristics and personality traits, as well as any setting or accessory details that might help a viewer identify the character. This was a fun project, and the kids could re-enter prompts to have AI refine the image until it matched what they saw in their heads when they pictured the fictional character.
Image caption: An image generated in Canva AI. When prompted to show a settler and his family in New France, the AI gave the settler Black children and an Indigenous wife without being asked to, making the image historically inaccurate.
I was inspired by her creativity to see if the same tool could generate an accurate representation of a historical event. The results were hilarious, and disturbing. When asked to generate a depiction of Champlain arriving in Canada and being greeted by Indigenous people, it depicted Champlain himself as Indigenous. When asked to create an image showing Louis Hébert, one of the first European settlers in New France, and his family, it bizarrely gave him Black children. Was AI attempting to add diversity to the image? To be clear, I was part of the problem: I needed to be far more descriptive and detailed with my request. But I wondered why some parts of the image seemed accurate, the historically accurate clothing for example, and yet it still gave him a mixed-race family. I chalked it up to how AI “invents” stuff and thought about how you can’t believe everything you see on the internet.
What I have come to discover is that the results of my image generation challenge were not isolated in their silliness. In Rendering misrepresentation: Diversity failures in AI image generation, the authors explain that when racial and gender bias surfaced as problems in AI image generation, developers overcorrected to such an extent that AI seemed to add different races to images at random, for example, “generating racially-diverse portrayals of World War II-era German soldiers” (Baum & Villasenor, 2024). The programmers knew there was a problem, and in trying to solve it they seem to have made it worse. While an image representing Jacques Cartier as a Black woman might seem like a harmless goof, there is a mountain of historical, racial, and cultural bias behind it, a problem that sprinkling more people of color into an image doesn’t solve.
The way AI image generation works is that the user enters a prompt, the AI adds details to that prompt, and then the AI generates the image. If I ask for a firefighter, it produces for itself a detailed prompt where it likely adds white, male, young, physically fit, in a firefighter’s suit, holding a hose, playing fast and loose with racial and gender bias. In response to complaints about the lack of diversity in generated images, some image generators have added what they call a “diversity filter” (Baum & Villasenor, 2024) to the detailed prompt the AI creates before generating the image. This would, in theory, ensure that the image generated would be “diverse” in nature. This filter, however, seems only to be triggered by certain user prompts, and also seems limited in how it produces “diversity”.
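To make that hidden pipeline concrete, here is a minimal, purely speculative sketch in Python of how a prompt-expansion step with a bolted-on “diversity filter” might behave. Every function name, trigger word, and descriptor list here is my own invention for illustration; the real systems are proprietary and far more sophisticated.

```python
import random

# Purely illustrative sketch: the internal prompt rewriting of real image
# generators is not public, so every list and name below is an assumption
# invented for this example.

# Stereotyped details a model might silently attach to a short prompt.
DEFAULT_DETAILS = {
    "firefighter": "white, male, young, physically fit, "
                   "in a firefighter's suit, holding a hose",
}

# Hypothetical trigger words: the filter only fires on certain prompts,
# which would explain why its effects feel so inconsistent.
DIVERSITY_TRIGGERS = {"person", "firefighter", "doctor", "family", "settler"}

DIVERSITY_DESCRIPTORS = ["Black", "East Asian", "South Asian", "white", "Indigenous"]


def expand_prompt(user_prompt: str) -> str:
    """Mimic the hidden step where the system pads a short user prompt
    with extra details before the image model ever sees it."""
    detailed = user_prompt
    words = set(user_prompt.lower().split())

    # Step 1: pad the prompt with default (often stereotyped) details.
    for subject, details in DEFAULT_DETAILS.items():
        if subject in words:
            detailed += f", {details}"

    # Step 2: the "diversity filter" bolts a randomly chosen descriptor
    # onto the prompt, with no awareness of historical or geographic
    # context, and possibly contradicting the details already added.
    if DIVERSITY_TRIGGERS & words:
        detailed += f", {random.choice(DIVERSITY_DESCRIPTORS)}"

    return detailed


print(expand_prompt("a firefighter"))
# Possible output: "a firefighter, white, male, young, physically fit,
# in a firefighter's suit, holding a hose, Indigenous"
```

Because the descriptor is appended at random, with no sense of time or place, a prompt about a seventeenth-century settler in New France would be treated exactly like a prompt about a modern fire station, which is the kind of mismatch I stumbled into with Louis Hébert.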
AI is especially prone to all kinds of bias when asked to generate an image of an individual. A prompt like “successful person” results almost exclusively in young white males. Baum & Villasenor mention that there is “a very long way to go in addressing bias in generative AI image generation, which needs to appropriately engage with diversity in all of its forms and complexity.” On top of the problem of getting the amount of diversity “just right,” designers must also contend with when diversity is appropriate and when it is not: the people in a street scene in Tokyo will likely look different from those on a street in Sweden or Jerusalem. For now, it seems to be a throwing-darts-in-the-dark kind of situation.
While these may seem like dark times, programmers are aware of the issues with AI image generation. In fact, when results turn inappropriate or offensive, the programs are sometimes paused while designers go back to the drawing board. As the article’s authors point out, “one interesting question is whether these challenges will be addressed primarily by using prompt engineering to counteract biases in data sets, or whether the data sets themselves can be improved to reduce bias” (Baum & Villasenor, 2024). Whichever approach wins out, both the prompts and the data sets need to improve before AI image generation can be relied upon for accurate results that don’t do harm.
Can you use AI to make a Lego version of yourself? Sure. But before using it to create, to publish, or to tell stories, check on the story it is telling. Look at the images it generates through a critical lens and ask yourself: does this image really align with my values, beliefs, and message? If not, hit pause.
References:
Baum, J., & Villasenor, J. (2024, April 17). Rendering misrepresentation: Diversity failures in AI image generation [Commentary]. Brookings Institution. https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/
Original student task shared by Laura Leblanc, Alexander Galt Regional High School, Sherbrooke, Quebec.
This blog was originally written for PME 815 Digital Literacy - Queen's University.