Google’s Controversial Brush: A Deep Dive
Google’s Gemini Nano Faces Bias Accusations in Image Generation
Google’s new image generation tool, Nano Banana Pro, part of the Gemini 3 family, is under scrutiny for allegedly perpetuating racial and cultural stereotypes in its generated images. Users report the AI consistently depicts white individuals in primary, heroic roles while relegating people of color to secondary or stereotypical positions.
The Controversy Unfolds
On November 20, 2024, Google launched Nano Banana Pro, an AI image generation tool designed to enhance creativity and visual precision, as reported by La Razón. However, the tool quickly drew criticism for allegedly reproducing racial and cultural stereotypes. Users began sharing examples on social media demonstrating a pattern: when prompted to generate images of professionals such as doctors or engineers, the AI consistently depicted white individuals.
The Guardian reported that the model frequently places racialized figures in secondary roles or within a "white savior" narrative, echoing colonial-era tropes. Even when prompted for diverse scenarios, the AI tended to portray white characters as the heroes or central figures, while people of color were depicted in supporting or stereotypical roles.
Examples of Bias
Specific examples shared by users included prompts for images of "CEOs" consistently generating images of white men, and requests for "nurses" predominantly producing images of white women. When asked to create images depicting historical events, the AI was criticized for inaccurately representing the diversity of participants. For instance, prompts related to ancient Egypt often generated images featuring predominantly white individuals, despite historical evidence demonstrating a diverse population.
The issue isn't limited to racial portrayal. Users also reported biases related to gender and cultural stereotypes. Prompts for "programmers" frequently resulted in images of young white men, reinforcing existing stereotypes within the tech industry.
Google's Response and the Broader Context
Google acknowledged the issues and stated it is working to address the biases in Nano Banana Pro. In a statement released on February 8, 2024, a Google spokesperson said the company is "committed to building AI responsibly" and is "taking steps to improve the diversity and accuracy of our image generation models."
