Google Pauses AI Chatbot Amid Controversy Over Race-Swapping Historical Figures

Google has announced a temporary pause on its AI image tool, Gemini, following widespread criticism on social media for inaccurately race-swapping historical individuals.
The controversy erupted as users showcased examples of Gemini’s image tool producing inaccurate portrayals of historical figures such as the Founding Fathers, Vikings, and even the pope. When asked to depict a historically white individual, the AI consistently generated figures of different races, including black, Asian, or Native American people.
One user on X pointed out that when asked to create a picture of a white family, Gemini refuses to fulfill the request, stating that it is “unable to generate images that specify ethnicity or race.” Gemini goes on to state that it can instead offer the user “images of families that celebrate diversity and inclusivity, featuring people of various ethnicities and backgrounds.” Despite this justification, however, Gemini will immediately create images of black families when prompted to do so.
The most shocking example of Gemini’s race-swapping was its creation of “racially diverse” Nazi soldiers, which included an Asian woman and a black man in uniform.
In response to the controversy, Google released a statement acknowledging the need to address recent issues with Gemini’s image generation feature. The company attributed the inaccuracies to the complexity of the algorithms behind the image generation models.
“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
Jack Krawczyk, a member of Google’s AI team, acknowledged the issue and said that efforts are underway to rectify the inaccuracies. However, he emphasized Google’s commitment to designing “image generation capabilities to reflect [its] global user base,” stating that it will continue to do so for open-ended prompts.
“Historical contexts have more nuance to them and we will further tune to accommodate that,” said Krawczyk.
Gemini itself attributed the issue to its algorithms, stating that the algorithms behind the models “are complex and still under development.” Critics, however, have raised concerns that Google may have manipulated the AI tool to advance a specific narrative, given the company’s clear advocacy of left-wing beliefs regarding race and history.