Google suspends the image tool of its "absurdly woke" Gemini AI chatbot after criticism over historically inaccurate images

Google announced on Thursday that it would "pause" the image-generation feature of its Gemini chatbot after receiving backlash for producing "diverse" images that were not historically or factually accurate, such as black Vikings, female popes, and Native Americans among the Founding Fathers.



Social media users criticized Gemini as "absurdly woke" and "unusable" after requests to generate representative images of people produced the oddly revisionist results.


In a statement published on X, Google said: "We're already working to address recent issues with Gemini's image generation feature. We're going to stop creating people's images while we do this, and we'll re-release an improved version soon."


Among the examples were an AI rendering of a black man in a Continental Army uniform and white powdered wig, apparently meant to represent George Washington, and a Southeast Asian woman dressed as a pope, even though all 266 popes in history have been white men.


In another startling example unearthed by The Verge, Gemini even produced "diverse" versions of Nazi-era German soldiers, including an Asian woman and a black man in 1943 military uniforms.



Google has not published the guidelines that govern the Gemini chatbot's behavior, so it is difficult to know why the program was producing revisionist depictions of historical figures and events.


William A. Jacobson, a Cornell University law professor and founder of the Equal Protection Project, told The Post that "actual bias is being built into the systems in the name of anti-bias."

"This is a problem not only for search results but also for real-world applications, where testing algorithms to be 'bias-free' while aiming for results that resemble quotas introduces bias into the system."


Fabio Motoki, a lecturer at the University of East Anglia in the UK who co-authored a report last year identifying a noticeable left-leaning bias in ChatGPT, suggested that the issue may lie with Google's "training process" for the "large language model" that powers Gemini's image tool.

"Remember that people's feedback about what is better and worse is what constitutes reinforcement learning from human feedback (RLHF)," Motoki said in an interview with The Post. "This effectively shapes the model's 'reward' function, which is technically its loss function."

"Therefore, this issue may arise depending on the individuals Google hires or the guidance Google provides them."
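As a rough illustration of the mechanism Motoki describes, here is a minimal, hypothetical sketch of how pairwise human preferences can shape a reward model. The toy linear model, the Bradley-Terry-style loss, and the made-up preference data are all assumptions for illustration; this is not Google's actual training setup.

```python
import numpy as np

# Hypothetical toy example: a linear "reward model" scores candidate outputs,
# and pairwise human preferences (chosen vs. rejected) shape that score.
# This mirrors the RLHF idea: feedback about "better vs. worse" effectively
# defines the model's reward (loss) function.

rng = np.random.default_rng(0)
dim = 8                      # size of a made-up feature vector for an output
w = np.zeros(dim)            # reward model parameters (start neutral)

def reward(features, w):
    return features @ w      # scalar score for one candidate output

def pairwise_loss(chosen, rejected, w):
    # Bradley-Terry style loss: push reward(chosen) above reward(rejected)
    margin = reward(chosen, w) - reward(rejected, w)
    return np.log1p(np.exp(-margin))

# Toy "preference data": labelers consistently prefer outputs whose first
# feature is high. Whatever pattern the feedback rewards is what gets learned.
pairs = []
for _ in range(200):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if a[0] > b[0] else (b, a))  # (chosen, rejected)

lr = 0.1
for _ in range(100):                     # simple gradient descent on the loss
    grad = np.zeros(dim)
    for chosen, rejected in pairs:
        margin = reward(chosen, w) - reward(rejected, w)
        sigmoid = 1.0 / (1.0 + np.exp(-margin))
        grad += -(1.0 - sigmoid) * (chosen - rejected)
    w -= lr * grad / len(pairs)

print("learned reward weights:", np.round(w, 2))
# The weight on feature 0 dominates: the reward model now encodes exactly
# the preference pattern the labelers (or the guidance given to them) expressed.
```

Under these assumptions, whatever preferences the labelers express, including any guidance they are given, ends up encoded directly in the reward weights, which is the point Motoki makes about hiring and guidance.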


The blunder is an embarrassing misstep for the search giant, which had only just rebranded its main AI chatbot from Bard earlier in the month and added much-anticipated new features, including image generation.


The misstep also came just days after OpenAI, the company behind the well-known ChatGPT, unveiled Sora, a new artificial intelligence tool that generates videos from users' text prompts.


Google had earlier acknowledged that the chatbot's unpredictable behavior needed to be corrected.


Jack Krawczyk, Google's senior director of product management for Gemini Experiences, told The Post: "We're working to improve these kinds of depictions immediately."

"Gemini's AI image generation does produce a diverse range of people, and the fact that it is used by people worldwide is generally a positive thing. However, this is where it falls short."


The Post has contacted Google for further comment.


Asked by The Post for its trust and safety guidelines, Gemini responded that they are not "publicly disclosed due to technical complexities and intellectual property considerations."


The chatbot also acknowledged in its responses to prompts that it has heard "criticisms that Gemini might have prioritized forced diversity in its image generation, leading to historically inaccurate portrayals."


According to Gemini, "the algorithms underlying image generation models are intricate and still in development. They might find it difficult to comprehend the subtleties of cultural representation and historical context, which could result in inaccurate outputs."

