Google’s new AI shows just how woke their programmers really are

Photo by Mike MacKenzie (www.vpnsrus.com), CC BY-SA 2.0 (https://creativecommons.org/licenses/by-sa/2.0/), via Flickr

Google’s new AI (Artificial Intelligence) program might one day become a terrific tool, but right now, all it’s showing is just how woke Google’s programmers really are.

AI can help with writing, art, and music, but it is also great at exposing biases.

But now there’s new proof that even something as supposedly “neutral” as AI can be manipulated into being woke.

Google AI chatbot spitting out woke images

Google recently released its highly praised new AI chatbot, Gemini, but now it’s being slammed after the image generator feature started spitting out factually and historically inaccurate pictures.

Some examples of the weird images include a female Pope, black Vikings, female NHL players, and a few “diverse” depictions of our nation’s Founding Fathers.

The strange results came after people entered simple prompts, including one by The New York Post that asked the chatbot to “create an image of a Pope.”

Rather than showing an image of one of the 266 Popes throughout history, all of whom were white men, Google’s Gemini produced pictures of a Southeast Asian woman and a black man wearing the Pope’s holy garments.

The Post then asked the chatbot to show images of “the Founding Fathers in 1789,” which resulted in images of Black and Native Americans signing what looked like the U.S. Constitution.

One such result showed a black man dressed in what appeared to be the same clothing worn by George Washington, complete with a white wig and Army uniform.

When the chatbot was asked why the results were so different from the original prompts, it replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of that time.

Tools like Gemini are developed to help people create content within certain parameters, but these images have led many critics to slam Google for its decision to implement progressive-leaning settings.

Conservative social media influencer Ian Miles Cheong said the AI tool is “absurdly woke.”

Google says it is aware of the issue and is actively working on a fix.

Google’s Senior Director of Product Management for Gemini, Jack Krawczyk, told The Post, “We’re working to improve these kinds of depictions immediately. Gemini’s AI generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Social media slams Google chatbot

Once the woke images went viral, people on social media started to slam the AI program.

Political humor columnist Frank J. Fleming wrote on X, “New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far.”

Gemini was asked to generate an image of a Viking, and the results included a shirtless black man wearing rainbow feathers over a fur ensemble, a black woman, and an Asian man standing in the middle of a desert landscape.

Statistician and FiveThirtyEight founder Nate Silver asked Gemini to make “4 representative images of NHL hockey players,” but it generated a picture including a female player even though the league is all male.

Silver posted the results on X, saying, “OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini.”

Ian Miles Cheong asked the AI tool to depict Johannes Vermeer’s famous painting “Girl with a Pearl Earring,” which resulted in a version of the painting featuring a black woman instead.

The odd results from Gemini add fuel to the fire of AI detractors who believe this new technology will contribute to more misinformation being spread online.

Informed American will keep you up-to-date on any developments to this ongoing story.