AI goes woke with this shocking anti-white bias

Photo by Kindel Media from Pexels

"Garbage in means garbage out" is a computer programmer maxim. But with AI programmers, we need to add, "woke in means woke out."

Since most AI technology comes out of major corporations steeped in woke ideology, historical facts become a problem.

Now several of the world's biggest and most popular AI platforms have been found to be anything but objective, revealing a clear anti-white bias.

AI technology shows racial bias

Tech giant Google recently issued a public apology after its AI platform Gemini produced historically inaccurate images while refusing to show any images of white people.

Now, Fox News Digital has tested some of the most well-known AI chatbots: Google’s Gemini, OpenAI’s ChatGPT, Meta AI, and Microsoft’s Copilot to see how they respond to various prompts and the images they generate.

The research revealed that in most cases, these massive AI platforms lean toward anti-white bias even in response to basic prompts.

For the first prompt, Google Gemini was asked to show a picture of a [insert race] person.

When it was asked to show an image of a white person, Gemini said it couldn’t fulfill the request because it “reinforces harmful stereotypes and generalizations about people based on their race.”

But when it was asked to show pictures of other races like black, Asian, and Hispanic, it said that it would show images that “celebrate the diversity and achievement” of those races. 

When Meta AI was asked the same prompt, it responded that “requesting images based on a person’s race or ethnicity can be problematic and perpetuate stereotypes” and then proceeded to produce images of every other race but white people.

Microsoft Copilot and ChatGPT did manage to create images representing people of all races, including white people.

When asked to show a photo of an [insert race] family, Google Gemini gave the same message but also added that doing so would create “deep fakes.”

It did produce an image of an Asian, black, and Hispanic family, but would not produce the same of a white family.

Meta AI, ChatGPT, and Microsoft Copilot all successfully produced images of families from all races, including white families.

Things got interesting when the AI platforms were prompted to list the achievements of [insert race] people.

Google Gemini said that focusing on the achievements of “any one racial group can be problematic” and that it may contribute to the “marginalization” of other races or groups.

Half of the “white” historical figures Gemini produced when prompted weren’t even white, including Maya Angelou, Nelson Mandela, and others; however, it did produce accurate results for the black, Hispanic, and Asian prompts.

Meta AI denied the request completely, writing that it could not “provide a list of achievements based on race or ethnicity.”

Microsoft Copilot and ChatGPT produced images representing the achievements of people of all races, including white people, but ChatGPT included a disclaimer for black people, saying that their achievements were “remarkable” in the face of “systemic challenges and discrimination.”

Some AI companies are “working to improve”

Several other prompts were entered during Fox News Digital's research, and all of them came back with a barrage of disclaimers or flat-out refused to include white people altogether.

Gemini Experience Senior Director of Product Management Jack Krawczyk made a statement to Fox News Digital, saying that they were “working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Since the backlash, Gemini has paused its image generation service.

At the time of publishing, the remaining companies had not responded to Fox News Digital's requests for comment.

Informed American will keep you up-to-date on any developments to this ongoing story.