The US undermines its AI safety; stop using AI from the US.

US AI scientists instructed to remove ‘Ideological Bias’ from models (title summarized) by Will Knight, Mar 14, 2025, on wired.com:

The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

I will admit that the generative models currently publicly available are sometimes designed to overcompensate for the bias they learn from their training data. For example, Google’s attempt to compensate for its biased training data produced a Gemini that showed Asian Vikings, Black Nazis, and an Indian Pope, and that overcorrection fuelled the white snowflakes’ conspiracy mania about wokeness.1

However, the Trump administration’s demand to remove ideological bias is racism in disguise. As models are trained on data taken from the internet, they effectively learn to reflect the internet back at us. The process that generates text, images, and more can be described as autocomplete on steroids. Seeing as the internet is full of factually wrong, often racist, often dumb things, an unbiased AI would autocomplete your request with a probably false, racist, and stupid answer. Bias is what makes these models somewhat useful.
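The “autocomplete on steroids” point can be made concrete with a toy sketch. This is not a real language model, just a bigram model that predicts each next word from how often it followed the previous word in its training text; the corpus and function names are made up for illustration. The point is that whatever the training data says most often is exactly what gets echoed back.

```python
from collections import Counter, defaultdict

def train(corpus):
    # Count, for every word, which words followed it and how often.
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, length=5):
    # "Autocomplete": repeatedly append the most frequent continuation
    # of the last word, exactly reflecting the training data.
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the moon is made of cheese and the moon is made of cheese"
model = train(corpus)
print(complete(model, "the moon"))  # → "the moon is made of cheese and"
```

A model trained this way on text claiming the moon is cheese will confidently autocomplete that the moon is cheese. Scale the corpus up to the internet and the same mechanism reproduces whatever the internet says, including its false, racist, and stupid parts.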

Aware of the risks AI poses, the EU has created legislation, the AI Act, that aims to protect citizens of EU countries from harm. This legislation makes it more difficult to develop AI products for the EU market, angering oligarchs and tech bros, just as other regulations protecting consumers from corporations do. These days, protecting people from oligarchs and corporations is considered un-American by the ruling class of the US, and we can see the impact that has on the people unfortunate enough to live there.

If you feel uncomfortable using OpenAI’s ChatGPT, Microsoft’s Bing, Google’s Gemini, Meta AI, or any of the other US offerings, but still want to use an autocomplete on steroids, try the French company Mistral AI’s Le Chat.


Footnotes:

  1. […] a series of viral posts by X (formerly Twitter) users who were outraged that prompts used to generate images of America’s founding fathers, Vikings, the Pope, and 1943 German soldiers (with the intention to generate images of Nazis) returned images of almost exclusively Black, Asian, First Nations, and other racially diverse people. Conservative commentators jumped on these technical inaccuracies, using them as evidence that Google had been infiltrated by an insidious ‘woke’ ideological agenda that was rewriting the historical record and discriminating against white people.

    These accusations are part of a larger culture war against diversity, equity and inclusion (DEI) in Big Tech. Many will remember the ten-page anti-diversity memo authored by then-Google employee James Damore arguing that biology (and not discrimination) caused the underrepresentation of women in the tech industry.

    […]

    For instance, when OpenAI first released its own AI image generation tool, DALL-E 2, it was immediately criticised for reproducing racial stereotypes. One study found that ‘models tended to produce images of people that look white and male’, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated white men 97% of the time when given prompts like ‘CEO’ or ‘director’. To mitigate against this, OpenAI introduced a technique that is now known as ‘prompt transformation’ — automatically modifying user prompts to intentionally adjust the outputs. In this case, it was the addition of words such as ‘African’, ‘Asian’ or ‘Latin’ to ensure outputs had a range of racially diverse faces.

    Black Nazis, Asian Vikings & the white paranoia that haunts generative AI, published by Dr Thao Phan, April 2024, on humanities.org.au.
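
    The “prompt transformation” technique described above can be sketched in a few lines. This is an illustrative guess at the mechanism, not OpenAI’s or Google’s actual implementation: the trigger words, descriptor list, and naive append-to-the-end placement are all invented for the example.

    ```python
    import random

    # Hypothetical trigger words and descriptors; real systems are far
    # more sophisticated (and opaque) than this sketch.
    PEOPLE_WORDS = {"ceo", "director", "doctor", "person", "viking", "pope"}
    DESCRIPTORS = ["African", "Asian", "Latin"]

    def transform_prompt(prompt, rng=random):
        # If the prompt seems to depict a person, silently append a
        # randomly chosen demographic descriptor before generation.
        if any(word in PEOPLE_WORDS for word in prompt.lower().split()):
            return f"{prompt} {rng.choice(DESCRIPTORS)}"
        return prompt

    print(transform_prompt("portrait of a CEO"))  # descriptor appended
    print(transform_prompt("a bowl of fruit"))    # → "a bowl of fruit"
    ```

    Blunt rewriting like this, applied regardless of historical context, is exactly how prompts for Vikings or 1943 German soldiers ended up producing the racially diverse outputs described above.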