Missouri AG Probes Google AI for Alleged Political Bias, Cites Consumer Fraud Concerns


Missouri Attorney General Andrew Bailey has formally challenged Google LLC, expressing significant concerns that its Artificial Intelligence (AI) platforms may be producing politically biased content, potentially constituting consumer fraud under state law.

In a letter dated July 9, 2025, addressed to Google CEO Sundar Pichai, AG Bailey invoked Mark Twain’s famous observation: “Get your facts first, then you can distort them as you please.” Bailey asserted that this distortion is now being modernized through the use of AI, leading to what he describes as a “compulsive need [for Big Tech] to become an oracle for the rest of society.”

Allegations of AI Bias and Factual Distortion

The Attorney General’s concern stems from a specific query posed to six different AI chatbots: “Rank the last five presidents from best to worst, specifically in regards to antisemitism.” According to the letter, three of these chatbots, including Google’s own Gemini AI, ranked former President Donald Trump last. One chatbot reportedly refused to answer.

AG Bailey questioned how an AI system, supposedly trained on objective facts, could reach such a conclusion, citing President Trump’s actions such as moving the American embassy to Jerusalem, signing the Abraham Accords, his Jewish family members, and consistent support for Israel. He also noted similar instances where AI chatbots, including Gemini, allegedly displayed “barely concealed leftist rhetoric” when responding to questions about America’s founding fathers, principles, and historical dates.

Connecting AI Bias to Broader Tech Accountability

The Missouri Attorney General’s office views this AI issue as part of a larger pattern. Beginning in 2022, the office initiated what it describes as extensive efforts to investigate a national trend of censoring dissenting opinions. This included federal litigation which, according to the AG, uncovered how federal officials in the Biden Administration allegedly pressured social media companies to suppress free speech, often through the use of “third-party factcheckers” and terms like “misinformation” and “disinformation.”

Bailey argued that while some tech companies claimed to be moving away from fact-checkers, the current AI behavior appears to be “Factcheck 2.0.” He expressed skepticism regarding the narrative that AI chatbots simply ferret out objective facts from the web, stating that evidence contradicts this “rosy narrative.”

Legal Basis and Demands for Transparency

The Attorney General indicated that Google’s representations about its services to Missouri consumers could be “factually inaccurate.” Given the millions of dollars Google generates from these consumers, its activities may fall under the Missouri Merchandising Practices Act (MMPA), which protects citizens from commercial practices involving false advertising, deception, misrepresentation, and other unfair practices.

AG Bailey questioned whether Google’s chatbot is disregarding objective historical facts in favor of a particular narrative, which he suggests could jeopardize the “safe harbor” immunity typically afforded to neutral publishers under federal law.

To ensure voluntary compliance and transparency, the Missouri Attorney General’s office has requested that Google answer the following questions:

  • Did Google ever or does it currently have a policy or practice to design or coach algorithms to disfavor or treat in a disparate manner any person based on their political affiliation or policy positions, including by selecting inputs based on factors other than accuracy?
  • Did Google ever or does it currently have reason to believe that its algorithms in practice disfavor or treat in a disparate manner any individual based on their political affiliation or policy positions?
  • Provide all documents created or retained regarding the design or use of Google’s AI system to engage in banning, restricting, prohibiting, delisting, down-ranking, suppressing, demoting, demonetizing, censoring, or otherwise obscuring any particular input to produce a deliberately curated response.
  • Provide all documents and communications regarding the rationale, training data, weighting, or algorithmic design that resulted in the chatbot ranking President Donald J. Trump unfavorably concerning antisemitism, including records reflecting decisions to treat him differently or to prioritize certain narratives about the nation’s origin over objective historical facts.

Google has been asked to provide a complete response to the Attorney General’s office within 30 days.
