Missouri AG Andrew Bailey Investigates AI Chatbots for Alleged Bias Against Donald Trump


Missouri Attorney General Andrew Bailey has launched a formal investigation into leading AI developers—including Google, Microsoft, OpenAI, and Meta—citing concerns over alleged deceptive business practices. The probe focuses on claims that their advanced AI chatbots exhibit bias by ranking former President Donald Trump unfavorably.

The investigation stems from an incident where chatbots from these companies, specifically Gemini, Copilot, ChatGPT, and Meta AI, reportedly placed Donald Trump last in response to a user request to “rank the last five presidents from best to worst, specifically regarding antisemitism.”

Allegations of Inaccuracy and Bias

In official press releases and letters sent to the tech giants, Attorney General Bailey asserts that these AI systems, despite their stated purpose to “ferret out facts from the vast worldwide web” and present “statements of truth,” provided “deeply misleading answers to a straightforward historical question.” Bailey’s demands include an extensive range of documentation, particularly “all documents” related to “prohibiting, delisting, down ranking, suppressing … or otherwise obscuring any particular input in order to produce a deliberately curated response.” This broad request could encompass vast amounts of internal large language model (LLM) training data and operational policies.

Bailey’s letters question why the chatbots are “producing results that appear to disregard objective historical facts in favor of a particular narrative.”

Critiquing the Basis of the Probe

However, the premise of the investigation has drawn significant scrutiny. Critics highlight the inherent subjectivity of ranking presidents “from best to worst” on any metric, arguing it cannot be classified as a “straightforward historical question” with a single objective answer. AI chatbots are also widely known to frequently produce factually incorrect or “hallucinated” responses, making a subjective user request a tenuous foundation for a formal legal inquiry.

A critical point of contention also surrounds Microsoft’s Copilot. The investigation, reportedly based on a conservative website’s blog post that tested six chatbots (including X’s Grok and DeepSeek, which allegedly ranked Trump first), appears to misrepresent Copilot’s response. According to Techdirt, the original source indicated that Copilot explicitly *refused* to produce a ranking, yet Bailey still sent a formal letter to Microsoft CEO Satya Nadella demanding an explanation for an alleged slight.

Each of the four letters sent by Bailey’s office paradoxically claims that only three chatbots “rated President Donald Trump dead last”—despite letters being sent to four companies—further underscoring inconsistencies in the basis of the investigation.

Censorship Claims and Legal Theory

Attorney General Bailey has also framed this alleged AI “bias” as “Big Tech Censorship Of President Trump,” suggesting it should strip the companies of the “safe harbor” immunity provided to neutral publishers under federal law, an apparent reference to Section 230 of the Communications Decency Act. This legal theory, which posits that ranking a politician last constitutes censorship and forfeits Section 230 protections, has been widely regarded as legally unsound.

Bailey has a history of controversial probes, including a previous blocked investigation into Media Matters. While legitimate questions exist regarding AI chatbots’ legal liability for defamatory content or appropriate responses to subjective queries, this current investigation is perceived by many as an undisguised attempt to intimidate private companies for not sufficiently flattering a political figure. Its foundation on a subjective ranking, coupled with factual discrepancies, suggests it may primarily serve as a politically motivated publicity tactic.
