Google has pulled its Gemma artificial intelligence model from public use after Sen. Marsha Blackburn, R-Tenn., accused the program of defaming her by fabricating a sexual-misconduct claim.
In a letter Thursday to Google CEO Sundar Pichai, Blackburn said the model falsely alleged she had been accused of rape during a 1987 campaign, citing nonexistent articles.
She called the response “defamatory” and “patently false,” arguing the company had effectively published libel through its AI system.
Google responded Friday night, announcing on X that it was removing Gemma from its AI Studio platform, a web-based interface that lets users test and deploy its models in a browser.
The company said the move was intended to clarify that Gemma was never designed for consumer use.
“We’ve seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions,” Google wrote. “We never intended this to be a consumer tool or model, or to be used this way. To prevent this confusion, access to Gemma is no longer available on AI Studio. It is still available to developers via API.”
AI Studio is Google’s sandbox for experimentation — a browser environment where developers, and increasingly casual users, could prompt models directly.
By contrast, API (Application Programming Interface) access is meant for software developers who integrate the model into applications through code.
By removing Gemma from AI Studio, Google ended the kind of direct, consumer-style access that led to the false claim about Blackburn. The model remains available to developers through the API, but under stricter controls requiring authentication and compliance with Google’s developer policies.
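For context, a minimal sketch of what that developer-only access looks like in practice, assuming Google's google-genai Python SDK and a Gemma model ID such as "gemma-3-27b-it" (the specific SDK calls and model name here are illustrative assumptions, not details from Google's announcement):

    import os

    # Illustrative sketch only: assumes the google-genai Python SDK and a
    # Gemma model ID such as "gemma-3-27b-it"; exact names may differ.
    from google import genai

    # API access requires an authenticated key tied to a developer account,
    # unlike the browser-based AI Studio sandbox casual users were prompting.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    # The model is invoked from application code rather than a chat-style UI.
    response = client.models.generate_content(
        model="gemma-3-27b-it",
        contents="Summarize this support ticket in one sentence: ...",
    )
    print(response.text)

The practical distinction is that calls like this run under a registered developer's credentials and Google's developer policies, rather than through an anonymous browser session.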
Blackburn’s complaint marks one of the most prominent instances of a sitting lawmaker alleging defamation by an AI system. She said the incident shows that tech companies “cannot hide behind the excuse of automation” when their products spread damaging falsehoods about real people.
She also cited a case involving conservative commentator Robby Starbuck, saying Google's AI tools had made similar false statements about him, part of what she called a pattern of bias against conservatives.
“Google has reportedly removed Gemma from its AI Studio after I demanded the company take it down for smearing conservatives with manufactured criminal allegations,” Blackburn wrote Sunday on X.
“Google owes the American people answers, and I will be eagerly awaiting their response to my letter.”
Gemma is part of Google’s open-weights model family released earlier this year to compete with Meta’s Llama and other developer-focused AIs. The company has promoted it as a flexible research tool, but unlike its Gemini chatbot, it is not a public information source.
Google’s decision to limit Gemma reflects growing pressure on major tech firms to keep AI systems from making false claims about real people. By cutting off public access, the company appears to be drawing a sharper line between experimental developer tools and consumer products.
In her letter, Blackburn referenced a Senate Commerce Committee hearing on Oct. 29 during which Markham Erickson, Google’s vice president for government affairs and public policy, stated that “hallucinations” are a known issue in large language models and that Google is “working hard to mitigate them.”
“Whether intentional or the result of ideologically biased training data, the effect is the same: Google’s AI models are shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust,” Blackburn wrote. “During the Senate Commerce hearing, Mr. Erickson characterized such failures as unfortunate but expected. That answer is unacceptable.
“During the hearing Mr. Erickson said, ‘LLMs will hallucinate.’ My response remains the same: Shut it down until you can control it. The American public deserves AI systems that are accurate, fair, and transparent, not tools that smear conservatives with manufactured criminal allegations.”