RFK Jr. Is Letting AI Help Run the FDA. There’s Just One Problem
The FDA’s new AI assistant is straight-up inventing data.

Reliance on artificial intelligence is breaking down the Food and Drug Administration.
Health officials in the Trump administration have touted the burgeoning technology as a way to fast-track and streamline drug approvals, but that hasn’t been the case. Instead, the program is cooking up nonexistent studies, a phenomenon referred to as “hallucinating,” according to current and former FDA officials who spoke with CNN.
“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” one agency worker told the network.
Insiders say the program, Elsa, is helpful for tasks like summarizing meetings or email templates, but its usefulness ends there.
“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” regarding fact-checking potentially fake or misrepresented studies, another FDA employee told CNN.
Hallucinations are a known problem with generative AI models, and Elsa is not immune, according to Jeremy Walsh, the FDA’s head of AI.
“Elsa is no different from lots of [large language models] and generative AI,” Walsh told CNN. “They could potentially hallucinate.”
It’s not the first time that the Department of Health and Human Services has majorly fumbled its use of artificial intelligence.
In May, AI researchers claimed there was “definitive” proof that Health Secretary Robert F. Kennedy Jr. and his team used the tech to write his “Make America Healthy Again” report, and had completely botched the job in the process.
Kennedy’s report projected a new vision for America’s health policy, taking aim at childhood vaccines, ultraprocessed foods, and pesticides. But a NOTUS investigation found seven studies referenced in Kennedy’s 68-page report that the listed study authors said were either wildly misinterpreted or had never been conducted at all. Researchers noted 522 scientific references in the report whose URLs included the phrase “OAIcite,” a marker indicating the use of OpenAI.
At the time, administration officials brushed off the controversy as a temporary flub. But the FDA’s new over-reliance on the tech suggests the MAHA report was not a one-off but a dangerous precedent, one that lets the White House build America’s public health policy on unvetted, unverified AI output.