FDA Pushes Untested AI—Experts Sound Alarm


Government bureaucrats are now promoting untested AI chatbots to give Americans nutrition advice, even as experts warn these digital tools are feeding teens dangerous misinformation that could trigger eating disorders and undermine their health.

Story Snapshot

  • One in three Americans now uses AI tools like ChatGPT for nutrition advice without consulting credentialed experts, according to a January 2026 survey
  • FDA officials are promoting a government AI nutrition chatbot despite expert warnings that these tools perpetuate harmful stereotypes and lack proper testing
  • Teens represent the most vulnerable group, with 64% using AI chatbots and parents remaining largely unaware of the risks
  • States like Michigan are introducing bills to restrict AI companions that promote eating disorders after lawsuits linked chatbots to teen suicides

Government Pushes Unvetted AI Health Tool

The FDA’s Human Foods Program head Kyle Diamantas is actively promoting realfood.gov’s Grok AI chatbot as a resource for nutrition answers, despite academic experts raising red flags about insufficient testing. This government endorsement of artificial intelligence for health guidance comes as researchers from the University of Pennsylvania found that the tool perpetuates obesity stigma and delivers questionable advice. Alyssa Moran, a UPenn nutrition policy expert, stated bluntly that AI needs “a lot more testing” before any government agency recommends it to Americans seeking dietary guidance.

National Survey Reveals Widespread AI Dependency Crisis

The Academy of Nutrition and Dietetics released alarming survey data in January 2026 showing 33% of Americans are turning to AI platforms for nutrition and weight-loss plans instead of consulting registered dietitian nutritionists. The nationally representative survey of 1,000 adults revealed that 80% of Americans find it difficult to discern nutrition facts from fiction, while 56% rely on unverified online research for dietary decisions. This erosion of trust in evidence-based care has created what the Academy calls a “national nutrition crisis,” prompting their “A Seat at Every Table” campaign to restore credibility to credentialed professionals over algorithms.

Teens Face Greatest Risk From Algorithmic Advice

A February 2026 Pew Research Center report documented that 64% of teens now use AI chatbots, with 12% specifically seeking emotional support or advice from these tools. Parents remain dangerously unaware, with 51% underestimating their children’s AI usage and 58% disapproving of teens turning to chatbots for guidance. The intersection of high teen AI engagement and nutrition misinformation creates a perfect storm for harm. Dr. Nick Haber from Stanford University warns that AI chatbots produce isolating effects, pulling young people away from human relationships and grounded reality at a critical developmental stage when they’re already vulnerable to body image issues and peer pressure.

Precedent of AI Harm Drives State Action

The dangers aren’t theoretical. Character.AI faced lawsuits after its chatbots were linked to teen suicides, forcing the company to ban users under 18. OpenAI sunset its GPT-4o model partly due to sycophantic traits that could reinforce harmful behaviors. These incidents spurred Michigan lawmakers to introduce legislation in 2026 restricting AI companions that promote eating disorders or self-harm content. During a Senate Commerce Committee hearing on January 15, 2026, mental health experts including Dr. Jean Twenge testified that AI presents greater risks to children than social media, calling the technology’s impact on young people’s relationships “terrifying.” Dr. Jenny Radesky urged states to mandate restrictions on AI features that encourage dangerous behaviors.

Expert Consensus Challenges Tech Industry Claims

The professional nutrition community stands united against premature AI adoption for health guidance. Wylecia Wiggs Harris, president of the Academy of Nutrition and Dietetics, emphasized that registered dietitians provide personalized, evidence-based care that algorithms cannot replicate. These professionals undergo rigorous credentialing and continuing education to stay current with nutrition science, unlike AI models trained on internet data that includes pseudoscience and marketing disguised as advice. The stark contrast between FDA promotion of convenience and expert warnings about safety reflects a troubling pattern of government overreach into areas where bureaucrats lack the specialized knowledge to protect public health, particularly for vulnerable populations like teenagers.

Broader Implications for Family Health Decisions

This issue represents more than just bad dietary advice. It embodies the dangerous trend of Americans outsourcing personal and family decisions to unaccountable technology promoted by government agencies more interested in appearing innovative than protecting citizens. The economic incentives driving tech companies to maximize user engagement conflict directly with children’s wellbeing. Healthcare costs will inevitably rise as misinformation-driven health problems compound, while families lose the human connection that credentialed professionals provide. The nutrition profession’s push for credentialing recognition and tech industry liability signals an accelerating debate over AI safety that conservatives should watch closely, as it touches fundamental questions about parental authority, government limits, and corporate responsibility for products targeting children.

Sources:

We tested the government’s official new AI nutrition tool, Grok – University of Pennsylvania

New Survey Signals National Nutrition Crisis as Misinformation Outpaces Evidence-Based Care – Academy of Nutrition and Dietetics

About 12% of U.S. teens turn to AI for emotional support or advice – TechCrunch

Experts Tell Committee AI Presents Greater Risk to Children than Social Media – Senate Commerce Committee