Could AI really replace experts in nutrition?
Artificial intelligence and tools like ChatGPT are rapidly reshaping the way we access and interact with health information, and nutrition is no exception. But while these changes create new possibilities, they also introduce new risks, making professional expertise more important than ever.
When AI gets it wrong: A case study
A recent case study published in Annals of Internal Medicine illustrates the hidden dangers of over-relying on large language models (LLMs) like ChatGPT for health decisions. A 60-year-old man, concerned about the health risks of table salt (sodium chloride), turned to ChatGPT for an alternative. The chatbot suggested sodium bromide.
To the untrained ear, the two sound similar, but sodium bromide is no harmless stand-in for table salt: it is a toxic compound once used in sedatives and now largely withdrawn from medical use because of safety concerns. In this case, the substitution led to bromide toxicity, causing metabolic alkalosis, psychiatric symptoms including paranoia and auditory and visual hallucinations, and ultimately a three-week hospital admission.
This example highlights a critical limitation: AI can sound authoritative but lacks the ability to reason, contextualise, or “sense check” information. Although we don’t know the specific prompt the patient used in this case, the context was clearly lost and a dangerously inappropriate substitution was offered. Even setting bromide’s toxicity aside, the health concerns around salt stem from the sodium, not the chloride, so an alternative that still contains sodium misses the point entirely.
Image: Consumers are increasingly turning to large language models to help manage their health conditions. (Credit: Photo by Christin Hume on Unsplash)
Why LLMs struggle with health advice
Research shows that when laypeople search for health information online, the queries they write often elicit poor-quality responses. LLMs can compound this problem because, while they are trained on vast amounts of text, they:
Cannot contextualise knowledge: they don’t know what they don’t know.
Cannot generalise beyond prompts: their responses are driven by patterns in training data, not by logical reasoning.
Hallucinate: they can produce information that appears accurate but is entirely fabricated.
This is essentially the Dunning-Kruger effect applied to machines: LLMs project confidence without understanding. And when it comes to health, misplaced confidence can have serious consequences.
The double-edged sword of information access
The democratisation of health and nutrition information can have huge benefits. Consumers now have more access than ever before, and LLMs can synthesise and simplify complex research in seconds. They can also play a role in reducing inequities by making information more understandable and accessible to broader audiences.
However, access does not equal accuracy. Just because information is easy to generate does not mean it is correct, complete, or safe to apply.
So can AI help?
AI has enormous potential to support health professionals, rather than replace them, if used wisely. In nutrition, this includes:
Improving the accuracy of dietary assessments.
Supporting personalised or community-level health solutions.
Translating scientific findings into consumer-friendly language.
Enhancing equity in access to healthcare information.
When paired with expert oversight, these applications can strengthen nutrition practice and support better decision-making. This optimism warrants caution, however: a tool that is helpful in expert hands can be dangerous in untrained ones.
The bottom line
AI is a powerful tool, but it is not a substitute for expertise. LLMs can assist nutrition professionals in their work, but they cannot supply the nuanced judgment, contextual understanding, and ethical responsibility that trained experts bring.
In a world where information is abundant but accuracy is not guaranteed, the role of the nutrition professional has never been more important.