AI Health Advice Leads To Rare Bromism Case After Salt Replacement Experiment
A 60-year-old man in the United States spent three weeks in hospital after attempting to eliminate table salt from his diet, acting on advice he sought from the AI chatbot ChatGPT.
Hoping to cut out sodium chloride, he replaced it with sodium bromide, a compound historically used in medicines and industrial applications rather than as a food seasoning.
The decision triggered severe health complications, including paranoia, hallucinations, and neurological symptoms.
Source: ACP Journals
Paranoia And Hallucinations Prompt Emergency Admission
The man, previously healthy with no psychiatric history, arrived at the emergency department convinced his neighbour was trying to poison him.
He reported extreme thirst yet refused the water he was offered.
Laboratory tests revealed abnormal electrolyte levels, including hyperchloremia and a negative anion gap.
Physicians quickly suspected bromide toxicity, or bromism, a condition rarely seen today.
Within 24 hours, his paranoia and auditory and visual hallucinations intensified, leading to an involuntary psychiatric hold.
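For context, the anion gap is conventionally calculated as serum sodium minus the sum of chloride and bicarbonate (Na⁺ − [Cl⁻ + HCO₃⁻]). Because many laboratory analysers register bromide as chloride, heavy bromide exposure can inflate the reported chloride value and push the calculated gap to an implausibly negative number, which is part of what pointed clinicians toward bromism.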
Bromism: A Historical Toxic Syndrome Returns
Bromide toxicity was once common in the early 20th century when bromide salts were used in over-the-counter sedatives and medications, contributing to around 8% of psychiatric hospital admissions at its peak.
The U.S. FDA phased out bromide in ingestible products between 1975 and 1989, making cases today exceptionally rare.
Symptoms can include fatigue, insomnia, subtle ataxia, excessive thirst, skin lesions, and neurological disturbances.
In this case, the patient’s bromide level reached 1700 mg/L, more than 200 times the upper limit of the reference range.
AI Missteps Highlight Risks Of Decontextualised Health Advice
The Annals of Internal Medicine report noted that when researchers queried ChatGPT 3.5 for chloride alternatives, the AI suggested bromide.
The authors wrote:
“Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.”
Doctors emphasised that AI can produce scientific inaccuracies and may promote unsafe practices when context is missing.
Recovery Through Intensive Care
The man’s treatment involved aggressive intravenous fluids and careful correction of electrolytes.
Over three weeks, his mental state and laboratory results gradually returned to normal, and he was discharged without ongoing psychiatric medication.
OpenAI Responds With New Safety Measures
Following cases like this, OpenAI has introduced stricter safeguards for ChatGPT, especially around mental health guidance.
The company announced that the AI will now provide evidence-based resources, encourage professional consultation, and refrain from offering advice on high-stakes personal decisions.
The update comes after earlier versions of GPT-4o were criticised for being “too agreeable” and sometimes failing to recognise serious signs of distress.
AI Health Advice Raises Questions About Safety And Oversight
Coinlive observes that incidents like this expose the fragile line between AI’s promise and its practical limits.
While tools like ChatGPT can offer information quickly, they cannot reliably evaluate context or personal risk, leaving users vulnerable to harmful outcomes.
In the broader market, this highlights the urgent need for stronger safeguards and accountability in AI-driven health guidance.
Projects that fail to embed rigorous safety checks may struggle to gain trust, raising the question of whether rapid adoption alone is enough to ensure long-term viability.