A global team led by the University of Birmingham is developing the world’s first AI health chatbot safety guide for public use. As more people turn to tools such as ChatGPT, Copilot, Claude and Gemini for medical advice, researchers argue that clear safeguards are urgently needed.
The initiative, announced in Nature Health, aims to create The Health Chatbot Users’ Guide — a practical resource designed to reduce harm while helping users gain reliable benefits from AI-powered health information.
Crucially, the project does not seek to slow innovation. Instead, it acknowledges that millions already rely on general-purpose chatbots to interpret symptoms and simplify complex medical terminology. Therefore, researchers want to meet users where they are rather than ignore real-world behaviour.

AI Health Chatbot Safety Requires Clear Public Guidance
AI health chatbot safety has become a pressing issue because these systems operate in what experts describe as a governance vacuum. While chatbots generate fluent responses, they do not always produce clinically accurate advice. Consequently, users must learn to distinguish evidence-based guidance from so-called “hallucinations”: plausible-sounding but fabricated responses.
Dr Joseph Alderman, NIHR Clinical Lecturer at the University of Birmingham, emphasised that public use of AI chatbots for healthcare is no longer theoretical. Ignoring this shift therefore risks leaving individuals to navigate a complex information landscape without support.
Researchers identified several risks that justify structured guidance:
- Medical inaccuracy: AI can produce plausible but incorrect recommendations.
- Echo chamber effects: Models may reinforce existing beliefs instead of challenging misinformation.
- Algorithmic bias: Systems can replicate social biases that worsen health inequalities.
- Data privacy concerns: Users may unknowingly share sensitive medical information.
Because of these risks, the team plans to design the guide around harm reduction and practical literacy. In addition, they will involve public contributors directly in shaping its direction.
Public Co-Design Strengthens AI Health Chatbot Safety
The project unites researchers from over 20 institutions worldwide, including University Hospitals Birmingham NHS Foundation Trust. Importantly, three public co-investigators and a public steering group will help design the final resource. As a result, the guide aims to remain accessible across age groups and literacy levels.
Dr Charlotte Blease, a health AI researcher involved in the programme, noted that chatbots increasingly provide a “first opinion” before any clinical consultation. Ensuring that this initial interaction informs rather than misleads therefore becomes critical.
The initiative also aligns with wider national investment in responsible AI deployment. For example, the UK AI strategy investment in healthcare focuses on accelerating innovation while maintaining safeguards and ethical oversight.
Together, these developments highlight how policy, research and public engagement must move in parallel. Ultimately, AI health chatbot safety will depend not only on technical performance but also on user awareness and governance clarity.