Naturopathy chatbot on my website: the guardrails that keep the AI from saying anything
For a few months now, a chatbot has been answering visitors landing on my site. It knows my articles, it can point to the right resources, it chats without pulling the reader away. It never crosses an important line: it does not give medical advice, it does not diagnose, it does not assess any personal case.
The technical part was quick. The delicate part was building the guardrails. In a profession where information matters as much as caution, a chatbot that slips can do more harm than good. Here is how I framed it.
The problem
Visitors arrive on a naturopathy site with questions. Sometimes simple ones: where to find an article, how a session works, how much it costs. Sometimes more engaging: which supplement to take, how to treat this discomfort, what to do if this symptom returns.
If I let a chatbot answer everything, I take a massive risk. I know neither the person, nor their history, nor their current treatments. A general recommendation can be dangerous. Even when it is not, it replaces the consultation with a mechanical exchange. That is not what I do this job for.
If on the other hand I keep the chatbot purely informational, it becomes useless. No one asks a question to a storefront.
I needed a firm middle path.
What I set up
A chatbot that answers, but knows what it does not answer.
An explicit perimeter. In the system instruction, I listed what the chatbot can do and what it must never do. It can: point to an article on the site, explain a general notion already published, give consultation logistics, talk about the affiliate program, present the brand approach. It cannot: recommend a specific supplement by name, give a diagnosis, assess an ongoing treatment, opine on a medical case. The boundary is spelled out.
A memory anchored on my content. The chatbot draws on a vector database that contains my articles. When it answers, it cites its sources and can redirect to the exact page. It does not fabricate an answer from thin air.
A polite and framed refusal. When a question crosses the line, the chatbot states clearly that it cannot answer and offers to book a session or read a more appropriate resource. No hedging, no half-answer, no advice sneaking in sideways.
Privacy protection. The chatbot never asks for personal medical data. If a visitor starts to describe a symptom, it points them to booking. Conversations are not stored with identity. No health data flows through the tool.
A drift log. I regularly review anonymized conversations to spot whether the chatbot slipped anyway. When I see an answer to fix, I rewrite it, tune the prompt, test. The pipeline is never frozen.
The result
The chatbot now handles a good share of navigation questions on the site. Visitors find faster, stay longer, and arrive at sessions better informed.
What I did not expect is the pedagogical effect. When the chatbot declines to answer a medical question, it explains why. It does not lecture, it situates the limit. Visitors wrote to me to say they appreciated that clarity. In a world saturated with generic advice, the modesty of the bot is a sign of seriousness.
And operationally, I save time. Requests that would have landed in my inbox as blurry questions are handled at the source. I keep my manual replies for what truly deserves my attention.
How you can replicate this
A chatbot is a powerful and risky tool. Before installing it, ask yourself two questions. What do you want it to do for your visitors. What do you want it to never do. The second is more important than the first.
Write a system prompt that fits on one page. Not more. A long prompt ends up contradicting itself. A short, clear, hierarchically ruled prompt holds up better under pressure.
Connect the chatbot to your content, not to general knowledge. A vector database of your articles protects you. Your bot answers what you wrote, not what a model read elsewhere. That discipline is the best technical guardrail.
Have edge cases reviewed by a peer in your field before going live. Step into the shoes of a visitor asking an awkward question. Test ten cases. When nine of ten are handled properly, you can open. Not before.
One stern point. Do not confuse efficiency with safety. A chatbot that answers everything is dangerous in a health field. Better a bot that often says "I cannot" than one that always says yes. Your profession carries a responsibility. Your bot must respect it.
If you want to set up this kind of chatbot with the right guardrails, I can support you.
Read next
- Pre-consultation questionnaires to filter requests upstream.
- Facial semiology with AI to see what AI can recognize during a session.
- Claude, Claude Code, Cowork: what's the difference? to understand the tool behind the chatbot.
— François
