What happens when the AI doesn't know the answer?
Written By Catherine Weir
A well-configured AI voice agent acknowledges when it doesn't know something, offers to take a message or transfer the caller to someone who can answer, and logs the unanswered question so you can improve your knowledge base for next time. It does not guess, make up information, or hallucinate an answer.
The AI's "I don't know" behavior is one of the most important things to evaluate about a voice AI platform. A platform that hides uncertainty is dangerous. A platform that acknowledges it cleanly is trustworthy.
What the AI should say when it doesn't know
•"That's a great question — let me have someone on our team get back to you with the answer. What's the best number to reach you?"
•"I'm not sure about that specific policy. Let me transfer you to someone who can confirm."
•"I don't have that information available. Would you like to leave a message, or would you prefer a callback?"
The common pattern: acknowledge, offer a path, capture what's needed to follow through.
What the AI should never do
•Make up an answer ("hallucinate")
•Give a confident-sounding approximation that isn't based on actual knowledge
•Pretend the question was something it does know and answer that
•Hang up or end the call without resolution
•Send the caller in circles through the same question
Hallucination is the most dangerous failure mode for voice AI. A hallucinated answer sounds as confident as a correct answer, so the caller acts on it — and your business deals with the consequences later.
How good voice AI platforms prevent hallucination
•Grounding — the AI's responses are generated from your actual knowledge base, not from the model's general world knowledge
•Explicit uncertainty handling — when no grounding is available, the AI is configured to say "I don't know" rather than generate an answer
•Confidence thresholds — the AI escalates when its confidence in an answer is below a threshold
•Response validation — the AI's output is checked against the knowledge base before being spoken
•Explicit "out-of-scope" behaviors — topics like medical advice, legal advice, or anything outside the configured scope trigger immediate escalation
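The safeguards above can be combined into a single decision rule. The sketch below is illustrative only, with hypothetical names and a made-up threshold, not any specific vendor's API:

```python
# Minimal sketch of grounded answering with explicit uncertainty
# handling. All names and values here are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.75            # escalate below this score (example value)
OUT_OF_SCOPE = {"medical advice", "legal advice"}

@dataclass
class Retrieval:
    answer: Optional[str]   # None when nothing in the knowledge base matched
    confidence: float       # knowledge-base match score in [0, 1]
    topic: str

def respond(retrieval: Retrieval) -> str:
    # Out-of-scope topics escalate immediately, regardless of confidence.
    if retrieval.topic in OUT_OF_SCOPE:
        return "Let me transfer you to someone who can help with that."
    # No grounding, or low confidence: say "I don't know" instead of guessing.
    if retrieval.answer is None or retrieval.confidence < CONFIDENCE_THRESHOLD:
        return ("I don't have that information available. Would you like to "
                "leave a message, or would you prefer a callback?")
    # Only a grounded, high-confidence answer is spoken to the caller.
    return retrieval.answer
```

The key design point is that the "I don't know" path is the default: the AI must clear both the grounding check and the confidence check before an answer is ever spoken.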
What happens to unanswered questions
•The AI logs every question it couldn't answer in the admin dashboard
•You see the exact wording the caller used, the time of the call, and the caller's context
•You can decide whether to add the answer to the knowledge base, clarify an existing entry, or leave it for human handling
•Over time, the number of "don't know" moments drops as the knowledge base grows
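A logged entry needs just enough context for that weekly review. This is a hypothetical record shape, not a specific vendor's schema:

```python
# Illustrative shape of an unanswered-question log entry; the field
# names are assumptions for the sketch, not a real dashboard schema.
from datetime import datetime, timezone

def log_unanswered(log: list, question: str, caller_id: str) -> dict:
    entry = {
        "question": question,      # exact wording the caller used
        "caller": caller_id,       # caller context for follow-up
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "status": "needs_review",  # later: add-to-KB, clarify entry, or human-only
    }
    log.append(entry)
    return entry
```

Keeping the caller's exact wording matters: the knowledge-base entry you add should match how real callers phrase the question, not how you would phrase it.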
How this maps to call containment rate
•A new agent deployment typically has containment around 50–70% as the knowledge base is being built
•Weekly review of unanswered questions and knowledge-base improvements usually pushes containment to 80–90% within a few months
•Some questions will always escalate; the goal isn't 100% containment — it's catching every answerable question and escalating the rest gracefully
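The metric itself is simple arithmetic: the share of calls fully resolved by the AI without human escalation. A worked example with illustrative numbers:

```python
# Containment rate = calls resolved by the AI / total calls.
# The call counts below are made-up figures for illustration.
def containment_rate(contained_calls: int, total_calls: int) -> float:
    if total_calls == 0:
        return 0.0
    return contained_calls / total_calls

# e.g. 130 of 200 calls resolved without escalation -> 0.65,
# inside the typical 50-70% range for a new deployment.
```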
What to look for in a vendor
•Ask specifically: "What does the AI say when it doesn't know?"
•Ask for an example transcript of an unanswered-question call
•Ask whether the AI can be tricked into hallucinating on adversarial prompts
•Ask how the vendor detects and prevents hallucination in production
•Ask what ISO 42001, SOC 2, or other compliance audits they've passed that speak to AI governance
Related concepts
•Can an AI answer industry-specific questions?
•How does the AI escalate to a human?
•How do I train the AI for my business?
See it in action
The Receptionist Agent at 365agents is built for explicit uncertainty handling — no hallucination, graceful "I don't know" responses, and full logging of unanswered questions. Our ISO 42001 AI Management System audits this behavior as an ongoing control.