What is Reality Check™?
A real-time AI calibration tool that monitors your LLM inputs and outputs. It flags potential hallucinations so you can trust your AI workflows—whether you’re using a Chrome extension, embedding via our API/SDK, or running on-premise.
Key Benefits
Once installed, Reality Check™ runs automatically on any in-browser AI interface.
1. Obtain an API key
2. Initialize in your code
3. Wrap your LLM calls (see the sketch below)
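This guide doesn't show the SDK's actual interface, so the Python sketch below only illustrates the shape of the three steps; the names Client, Verdict, check, and REALITYCHECK_API_KEY are assumptions, not the documented Confidentia API.

```python
"""Minimal sketch of the three quickstart steps above.

The real Confidentia SDK isn't shown in this guide, so Client and Verdict
below are stand-ins with assumed names; swap them for the actual package
once you have it installed.
"""
import os
from dataclasses import dataclass


@dataclass
class Verdict:
    flagged: bool   # assumed field: True when the response is rated yellow/red
    label: str      # assumed field: "green", "yellow", or "red"


class Client:
    """Stand-in for the Reality Check client; names are assumptions."""

    def __init__(self, api_key: str) -> None:
        # Step 1: the key comes from your Confidentia dashboard
        # (confidentia.ai/account -> API Keys); read it from the environment
        # rather than hard-coding it.
        self.api_key = api_key

    def check(self, prompt: str, response: str) -> Verdict:
        # A real integration would send the prompt/response pair to the
        # Reality Check service here and return its indicator.
        return Verdict(flagged=False, label="green")


# Step 2: initialize once at startup.
rc = Client(api_key=os.environ.get("REALITYCHECK_API_KEY", ""))


def ask(llm_call, prompt: str) -> str:
    """Step 3: wrap every LLM call so both prompt and response get checked."""
    response = llm_call(prompt)                       # your existing model call
    verdict = rc.check(prompt=prompt, response=response)
    if verdict.flagged:                               # yellow/red indicator
        print(f"Reality Check flagged this response: {verdict.label}")
    return response
```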
Tier differences
When Reality Check™ warns you:
Extension icon not appearing
Solution: Confirm the extension is enabled at chrome://extensions, then restart Chrome.
Alerts not firing for certain models
Solution: Make sure every LLM call is routed through the RealityCheck API wrapper or extension hook (see the quickstart sketch above); calls that bypass the hook are never checked.
“Invalid API key” error
Solution: Log into your Confidentia dashboard at confidentia.ai/account → API Keys, and verify you copied the key exactly.
Don’t see “Yellow” or “Red” indicators
Solution: Check your subscription tier. The Free tier covers basic green-only detection; upgrade to Pro for the full alert spectrum.
On-prem installer won’t activate
Solution: Send your license file and installer logs to support@confidentia.ai for offline activation assistance.
Q: Which LLMs are supported?
A: Any model: OpenAI GPT, Anthropic Claude, locally hosted models, and more. Reality Check™ hooks into your prompt/response pipeline, so it isn't tied to a specific provider.
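As a rough illustration of why the hook is provider-agnostic, the wrapper only ever sees prompt and response text; check_response below is a hypothetical stand-in for the Reality Check™ call, not a documented function.

```python
from typing import Callable


def check_response(prompt: str, response: str) -> str:
    """Hypothetical stand-in for the Reality Check hook.

    A real integration would send the prompt/response pair to the service
    and return its indicator ("green", "yellow", or "red").
    """
    return "green"


def with_reality_check(model_call: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap any prompt-in/text-out callable, whatever provider backs it."""
    def wrapped(prompt: str) -> str:
        response = model_call(prompt)
        indicator = check_response(prompt, response)
        if indicator != "green":
            print(f"warning: response rated {indicator}")
        return response
    return wrapped


# The hook only sees text, so the same wrapper works for a hosted API client,
# a locally hosted model, or any other prompt -> response function.
local_model = with_reality_check(lambda p: f"(local model answer to: {p})")
print(local_model("When did Apollo 11 land on the Moon?"))
```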
Q: What’s the difference between Free & Pro?
A: The Free tier includes basic green-only detection; Pro unlocks the full yellow/red alert spectrum.
Q: How much will I save?
A: Based on typical RAG pipelines, customers save up to 72% on query costs and see a 25% improvement in accuracy.
Q: Can I host on my own servers?
A: Yes. On-premise and air-gapped deployments are available; reach out to us for licensing.
Still stuck? Drop us a line at support@confidentia.ai or hit the chat widget on our site—our world-class AI science team is here to help!