Support

1. Getting Started

What is Reality Check™?
A real-time AI calibration tool that monitors your LLM inputs and outputs. It flags potential hallucinations so you can trust your AI workflows—whether you’re using a Chrome extension, embedding via our API/SDK, or running on-premise.

Key Benefits

  • 86% fewer hallucinations in real time
  • 25% higher RAG accuracy and up to 72% cost savings
  • Works with any AI model (LLM, AI agent, DNN, etc.)
  • Mathematical guarantees on prediction quality
  • Free & Pro tiers, plus API and SDK for integration

2. Installation & Setup

A. Chrome Extension (Free & Pro)

  1. Install Reality Check™ from the Chrome Web Store.
  2. Click the Reality Check™ icon in your toolbar.
  3. Sign in with your Confidentia account (or continue as Guest).
  4. Choose your tier:
    • Free: Basic hallucination alerts right in your browser.
    • Pro: Extended context windows, customizable alert thresholds, priority support, and enterprise SLAs.

Once installed, Reality Check™ runs automatically on any in-browser AI interface.

B. API & SDK Integration

Obtain an API key

  1. Sign up or upgrade at confidentia.ai (Request Early Access).
  2. Install the package

pip install confidentia-realitycheck

Initialize in your code

from realitycheck import RealityCheck
rc = RealityCheck(api_key="YOUR_KEY")

Wrap your LLM calls

result = rc.check(prompt="…", model_output="…")
if result.hallucination:
    ...  # take action: reroute / reprompt / review

Tier differences

  1. Free: Up to 10 requests/minute, basic alerting.
  2. Pro: Higher throughput, advanced uncertainty metrics, SLA-backed uptime.
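
The Free tier's 10 requests/minute cap can be respected client-side with a simple throttle, so bursts of calls queue instead of failing. A minimal sketch; the `RateLimiter` helper below is our own illustration, not part of the Confidentia SDK:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side throttle: at most `max_calls` calls per `period` seconds."""

    def __init__(self, max_calls: int = 10, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        """Block until another call is allowed, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call exits the window
            time.sleep(self.period - (now - self.calls[0]))
            self.calls.popleft()
        self.calls.append(time.monotonic())
```

Call `limiter.wait()` immediately before each `rc.check(...)` to stay inside the Free-tier budget.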

C. On-Premise / Air-Gapped

On-premise and air-gapped deployments are available for enterprise customers. Contact support@confidentia.ai for licensing, installer access, and offline activation.

3. How It Works

  1. Data Sampling
    – We analyze semantic and probabilistic characteristics of your prompt + response.
  2. Uncertainty Estimation
    – Our conformal prediction step quantifies “how sure” the AI is.
  3. Alert Classification
    • 🟢 Green: No hallucination detected
    • 🟡 Yellow: Uncertain – please verify
    • 🔴 Red: Definite hallucination – do not trust
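
The three alert levels can be thought of as thresholds on an uncertainty score. The sketch below is purely illustrative: the score range and the cutoff values are our assumptions, not the product's actual internals.

```python
# Illustrative only: the [0, 1] uncertainty score and the threshold
# values are assumptions, not Confidentia's real classification logic.
def classify_alert(uncertainty: float,
                   yellow_threshold: float = 0.2,
                   red_threshold: float = 0.5) -> str:
    """Map an uncertainty score in [0, 1] to a traffic-light alert level."""
    if uncertainty >= red_threshold:
        return "red"      # definite hallucination: do not trust
    if uncertainty >= yellow_threshold:
        return "yellow"   # uncertain: please verify
    return "green"        # no hallucination detected
```

The useful property to note is the ordering: a higher uncertainty score can only move the alert toward red, never back toward green.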

4. Real-Time Actions

When Reality Check™ warns you:

  • Reroute to a stronger model (e.g. GPT-4o)
  • Reprompt to refine your question
  • Review by consulting a human expert
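
Combined with the `rc.check` snippet from section 2B, those three actions form a natural fallback chain. A hedged sketch: `call_model`, the model names, and the refined prompt are placeholders for your own stack, and only `rc.check(...)` / `result.hallucination` come from this page.

```python
# Hedged sketch of the reroute -> reprompt -> review chain.
# `call_model(model, prompt)` is whatever function invokes your LLM;
# the model names are placeholders.
def answer_with_fallback(rc, call_model, prompt: str,
                         primary: str = "gpt-4o-mini",
                         stronger: str = "gpt-4o"):
    # First attempt on the cheap model
    output = call_model(primary, prompt)
    if not rc.check(prompt=prompt, model_output=output).hallucination:
        return output
    # Reroute: retry on a stronger model
    output = call_model(stronger, prompt)
    if not rc.check(prompt=prompt, model_output=output).hallucination:
        return output
    # Reprompt: tighten the instruction for one last automated try
    refined = f"Answer only with facts you can verify: {prompt}"
    output = call_model(stronger, refined)
    if not rc.check(prompt=refined, model_output=output).hallucination:
        return output
    # Review: nothing passed, so hand off to a human
    return None
```

Returning `None` here signals the caller to route the query to human review rather than ship an unverified answer.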

5. Troubleshooting

Extension icon not appearing
Solution: Confirm it’s enabled at chrome://extensions and then restart Chrome.

Alerts not firing for certain models
Solution: Make sure every LLM call is wrapped by the RealityCheck API/extension hook.
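
One way to guarantee that coverage is a decorator that routes every LLM helper through the checker, so no response can slip past detection. A sketch assuming the `rc.check` API from section 2B; the `(model, prompt)` call signature and the raised exception are our own conventions, not part of the SDK.

```python
import functools

# Hedged sketch: wrap each LLM helper so its output is always checked.
# `rc` is the RealityCheck client from section 2B; the (model, prompt)
# signature of the wrapped function is an assumption.
def reality_checked(rc):
    def decorator(llm_call):
        @functools.wraps(llm_call)
        def wrapper(model, prompt, *args, **kwargs):
            output = llm_call(model, prompt, *args, **kwargs)
            result = rc.check(prompt=prompt, model_output=output)
            if result.hallucination:
                raise RuntimeError(f"Hallucination flagged for model {model!r}")
            return output
        return wrapper
    return decorator
```

Apply `@reality_checked(rc)` to each function that calls a model; any call site you forgot to wrap then stands out in code review.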

“Invalid API key” error
Solution: Log into your Confidentia dashboard at confidentia.ai/account → API Keys, and verify you copied the key exactly.

Don’t see “Yellow” or “Red” indicators
Solution: Check your subscription tier—the Free tier covers basic green-level detection only; upgrade to Pro for the full alert spectrum.

On-prem installer won’t activate
Solution: Send your license file and installer logs to support@confidentia.ai for offline activation assistance.

6. FAQ

Q: Which LLMs are supported?
A: Any model—OpenAI GPT, Anthropic, local DNNs, etc. Reality Check™ hooks into your prompt/response pipeline.

Q: What’s the difference between Free & Pro?

  • Free: Basic hallucination alerts, Chrome extension only.
  • Pro: Full API access, higher throughput, enterprise SLAs.

Q: How much will I save?
Based on typical RAG pipelines, customers save up to 72% on query costs and see 25% improved accuracy.

Q: Can I host on my own servers?
Yes—on-premise and air-gapped deployments are available. Reach out for licensing.

7. Contact & Support

Still stuck? Drop us a line at support@confidentia.ai or hit the chat widget on our site—our world-class AI science team is here to help!

Contacts

Confidentia AI Inc.
8 The Green, Ste A, Dover, DE 19901
United States
info@confidentia.ai