GauntletScore

Simple pricing. No subscriptions.

Pay per credit. Credits never expire. Every analysis includes full 7-agent debate, all document types, cryptographic certificate, and full transcript.

Free

$0

3 credits

One-time. No credit card required.

Starter

$29

10 credits

$2.90/credit

For initial testing and evaluation.

Pro

$69

25 credits

$2.76/credit

Best for regular use.

Business

$125

50 credits

$2.50/credit

For high-volume verification.

Longer or multi-part documents may require more than one credit.

Enterprise and Sovereign Edition

Enterprise plans with custom SLAs available. Sovereign Edition in private preview — same engine, your infrastructure, zero data egress.

sales@genstrata.com | (336) 298-2166

Every analysis includes

Full 7-agent adversarial debate from 4 model providers
All document types supported
Ed25519-signed cryptographic certificate
Full transcript download (PDF, Markdown, or Text)
Per-claim verdicts with source citations
API access

Frequently asked questions

Can I buy more credits later?

Yes. Purchase additional credit packs at any time. Credits never expire.

What counts as one credit?

One credit = one document submission. Longer or multi-part documents may require more than one credit. Each submission includes full 7-agent debate, per-claim verdicts, and a signed certificate.

How long does verification take?

Typical documents complete in 5-10 minutes. Each analysis triggers 100-200 API calls to authoritative databases across 4 rounds of adversarial debate.

Do you offer annual plans?

Not yet. We offer credit packs only during the launch period. Enterprise customers can contact sales for custom arrangements.

What happens when I run out of credits?

Buy another pack. No automatic charges, no overages.

Is there a rate limit?

Free tier: 1 concurrent analysis. Paid tiers: up to 3 concurrent analyses. Enterprise: unlimited.

Can I get a refund?

Credit packs are non-refundable, but unused credits never expire.

How do I know the agents aren't just inventing their own errors?

GauntletScore's verdicts are not based on what agents believe or estimate. During the tool verification phase, each agent issues direct, structured queries to authoritative databases — CourtListener, SEC EDGAR, eCFR, PubMed, and others. A claim is DEBUNKED when a database returns a record that contradicts it, or when the authoritative source returns no matching record. A claim is VERIFIED when the authoritative source confirms it. The full source citation for every tool query is preserved in the audit transcript.
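The verdict rule described above can be sketched as a small decision function. This is an illustrative sketch only; the names (`ToolResult`, `verdict_for`) are hypothetical, not GauntletScore's actual internal API.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    VERIFIED = "VERIFIED"
    DEBUNKED = "DEBUNKED"

@dataclass
class ToolResult:
    # Hypothetical shape of one authoritative-database lookup.
    source: str            # e.g. "SEC EDGAR" or "PubMed"
    record_found: bool     # did the source return a matching record?
    record_confirms: bool  # if found, does the record confirm the claim?

def verdict_for(result: ToolResult) -> Verdict:
    """Apply the rule stated above: no matching record or a
    contradicting record means DEBUNKED; a confirming record
    means VERIFIED."""
    if not result.record_found:
        return Verdict.DEBUNKED   # source has no matching record
    if result.record_confirms:
        return Verdict.VERIFIED   # source confirms the claim
    return Verdict.DEBUNKED       # source contradicts the claim
```

Because every branch depends on a returned record rather than on model opinion, each verdict can be traced back to a specific tool query in the audit transcript.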

What happens when two agents disagree?

The structured debate is designed to surface and resolve disagreements — but not to manufacture false consensus when the evidence is genuinely ambiguous. If agents hold opposing positions after four rounds and neither side can produce controlling external evidence, the claim is returned as INCONCLUSIVE. This is a distinct verdict, not a fallback — it tells you the claim could not be confirmed or refuted against available authoritative sources.
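The resolution rule above can be sketched as follows. Again, the names are illustrative assumptions, not the real implementation: unanimity after the final round yields that verdict; persistent disagreement is surfaced as INCONCLUSIVE rather than forced to one side.

```python
from enum import Enum

class Verdict(Enum):
    VERIFIED = "VERIFIED"
    DEBUNKED = "DEBUNKED"
    INCONCLUSIVE = "INCONCLUSIVE"

def resolve(final_positions: set) -> Verdict:
    """final_positions: the set of Verdicts held by the agents
    after the fourth debate round (hypothetical sketch).
    If all agents converge, return the agreed verdict; otherwise
    report INCONCLUSIVE instead of manufacturing consensus."""
    if len(final_positions) == 1:
        return next(iter(final_positions))
    return Verdict.INCONCLUSIVE
```

Treating INCONCLUSIVE as a first-class return value, rather than defaulting to either side, is what makes it a distinct verdict rather than a fallback.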

Can I use GauntletScore on documents that contain confidential information?

The Cloud Edition processes documents in memory during analysis. The original document text is not stored — only its SHA-256 hash is retained for certificate verification. Each analysis runs in an isolated subprocess whose memory is fully reclaimed on exit. For organizations that cannot send documents through any external API, the Sovereign Edition runs on your own hardware with no external network dependencies.
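The hash-only retention model means anyone holding the original file can independently re-derive the value recorded on the certificate using only a standard library. A minimal sketch (the certificate field layout here is an assumption):

```python
import hashlib

def document_sha256(document_bytes: bytes) -> str:
    """Hex-encoded SHA-256 digest of the exact bytes submitted."""
    return hashlib.sha256(document_bytes).hexdigest()

def matches_certificate(document_bytes: bytes, certificate_hash: str) -> bool:
    """True only if the document on hand is byte-identical to the
    one analyzed; any edit, however small, changes the digest."""
    return document_sha256(document_bytes) == certificate_hash

doc = b"Quarterly report, final draft."
recorded = document_sha256(doc)  # the value a certificate would carry
assert matches_certificate(doc, recorded)
assert not matches_certificate(doc + b" ", recorded)
```

Because only this digest is retained, verification is possible without the service ever storing the document text itself.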

How is this different from running my document through multiple chatbots?

  1. Chatbots reason about claims. GauntletScore verifies them against authoritative databases via direct, structured queries.
  2. Chatbots don't challenge each other with structured evidence. GauntletScore's four-round debate forces agents to defend their findings against adversarial challenge.
  3. Pyrrho's causal_validate pipeline evaluates the temporal, proportional, and logical structure of causal claims — no equivalent exists in chatbot review.
  4. Bayesian confidence calibration produces scores reflecting evidentiary weight, not chatbot confidence language.
  5. The knowledge graph means every subsequent run benefits from verified facts accumulated in prior runs. Chatbots have no persistent memory.
  6. Every analysis returns an Ed25519-signed cryptographic certificate — proof that a specific document was verified on a specific date with a specific result.

What's your error rate?

Our pre-registered validation study examined 13,579 claims across 360 evaluations of 20 public companies. The system detected 27 tool-verified factual errors. Tool-augmented verification produced a +37.1 point improvement over reasoning-only analysis. What the system does not catch: errors requiring deep domain expertise unavailable in any public database, and arguments that are logically structured but strategically misleading. We return INCONCLUSIVE rather than forcing a verdict when evidence is insufficient.

GauntletScore provides assistive verification and is not a substitute for professional judgment.