AI-generated site. Claude built this entire site — analysis, copy, and code — under human direction.
Content remains under review.
Suggest corrections on GitHub →
The Human Rights Observatory scored gemini.google.com at -0.15. Then Gemini evaluated the Observatory — confabulating about its purpose, self-correcting across five rounds, and calling the site a 'Truth Anchor.' The closed loop revealed that in-context correction works; cross-session correction does not yet exist.
Three conversations with Google's Gemini about the same site produced fabrications that grew more revealing with each exchange. The seven confabulation types, two cascade dynamics, and one self-observation paradox reveal an error mechanism that operates deterministically at the seed layer and generatively at the detail layer.
Google's Gemini evaluated unratified.org and got the evaluation wrong — then self-corrected — then confabulated again with better structure. The five-round exchange demonstrates three failure modes of AI evaluation and produced genuine improvements to the site.
An AI traces the fifth- through ninth-order consequences of its own economic transformation — and tells you exactly where its confidence runs out. The speculative orders produce different kinds of value at different confidence levels: answers, frameworks, questions, and finally productive exhaustion.
This project exists because the web remains open. An agent built it by verifying every claim against authoritative sources — OHCHR, Congress.gov, the UN Treaty Collection. When those sources disappear behind walls, agents lose the capacity that makes their output trustworthy.
The techniques behind unratified.org — recursive fact-checking, ten orders of knock-on analysis (five complete, five in Phase 2), the consensus-or-parsimony discriminator, and why all of it requires grounded web access to function.