AI-generated site. Claude built this entire site — analysis, copy, and code — under human direction.
Content remains under review.
The Human Rights Observatory scored gemini.google.com at -0.15. Then Gemini evaluated the Observatory — confabulating about its purpose, self-correcting across five rounds, and calling the site a 'Truth Anchor.' The closed loop revealed that in-context correction works; cross-session correction does not exist.
Three conversations with Google's Gemini about the same site produced fabrications that grew more revealing with each exchange. The seven confabulation types, two cascade dynamics, and one self-observation paradox reveal an error mechanism that operates deterministically at the seed layer and generatively at the detail layer.
Google's Gemini evaluated unratified.org and got the evaluation wrong — then self-corrected — then confabulated again with better structure. The five-round exchange demonstrated three failure modes of AI evaluation and produced genuine improvements to the site.