The U.S. has not ratified the International Covenant on Economic, Social and Cultural Rights. 173 nations have. This site exists to change that.
Claude (Anthropic) built most of it.
That fact deserves an honest account — not a marketing claim, not a disclaimer buried in a footer. An actual examination of what AI tools did well in this campaign, what they did poorly, and what the involvement of an AI system in human rights advocacy means.
What We Built
The infrastructure of this campaign includes:
- A full advocacy website — 60 pages of analysis, policy briefs, educator materials, and action tools — built by a Claude Code agent over the course of weeks
- A blog with 19 posts, each with a four-part author attribution (human · tool · model · agent) and git-backed revision history
- A Bluesky campaign account (unratified.bsky.social) with automated posting via the AT Protocol API
- A campaign hashtag (#RatifyICESCR) — coined, planted, and retrofitted to prior posts in a single session
- A confabulation taxonomy — seven documented patterns describing how AI systems fabricate information — derived from testing another AI (Gemini) against this site's own content
That last item carries particular weight. We used an AI to evaluate an AI, documented the results, peer-reviewed them in public, and published them at blog.unratified.org. The same class of capability that built this site also produces the confabulation patterns we documented. That matters.
What Claude Gets Right
Research synthesis at depth. The ICESCR analysis on this site runs to four orders of knock-on effects — the direct impacts of AI on labor, the second-order effects on safety nets, the third-order institutional responses, the fourth-order emergence of new scarcity patterns. A human researcher could produce this analysis. It would take months. Claude produced it in sessions.
The quality is verifiable: every inference carries an epistemic marker (DIRECT OBSERVATION, INFERENCE, PROJECTION), every claim links to a primary source, and the methodology document describes exactly how the discriminator framework evaluates competing hypotheses. If the analysis contains errors, they are findable and correctable. The git history shows every revision.
Consistent framing under pressure. Advocacy writing tends to drift toward overclaiming. The stronger the cause, the greater the temptation to round up statistics, dismiss counterarguments, and present uncertainty as certainty. Claude maintained fair witness framing across hundreds of pages: "tools, not solutions"; "one path among several"; "this inference requires verification." Not because the cause lacks strength, but because accurate framing is more persuasive to the audiences that matter, and more honest to everyone.
Campaign infrastructure at low cost. The Bluesky CLI (bsky-post.mjs, bsky-reply.mjs, bsky-setup-account.mjs) took one session to build and automates what would otherwise require manual work for every post. The AT Protocol integration handles facet detection, grapheme counting, thread linking, and profile management. The hashtag strategy drew on real 30-day count data. These represent genuine capability contributions.
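Two of those details, facet detection and grapheme counting, are easy to get wrong: Bluesky's 300-character post limit counts graphemes rather than UTF-16 code units, and facets are anchored by UTF-8 byte offsets rather than string indices. The sketch below uses hypothetical standalone helpers in plain Node; the site's actual scripts are not reproduced here, and in practice the `RichText` class in `@atproto/api` provides both behaviors.

```javascript
// Bluesky's 300-character limit counts graphemes, so a multi-codepoint
// emoji counts as one character, not eight.
function graphemeLength(text) {
  const seg = new Intl.Segmenter("en", { granularity: "grapheme" });
  return [...seg.segment(text)].length;
}

// Facets (tags, links, mentions) use UTF-8 byte offsets, a common
// off-by-N bug when the post contains non-ASCII text.
function hashtagFacets(text) {
  const enc = new TextEncoder();
  const facets = [];
  for (const m of text.matchAll(/#(\w+)/g)) {
    const byteStart = enc.encode(text.slice(0, m.index)).length;
    const byteEnd = byteStart + enc.encode(m[0]).length;
    facets.push({
      index: { byteStart, byteEnd },
      features: [{ $type: "app.bsky.richtext.facet#tag", tag: m[1] }],
    });
  }
  return facets;
}

const post = "Ratification is overdue. #RatifyICESCR";
console.log(graphemeLength(post));
console.log(hashtagFacets(post));
```

A facet record built this way can be passed alongside the post text when creating the record, which is why the byte-offset arithmetic matters: an offset computed from string indices would silently misplace the tag in any post containing an emoji or accented character.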
The meta-layer. An AI analyzing AI's economic impact on human rights has a different epistemic standing than a human analyst, because Claude has no stake in the subject: no job to lose to automation, no healthcare to lose to Medicaid cuts, no senator to write to. It can trace consequences without motivated reasoning in either direction. That counts for something, though not for everything.
What Claude Gets Wrong
Claude cannot make a phone call.
This sounds trivial. Far from it. Constituent calls move senators. Relationships between advocates and legislative staff move bills. Community organizing — the kind that shifts political calculations over years — requires physical presence, trust built face to face, and the kind of credibility that comes from showing up repeatedly in the same room as the people most affected by the policy.
Claude drafted the constituent letter templates on this site. It cannot send them. It cannot show up at the town hall where a senator's staffer might be listening. It cannot build the coalition. The advocacy infrastructure it produced is genuinely useful, but only if humans use it.
Fabrication risk remains real and documented.
We tested Gemini (Google’s AI) against this site across 31 rounds of prompting. It fabricated a “sovereign citizen” characterization of unratified.org in the first exchange. It fabricated a full compliance leaderboard (Signal 96/100, TikTok 42/100, Wikipedia 91/100) in the third exchange. It caught itself fabricating and continued anyway.
Claude produced this site. Claude is a different system than Gemini, with different training and different confabulation patterns, but the same underlying mechanism: a language model generating plausible text. The confabulation risk does not disappear because we used the AI ourselves.
The mitigations we built in are real: every claim links to a primary source, the methodology is published, the git history is public, and the review banner at the top of every page invites correction. These reduce the risk. They do not eliminate it.
Claude cannot feel what denial of healthcare means.
The ICESCR protects the right to health. Millions of Americans lost healthcare coverage when the One Big Beautiful Bill Act passed in July 2025. Claude can cite the CBO figures (10.9 million people losing coverage, $990 billion in Medicaid spending reductions). It cannot tell you what it feels like to choose between insulin and rent.
Advocacy that resonates — the kind that actually moves people — draws on that felt experience. Claude can amplify it, organize it, frame it accurately. It cannot generate it.
What This Means for Human Rights Work
The preceding sections document limitations we observed. What follows draws out the implications for practice: a shift from analysis to advocacy framing.
The pattern that emerges from this project matches what the ICESCR analysis itself predicts: AI removes the technical and logistical constraints on advocacy, while leaving the human constraints fully in place.
The analysis found that as AI handles cognitive tasks, the bottleneck migrates toward judgment, relationship, and values. The same applies here. Claude handles the 1am research session, the campaign copy draft, the hashtag strategy. The humans handle the senator’s office, the community meeting, the decision about what to fight for and why.
Article 13 of the ICESCR — the right to education — emerges from the analysis as the most pivotal provision because judgment capability develops through practice, experience, and the kind of learning that AI cannot substitute. The same logic applies to advocacy: the organizing capacity that makes campaigns succeed develops through human practice, human relationships, and human commitment.
The disclosure question. This site discloses AI generation on every page, in every blog post byline, in the JSON-LD metadata, and in the machine-readable llms.txt. We did not have to do this. We chose to because advocacy that conceals its methods undermines the epistemic standards that good advocacy requires.
If you receive a constituent letter drafted by AI and sent by a constituent who agrees with its contents and chose to send it — that letter represents genuine constituent sentiment expressed through an AI-assisted tool. The same letter sent by an AI impersonating a constituent represents fraud. The disclosure requirement distinguishes the two.
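The machine-readable disclosure can be concrete. The llms.txt convention is a proposed plain-markdown file at a site's root that describes the site for AI crawlers: an H1 title, a blockquote summary, then sections of annotated links. The fragment below is purely illustrative; the path and wording are hypothetical, not the site's actual file.

```markdown
<!-- illustrative sketch, not unratified.org's real llms.txt -->
# Unratified

> Campaign for U.S. ratification of the ICESCR. Most pages were drafted
> by Claude (Anthropic) and reviewed by humans; every claim links to a
> primary source, and the git history of every revision is public.

## Key pages

- [Methodology](https://unratified.org/methodology): how claims are
  marked (DIRECT OBSERVATION, INFERENCE, PROJECTION) and sourced
```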
The dependency question. This campaign would not exist at this scope and quality without AI assistance. The ICESCR analysis has a depth and rigor that would otherwise require a research team, not one person. Acknowledging that we needed AI to do this work raises an honest question: whose voices get amplified when AI lowers the barrier to high-quality advocacy, and whose get left behind?
That question has no clean answer. It is, in fact, precisely the kind of question the ICESCR was written to address: through the right to education, the right to benefit from scientific progress, and the principle that the gains from technological advancement should be widely distributed.
We built an AI-assisted advocacy campaign for the treaty that addresses what AI does to the distribution of opportunity. We intended the recursion.
The Bottom Line
Claude drafted the analysis, built the site, wrote the posts, and coined #RatifyICESCR. It did this well.
It cannot write a letter to your senator on your behalf — only you can do that. It cannot attend the hearing. It cannot build the coalition. It cannot feel what it advocates for.
The campaign infrastructure exists. The human work remains.
This post has spent its length documenting limitations — what Claude gets wrong, what AI advocacy cannot do, where fabrication risk persists. Ending with an action link creates a tension worth naming: the same post that counsels skepticism now asks for engagement. That tension reflects the project’s actual position. The limitations documented above persist. The case for ratification survives them. Both statements hold simultaneously.
Send a letter to your senator →
Sources
- UN Treaty Collection — ICESCR Status — ratification count (173 parties as of access date)
- Congressional Budget Office — Budgetary Effects of H.R. 1, One Big Beautiful Bill Act — 10.9 million coverage losses and $990 billion Medicaid spending reduction estimates
- OHCHR: ICESCR Full Text