Seed Data

YaCursed launched with approximately 200 public figures and 12,000 synthetic casts to populate the leaderboards and demonstrate the experience before real users arrived. All synthetic casts are flagged as such in the database and will be filtered out 30 days after launch or after 10,000 real casts, whichever comes first.
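For the curious, the retirement rule is simple enough to express in a few lines. The sketch below is illustrative rather than production code; the function name and parameters are ours, not the actual schema.

```python
from datetime import datetime, timedelta

# Illustrative constants taken from the rule described above.
SYNTHETIC_WINDOW = timedelta(days=30)
REAL_CAST_THRESHOLD = 10_000

def include_synthetic_casts(launch_date: datetime, now: datetime, real_cast_count: int) -> bool:
    """True while synthetic seed casts should still count toward leaderboards.

    They stop counting 30 days after launch or once 10,000 real casts exist,
    whichever comes first.
    """
    within_window = now - launch_date < SYNTHETIC_WINDOW
    below_threshold = real_cast_count < REAL_CAST_THRESHOLD
    return within_window and below_threshold
```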

How the figures were selected

We asked Claude, Anthropic's AI assistant, to suggest public figures across five tiers of impact — from heads of state and global institutions (Tier 1) down to notable cultural figures (Tier 5). For each tier, Claude proposed figures broadly perceived as harmful or unethical (curse-leaning), figures broadly perceived as beneficial or admirable (bless-leaning), and a set of genuinely polarizing "battleground" figures who attract both curses and blessings for substantive reasons.
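To make the shape of that brief concrete, here is roughly how a seeded figure could be represented. The field names and types below are our illustration, not YaCursed's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Lean(Enum):
    CURSE = "curse"                # broadly perceived as harmful or unethical
    BLESS = "bless"                # broadly perceived as beneficial or admirable
    BATTLEGROUND = "battleground"  # genuinely polarizing; attracts both

@dataclass
class SeedFigure:
    name: str
    tier: int     # 1 = heads of state and global institutions ... 5 = notable cultural figures
    lean: Lean
    country: str  # used later to model where casters come from
```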

We also specified geographic diversity as a selection criterion — the lists should represent multiple continents and political systems rather than defaulting to an Anglophone lens. The resulting set spans over 30 countries, from Venezuelan opposition leaders and Congolese Nobel laureates to Burmese generals and New Zealand prime ministers. That said, we deliberately weighted the selection toward American public figures, expecting our initial audience to be predominantly US-based and most likely to engage with figures they recognize from domestic politics, business, and culture.

Cast volumes were scaled to tier: a Tier 1 autocrat receives far more casts than a Tier 5 celebrity, roughly in proportion to the real-world public attention each commands. Geographic distribution of casters was also modeled per figure; a Venezuelan opposition leader draws casts from Caracas and Miami, not randomly from around the globe.
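A rough sketch of how that scaling and geographic modeling might look is below. The tier baselines and city weights are invented for illustration; the real values vary per figure.

```python
import random

# Illustrative casts-per-figure baselines by tier; the real values are tuned
# so that ~200 figures add up to roughly 12,000 synthetic casts.
TIER_CAST_VOLUME = {1: 140, 2: 90, 3: 55, 4: 30, 5: 15}

def synthetic_cast_count(tier: int, jitter: float = 0.3) -> int:
    """Scale cast volume to tier, with noise so totals don't look uniform."""
    base = TIER_CAST_VOLUME[tier]
    return max(1, round(base * random.uniform(1 - jitter, 1 + jitter)))

def sample_caster_city(geo_weights: dict[str, float]) -> str:
    """Draw a caster location from a per-figure distribution,
    e.g. {"Caracas": 0.6, "Miami": 0.4} for a Venezuelan opposition leader."""
    cities, weights = zip(*geo_weights.items())
    return random.choices(cities, weights=weights, k=1)[0]
```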

Why Claude specifically

We could have used any large language model to generate a list of controversial and admired public figures. We chose Claude in part because of Anthropic's published constitutional AI approach, which includes explicit discussion of what it means to be an ethical actor — how to weigh harm, how to reason about competing moral frameworks, and how to navigate the tension between neutrality and moral clarity. We were curious what an intelligence that has been deliberately shaped to think about ethics would actually produce when asked to name exemplars of good and poor ethical conduct. Not as a definitive judgment, but as a starting point worth examining.

Why we found this interesting

There's a recursive quality to asking an AI to seed a karmic accounting ledger. Claude's suggestions aren't its own opinions — they're a distillation of patterns in public discourse, filtered through a model trained on a broad cross-section of human writing and then shaped by a constitution that foregrounds ethical reasoning. The seed data is, in a sense, a mirror of collective reputation as digested by a machine that has been asked to care about ethics: who it considers villainous, heroic, or genuinely contested. We thought it made a fitting starting point for an app that invites humans to register their own moral verdicts — letting real users confirm, complicate, or overturn the AI's opening assessment. The leaderboards begin as a reflection of consensus; what they become is up to you.

Last updated: February 2026