Curse Of Knowledge Detector
by @quochungto
Diagnose a draft for the Curse of Knowledge — the expert blind spot that makes insiders write copy full of unexplained jargon, buried assumptions, strategy-l...
Scenario: Product announcement for a developer tool, audience is non-technical buyers
Trigger: User pastes a 400-word announcement that opens "We're shipping v2 of our feature-flag rollout engine with staged canaries and blast-radius controls." Audience: "VP of Engineering buyers at mid-market SaaS companies — they manage engineers but haven't written code in five years."
Process: (1) Listener baseline: "feature flag" = probably known; "canary" and "blast radius" = probably not known; "staged rollout" = known in principle; "rollout engine" = ambiguous. (2) Pass A flags canaries, blast-radius, and rollout engine. (3) Pass B flags the abstract noun "controls" with no concrete behavior. (4) Pass C flags the buried assumption that the reader already wanted v2. (5) Pass D flags the buried lead — the actual news ("our tool now prevents the top three rollout-caused outages buyers told us about") sits in paragraph 4.
Output: curse-of-knowledge-report.md with Top 3 = (1) move the paragraph-4 news to sentence 1, (2) define canary on first use as "a small slice of users who see the change first", (3) replace "blast-radius controls" with "automatic rollback if error rate crosses a threshold you set." Handoff note: "Structural rework — lead repositioning plus three jargon substitutions."
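The listener baseline (step 1) and Pass A jargon check (step 2) can be sketched as code. This is a minimal illustration, not the skill's actual implementation: the baseline entries come from the scenario above, the suggested fixes reuse the Top 3 wording, and the matching heuristic (hyphen/space variants plus a crude -y/-ies plural) is an assumption.

```python
# Sketch of the listener baseline + Pass A jargon flagging for the
# developer-tool scenario. Matching heuristic is a hypothetical choice.
import re

# Listener baseline: what VP-of-Engineering buyers probably know.
BASELINE = {
    "feature flag": "known",
    "staged rollout": "known in principle",
    "canary": "unknown",
    "blast radius": "unknown",
    "rollout engine": "ambiguous",
}

# Suggested fixes, taken from the report's Top 3 wording above.
SUBSTITUTIONS = {
    "canary": 'define on first use as "a small slice of users who see the change first"',
    "blast radius": 'replace "blast-radius controls" with "automatic rollback if error rate crosses a threshold you set"',
}

def pass_a(draft: str) -> list[dict]:
    """Flag terms the listener baseline marks unknown or ambiguous."""
    flags = []
    lowered = draft.lower()
    for term, status in BASELINE.items():
        if status.startswith("known"):
            continue
        # Crude matching: allow hyphen/space variants and a simple -y/-ies plural.
        pattern = re.sub(r"y$", "(?:y|ies)", term).replace(" ", "[-\\s]")
        if re.search(pattern, lowered):
            flags.append({
                "term": term,
                "status": status,
                "suggest": SUBSTITUTIONS.get(term, "define on first use"),
            })
    return flags

draft = ("We're shipping v2 of our feature-flag rollout engine "
         "with staged canaries and blast-radius controls.")
for flag in pass_a(draft):
    print(f"{flag['term']} ({flag['status']}): {flag['suggest']}")
```

Run against the trigger sentence, this flags canary, blast radius, and rollout engine — the same three terms Pass A reports in the scenario.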
Scenario: Nonprofit fundraising letter, audience is first-time donors
Trigger: User pastes a 600-word letter from a global-health nonprofit that leads with program architecture, regional coverage statistics, and the M&E framework. Audience: "First-time individual donors, $25–$100 range, no prior engagement with global health."
Process: (1) Listener baseline: knows what a nonprofit is; does not know M&E, catchment area, or DALY; does not recognize the named regions; does not know the organization's history. (2) Pass A flags four acronyms. (3) Pass B flags "program architecture" and "regional coverage" as strategy-level. (4) Pass C flags the assumption that the reader already cares about sub-Saharan maternal health statistics. (5) Pass D flags the buried lead (there is no Rokia-style individual named person) and flags "we are committed to lasting impact" as common-sense sedation.
Output: Top 3 = (1) open with a named beneficiary (the Mother Teresa / Rokia effect — identifiable victim beats statistics), (2) replace M&E, DALY, and catchment area with plain equivalents or cut them, (3) cut the "committed to lasting impact" sentence entirely. Handoff note: "Full rewrite from the core message — the current draft is architected around the organization's self-description, not the donor's decision."
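Pass D's "common-sense sedation" check, which flags sentences like "we are committed to lasting impact," can be sketched with a simple heuristic. The platitude list and the numbers-count-as-specifics rule below are assumptions for illustration, not the skill's actual logic:

```python
# Sketch of the Pass D sedation check: flag sentences that assert only
# what any reader already agrees with. Platitude list is hypothetical.
import re

PLATITUDES = [
    "committed to",
    "lasting impact",
    "make a difference",
    "world-class",
]

def pass_d_sedation(draft: str) -> list[str]:
    """Return sentences containing a platitude and no concrete specifics."""
    sentences = re.split(r"(?<=[.!?])\s+", draft.strip())
    flagged = []
    for s in sentences:
        has_platitude = any(p in s.lower() for p in PLATITUDES)
        has_specific = bool(re.search(r"\d", s))  # crude: a number counts as news
        if has_platitude and not has_specific:
            flagged.append(s)
    return flagged

letter = ("We are committed to lasting impact. "
          "Last year we trained 1,200 midwives in three districts.")
print(pass_d_sedation(letter))
```

On the two-sentence sample, only the platitude sentence is flagged; the sentence with concrete numbers passes, which mirrors the "cut the sentence entirely" recommendation above.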
Scenario: Internal all-hands memo about a strategy pivot, audience is all employees
Trigger: CEO pastes a memo that mentions "shareholder value," "synergies across business units," and "our North Star metric." Audience: "All 400 employees — roles range from engineers to facilities staff. Most have not attended exec strategy meetings."
Process: (1) Listener baseline: broadly varied — assume the lowest-context reader does not know the North Star metric, does not know which business units exist, and does not know what "shareholder value" means for their day-to-day. (2) Pass A flags North Star, synergies. (3) Pass B flags the entire memo as strategy-level with no concrete behavior change. (4) Pass C flags the assumption that readers know why the pivot is happening. (5) Pass D flags severe common-sense sedation: three paragraphs of claims any reader would already agree with.
Output: Top 3 = (1) add a concrete "what changes for you on Monday" paragraph per role category, (2) replace North Star with the actual metric and its target number, (3) cut the two paragraphs of values-language that add no news. Handoff note: "Full rewrite — this is the canonical Made to Stick 'maximize shareholder value' failure at scale."
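Across all three scenarios, the Output step produces the same report shape: a Top 3 list of fixes plus a handoff note, written to curse-of-knowledge-report.md. A minimal sketch of assembling that file, with the markdown headings assumed rather than taken from the skill:

```python
# Sketch of assembling the report named in the Output steps. The heading
# names and layout are assumptions inferred from the scenarios above.
from pathlib import Path

def write_report(top3: list[str], handoff: str,
                 path: str = "curse-of-knowledge-report.md") -> str:
    lines = ["# Curse of Knowledge Report", "", "## Top 3 fixes", ""]
    lines += [f"{i}. {fix}" for i, fix in enumerate(top3, start=1)]
    lines += ["", "## Handoff note", "", handoff, ""]
    report = "\n".join(lines)
    Path(path).write_text(report, encoding="utf-8")
    return report

report = write_report(
    top3=[
        'Add a concrete "what changes for you on Monday" paragraph per role.',
        "Replace North Star with the actual metric and its target number.",
        "Cut the two paragraphs of values-language that add no news.",
    ],
    handoff="Full rewrite: the memo is strategy-level with no behavior change.",
)
print(report)
```

The handoff note field is what a downstream rewriting step would consume, matching the "Handoff note" lines in each scenario.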
clawhub install bookforge-curse-of-knowledge-detector