
Curse Of Knowledge Detector

by @quochungto

Diagnose a draft for the Curse of Knowledge — the expert blind spot that makes insiders write copy full of unexplained jargon, buried assumptions, and strategy-level abstractions.

Version: v1.0.0
When to Use
**Preconditions to verify before starting:**
- The draft exists as text the agent can read (paste, markdown file, or document).
- The audience is named and at least roughly profiled — without a target audience, "too expert" is undefined.
- The user wants a critique, not a full rewrite. If they want a rewrite, hand off to `message-clinic-runner` after producing the report.
**The Curse of Knowledge, restated for the agent:** Once someone knows something, they cannot imagine not knowing it. Experts "tap" a tune they can hear in their own head (Elizabeth Newton's 1990 Stanford study: tappers predicted listeners would guess 50% of tunes; listeners got 2.5% — a 20x gap). The same expertise that produced the Answer backfires during the Telling Others stage. This skill catches the tapping.
---
💡 Examples

Scenario: Product announcement for a developer tool, audience is non-technical buyers

Trigger: User pastes a 400-word announcement that opens "We're shipping v2 of our feature-flag rollout engine with staged canaries and blast-radius controls." Audience: "VP of Engineering buyers at mid-market SaaS companies — they manage engineers but haven't written code in five years."

Process: (1) Listener baseline notes "feature flag" = probably known, "canary", "blast radius" = probably not known, "staged rollout" = known in principle, "rollout engine" = ambiguous. (2) Pass A flags canaries, blast-radius, rollout engine. (3) Pass B flags the abstract noun "controls" with no concrete behavior. (4) Pass C flags the buried assumption that the reader already wanted v2. (5) Pass D flags the buried lead — the actual news ("our tool now prevents the top three rollout-caused outages buyers told us about") is in paragraph 4.

Output: curse-of-knowledge-report.md with Top 3 = (1) move the paragraph-4 news to sentence 1, (2) define canary on first use as "a small slice of users who see the change first", (3) replace "blast-radius controls" with "automatic rollback if error rate crosses a threshold you set." Handoff note: "Structural rework — lead repositioning plus three jargon substitutions."
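To make the report's shape concrete, here is a minimal sketch of the data an agent could collect while running the listener baseline and the four passes, populated from the product-announcement scenario above. This is an illustrative assumption, not the skill's published interface: the `Finding` and `Report` names, their fields, and the pass labels are hypothetical, and the only output the skill commits to is the markdown report itself.

```python
# Illustrative sketch only: the skill's real output is curse-of-knowledge-report.md;
# the class names, fields, and pass labels below are assumptions made for this example.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Finding:
    pass_id: str         # "A" jargon, "B" abstract nouns, "C" buried assumptions, "D" buried lead / sedation
    excerpt: str         # the offending phrase, quoted from the draft
    why_it_fails: str    # what the named audience cannot be assumed to know or care about
    suggested_fix: str   # a concrete substitution, definition, or repositioning

@dataclass
class Report:
    audience: str
    listener_baseline: Dict[str, str]                       # term -> "known" / "not known" / "ambiguous"
    findings: List[Finding] = field(default_factory=list)
    top_three: List[str] = field(default_factory=list)
    handoff_note: str = ""

# Populated from the developer-tool announcement scenario above.
report = Report(
    audience="VP of Engineering buyers at mid-market SaaS companies",
    listener_baseline={
        "feature flag": "known",
        "canary": "not known",
        "blast radius": "not known",
        "staged rollout": "known in principle",
        "rollout engine": "ambiguous",
    },
)
report.findings.append(Finding(
    pass_id="A",
    excerpt="staged canaries and blast-radius controls",
    why_it_fails="the audience manages engineers but has not shipped a canary in years",
    suggested_fix='define canary on first use: "a small slice of users who see the change first"',
))
report.top_three = [
    "Move the paragraph-4 news to sentence 1",
    'Define "canary" on first use',
    'Replace "blast-radius controls" with the concrete rollback behavior',
]
report.handoff_note = "Structural rework: lead repositioning plus three jargon substitutions."
```

Keeping the baseline, findings, Top 3, and handoff note as separate fields mirrors the order the passes run in, so a handoff to `message-clinic-runner` can carry structured findings rather than free text.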

Scenario: Nonprofit fundraising letter, audience is first-time donors

Trigger: User pastes a 600-word letter from a global-health nonprofit that leads with program architecture, regional coverage statistics, and the M&E framework. Audience: "First-time individual donors, $25–$100 range, no prior engagement with global health."

Process: (1) Listener baseline: knows what a nonprofit is, does not know M&E, catchment area, DALY, does not know the named regions, does not know the organization's history. (2) Pass A flags four acronyms. (3) Pass B flags "program architecture" and "regional coverage" as strategy-level. (4) Pass C flags the assumption that the reader already cares about sub-Saharan maternal health statistics. (5) Pass D flags the buried lead (there is no Rokia-style named individual) and flags "we are committed to lasting impact" as common-sense sedation.

Output: Top 3 = (1) open with a named beneficiary (the Mother Teresa / Rokia effect — identifiable victim beats statistics), (2) replace M&E, DALY, and catchment area with plain equivalents or cut them, (3) cut the "committed to lasting impact" sentence entirely. Handoff note: "Full rewrite from the core message — the current draft is architected around the organization's self-description, not the donor's decision."

Scenario: Internal all-hands memo about a strategy pivot, audience is all employees

Trigger: CEO pastes a memo that mentions "shareholder value," "synergies across business units," and "our North Star metric." Audience: "All 400 employees — roles range from engineers to facilities staff. Most have not attended exec strategy meetings."

Process: (1) Listener baseline: broadly varied — assume the lowest-context reader does not know North Star metric, does not know which business units exist, does not know what "shareholder value" means for their day-to-day. (2) Pass A flags North Star, synergies. (3) Pass B flags the entire memo as strategy-level with no concrete behavior change. (4) Pass C flags the assumption that readers know why the pivot is happening. (5) Pass D flags severe common-sense sedation across three paragraphs that any reader would already agree with.

Output: Top 3 = (1) add a concrete "what changes for you on Monday" paragraph per role category, (2) replace North Star with the actual metric and its target number, (3) cut the two paragraphs of values-language that add no news. Handoff note: "Full rewrite — this is the Made to Stick-canonical 'maximize shareholder value' failure at scale."


Install from the terminal:
clawhub install bookforge-curse-of-knowledge-detector

