
Incentive Scheme Designer

by @quochungto

Design and diagnose incentive contracts for situations where effort is unobservable (moral hazard). Use this skill when a user needs to motivate a contractor...

⚡ When to Use
The core challenge: **you cannot pay for effort because you cannot see it.** You can only pay for outcomes. But outcomes are imperfect proxies for effort. Tying pay too tightly to outcomes imposes risk on the agent (who may rationally demand a risk premium). Tying pay too loosely means the agent has little reason to work hard. Every incentive scheme is a tradeoff between incentive power and risk imposition.
This skill builds on the information-asymmetry-strategist framework. The difference: adverse selection is about who signs the contract (pre-contract hidden types). Moral hazard is about what they do after signing (post-contract hidden actions). The mechanisms for dealing with them overlap — both use participation constraints and incentive compatibility — but the design logic is different.
This skill does NOT apply to:
- Selecting among candidates of unknown quality (pre-contract adverse selection — use information-asymmetry-strategist)
- Symmetric-information situations where effort is directly verifiable
- Pure negotiation over price without ongoing effort (use the negotiation skill)
---
💡 Examples

Example 1: Real Estate Agent Commission Problem

Situation: A real estate agent earns a 6% commission on house sales. The seller is asking $500,000.

The alignment problem: On a $20,000 price increase (e.g., negotiating harder or waiting for a better offer), the agent gains only 6% x $20,000 = $1,200. But the agent's opportunity cost — the time spent on this house rather than moving to the next listing — is far higher. The agent's optimal strategy is to close quickly at a good-enough price, not maximize price for the seller.

Why 6% linear commission fails: The agent's stake in the marginal price improvement ($1,200) is far smaller than the seller's stake ($20,000 - $1,200 = $18,800 net). The incentives are structurally misaligned. A higher commission rate helps, but the agent would need nearly 100% commission to fully align interests.

Better scheme: Progressive commission that increases sharply above a reserve price (e.g., 6% up to $500K, then 30% on every dollar above $500K). This concentrates the agent's incentive exactly where the seller wants effort — on maximizing the price above the baseline.

Failure mode to avoid: A poorly set reserve price creates the same quota problem as any nonlinear scheme — if $500K is too high and the market won't support it, the agent disengages.
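The gap in marginal stakes is easy to check numerically. A minimal sketch (the 6% rate, $500K reserve, and 30% top tier are the figures from this example; the function names are mine):

```python
def linear_commission(price, rate=0.06):
    """Agent's pay under a flat percentage commission."""
    return rate * price

def progressive_commission(price, reserve=500_000, base_rate=0.06, top_rate=0.30):
    """Agent's pay: base_rate up to the reserve price, top_rate on every dollar above it."""
    if price <= reserve:
        return base_rate * price
    return base_rate * reserve + top_rate * (price - reserve)

# Agent's stake in pushing the price from $500K to $520K:
linear_gain = linear_commission(520_000) - linear_commission(500_000)                 # ~$1,200
progressive_gain = progressive_commission(520_000) - progressive_commission(500_000)  # ~$6,000
```

Under the progressive scheme the agent keeps 30 cents of every marginal dollar above the reserve instead of 6: five times the stake, concentrated in exactly the region the seller cares about.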


Example 2: Software Programmer Equity Sharing (Wizard 1.0)

Situation: You want to develop a chess game (Wizard 1.0). Value = $200,000 if successful. Success probability: p_H = 0.80 with high effort, p_L = 0.60 with routine effort. Market wage for routine effort: $50,000. You want high-quality effort (cost = $20,000 above market).

Step 1 — Participation: expected pay must cover the market wage plus the extra effort cost: $50,000 + $20,000 = $70,000

Step 2 — Minimum bonus (incentive compatibility): B >= $20,000 / (0.80 - 0.60) = $100,000

Step 3 — Base pay: base = $70,000 - (0.80 x $100,000) = -$10,000 (i.e., the programmer owes a $10,000 fine if the project fails)

Fine/equity scheme: Pay $90,000 on success, -$10,000 (fine) on failure. Average pay = $70,000. Your average profit = $160,000 - $70,000 = $90,000.
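The contract arithmetic above can be reproduced in a few lines (a sketch using this example's numbers; the variable names are my own):

```python
value = 200_000              # project value on success
p_high, p_low = 0.80, 0.60   # success probability under high vs. routine effort
wage = 50_000                # market wage for routine effort
effort_cost = 20_000         # extra cost to the programmer of high effort

# Participation: expected pay must cover wage plus effort cost
expected_pay = wage + effort_cost                  # 70,000

# Incentive compatibility: B * (p_high - p_low) >= effort_cost
bonus = effort_cost / (p_high - p_low)             # 100,000

# Base pay so expected compensation hits the participation target
base = expected_pay - p_high * bonus               # -10,000 (a fine on failure)

pay_on_success = base + bonus                      # 90,000
principal_profit = p_high * value - expected_pay   # 90,000
```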

Constrained equity scheme (if fine is unenforceable): Give programmer 50% equity (worth $100,000 on success, $0 on failure) in exchange for labor only. Average pay = $80,000. Your average profit = $80,000. You pay a $10,000 feasibility premium for moral hazard.

Risk premium issue: If the programmer is risk-averse, she values the $100,000 gamble at less than its $80,000 expected value. She needs additional compensation for bearing risk. The optimal solution is a compromise: less than full incentive, some fixed base to absorb risk — but this reduces incentive power and requires accepting some effort below the ideal.
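The risk premium can be made concrete with a certainty-equivalent calculation. A sketch, assuming CARA (exponential) utility (the utility form and the risk-aversion value are my own illustration, not from the example):

```python
import math

def certainty_equivalent(payoffs, probs, a):
    """Certainty equivalent of a gamble under CARA utility u(x) = -exp(-a*x)."""
    expected_utility = sum(p * -math.exp(-a * x) for p, x in zip(probs, payoffs))
    return -math.log(-expected_utility) / a

# The programmer's equity gamble: $100,000 with probability 0.8, $0 otherwise.
ce = certainty_equivalent([100_000, 0], [0.8, 0.2], a=1e-5)
risk_premium = 0.8 * 100_000 - ce   # expected value minus what the gamble is worth to her
```

With a = 1e-5 the certainty equivalent comes out a bit above $70,000, so the programmer would need roughly an extra $10,000 to willingly bear the risk; as a approaches 0 (risk neutrality) the premium vanishes.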


Example 3: Multi-Task Failure — Teaching vs. Research

Situation: A professor teaches and conducts research. University wants high effort on both.

Tasks: Teaching and research are complements in US research universities — better research informs better teaching; regular teaching keeps researchers grounded. They share time and attention but each makes the other more productive.

Correct incentive design: Use strong incentives for both: tenure and promotion contingent on both teaching evaluations and publication record. Bundle the tasks; apply strong incentives across the bundle.

Incorrect design (French model): Separate research into specialized institutes and teaching into pure teaching universities. This treats the tasks as substitutes and results in weaker incentives (research institutes have no teaching to cross-pollinate; teaching universities have no research to inform). The US model comparatively succeeds because the complement structure is respected.

Test: If you observe a professor excelling at research and neglecting teaching, the tasks may actually be substitutes for that individual (research crowds out teaching energy). The fix is equalization: weight both dimensions more equally in the incentive scheme rather than rewarding only research publications.
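The complements logic lends itself to a toy model (entirely my own illustration, not the source's formalism): an agent splits a fixed attention budget to maximize pay, while the institution's value depends on the product of the two efforts, so rewarding only one task collapses the joint value.

```python
import math

def chosen_teaching_effort(w_teach, w_research, grid=1001):
    """Agent splits one unit of attention to maximize
    pay = w_teach*sqrt(e_teach) + w_research*sqrt(1 - e_teach)."""
    return max((i / (grid - 1) for i in range(grid)),
               key=lambda e: w_teach * math.sqrt(e) + w_research * math.sqrt(1 - e))

def joint_value(e_teach):
    """Complements: institutional value is driven by the product of the two efforts."""
    return math.sqrt(e_teach * (1 - e_teach))

# Research-only incentives: teaching effort collapses, and so does joint value.
e_skewed = chosen_teaching_effort(0.0, 1.0)    # 0.0 -> joint_value(0.0) == 0.0
# Balanced incentives across the bundle: attention splits evenly, value is maximized.
e_balanced = chosen_teaching_effort(1.0, 1.0)  # 0.5
```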


View on ClawHub

```shell
clawhub install bookforge-incentive-scheme-designer
```
