How We Work

REFERENCE | DERIVED | Updated 2026-04-08 | Owner: Leadership

Frameworks, vocabulary, and decision-making culture at Dapper Labs.

"Dapper Labs is a venture studio shaping the future at the intersection of culture and emerging technology." -- Roham Gharegozlou, All-Hands, April 9, 2026

"Every person here is a founder or founder-in-training." -- Roham Gharegozlou, All-Hands, April 9, 2026

"We make bets, we fund them, we gate them with evidence, and then based on that we stop them or we scale them." -- Roham Gharegozlou, All-Hands, April 9, 2026


Core Frameworks

The Greenlight Framework

The canonical decision-making framework for product bets. Four gates, applied to every initiative.

Step | Gate | Rule
01 | Hypothesis | "What we believe, why we believe it, and what changes if we're right."
02 | Smallest Test | "Prove or disprove fast. No test longer than two weeks without a gate."
03 | Measure | "Did it work? Organic demand, retention, unit economics -- by the numbers."
04 | Kill or Scale | "No sunk-cost sentimentality. Evidence says go or stop."
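The four gates can be read as a sequential check. Here is a minimal sketch in Python; the `Bet` record, field names, and the example bet are all illustrative, not an internal API:

```python
from dataclasses import dataclass

@dataclass
class Bet:
    """Illustrative record for a bet moving through the Greenlight gates."""
    hypothesis: str        # what we believe, why, and what changes if we're right
    test_weeks: int        # duration of the smallest test
    measured_signal: bool  # did organic demand / retention / unit economics show up?

def greenlight(bet: Bet) -> str:
    """Walk a bet through the four gates in order; return the decision."""
    if not bet.hypothesis:
        return "stop: no hypothesis"           # Gate 01
    if bet.test_weeks > 2:
        return "stop: test exceeds two weeks"  # Gate 02
    if not bet.measured_signal:
        return "kill"                          # Gate 04: no sunk-cost sentimentality
    return "scale"

print(greenlight(Bet("Positive-EV packs lift W0 conversion", 2, True)))  # scale
```

The point of the sketch is the ordering: a bet never reaches Kill or Scale without first passing a hypothesis and a time-boxed test.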

Art of the Bet

Product development is hypothesis-driven. Every feature is a bet with a measurable outcome.

  • Before building: Define the hypothesis, success criteria, and kill criteria
  • While building: Track the experiment, not just the feature
  • After shipping: Evaluate against the criteria, not against feelings

"Stop framing new features as hope. Start framing them as experiments with measurable outcomes." -- CEO directive, April 2026

MSCW (MoSCoW Scoping)

Hold dates. Flex scope. MSCW is a per-release scoping tool, not a prioritization framework.

Category | Meaning | Rule
Must | Ship breaks without it | Non-negotiable for the release
Should | Expected and important | Cut if timeline demands
Could | Nice to have | Cut freely
Won't | Explicitly out of scope | Prevents scope creep

If the date slips, you've misjudged the Musts — not the Coulds.
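"Hold dates, flex scope" amounts to cutting from the bottom of the stack when the timeline demands it. A sketch, with made-up feature names; Musts are deliberately never in the cut order:

```python
# Cut order when the date is at risk: Coulds first, then Shoulds.
# Musts are never cut -- if they don't fit, the Musts were misjudged.
CUT_ORDER = ["Could", "Should"]

def flex_scope(features: dict[str, str], over_budget: int) -> dict[str, str]:
    """Drop features (lowest category first) until `over_budget` items are cut."""
    kept = dict(features)
    for category in CUT_ORDER:
        for name, cat in list(kept.items()):
            if over_budget == 0:
                return kept
            if cat == category:
                del kept[name]
                over_budget -= 1
    return kept  # whatever survives is Musts plus uncut Shoulds

release = {"pack checkout": "Must", "new badge art": "Could", "drop reminders": "Should"}
print(flex_scope(release, over_budget=1))  # cuts "new badge art" first
```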

ICE Scoring

How we prioritize across competing options:

Factor | What It Measures
Impact | What changes for a user if we ship this?
Confidence | How sure are we about the impact?
Effort | How long does it take?

Score each 1-10. Multiply. Highest ICE wins. The key discipline: Impact is measured by what changed for a user, not by engineering effort or internal complexity.
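A minimal sketch of the scoring. One assumption worth flagging: since the rule is "multiply, highest wins," Effort here must be scored inversely (less work earns a higher number), otherwise long projects would win by default. Option names and numbers are invented for illustration:

```python
def ice(impact: int, confidence: int, effort: int) -> int:
    """ICE score: each factor 1-10, multiplied. Effort is scored so that
    *less* work earns a *higher* number, keeping 'highest ICE wins' coherent."""
    for factor in (impact, confidence, effort):
        assert 1 <= factor <= 10, "each factor is scored 1-10"
    return impact * confidence * effort

options = {
    "streamline pack checkout": ice(impact=8, confidence=7, effort=6),   # 336
    "internal dashboard polish": ice(impact=2, confidence=9, effort=8),  # 144
}
winner = max(options, key=options.get)
print(winner)  # streamline pack checkout
```

Note how the second option loses despite being easier and more certain: Impact dominates, which is exactly the "what changed for a user" discipline.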

Bold Beats

Time-limited experiments to test hypotheses before committing.

  • Duration: 2-4 weeks
  • Budget: Small (test the concept, not the scale)
  • Kill criteria: Defined before the experiment starts
  • Output: Go/No-Go decision with data

"If the Bold Beat doesn't produce the signal you expected, kill it. Don't rationalize one more iteration without a clear hypothesis for what changes."

Eat the Fog

Navigating uncertainty: make decisions with incomplete information rather than waiting for certainty.

The principle: waiting for perfect information is a decision to do nothing. In a shrinking market with seasonal urgency, the cost of inaction exceeds the cost of being wrong about a reversible decision. Irreversible decisions get more deliberation.


Decision-Making Culture

Experiment Culture

Every product initiative should be framed as an experiment:

  1. Hypothesis: "If we do X, metric Y will improve by Z%"
  2. Measurement: How will we know?
  3. Timeline: When do we evaluate?
  4. Kill criteria: What result tells us to stop?
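The four-part frame above can be captured as a simple experiment brief. A sketch; the class, field names, and example numbers are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentBrief:
    """One product bet, framed as an experiment rather than a hope."""
    hypothesis: str    # "If we do X, metric Y will improve by Z%"
    metric: str        # measurement: how will we know?
    evaluate_on: date  # timeline: when do we evaluate?
    kill_below: float  # kill criteria: the result that tells us to stop

    def verdict(self, observed: float) -> str:
        return "scale" if observed >= self.kill_below else "kill"

brief = ExperimentBrief(
    hypothesis="If drops are positive EV, W0 conversion improves by 2pp",
    metric="W0 conversion (signup to first purchase within 7 days)",
    evaluate_on=date(2026, 5, 1),
    kill_below=0.05,
)
print(brief.verdict(0.123))  # scale
```

The discipline is that all four fields are filled in before building starts, so the verdict is mechanical rather than sentimental.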

The "Mini AGI" Thesis

Every person defines an evaluation matrix for what "good" looks like in their domain, then points an AI agent at it for continuous improvement.

  • Guy Bennett: "Every new pack configuration gets an experiment brief: hypothesis, hit rate, margin target, cohort target, verdict."
  • The team: Each function defines what success looks like, measures it, and uses AI agents to iterate faster.

Impact Over Activity

The ONLY question that matters: What changed for a user?

Not "how many PRs were merged" or "how many meetings were held." What happened in the product that a collector can see, touch, or feel? This applies to engineering, product, marketing, and operations equally.


Operational Vocabulary

Term | Meaning
Loop Herding | The organizational intelligence model -- composable loops of sense -> decide -> act -> learn. Each person or agent runs a loop. Management is herding loops, not managing tasks.
Drop | A pack release event. The primary revenue driver.
Pack Math | The configuration of a pack: how many packs, distribution of rarities, pricing, expected revenue.
Tent-Pole | A major drop event (vs. a routine weekly release). Monthly tent-poles replaced weekly drops in 2025-26.
Positive EV | Expected value of pack contents exceeds pack price. The pricing principle that drove 4x W0 conversion.
XL / L / M / S | User spending segments. XL = whale ($2,648/week avg). See Whale Economics.
Pipeline | The funnel from new user -> whale. See Pipeline Health.
Set Challenge | A gamified collecting challenge requiring specific moments. Load-bearing retention infrastructure.
TST (Top Shot This) | An open-ended revenue feature with parallel chase mechanics.
Fast Break | A series of NBA Top Shot drops. Guy Bennett's domain.
NPRR | Net pack revenue retention. Measures revenue from returning pack buyers.
The Borderline 71 | 71 users who oscillate between L and XL weekly. Stabilizing them = ~$400K/month protected volume.
W0 Conversion | Signup to first purchase within 7 days. Currently at 12.3% (4x improvement from 2.9%).

Communication Norms

Slack

  • One-line parent message with relevant emoji
  • ALL detail goes in the thread
  • Never clutter the channel with long messages

Meetings

  • Come with a specific agenda and desired outcome
  • "What decision do we need to make?" not "Let's discuss X"
  • Action items with named owners and deadlines

Documentation

  • If it's not written down, it didn't happen
  • This wiki is the canonical reference for product knowledge
  • Playbooks document operational processes

What Makes Dapper Different

  1. We're the last one standing. Every competitor in digital collectibles has failed or pivoted. This creates both opportunity (no competition) and responsibility (if we fail, the category fails).

  2. The audience is sophisticated. Our collectors are financial gambler-collectors who track prices and manage portfolios. Design for intelligence, not simplicity.

  3. AI is infrastructure, not hype. AI agents handle morning briefs, campaign building, data analysis, and operational monitoring. The intelligence architecture (sensing -> context -> evaluation -> execution -> operations) is how the company actually runs. See AI at Dapper.

    "Your job is no longer to do the work. It's to build systems that do the work, and make them better." -- Roham Gharegozlou, All-Hands, April 9, 2026

  4. Data drives decisions. 22 validated findings from BigQuery analysis inform every product bet. See Data Science Insights. "Because the data says so" is a complete justification for a product decision.


Loop Levels (Canonical -- 5 Levels)

Every loop in the system operates at a level that graduates based on demonstrated competence:

Level | The Person Does... | The Loop Does... | Graduation Signal
L1 | You do the work | Assists | Can describe "good." Knows failure modes.
L2 | You and the loop collaborate | Drafts, you refine | Minor edits >80% of the time
L3 | Loop works, you review daily | Plans and executes, escalates exceptions | Intervenes on <20% of outputs
L4 | Loop runs, you review weekly | Full cycle autonomously | Decisions match human's >90%
L5 | Fully autonomous, you audit monthly | Self-improving | Sustained performance, no drift
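The quantitative graduation signals in the table can be read as a simple check. A sketch only: the function and cutoffs mirror the table's numbers, but the qualitative signals (can describe "good," no drift) are out of scope here, so the sketch starts at L1 and skips the L1-to-L2 judgment call:

```python
def suggested_level(minor_edit_rate: float, intervention_rate: float,
                    decision_match_rate: float) -> int:
    """Map the table's quantitative graduation signals to a loop level.
    All rates are fractions in [0, 1]."""
    level = 1
    if minor_edit_rate > 0.80:                     # L2 -> L3: minor edits >80% of the time
        level = 3
    if level >= 3 and intervention_rate < 0.20:    # L3 -> L4: intervenes on <20% of outputs
        level = 4
    if level >= 4 and decision_match_rate > 0.90:  # L4 -> L5: decisions match human's >90%
        level = 5
    return level

print(suggested_level(0.95, 0.10, 0.93))  # 5
```

Note that each threshold only applies once the previous one is met: a loop whose decisions match 99% of the time but that still needs frequent intervention stays at L3, not L5.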

"90 people + AI. Do the math. Specify. Execute. Verify. Humans own the first and last. AI does the middle." -- Roham Gharegozlou, All-Hands, April 9, 2026