Module 3, Lesson 4: Designing RICE
In this lesson, I'm picking up right where we left off. We've got our specs in the project. Claude Code is installed. Now we need to answer the question that every product person, every founder, and honestly most engineers avoid asking until it's too late: what should we actually build first?
Before We Start
Here's what I'd expect you to have in place before this lesson:
From previous lessons:
- Your /specs folder exists in the project with feature specs inside it — from Module 3, Lesson 2
- Claude Code is installed, authenticated, and you've confirmed it runs in your WebStorm terminal — from Module 3, Lesson 3
- You're comfortable opening the terminal in WebStorm and typing a prompt into Claude Code
Tools / setup you'll need:
- WebStorm open with your project loaded
- Claude Code running in the terminal (the claude command works)
- Your specs folder populated — this is what Claude will read to generate the matrix
By the end of this lesson, you'll:
- Understand what RICE scoring is and why it matters for deciding what to ship first
- Know how to write a prompt that gets Claude Code to generate a Feature Priority Matrix from your specs
- Have a .project/feature-priority-matrix.md file saved in your project — a real, data-driven view of your features ranked by priority
- Understand how to read that output and actually use it to guide what you build next
About This Lesson
Duration: ~12 minutes video + ~20 minutes practice
Skill Level: Intermediate
What You'll Build: A Feature Priority Matrix document saved in your project — RICE-scored features, hook analysis, dependency map, and a continuous deployment strategy.
This is still planning, not coding — and that's deliberate. The reason I put this lesson here is that most people jump to building whatever feature sounds most exciting, or whatever feels most technically interesting. RICE is how you stop doing that. It gives you a framework to make the decision objectively, based on your actual users and your actual constraints. The output of this lesson is one of the most valuable documents you'll have going into the build phase.
Watch the Lesson
What We're Covering
Here's what I'm walking you through in this lesson and why it matters:
- Why you can't build everything at once — this is obvious when you say it out loud, but most people don't act like it until they've wasted weeks building the wrong thing first
- What RICE actually is — Reach, Impact, Confidence, Effort — and how the scoring formula turns fuzzy opinions into a ranked list
- How to write the prompt — the specific prompt structure that tells Claude Code what to generate and where to save it
- How to read the output — what the matrix tells you and how to make decisions from it
- The hook analysis — looking at your highest-scoring features and asking whether they're painkillers or vitamins, and whether they create habit loops
- What comes next — the RICE matrix tells you what has priority; the next lesson is about turning that into a delivery timeline
1. Let's Set the Scene (~0:00)
So I'm working through a project, and the first thing I always ask myself — and I've noticed it's the same question that trips up most people building software — is: what makes sense to do first? What makes sense to do next?
We've got our specs folder. We've got features documented. But we cannot do everything at once. We need to define what makes sense to ship first, and that's what this lesson is about.
This brings us to a concept that's fairly well-established in product management: RICE. Reach, Impact, Confidence, and Effort. It sounds technical but it's actually pretty intuitive once you see it in action. And the good news is, we're not going to fill out a spreadsheet manually — we're going to let Claude Code do the heavy lifting while we make the decisions.
2. The Core Idea
2.1 What RICE Scoring Actually Is
RICE is a prioritisation framework. It stands for:
- Reach — How many of your users does this feature affect in a given period?
- Impact — How significantly does this feature move your key metric for the users it reaches?
- Confidence — How confident are you in your Reach and Impact estimates? (This keeps you honest.)
- Effort — How much work does it take to build? Typically measured in person-days.
The formula combines them into a single score:
RICE Score = (Reach × Impact × Confidence) ÷ Effort
The higher the score, the higher the priority. Features with a big reach, strong impact, and high confidence — but relatively low effort — float to the top. Features that are technically interesting but touch only a few users and require enormous effort sink to the bottom.
What this does is take the decision out of gut feeling and give you a number to point at. You can still override it — and sometimes you should — but at least you're overriding it with full awareness.
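To make the arithmetic concrete, here's a minimal sketch of the formula in Python. The feature names and numbers are invented for illustration — your own Reach, Impact, Confidence, and Effort values come from your scoring criteria:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach × Impact × Confidence) ÷ Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical features:
# (reach in users/quarter, impact on a 0.25–3 scale, confidence 0–1, effort in person-days)
features = {
    "Onboarding wizard": (2000, 2.0, 0.8, 10),
    "Dark mode":         (2000, 0.5, 0.9, 5),
    "Admin audit log":   (50,   1.0, 0.5, 20),
}

# Sort features from highest to lowest score — this is your priority list.
ranked = sorted(features, key=lambda f: rice_score(*features[f]), reverse=True)
for name in ranked:
    print(f"{name}: {rice_score(*features[name]):.1f}")
```

Notice how the audit log sinks to the bottom: it only reaches a handful of users and costs the most effort, so even a solid impact score can't save it.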
2.2 The Hook Analysis — Painkillers vs Vitamins
For your highest-scoring features, RICE alone isn't the whole story. There's a question worth asking for each one: is this feature a painkiller or a vitamin?
A painkiller solves an urgent, real problem. Users seek it out. They feel the absence immediately. A vitamin is nice to have — users appreciate it, but they won't churn without it.
Painkillers ship first. Every time.
The other thing we're looking for in this analysis is whether a feature creates a habit loop — does using this feature make users more likely to come back and use the product again? Features that create habit loops drive retention, which is ultimately the thing that keeps your product alive.
In the prompt I use for this lesson, I ask Claude to run a hook analysis on the high-scoring features to surface exactly these things. It's not fluff — it's the kind of thinking that separates products people use once from products people rely on.
2.3 How This Builds on Module 3, Lesson 2
In Module 3, Lesson 2, we wrote the feature specs and saved them to the /specs folder in the project. That folder is the input to everything we're doing in this lesson. The whole point of those specs existing in the codebase is that Claude Code can read them directly. We don't copy and paste anything manually. We just point Claude at the folder and let it work.
3. Let Me Show You How It Works (~1:00)
3.1 Setting Up a Scratch File (Optional but Useful)
Before I paste the prompt into Claude Code, I like to write it out first in a scratch file so I can read through it and make edits before running it. In WebStorm, I right-click in the file tree, create a new plain text file, and just call it something throwaway like scratch. No extension, just a temporary place to type.
This is optional — you can type directly into the Claude Code terminal. But I find it easier to compose longer prompts in a file first, then copy them across.
3.2 The Prompt
Here's the prompt I use to generate the Feature Priority Matrix. You have access to this — copy and paste it into Claude Code:
Based on the spec requirements in the specs folder, create a Feature Priority Matrix document.
Write to: .project/feature-priority-matrix.md
Include these sections:
## 1. Scoring Criteria Definition
Define what Reach, Impact, Confidence, and Effort mean for [PRODUCT NAME]:
- Reach: [Our target user base for Q1 is X users]
- Impact: Map to our key metric [e.g., time reduction, engagement rate]
- Effort: Dev time only in person-days (design tracked separately)
## 2. RICE Scoring Table
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
|---------|-------|--------|------------|--------|------------|
## 3. Hook Analysis
For each high-scoring feature, analyze:
- What makes this feature "sticky" for users?
- Is this a painkiller (solves urgent problem) or vitamin (nice to have)?
- Does this create a habit loop?
## 4. Dependency Map
Which features must be built first due to technical dependencies?
## 5. Continuous Deployment Strategy
How will we deliver incremental value every 2-week sprint?
Additionally, where relevant, do additional web search for context with the web search tool and ask me additional questions for clarity with the ask question tool.
3.3 Pasting and Running in Claude Code
Open your terminal in WebStorm (Cmd + E → Terminal on Mac, or Ctrl + E on Windows), and if Claude Code isn't already running, type claude to start it.
Paste the prompt and hit enter.
You'll likely see a message asking whether Claude can make changes to your files for this session. When that appears — say yes to all edits for the session. You don't want Claude asking permission before every file write. Once you've allowed it, it will do its thing.
Claude will:
- Read your specs folder
- Potentially ask you clarifying questions (this is what the ask question tool instruction does — it's built into Claude Code)
- Potentially do some web research for market context (the web search tool instruction handles this)
- Write the completed matrix to .project/feature-priority-matrix.md
When it's done, navigate to that file in your project folder. You should have a Markdown document waiting for you.
3.4 What's Going On Under the Hood
The reason this prompt works so well is the combination of things we're asking Claude to do:
- It reads your spec files directly from the project — it's not guessing, it's working from your actual documented features
- It applies a well-established framework (RICE) with real numbers, not just opinions
- It goes beyond the score to ask whether features are sticky and habit-forming — this is product thinking, not just engineering prioritisation
- It maps technical dependencies so you're not accidentally trying to build something that requires something else to exist first
- It thinks in sprints — because the point is to ship incrementally, not in one big release
4. Reading the Output
The matrix Claude produces will be specific to your product and your specs. Here's what to look for when you read it:
In the Scoring Criteria section — check that the definitions make sense for your context. RICE is flexible; the numbers only mean something relative to each other and relative to your user base. If Claude's definitions feel off, adjust them and re-run.
In the RICE Scoring Table — look at the top 3–5 features by score. These are your priorities. Then look at the bottom. If anything surprises you at either end, that's worth thinking about — sometimes the model catches something you'd been underweighting.
In the Hook Analysis — this is the one I find most valuable to actually read carefully. It'll tell you which of your high-scoring features are true painkillers, and which are nice-to-haves that scored well on reach but don't drive retention. The ones that create habit loops and have high switching costs — those are your moat. Build them early and build them well.
In the Dependency Map — treat this as a constraint layer on top of the RICE scores. A feature might score high, but if it technically depends on something else that needs to be built first, that dependency takes precedence. This is your build order.
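One way to think about "dependencies constrain order" is as a topological sort with the RICE score as the tiebreaker: at every step, build the highest-scoring feature whose dependencies are already done. Here's an illustrative sketch — the feature names, scores, and dependencies are hypothetical, not from any real matrix:

```python
# Hypothetical RICE scores and technical dependencies.
scores = {"Auth": 120, "Profiles": 300, "Sharing": 250, "Feed": 280}
depends_on = {"Profiles": ["Auth"], "Sharing": ["Profiles"], "Feed": ["Auth"]}

def build_order(scores, depends_on):
    """Return a build order that respects dependencies, preferring higher RICE scores."""
    remaining = dict.fromkeys(scores)
    done, order = set(), []
    while remaining:
        # Features whose dependencies are all built are candidates.
        ready = [f for f in remaining if all(d in done for d in depends_on.get(f, []))]
        # Among the ready features, build the highest-scoring one next.
        nxt = max(ready, key=scores.get)
        order.append(nxt)
        done.add(nxt)
        del remaining[nxt]
    return order

print(build_order(scores, depends_on))
```

In this toy example, Profiles scores more than twice as high as Auth — but Auth still ships first, because nothing else can exist without it. That's exactly the "constraint layer" idea.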
5. Watch Out For These
Treating the scores as gospel
Here's why this happens: the framework feels authoritative, so people stop thinking once they see the numbers.
The way I use it: RICE is a data point, not a decree. Read the scores, understand what's driving them, and then apply your own judgement. If something feels wrong, dig into the scoring criteria — the numbers are only as good as the inputs.
Running the prompt with the placeholders unfilled
Here's why this happens: if you leave the placeholders as [PRODUCT NAME] or don't define what your key metric is, Claude will make reasonable guesses — but they won't be grounded in your actual context.
The way I avoid it: spend five minutes filling in the criteria definitions before you run the prompt. Be specific about your user target and your success metric. Everything flows from there.
Claude asking permission before every file write
Here's why this happens: the first time Claude Code is about to write a file in a session, it checks in before proceeding.
The way I handle it: when you see the permission prompt, choose "yes to all for this session." Otherwise Claude will ask before every file change and it gets tedious fast.
6. Practice
Exercise 1: Generate Your Matrix
What to do: Copy the prompt from Section 3.2, fill in your product name, user target, and key metric, then run it in Claude Code. Navigate to .project/feature-priority-matrix.md and open the file.
A nudge if you're stuck: If Claude asks you clarifying questions before writing the file, answer them specifically. That's the ask question tool doing exactly what it's supposed to — use it, don't skip past it.
How you'll know it's working: You have a .project/feature-priority-matrix.md file with all five sections populated, and the RICE scoring table has a row for every feature from your specs folder.
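If you want a quick programmatic sanity check, a few lines of Python can confirm all five sections made it into the file. The section titles below assume Claude kept the headings from the prompt — adjust them if yours differ:

```python
from pathlib import Path

# Section titles we expect, based on the headings in the prompt from Section 3.2.
SECTIONS = [
    "Scoring Criteria Definition",
    "RICE Scoring Table",
    "Hook Analysis",
    "Dependency Map",
    "Continuous Deployment Strategy",
]

def missing_sections(path: str) -> list[str]:
    """Return the expected section titles not found in the matrix file."""
    text = Path(path).read_text()
    return [s for s in SECTIONS if s not in text]
```

Run `missing_sections(".project/feature-priority-matrix.md")` — an empty list means you're good; anything else tells you which section to ask Claude to fill in.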
Exercise 2: Read the Hook Analysis and Mark Your Painkillers
What to do: Open the Hook Analysis section of the document. For each feature listed, manually note in the file (or just in your own notes) whether you agree with Claude's classification. Put a ✅ next to the ones you're confident are genuine painkillers. Put a ❓ next to anything you're not sure about.
What this is practising: Critical thinking about your own product. The AI gives you the framework — but you know your users. Trust your instincts when something doesn't feel right.
7. You Should Be Able to Build This Now
Here's what you can do with what we just covered:
- Generate a data-driven feature priority ranking from any set of documented specs
- Identify which features are genuine painkillers vs nice-to-haves
- Map technical dependencies before you start building so you don't back yourself into a corner
- Walk into the next lesson with a clear, prioritised list — not a hunch
Check Yourself
.project/feature-priority-matrix.mdexists in my project- The RICE Scoring Table has a row for each feature in my specs folder
- I've read the Hook Analysis and I understand which features are my highest-value priorities
- I've checked the Dependency Map and I know which features need to come first for technical reasons
If Something's Not Working
The matrix file was never created
What's happening: Claude may have been waiting for your answers to clarifying questions and timed out, or the specs folder is empty / not where it expected.
How to fix it: Check that your /specs folder has files in it. Then re-run the prompt, and when Claude asks questions, answer them fully before it proceeds to write.
The .project folder doesn't exist
What's happening: Claude will usually create it, but if it ran into a permissions issue it may not have.
How to fix it: Create the .project folder manually in your project root (right-click in WebStorm → New Directory → .project), then re-run the prompt.
The Short Version
Here's what I want you to walk away with:
- RICE = Reach × Impact × Confidence ÷ Effort — a score that ranks your features by actual priority, not gut feel or enthusiasm
- Painkillers before vitamins — features that solve urgent problems ship first, every time. The hook analysis helps you tell them apart
- Dependencies constrain order — a feature might score high, but if it needs something else to exist first, that something else comes first
- What you can do now: You have a Feature Priority Matrix in your project — a document that tells you what to build first and why. The next lesson turns that into a timeline
Quick Reference
The RICE Formula
RICE Score = (Reach × Impact × Confidence) ÷ Effort
Output File Location
.project/feature-priority-matrix.md
How to Run the Prompt
- Open terminal in WebStorm (Cmd + E → Terminal on Mac)
- Run claude if not already running
- Paste the prompt (with placeholders filled in)
- Allow file writes when prompted
- Navigate to .project/feature-priority-matrix.md to review the output
Resources
Links & Docs
- RICE Scoring — Intercom's original writeup — this is where the framework was popularised; worth reading for the rationale behind each component
Tools Used
Questions I Get Asked
Q: Do I have to use RICE, or can I use a different framework?
RICE is what I use and what I've found works well — the formula is simple and the output is a ranked list you can actually act on. If you have a different framework you trust, use it. The point is to make the decision with data rather than vibes. What matters is that you have a defensible reason for your build order before you start coding.
Q: What if Claude's scoring feels off for my product?
That's normal — and it's actually useful information. If a feature is scoring high but your instinct says it shouldn't, dig into the criteria definitions. Usually the issue is that Reach or Impact was defined too broadly. Tighten up the definitions, re-run, and see if it changes. If it still feels off after that, trust your domain knowledge — but be honest with yourself about whether you're overriding the data for good reasons.
Q: Can I run this with Claude Desktop instead of Claude Code?
Technically yes — you could paste the same prompt into Claude Desktop with the JetBrains MCP active. But Claude Code reads the specs folder directly, which means it's working from your actual files rather than you copying and pasting them in. Use Claude Code here.
Q: How do I know I'm ready for the next lesson?
You've got your matrix file, you've read it, and you understand which features are top priority and in what order they need to be built. That's all you need.
💬 Stuck? Come Talk to Us
Build What Ships community → https://discord.gg/RFXRf9yg
Drop your question in the right channel. The community's active and I check in there too.
Glossary
RICE Scoring: A feature prioritisation framework. Stands for Reach, Impact, Confidence, and Effort. The score is calculated as (Reach × Impact × Confidence) ÷ Effort. Higher score = higher priority. (first introduced in Module 3, Lesson 4)
Reach: In RICE, the number of users a feature affects in a given time period. Usually expressed as a count or a percentage of your active user base. (first introduced in Module 3, Lesson 4)
Impact: In RICE, how much a feature moves your key metric for the users it reaches. Often scored on a scale (e.g., 0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive). (first introduced in Module 3, Lesson 4)
Confidence: In RICE, how certain you are about your Reach and Impact estimates. Expressed as a percentage (100% = fully confident, 50% = educated guess). Keeps you honest about assumptions. (first introduced in Module 3, Lesson 4)
Effort: In RICE, the total person-days required to design, build, and test a feature. Lower effort with high reach/impact/confidence = higher priority. (first introduced in Module 3, Lesson 4)
Painkiller (feature): A feature that solves an urgent, existing problem for users. Users actively seek it out and feel its absence immediately. Contrast with vitamins. (first introduced in Module 3, Lesson 4)
Vitamin (feature): A feature that's nice to have, appreciated when present, but users won't churn without it. Lower priority than painkillers. (first introduced in Module 3, Lesson 4)
Habit Loop: A product pattern where using a feature increases the likelihood that a user will return and use the product again. Features that create habit loops drive retention. (first introduced in Module 3, Lesson 4)
Feature Priority Matrix: The output document from this lesson — a structured analysis of all features ranked by RICE score, with hook analysis, dependency map, and deployment strategy. Saved as .project/feature-priority-matrix.md. (first introduced in Module 3, Lesson 4)