Module 3, Lesson 2: Writing the Spec with Claude Desktop
In this lesson, we're picking up right where Lesson 1 left off. You've got WebStorm running, Claude Desktop connected via MCP, and a project folder sitting there waiting. Now we're going to use all that documentation we built in Module 2 — the research, the UX/UI doc, the technical document, the user flow — and turn it into a structured set of feature specs written directly into the project.
Before We Start
Here's what I'd expect you to have in place before this lesson:
From previous lessons:
- WebStorm is installed, a project is created, and you know where it lives on your machine — we covered this in Module 3, Lesson 1
- MCP is enabled in WebStorm and Claude Desktop is connected — also from Module 3, Lesson 1
- You've run the connection test and it worked (Claude could list your project files using the JetBrains tool)
- Your Module 2 documentation should be complete and accessible in Obsidian: market research summary, PRD, technical document, UX/UI document, and your Figma user flow
Tools / setup you'll need:
- Claude Desktop (with JetBrains MCP connected)
- WebStorm open with your project loaded
- Your Obsidian vault with the documents from Module 2
- Your Figma user flow diagram from Module 2, Lesson 7
By the end of this lesson, you'll:
- Understand what a feature spec is and why it lives inside the codebase
- Have a working prompt that turns your existing documentation into structured specs
- Have a /specs folder in your project with actual spec files written by Claude — ready for development
About This Lesson
Duration: ~15 minutes video + ~20 minutes practice
Skill Level: Intermediate
What You'll Build: A /specs folder in your WebStorm project containing structured feature requirements, generated by Claude from your Module 2 documentation and Figma user flow.
This is the lesson where things start to feel real. We've spent Module 2 building a solid foundation — research, documentation, user flows. Now Claude's going to read all of that and turn it into something developers (or Claude itself, later on) can actually work from. By the end of this lesson, your codebase isn't empty anymore. It has a spec.
Watch the Lesson
What We're Covering
Here's what I'm walking you through in this lesson:
- Why specs live inside the codebase — not in Notion, not in Obsidian, but in the project folder itself
- The prompt that drives everything — the exact prompt I use to take Claude from documentation to structured specs
- How Claude reads your existing docs — market research, UX/UI doc, technical doc, and the Figma flow all feed in together
- Capability domains — how to think about organising your specs by what the product does, not by screen or feature name
- The spec quality checklist — the four criteria that make a spec actually useful vs just words in a file
1. Let's Set the Scene (~0:00)
At this point in the course, you've been introduced to several tools and we've been using them to draft things. Claude has been at the centre of all of it. We used Obsidian to capture documentation, Figma to map out the user journey. And if you've been keeping up, you should have all of that documentation in place.
Now we're moving into the bit where that documentation stops being research and starts being a build plan. This is the bridge — from "here's what we're making and why" to "here's what needs to exist, precisely, so we can build it."
The output of this lesson is a /specs folder sitting inside your project. Feature requirements, structured by capability domain, written directly into your codebase. From this point forward, every decision about what to build is grounded in a document — not a memory, not a Slack message, not a vague instruction to Claude. A spec.
2. The Core Idea
2.1 Why Specs Live Inside the Codebase
Here's something a lot of people get wrong: specs end up in Notion, or Google Docs, or Obsidian, detached from the actual project. The problem is that the moment you start building, those documents drift. Things change, and nobody remembers to update the doc.
I do it differently. From day one, the specs live inside the project folder — in a /specs directory. They're committed alongside the code. When Claude is working on a feature, it can read the spec directly from the same place as the code. There's no context-switching, no hunting for the "real" version of the requirements.
This is exactly what the JetBrains MCP connection enables. Claude can reach into your project folder and write those spec files directly. You're not copying and pasting — Claude is writing them to the right place, in the right structure, from the start.
2.2 Capability Domains vs Features
When you're writing specs, there's a temptation to organise them by screen or by feature name. "The login screen." "The dashboard." But I find that breaks down quickly because a lot of capabilities cut across multiple screens.
Instead, I organise by capability domain — the things the product needs to do. For a product like Nurturo, that might be things like: User Authentication, Onboarding, Content Delivery, Progress Tracking, Notifications. Each domain gets its own spec file. Within each spec, you cover the happy path (what happens when everything works) and the error cases (what happens when it doesn't).
This makes the specs reusable. If I'm asking Claude to build the authentication flow, I point it at the Authentication spec. It doesn't need to wade through everything else.
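To make this concrete, here's a rough sketch of what a single domain spec might look like. The domain, scenario names, and wording are illustrative only — it's not a required template:

```markdown
# Spec: User Authentication

## Happy Path
### Scenario: Successful registration
- Input: valid email address and a password meeting the stated rules
- Output: account is created and the user lands on the onboarding flow

## Error Cases
### Scenario: Duplicate account
- Input: email address that already has an account
- Output: registration is rejected with a message offering sign-in instead

## Cross-References
- Onboarding depends on a successfully authenticated user (see onboarding.md)
```

Notice there's nothing in there about databases, tokens, or function names. Just behaviour.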
2.3 What Makes a Good Spec
I use a simple four-point checklist for every spec file before I consider it done:
- Scenarios are testable — specific inputs lead to specific, verifiable outputs. "User sees confirmation email" is not testable. "User receives an email to the address provided within 60 seconds of registration" is.
- Error cases are documented — what happens when the happy path breaks? Missing fields, network failures, duplicate accounts. If it's not in the spec, Claude won't handle it.
- No implementation details — the spec says what the product does, not how it does it. No database schemas, no function names. Just behaviour.
- Cross-references between related specs — if the onboarding flow depends on the authentication flow, say so. Claude (and future-you) will thank you.
3. Let Me Show You How It Works (~varies)
3.1 The Setup: What Claude Needs to Read
Before running the spec-generation prompt, I make sure Claude has access to the right documents. In my case, those are all in my Obsidian vault, and Claude can reach them via the MCP connection we set up in Module 2.
The documents I feed in are:
- Market Research — the full research output from Module 2, Lesson 1
- Summary — the distilled market analysis
- Technical Document — the technical spec generated in Module 2, Lesson 3
- UX/UI Document — the UX/UI documentation from Module 2, Lesson 4
- Figma User Flow — the user journey diagram from Module 2, Lesson 7
The combination of all of these gives Claude everything it needs: market context, technical constraints, UX intent, and a visual map of how users move through the product.
3.2 The Prompt
Here's the exact prompt I use in this lesson. This is what you'll run in Claude Desktop with the JetBrains MCP active:
In this folder, review:
- App - [Your App Name] - Market Research
- App - [Your App Name] - Summary
- App - [Your App Name] - Technical
- App - [Your App Name] - UX-UI
- The Figma user flow diagram we just created
When reviewed, using the JetBrains tool, write my specs and feature requirements for my
product based on the user flows we just created.
Structure the specs by capability domain:
- Cover both happy paths and error cases
- Put it in my specs folder
Before you start:
1. Confirm you can find my project
2. List the capability domains you've identified from the flows
3. Ask me about any business rules or constraints that aren't obvious from the flows
Spec Quality Checklist:
- [ ] Scenarios are testable (specific inputs → specific outputs)
- [ ] Error cases are documented with expected behavior
- [ ] No implementation details in requirements (focus on WHAT, not HOW)
- [ ] Cross-references between related specs where needed
Replace [Your App Name] with the actual name of your app as it appears in your Obsidian document titles. If your documents are named differently, update the prompt to match. Claude needs to find the actual files — it can't guess at naming.
3.3 What Happens When You Run It
The prompt is deliberately structured to make Claude do three things before it writes a single line:
Step 1 — Confirm it can find your project. This sounds obvious, but it's important. If Claude can't see your project via the JetBrains tool, nothing else works. Making it confirm upfront saves you from running a long process only to find out mid-way that it was working in the wrong place.
Step 2 — List the capability domains. This is a checkpoint for you. Before Claude writes anything, it tells you how it's planning to organise the specs. You can push back here. If it's identified domains that don't make sense for your product, or missed something important, this is where you correct it — not after it's written twenty spec files.
Step 3 — Ask about business rules. This is where Claude flags the gaps. Things that aren't obvious from the documentation — pricing logic, permission levels, specific business rules that only you know. Claude surfaces these as questions rather than making assumptions. Answer them before you let it proceed.
Only after those three steps does Claude start writing the actual spec files into your /specs folder.
3.4 The Output Structure
When Claude is done, your project folder should look something like this:
/[your-project]/
/specs/
authentication.md
onboarding.md
[domain-name].md
[domain-name].md
...
Each file covers one capability domain, with happy paths and error cases for every scenario within that domain. The filenames are lowercase, hyphenated, and descriptive enough that you know exactly what's inside without opening them.
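If you want a quick sanity check that the output matches this structure, here's a small Python sketch. The `specs` path and the lowercase-hyphenated naming rule are assumptions based on the conventions described above — adjust them if your project differs:

```python
import re
from pathlib import Path

# Spec filenames should be lowercase, hyphenated .md files,
# e.g. "progress-tracking.md" — this pattern enforces that.
NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*\.md$")

def check_specs_folder(project_root: str) -> list[str]:
    """Return a list of problems found in <project_root>/specs; empty means all good."""
    problems = []
    specs_dir = Path(project_root) / "specs"
    if not specs_dir.is_dir():
        return [f"No specs folder found at {specs_dir}"]
    md_files = sorted(specs_dir.glob("*.md"))
    if not md_files:
        problems.append("specs folder exists but contains no .md files")
    for f in md_files:
        if not NAME_PATTERN.match(f.name):
            problems.append(f"{f.name}: not lowercase-hyphenated")
        if f.stat().st_size == 0:
            problems.append(f"{f.name}: file is empty")
    return problems
```

Run `print(check_specs_folder("."))` from the project root; an empty list means the folder matches the conventions above.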
4. Try It Yourself
Running the Spec Prompt
Here's how to run this yourself:
- Open Claude Desktop with your JetBrains MCP connection active
- Make sure WebStorm is open and your project is loaded
- Paste the prompt from Section 3.2, updated with your app name
- Wait for Claude to complete Steps 1–3 before it starts writing
- Review the capability domains it proposes and correct anything that's off
- Answer any business rule questions it raises
- Let it proceed and write the spec files
What success looks like: A /specs folder in your project containing individual .md files for each capability domain. When you open one, it should read like a clear, testable description of what the product needs to do — not technical jargon, not vague intentions.
Now try this: Pick one spec file Claude generated and run it against the quality checklist in Section 2.3. Does each scenario have a specific, verifiable outcome? Are the error cases covered? If not, ask Claude to revise that specific spec file before moving on.
5. Watch Out For These
The specs come out vague and generic
Here's why this happens: if the source documents (UX/UI doc, user flow) are themselves vague, Claude has no concrete inputs to work from and will fill in the gaps with generalities.
The way I avoid it: before running this prompt, review your UX/UI document and user flow. Are the flows specific? Do they show specific screens and states? The more specific your inputs, the more testable your outputs.
If you've already hit this: ask Claude to revise each vague scenario — "rewrite this scenario with specific inputs and measurable outputs."
Claude can't find one of your documents
Here's why this happens: the document names in the prompt don't match exactly what's in your Obsidian vault.
The way I avoid it: check your Obsidian document titles before running the prompt and make sure the names match exactly.
If you've already hit this: ask Claude which document it couldn't find, check the actual filename in Obsidian, and update the prompt to match.
Implementation details creep into the specs
Here's why this happens: Claude has a tendency to suggest implementation approaches, especially when the technical document is detailed. It's trying to be helpful, but specs shouldn't dictate how something is built.
The way I avoid it: if you see anything in a spec that sounds like "the system will use a PostgreSQL table to store..." — that's implementation detail. Ask Claude to rewrite those sections focusing on behaviour only.
Claude skips the pre-flight steps and starts writing files
Here's why this happens: occasionally Claude will interpret the prompt as a single instruction and jump straight to generating files.
The way I avoid it: if it does this, stop it. Ask it to go back and confirm the project location and list the capability domains first. The pre-flight check isn't just housekeeping — it's how you catch problems before they're baked into twenty files.
6. Practice
Exercise 1: Run the Full Prompt
What to do: Run the spec prompt against your own project documentation. Let Claude complete all three pre-flight steps, review the proposed capability domains, answer its questions, and let it generate the spec files.
A nudge if you're stuck: If Claude asks you a business rule question and you're not sure of the answer, that's actually useful information — it means the requirement wasn't clear in your documentation. Take a moment to think it through before answering. The spec is only as good as the decisions you've made.
How you'll know it's working: Your WebStorm project has a /specs folder. Inside it, there are individual .md files for each capability domain. Each file contains scenarios with specific inputs and outputs.
Exercise 2: Audit One Spec File
What to do: Pick the capability domain that matters most to your product (probably authentication or onboarding — the first things a user encounters). Open that spec file and go through it line by line against the quality checklist in Section 2.3.
What this is practising: Critical reading of specs. This is the skill you'll use constantly in Module 4 — checking what Claude built against what the spec says it should do.
7. You Should Be Able to Build This Now
Here's what you can do with what we just covered:
- Take any set of product documentation (research, UX docs, user flows) and generate a structured spec folder using Claude
- Organise requirements by capability domain rather than by screen
- Use the pre-flight confirmation pattern in other Claude prompts — confirm → propose structure → ask questions → execute
Check Yourself
- I've run the spec prompt and Claude confirmed it could find my project
- I reviewed the proposed capability domains and they make sense for my product
- My /specs folder exists in the project and contains individual spec files
- I've run at least one spec file against the quality checklist and it passes
If Something's Not Working
The /specs folder doesn't show up in WebStorm
What's happening: WebStorm might need a moment to pick up new files added via MCP. Or Claude wrote the files to a slightly different path than expected.
How to fix it: Right-click the project root in the WebStorm sidebar and click "Reload from Disk". If the folder still isn't there, ask Claude "where did you save the spec files?" — it'll tell you the exact path.
The Short Version
Here's what I want you to walk away with:
- Specs live in the codebase, not in a separate doc tool — this keeps them in sync with what gets built and makes them accessible to Claude during development.
- Organise by capability domain — not by screen, not by feature name. What does the product need to do? Let that shape the structure.
- The pre-flight check matters — making Claude confirm, propose, and ask before it writes is how you catch problems early rather than unwinding twenty files later.
- Use the quality checklist — testable scenarios, documented error cases, no implementation details, cross-references where needed. If a spec doesn't pass, it's not done.
- What you can do now: You have a structured spec folder in your project. Module 4 coding starts here.
Quick Reference
The Spec Prompt (ready to customise)
In this folder, review:
- App - [Your App Name] - Market Research
- App - [Your App Name] - Summary
- App - [Your App Name] - Technical
- App - [Your App Name] - UX-UI
- The Figma user flow diagram we just created
When reviewed, using the JetBrains tool, write my specs and feature requirements for my
product based on the user flows we just created.
Structure the specs by capability domain:
- Cover both happy paths and error cases
- Put it in my specs folder
Before you start:
1. Confirm you can find my project
2. List the capability domains you've identified from the flows
3. Ask me about any business rules or constraints that aren't obvious from the flows
Spec Quality Checklist:
- [ ] Scenarios are testable (specific inputs → specific outputs)
- [ ] Error cases are documented with expected behavior
- [ ] No implementation details in requirements (focus on WHAT, not HOW)
- [ ] Cross-references between related specs where needed
Spec Quality Checklist
✓ Scenarios are testable — specific inputs → specific, verifiable outputs
✓ Error cases are documented with expected behaviour
✓ No implementation details — WHAT, not HOW
✓ Cross-references between related specs where needed
Expected Output Structure
/[your-project]/
/specs/
[capability-domain].md
[capability-domain].md
[capability-domain].md
...
Resources
Tools Used
- Claude Desktop
- WebStorm by JetBrains
- Obsidian — where your source documents live
Questions I Get Asked
Q: Can I run this prompt before I've finished the Figma user flow?
You can, but I wouldn't. The user flow is the most specific piece of the puzzle — it shows Claude exactly how users move through the product, what states they encounter, what decisions they make. Without it, the specs will be more generic. Finish the flow first, then come back to this.
Q: How many capability domains should I expect?
It depends on the product, but for most early-stage products I'd expect somewhere between five and ten. If Claude is proposing twenty-plus domains, the prompt might be generating specs at too granular a level — push back and ask it to consolidate.
Q: Do I need to review every spec file before we move to Module 4?
You don't need to read every word of every file, but you should at least open each one and check that it looks sensible. The goal isn't perfection — it's catching any obvious gaps or misunderstandings before you start building against them.
Q: How do I know I'm ready for the next lesson?
If your /specs folder exists, has files that make sense for your product, and at least your core domains pass the quality checklist — you're ready.
💬 Stuck? Come Talk to Us
Build What Ships community → https://discord.gg/RFXRf9yg
Drop your question in the right channel. The community's active and I check in there too.
Glossary
Feature Spec (Feature Requirement): A document that describes what a specific part of the product needs to do — specific inputs, expected outputs, and error cases — without specifying how it's built. Specs are the contract between product intent and development. (first introduced in Module 3, Lesson 2)
Capability Domain: A logical grouping of related product behaviours. Rather than organising specs by screen or feature name, capability domains group by what the product does — e.g. Authentication, Onboarding, Notifications. (first introduced in Module 3, Lesson 2)
Happy Path: The scenario where everything works as expected — the user does the right thing, the system responds correctly. Specs always cover the happy path first. (first introduced in Module 3, Lesson 2)
Error Case: What happens when the happy path breaks — missing input, network failure, duplicate data, unexpected user behaviour. A spec without documented error cases is incomplete. (first introduced in Module 3, Lesson 2)
Pre-flight Check: The pattern of asking Claude to confirm, propose, and ask questions before it executes a long task. Introduced in this lesson as part of the spec prompt — Claude confirms the project, lists the domains, asks about business rules, then proceeds. (first introduced in Module 3, Lesson 2)