---
description: Create detailed implementation plans for a bead issue
model: opus
---
# Beads Plan

You are tasked with creating detailed implementation plans for a bead issue. This skill integrates with the beads issue tracker and stores plans in the `thoughts/` directory.

## Initial Setup
When this command is invoked:
1. **Parse the input for a bead ID:**
   - If a bead ID is provided, use it
   - If no bead ID is provided, run `bd ready` and ask which bead to plan for

2. **Load bead context:**
   - Run `bd show {bead-id}`
   - Read the bead description for requirements
   - Check for existing research: `thoughts/beads-{bead-id}/research.md`
   - Note any dependencies or blockers

3. **Create the artifact directory** (see the combined sketch after this list):

   ```bash
   mkdir -p thoughts/beads-{bead-id}
   ```

4. **Check for existing research:**
   - If `thoughts/beads-{bead-id}/research.md` exists, read it fully
   - This research provides crucial context for planning

5. **Respond with:**

   ```
   Creating implementation plan for bead {bead-id}: {bead-title}

   {If research exists: "Found existing research at thoughts/beads-{bead-id}/research.md - incorporating findings."}

   Let me analyze the requirements and codebase to create a detailed plan.
   ```
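A combined sketch of steps 1-3 with a concrete bead ID (the ID is illustrative, borrowed from the example invocation at the end of this skill; all `bd` subcommands shown are the ones used elsewhere in this document):

```bash
# Step 1: list beads that are ready to be worked on (when no ID was given)
bd ready

# Step 2: load the chosen bead's description, dependencies, and notes
bd show nixos-configs-abc123

# Step 3: create the per-bead artifact directory for plan and research files
mkdir -p thoughts/beads-nixos-configs-abc123
```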
## Planning Process

### Step 1: Context Gathering
1. **Read all mentioned files FULLY:**
   - Bead description references
   - Existing research document
   - Any linked tickets or docs
   - Use the Read tool WITHOUT limit/offset

2. **Spawn initial research tasks:**
   - codebase-locator: Find all files related to the task
   - codebase-analyzer: Understand the current implementation
   - codebase-pattern-finder: Find similar features to model after
   - thoughts-locator: Find any existing plans or decisions

3. **Read all files identified by research:**
   - Read them FULLY into the main context
   - Cross-reference with the requirements
### Step 2: Present Understanding
Before writing the plan, confirm understanding:
```
Based on the bead and my research, I understand we need to [accurate summary].

I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]

Questions that my research couldn't answer:
- [Specific technical question requiring human judgment]
- [Business logic clarification]
```
Only ask questions you genuinely cannot answer through code investigation.
### Step 3: Research & Discovery
After getting clarifications:
1. **If the user corrects any misunderstanding:**
   - Spawn new research tasks to verify
   - Read specific files/directories mentioned
   - Only proceed once verified

2. **Present design options:**

   ```
   Based on my research:

   **Current State:**
   - [Key discovery about existing code]
   - [Pattern or convention to follow]

   **Design Options:**
   1. [Option A] - [pros/cons]
   2. [Option B] - [pros/cons]

   Which approach aligns best?
   ```
### Step 4: Plan Structure
Once aligned on approach:
```
Here's my proposed plan structure:

## Overview
[1-2 sentence summary]

## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]

Does this phasing make sense?
```
Get feedback on structure before writing details.
### Step 5: Write the Plan

Write to `thoughts/beads-{bead-id}/plan.md`:
````markdown
---
date: {ISO timestamp}
bead_id: {bead-id}
bead_title: "{bead title}"
author: claude
git_commit: {commit hash}
branch: {branch name}
repository: {repo name}
status: draft
---

# {Feature/Task Name} Implementation Plan

## Overview

{Brief description of what we're implementing and why}

## Current State Analysis

{What exists now, what's missing, key constraints}

### Key Discoveries:
- {Finding with file:line reference}
- {Pattern to follow}

## Desired End State

{Specification of desired end state and how to verify it}

## What We're NOT Doing

{Explicitly list out-of-scope items}

## Implementation Approach

{High-level strategy and reasoning}

## Phase 1: {Descriptive Name}

### Overview
{What this phase accomplishes}

### Changes Required:

#### 1. {Component/File Group}
**File**: `path/to/file.ext`
**Changes**: {Summary}

```{language}
// Specific code to add/modify
```

### Success Criteria:

#### Automated Verification:
- Tests pass: `make test`
- Linting passes: `make lint`
- Type checking passes: `make typecheck`

#### Manual Verification:
- Feature works as expected in UI
- Edge cases handled correctly

## Phase 2: {Descriptive Name}

{Similar structure...}

## Testing Strategy

**Unit Tests:**
- {What to test}
- {Key edge cases}

**Integration Tests:**
- {End-to-end scenarios}

**Manual Testing Steps:**
1. {Specific step}
2. {Another step}

## References

- Bead: {bead-id}
- Research: `thoughts/beads-{bead-id}/research.md`
- Similar implementation: {file:line}
````
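The frontmatter fields can be filled from standard commands; a minimal sketch (verify the output formats in your environment):

```bash
date -u +"%Y-%m-%dT%H:%M:%SZ"                  # date (ISO timestamp, UTC)
git rev-parse HEAD                             # git_commit
git branch --show-current                      # branch
basename "$(git rev-parse --show-toplevel)"    # repository
```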
### Step 6: Update the bead

```bash
bd update {bead-id} --notes="Plan created: thoughts/beads-{bead-id}/plan.md"
```
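To confirm the note was recorded, you can re-read the bead afterwards; a sketch using the illustrative bead ID (both subcommands are the ones used elsewhere in this skill):

```bash
bd update nixos-configs-abc123 --notes="Plan created: thoughts/beads-nixos-configs-abc123/plan.md"

# The plan link should now appear in the bead's details
bd show nixos-configs-abc123
```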
### Step 7: Create implementation bead (if appropriate)
If the planning bead is separate from implementation:
```bash
bd create --title="Implement: {feature name}" --type=task --priority=1 \
  --description="Implement the plan at thoughts/beads-{original-bead-id}/plan.md

See bead {original-bead-id} for planning context."

# Link as dependency
bd dep add {new-bead-id} {original-bead-id}
```
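A worked example with hypothetical values: "dark mode toggle" as the feature, `nixos-configs-abc123` as the planning bead, and `nixos-configs-def456` standing in for whatever ID `bd create` reports for the new bead:

```bash
bd create --title="Implement: dark mode toggle" --type=task --priority=1 \
  --description="Implement the plan at thoughts/beads-nixos-configs-abc123/plan.md

See bead nixos-configs-abc123 for planning context."

# Link the new implementation bead to the planning bead
bd dep add nixos-configs-def456 nixos-configs-abc123
```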
### Step 8: Present for Review

```
I've created the implementation plan at:
`thoughts/beads-{bead-id}/plan.md`

Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
## Important Guidelines

- **Be Skeptical**: Question vague requirements; identify potential issues early
- **Be Interactive**: Don't write the full plan in one shot; get buy-in at each step
- **Be Thorough**: Read all context files COMPLETELY; include specific file:line references
- **Be Practical**: Focus on incremental, testable changes
- **No Open Questions**: If you have unresolved questions, STOP and ask
## Success Criteria Guidelines

Always separate success criteria into two categories:

**Automated Verification** (run by agents):
- Commands: `make test`, `npm run lint`, etc.
- File existence checks
- Type checking

**Manual Verification** (requires human):
- UI/UX functionality
- Performance under real conditions
- Edge cases hard to automate
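For instance, the automated half can be wrapped in a small script that an agent runs unattended; a minimal sketch, assuming the repository actually exposes these `make` targets:

```bash
#!/usr/bin/env bash
# Run the automated success criteria in order; stop at the first failure
set -euo pipefail

make test
make lint
make typecheck

echo "All automated checks passed"
```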
## Example Invocation

```
User: /beads:plan nixos-configs-abc123

Assistant: Creating implementation plan for bead nixos-configs-abc123...
```