beads/examples/claude-code-skill/references/ISSUE_CREATION.md
spm1001 bdaf82026a Enhance Claude Code skill with real-world usage patterns (#116)
* Update skill installation path and document new features

Installation path changes:
- Update from ~/.claude/skills/bd to bd-issue-tracking
- Matches internal skill name for consistency with other skills

Documentation additions:
- Add 3 new reference files to documentation list
- Document compaction survival patterns (critical for Claude Code)
- Mention self-check checklists and quality guidelines

This prepares the README for upcoming skill content improvements.

* Add new reference files for enhanced skill guidance

New reference documentation:

1. ISSUE_CREATION.md - When to ask vs create issues
   - Decision criteria for knowledge work vs technical work
   - Issue quality guidelines and best practices
   - Design vs acceptance criteria guidance
   - Self-check questions for well-scoped issues

2. RESUMABILITY.md - Making issues resumable across sessions
   - Patterns for complex technical work with APIs
   - Working code examples and API response samples
   - When enhanced documentation matters vs simple descriptions
   - Critical for multi-session work and crash recovery

3. STATIC_DATA.md - Using bd for reference databases
   - Alternative use case beyond work tracking
   - Glossaries and terminology management
   - Dual format patterns (database + markdown)
   - When bd helps vs when simpler formats suffice

These additions emerged from real-world usage patterns and enable
Claude to make better decisions about issue structure and resumability.

* Enhance existing skill files with real-world usage patterns

SKILL.md major enhancements (~194 net lines added):
- Add "Test Yourself" decision framework with self-check questions
- Document compaction survival patterns (critical for Claude Code)
- Add Notes Quality Self-Check (future-me test, stranger test)
- Session Start Checklist with copy-paste templates
- Field Usage Reference table (when to use each bd field)
- Progress Checkpointing triggers and patterns
- Issue Creation Checklist with quality self-checks
- Enhanced session handoff protocols

WORKFLOWS.md enhancements (~114 lines added):
- Session handoff workflow with detailed checklists
- Collaborative handoff between Claude and user
- Compaction survival workflow
- Notes format guidelines (current state, not cumulative)
- Session start checklist expansion

BOUNDARIES.md updates (~49 lines removed):
- Streamlined content (moved detailed workflows to WORKFLOWS.md)
- Retained core decision criteria
- Improved examples and integration patterns

CLI_REFERENCE.md minor updates:
- Additional command examples
- Clarified flag usage

These improvements emerged from extensive real-world usage, particularly
around crash recovery, compaction events, and multi-session workflows.
The additions make the skill more practical for autonomous agent use.
2025-10-23 09:26:30 -07:00


Issue Creation Guidelines

Guidance on when and how to create bd issues for maximum effectiveness.

When to Ask First vs Create Directly

Ask the user before creating when:

  • Knowledge work with fuzzy boundaries
  • Task scope is unclear
  • Multiple valid approaches exist
  • User's intent needs clarification

Create directly when:

  • Clear bug discovered during implementation
  • Obvious follow-up work identified
  • Technical debt with clear scope
  • Dependency or blocker found
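
A minimal sketch of the "create directly" case, assuming a `bd create` command that takes a title and a `-d` description flag (both are illustrative; only `--design`, `--acceptance`, and `--notes` appear elsewhere in this skill, so check `bd --help` for the real shape):

```shell
# Clear bug found mid-implementation: file it immediately, no discussion needed.
# The command shape and -d flag are assumptions.
bd create "Fix: retry loop ignores HTTP 429 Retry-After header" \
  -d "Found while implementing bulk sync: the client retries 429s immediately,
hammering the API. Clear scope, no design questions, so created directly."
```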

Why ask first for knowledge work? Task boundaries in strategic/research work are often unclear until discussed, whereas technical implementation tasks are usually well-defined. Discussion helps structure the work properly before creating issues, preventing poorly scoped issues that need immediate revision.

Issue Quality

Use clear, specific titles and include sufficient context in descriptions to resume work later.

Field Usage

Use --design flag for:

  • Implementation approach decisions
  • Architecture notes
  • Trade-offs considered

Use --acceptance flag for:

  • Definition of done
  • Testing requirements
  • Success metrics
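
Both flags together, as a hedged sketch (the `bd create` invocation and `-d` flag are assumptions; `--design` and `--acceptance` are the flags described above):

```shell
bd create "Add markdown export for glossary entries" \
  -d "Glossary lives only in the bd database; users want a markdown copy for wikis." \
  --design "Single template pass over all entries. Chose a template over string
concatenation so the output format can change without touching export logic." \
  --acceptance "Exported file renders correctly in a markdown preview; every entry
appears exactly once; command exits non-zero on write errors."
```

Note the split: the design text could change during implementation; the acceptance text should survive a rewrite.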

Making Issues Resumable (Complex Technical Work)

For complex technical features spanning multiple sessions, enrich the notes field with implementation details.

Optional but valuable for technical work:

  • Working API query code (tested, with response structure)
  • Sample API responses showing actual data
  • Desired output format examples (show, don't describe)
  • Research context (why this approach, what was discovered)

Example pattern:

bd update issue-9 --notes "IMPLEMENTATION GUIDE:
WORKING CODE: service.about().get(fields='importFormats')
Returns: dict with 49 entries like {'text/markdown': [...]}
OUTPUT FORMAT: # Drive Import Formats (markdown with categorized list)
CONTEXT: text/markdown support added July 2024, not in static docs"

When to add: Multi-session technical features with APIs or specific formats. Skip for simple tasks.

For detailed patterns and examples, read: RESUMABILITY.md

Design vs Acceptance Criteria (Critical Distinction)

Common mistake: Putting implementation details in acceptance criteria. Here's the difference:

DESIGN field (HOW to build it):

  • "Use two-phase batchUpdate approach: insert text first, then apply formatting"
  • "Parse with regex to find * and _ markers"
  • "Use JWT tokens with 1-hour expiry"
  • Trade-offs: "Chose batchUpdate over streaming API for atomicity"

ACCEPTANCE CRITERIA field (WHAT success looks like):

  • "Bold and italic markdown formatting renders correctly in the Doc"
  • "Solution accepts markdown input and creates Doc with specified title"
  • "Returns doc_id and webViewLink to caller"
  • "User tokens persist across sessions and refresh automatically"

Why this matters:

  • Design can change during implementation (e.g., use a library instead of regex)
  • Acceptance criteria should remain stable across sessions
  • Criteria should be outcome-focused ("what must be true?") not step-focused ("do these steps")
  • Each criterion should be verifiable: you can definitively say yes or no

The pitfall

Writing criteria like "- [ ] Use batchUpdate approach" locks you into one implementation.

Better: "- [ ] Formatting is applied atomically (all at once or not at all)", which any implementation that meets the outcome can satisfy.

Test yourself

If you rewrote the solution using a different approach, would the acceptance criteria still apply? If not, they're design notes, not criteria.

Example of correct structure

Design field:

Two-phase Docs API approach:
1. Parse markdown to positions
2. Create doc + insert text in one call
3. Apply formatting in second call
Rationale: Atomic operations, easier to debug formatting separately

Acceptance criteria:

- [ ] Markdown formatting renders in Doc (bold, italic, headings)
- [ ] Lists preserve order and nesting
- [ ] Links are clickable
- [ ] Large documents (>50KB) process without timeout

Wrong (design masquerading as criteria):

- [ ] Use two-phase batchUpdate approach
- [ ] Apply formatting in second batchUpdate call

Quick Reference

Creating good issues:

  1. Title: Clear, specific, action-oriented
  2. Description: Problem statement, context, why it matters
  3. Design: Approach, architecture, trade-offs (can change)
  4. Acceptance: Outcomes, success criteria (should be stable)
  5. Notes: Implementation details, session handoffs (evolves over time)
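
The five fields above in one sketch, reusing a title from the common-mistakes list below (the `bd create` command shape and `-d` flag are assumptions; `bd update --notes` matches the example earlier in this document):

```shell
# 1-4 are set at creation time.
bd create "Fix: auth token expires before refresh" \
  -d "Sessions drop because refresh fires after expiry; users lose work." \
  --design "Refresh at 80% of token lifetime instead of a fixed timer. May change." \
  --acceptance "Auth tokens persist across sessions; no forced re-login within token lifetime."

# 5. Notes evolve as work progresses (the issue ID is illustrative).
bd update issue-42 --notes "CURRENT STATE: refresh timer rewritten; unit tests pending."
```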

Common mistakes:

  • Vague titles: "Fix bug" → "Fix: auth token expires before refresh"
  • Implementation in acceptance: "Use JWT" → "Auth tokens persist across sessions"
  • Missing context: "Update database" → "Update database: add user_last_login for session analytics"