---
description: How to spawn and coordinate research sub-agents
---
# Research Agents
Use parallel sub-agents for efficient codebase research.
## Available Agents
| Agent | Purpose |
|---|---|
| codebase-locator | Find WHERE files and components live |
| codebase-analyzer | Understand HOW specific code works |
| codebase-pattern-finder | Find examples of existing patterns |
| thoughts-locator | Discover relevant documents in thoughts/ |
## Spawning Protocol

1. **Decompose** - Break the research question into 3-5 specific questions
2. **Spawn parallel** - Use one Task call with multiple agents
3. **Be specific** - Include directories and file patterns in prompts
4. **Wait for all** - Do not synthesize until ALL agents complete
5. **Synthesize** - Combine findings into a coherent summary with file:line references
## Example
```
Task(codebase-locator, "Find all files related to authentication in src/")
Task(codebase-analyzer, "Explain how JWT tokens are validated in src/auth/")
Task(codebase-pattern-finder, "Find examples of middleware patterns in src/")
Task(thoughts-locator, "Find documents about auth design decisions in thoughts/")
```
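The same flow can be sketched as a small orchestration script. This is an illustration of the protocol's shape only, assuming a hypothetical `spawn_agent` coroutine standing in for whatever actually dispatches a sub-agent; it is not the real Task API. The point is that all agents are spawned in one step, every result is awaited before synthesis begins, and the findings are then combined.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str
    summary: str  # should include file:line references

# Hypothetical dispatcher: stands in for a single Task call carrying
# multiple agent prompts. Here it only simulates a sub-agent running.
async def spawn_agent(agent: str, prompt: str) -> Finding:
    await asyncio.sleep(0)  # placeholder for the agent doing its research
    return Finding(agent=agent, summary=f"[{agent}] results for: {prompt}")

async def research(prompt_by_agent: dict[str, str]) -> str:
    # Spawn parallel: one gather, all agents launched together
    findings = await asyncio.gather(
        *(spawn_agent(agent, prompt) for agent, prompt in prompt_by_agent.items())
    )
    # Wait for all: gather returns only once every agent has finished
    # Synthesize: combine findings into a single summary
    return "\n".join(f.summary for f in findings)

summary = asyncio.run(research({
    "codebase-locator": "Find all files related to authentication in src/",
    "codebase-analyzer": "Explain how JWT tokens are validated in src/auth/",
    "codebase-pattern-finder": "Find examples of middleware patterns in src/",
    "thoughts-locator": "Find documents about auth design decisions in thoughts/",
}))
print(summary)
```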
## Key Principles

- **Parallel when different** - Run agents in parallel when searching for different things
- **WHAT not HOW** - Each agent knows its job; tell it what you need, not how to search
- **Document, don't evaluate** - Agents should describe what exists, not critique it
- **Specific directories** - Always scope searches to relevant directories
- **File references** - Include specific file:line references in synthesis
## Agent Prompts
When spawning agents, include:
- The specific question or goal
- Relevant directories to search
- A reminder to document (not evaluate) what they find
- A request for file:line references in the findings
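A prompt covering all four elements might look like the following; the agent, directories, and question are illustrative, not taken from a real project:

```
codebase-analyzer: Explain how session refresh is handled.
- Search src/auth/ and src/middleware/ only.
- Document what the code does today; do not critique it or suggest changes.
- Cite every claim with file:line references.
```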