Reorganize project structure: move Go files to internal/beads, docs to docs/

Amp-Thread-ID: https://ampcode.com/threads/T-7a71671d-dd5c-4c7c-b557-fa427fceb04f
Co-authored-by: Amp <amp@ampcode.com>
Steve Yegge
2025-11-05 21:04:00 -08:00
parent a1c3494c43
commit 584c266684
39 changed files with 19 additions and 18 deletions

docs/ADVANCED.md Normal file

@@ -0,0 +1,372 @@
# Advanced bd Features
This guide covers advanced features for power users and specific use cases.
## Table of Contents
- [Renaming Prefix](#renaming-prefix)
- [Duplicate Detection](#duplicate-detection)
- [Merging Duplicate Issues](#merging-duplicate-issues)
- [Git Worktrees](#git-worktrees)
- [Handling Git Merge Conflicts](#handling-git-merge-conflicts)
- [Custom Git Hooks](#custom-git-hooks)
- [Extensible Database](#extensible-database)
- [Architecture: Daemon vs MCP vs Beads](#architecture-daemon-vs-mcp-vs-beads)
## Renaming Prefix
Change the issue prefix for all issues in your database. This is useful if your prefix is too long or you want to standardize naming.
```bash
# Preview changes without applying
bd rename-prefix kw- --dry-run
# Rename from current prefix to new prefix
bd rename-prefix kw-
# JSON output
bd rename-prefix kw- --json
```
The rename operation:
- Updates all issue IDs (e.g., `knowledge-work-1` → `kw-1`)
- Updates all text references in titles, descriptions, design notes, etc.
- Updates dependencies and labels
- Updates the counter table and config
**Prefix validation rules:**
- Max length: 8 characters
- Allowed characters: lowercase letters, numbers, hyphens
- Must start with a letter
- Must end with a hyphen (one is appended automatically if missing)
- Cannot be empty or just a hyphen
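A minimal sketch of these rules in Go, for illustration only. `validatePrefix` is a hypothetical name, not bd's actual implementation, and whether the 8-character limit counts the trailing hyphen is an assumption here:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// validatePrefix mirrors the documented rules; bd's real check may differ
// in details (e.g., whether the max length includes the trailing hyphen).
func validatePrefix(p string) error {
	if p == "" || p == "-" {
		return fmt.Errorf("prefix cannot be empty or just a hyphen")
	}
	if !strings.HasSuffix(p, "-") {
		p += "-" // a trailing hyphen is appended if missing
	}
	if len(p) > 8 {
		return fmt.Errorf("prefix too long (max 8 characters)")
	}
	if !regexp.MustCompile(`^[a-z][a-z0-9-]*-$`).MatchString(p) {
		return fmt.Errorf("prefix must start with a letter and use only lowercase letters, numbers, hyphens")
	}
	return nil
}

func main() {
	for _, p := range []string{"kw-", "kw", "9x-", "toolongprefix-"} {
		fmt.Println(p, validatePrefix(p))
	}
}
```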
Example workflow:
```bash
# You have issues like knowledge-work-1, knowledge-work-2, etc.
bd list # Shows knowledge-work-* issues
# Preview the rename
bd rename-prefix kw- --dry-run
# Apply the rename
bd rename-prefix kw-
# Now you have kw-1, kw-2, etc.
bd list # Shows kw-* issues
```
## Duplicate Detection
Find issues with identical content using automated duplicate detection:
```bash
# Find all content duplicates in the database
bd duplicates
# Show duplicates in JSON format
bd duplicates --json
# Automatically merge all duplicates
bd duplicates --auto-merge
# Preview what would be merged
bd duplicates --dry-run
# Detect duplicates during import
bd import -i issues.jsonl --resolve-collisions --dedupe-after
```
**How it works:**
- Groups issues by content hash (title, description, design, acceptance criteria)
- Only groups issues with matching status (open with open, closed with closed)
- Chooses merge target by reference count (most referenced) or smallest ID
- Reports duplicate groups with suggested merge commands
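The grouping step can be sketched as hashing the content fields and keying by status, so open issues only group with open issues. This is an illustrative model with simplified types, not bd's actual code:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Issue holds just the fields the duplicate scan hashes (a simplified
// stand-in for bd's real issue type).
type Issue struct {
	ID, Status, Title, Description, Design, AcceptanceCriteria string
}

// contentKey combines status with a content hash, so only issues with
// matching status AND matching content land in the same group.
func contentKey(i Issue) string {
	h := sha256.Sum256([]byte(i.Title + "\x00" + i.Description + "\x00" + i.Design + "\x00" + i.AcceptanceCriteria))
	return i.Status + ":" + fmt.Sprintf("%x", h[:8])
}

// findDuplicates returns only groups containing two or more issues.
func findDuplicates(issues []Issue) map[string][]string {
	groups := map[string][]string{}
	for _, i := range issues {
		k := contentKey(i)
		groups[k] = append(groups[k], i.ID)
	}
	for k, ids := range groups {
		if len(ids) < 2 {
			delete(groups, k) // singletons are not duplicates
		}
	}
	return groups
}

func main() {
	dups := findDuplicates([]Issue{
		{ID: "bd-10", Status: "open", Title: "Fix authentication bug"},
		{ID: "bd-42", Status: "open", Title: "Fix authentication bug"},
		{ID: "bd-7", Status: "closed", Title: "Fix authentication bug"},
	})
	fmt.Println(len(dups)) // bd-7 is closed, so only bd-10/bd-42 form a group
}
```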
**Example output:**
```
🔍 Found 3 duplicate group(s):
━━ Group 1: Fix authentication bug
→ bd-10 (open, P1, 5 references)
bd-42 (open, P1, 0 references)
Suggested: bd merge bd-42 --into bd-10
💡 Run with --auto-merge to execute all suggested merges
```
**AI Agent Workflow:**
1. **Periodic scans**: Run `bd duplicates` to check for duplicates
2. **During import**: Use `--dedupe-after` to detect duplicates after collision resolution
3. **Auto-merge**: Use `--auto-merge` to automatically consolidate duplicates
4. **Manual review**: Use `--dry-run` to preview merges before executing
## Merging Duplicate Issues
Consolidate duplicate issues into a single issue while preserving dependencies and references:
```bash
# Merge bd-42 and bd-43 into bd-41
bd merge bd-42 bd-43 --into bd-41
# Merge multiple duplicates at once
bd merge bd-10 bd-11 bd-12 --into bd-10
# Preview merge without making changes
bd merge bd-42 bd-43 --into bd-41 --dry-run
# JSON output
bd merge bd-42 bd-43 --into bd-41 --json
```
**What the merge command does:**
1. **Validates** all issues exist and prevents self-merge
2. **Closes** source issues with reason `Merged into bd-X`
3. **Migrates** all dependencies from source issues to target
4. **Updates** text references across all issue descriptions, notes, design, and acceptance criteria
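The dependency-migration step can be pictured as rewiring graph edges: anything pointing at a source issue is redirected to the target, with self-edges and duplicates dropped. A toy adjacency-map sketch (not bd's real schema):

```go
package main

import "fmt"

// redirectDeps rewires every dependency edge touching a source issue so it
// touches the merge target instead. Toy model: deps maps an issue ID to the
// IDs it depends on.
func redirectDeps(deps map[string][]string, sources map[string]bool, target string) map[string][]string {
	out := map[string][]string{}
	seen := map[string]map[string]bool{}
	add := func(from, to string) {
		if from == to {
			return // a source depending on another source would self-loop
		}
		if seen[from] == nil {
			seen[from] = map[string]bool{}
		}
		if !seen[from][to] {
			seen[from][to] = true
			out[from] = append(out[from], to)
		}
	}
	for from, tos := range deps {
		if sources[from] {
			from = target // the source's outgoing deps move to the target
		}
		for _, to := range tos {
			if sources[to] {
				to = target // inbound references are redirected
			}
			add(from, to)
		}
	}
	return out
}

func main() {
	deps := map[string][]string{
		"bd-1":  {"bd-42"}, // bd-1 depended on the duplicate
		"bd-42": {"bd-9"},  // the duplicate's own dependency
	}
	fmt.Println(redirectDeps(deps, map[string]bool{"bd-42": true}, "bd-41"))
}
```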
**Example workflow:**
```bash
# You discover bd-42 and bd-43 are duplicates of bd-41
bd show bd-41 bd-42 bd-43
# Preview the merge
bd merge bd-42 bd-43 --into bd-41 --dry-run
# Execute the merge
bd merge bd-42 bd-43 --into bd-41
# ✓ Merged 2 issue(s) into bd-41
# Verify the result
bd show bd-41 # Now has dependencies from bd-42 and bd-43
bd dep tree bd-41 # Shows unified dependency tree
```
**Important notes:**
- Source issues are permanently closed (status: `closed`)
- All dependencies pointing to source issues are redirected to target
- Text references like "see bd-42" are automatically rewritten to "see bd-41"
- Operation cannot be undone (but git history preserves the original state)
- Not yet supported in daemon mode (use the `--no-daemon` flag)
**AI Agent Workflow:**
When agents discover duplicate issues, they should:
1. Search for similar issues: `bd list --json | grep "similar text"`
2. Compare issue details: `bd show bd-41 bd-42 --json`
3. Merge duplicates: `bd merge bd-42 --into bd-41`
4. File a discovered-from issue if needed: `bd create "Found duplicates during bd-X" --deps discovered-from:bd-X`
## Git Worktrees
**⚠️ Important Limitation:** Daemon mode does not work correctly with `git worktree`.
**The Problem:**
Git worktrees share the same `.git` directory and thus share the same `.beads` database. The daemon doesn't know which branch each worktree has checked out, which can cause it to commit/push to the wrong branch.
**What you lose without daemon mode:**
- **Auto-sync** - No automatic commit/push of changes (use `bd sync` manually)
- **MCP server** - The beads-mcp server requires daemon mode for multi-repo support
- **Background watching** - No automatic detection of remote changes
**Solutions for Worktree Users:**
1. **Use `--no-daemon` flag** (recommended):
```bash
bd --no-daemon ready
bd --no-daemon create "Fix bug" -p 1
bd --no-daemon update bd-42 --status in_progress
```
2. **Disable daemon via environment variable** (for entire worktree session):
```bash
export BEADS_NO_DAEMON=1
bd ready # All commands use direct mode
```
3. **Disable auto-start** (less safe, still warns):
```bash
export BEADS_AUTO_START_DAEMON=false
```
**Automatic Detection:**
bd automatically detects when you're in a worktree and shows a prominent warning if daemon mode is active. The `--no-daemon` mode works correctly with worktrees since it operates directly on the database without shared state.
**Why It Matters:**
The daemon maintains its own view of the current working directory and git state. When multiple worktrees share the same `.beads` database, the daemon may commit changes intended for one branch to a different branch, leading to confusion and incorrect git history.
## Handling Git Merge Conflicts
**With hash-based IDs (v0.20.1+), ID collisions are eliminated.** Different issues get different hash IDs, so concurrent creation doesn't cause conflicts.
### Understanding Same-ID Scenarios
When you encounter the same ID during import, it's an **update operation**, not a collision:
- Hash IDs are content-based and remain stable across updates
- Same ID + different fields = normal update to existing issue
- bd automatically applies updates when importing
**Preview changes before importing:**
```bash
# After git merge or pull
bd import -i .beads/issues.jsonl --dry-run
# Output shows:
# Exact matches (idempotent): 15
# New issues: 5
# Updates: 3
#
# Issues to be updated:
# bd-a3f2: Fix authentication (changed: priority, status)
# bd-b8e1: Add feature (changed: description)
```
### Git Merge Conflicts
The conflicts you'll encounter are **git merge conflicts** in the JSONL file when the same issue was modified on both branches (different timestamps/fields). This is not an ID collision.
**Resolution:**
```bash
# After git merge creates conflict
git checkout --theirs .beads/issues.jsonl # Accept remote version
# OR
git checkout --ours .beads/issues.jsonl # Keep local version
# OR manually resolve in editor (keep line with newer updated_at)
# Import the resolved JSONL
bd import -i .beads/issues.jsonl
# Commit the merge
git add .beads/issues.jsonl
git commit
```
### Advanced: Intelligent Merge Tools
For Git merge conflicts in `.beads/issues.jsonl`, consider using **[beads-merge](https://github.com/neongreen/mono/tree/main/beads-merge)** - a specialized merge tool by @neongreen that:
- Matches issues across conflicted JSONL files
- Merges fields intelligently (e.g., combines labels, picks newer timestamps)
- Resolves conflicts automatically where possible
- Leaves remaining conflicts for manual resolution
- Works as a Git/jujutsu merge driver
After using beads-merge to resolve the git conflict, just run `bd import` to update your database.
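Wiring a custom merge driver follows git's standard `.gitattributes` mechanism. The driver name and the `%O %A %B` invocation below are assumptions for illustration; check the beads-merge README for its real CLI:

```shell
# Hypothetical merge-driver wiring; verify flags against the beads-merge docs.
git config merge.beads.name "beads JSONL merge"
git config merge.beads.driver "beads-merge %O %A %B"   # ancestor, ours, theirs
echo '.beads/issues.jsonl merge=beads' >> .gitattributes
```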
## Custom Git Hooks
For immediate export (no 5-second wait) and guaranteed import after git operations, install the git hooks:
### Using the Installer
```bash
cd examples/git-hooks
./install.sh
```
### Manual Setup
Create `.git/hooks/pre-commit`:
```bash
#!/bin/bash
bd export -o .beads/issues.jsonl
git add .beads/issues.jsonl
```
Create `.git/hooks/post-merge`:
```bash
#!/bin/bash
bd import -i .beads/issues.jsonl
```
Create `.git/hooks/post-checkout`:
```bash
#!/bin/bash
bd import -i .beads/issues.jsonl
```
Make hooks executable:
```bash
chmod +x .git/hooks/pre-commit .git/hooks/post-merge .git/hooks/post-checkout
```
**Note:** Auto-sync is already enabled by default, so git hooks are optional. They're useful if you need immediate export or guaranteed import after git operations.
## Extensible Database
bd uses SQLite, which you can extend with your own tables and queries. This allows you to:
- Add custom metadata to issues
- Build integrations with other tools
- Implement custom workflows
- Create reports and analytics
**See [EXTENDING.md](EXTENDING.md) for complete documentation:**
- Database schema and structure
- Adding custom tables
- Joining with issue data
- Example integrations
- Best practices
**Example use case:**
```sql
-- Add time tracking table
CREATE TABLE time_entries (
id INTEGER PRIMARY KEY,
issue_id TEXT NOT NULL,
duration_minutes INTEGER NOT NULL,
recorded_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY(issue_id) REFERENCES issues(id)
);
-- Query total time per issue
SELECT i.id, i.title, SUM(t.duration_minutes) as total_minutes
FROM issues i
LEFT JOIN time_entries t ON i.id = t.issue_id
GROUP BY i.id;
```
## Architecture: Daemon vs MCP vs Beads
Understanding the role of each component:
### Beads (Core)
- **SQLite database** - The source of truth for all issues, dependencies, labels
- **Storage layer** - CRUD operations, dependency resolution, collision detection
- **Business logic** - Ready work calculation, merge operations, import/export
- **CLI commands** - Direct database access via `bd` command
### Local Daemon (Per-Project)
- **Lightweight RPC server** - Runs at `.beads/bd.sock` in each project
- **Auto-sync coordination** - Debounced export (5s), git integration, import detection
- **Process isolation** - Each project gets its own daemon for database safety
- **LSP model** - Similar to language servers, one daemon per workspace
- **No global daemon** - Removed in v0.16.0 to prevent cross-project pollution
- **Exclusive lock support** - External tools can prevent daemon interference (see [EXCLUSIVE_LOCK.md](EXCLUSIVE_LOCK.md))
### MCP Server (Optional)
- **Protocol adapter** - Translates MCP calls to daemon RPC or direct CLI
- **Workspace routing** - Finds correct `.beads/bd.sock` based on working directory
- **Stateless** - Doesn't cache or store any issue data itself
- **Editor integration** - Makes bd available to Claude, Cursor, and other MCP clients
- **Single instance** - One MCP server can route to multiple project daemons
**Key principle**: The daemon and MCP server are thin layers. All heavy lifting (dependency graphs, collision resolution, merge logic) happens in the core bd storage layer.
**Why per-project daemons?**
- Complete database isolation between projects
- Git worktree safety (each worktree can disable daemon independently)
- No risk of committing changes to wrong branch
- Simpler mental model - one project, one database, one daemon
- Follows LSP/language server architecture patterns
## Next Steps
- **[README.md](README.md)** - Core features and quick start
- **[TROUBLESHOOTING.md](TROUBLESHOOTING.md)** - Common issues and solutions
- **[FAQ.md](FAQ.md)** - Frequently asked questions
- **[CONFIG.md](CONFIG.md)** - Configuration system guide
- **[EXTENDING.md](EXTENDING.md)** - Database extension patterns

docs/ATTRIBUTION.md Normal file

@@ -0,0 +1,58 @@
# Attribution and Credits
## beads-merge 3-Way Merge Algorithm
The 3-way merge functionality in `internal/merge/` is based on **beads-merge** by **@neongreen**.
- **Original Repository**: https://github.com/neongreen/mono/tree/main/beads-merge
- **Author**: @neongreen (https://github.com/neongreen)
- **Integration Discussion**: https://github.com/neongreen/mono/issues/240
### What We Vendored
The core merge algorithm from beads-merge has been adapted and integrated into bd:
- Field-level 3-way merge logic
- Issue identity matching (id + created_at + created_by)
- Dependency and label merging with deduplication
- Timestamp handling (max wins)
- Deletion detection
- Conflict marker generation
### Changes Made
- Adapted to use bd's `internal/types.Issue` instead of custom types
- Integrated with bd's JSONL export/import system
- Added support for bd-specific fields (Design, AcceptanceCriteria, etc.)
- Exposed as `bd merge` CLI command and library API
### License
The original beads-merge code is licensed under the MIT License:
```
MIT License
Copyright (c) 2025 Emily (@neongreen)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Thank You
Special thanks to @neongreen for building beads-merge and graciously allowing us to integrate it into bd. This solves critical multi-workspace sync issues and makes beads much more robust for collaborative workflows.

docs/CLAUDE.md Normal file

@@ -0,0 +1,10 @@
<!-- bd integration note -->
**Note**: This project uses [bd (beads)](https://github.com/steveyegge/beads) for issue tracking. Use `bd` commands or the beads MCP server instead of markdown TODOs. See AGENTS.md for workflow details.
<!-- /bd integration note -->
# Instructions for Claude
This file has been moved to **AGENTS.md** to support all AI agents, not just Claude.
Please refer to [AGENTS.md](AGENTS.md) for complete instructions on working with the beads project.

docs/CONFIG.md Normal file

@@ -0,0 +1,351 @@
# Configuration System
bd has two complementary configuration systems:
1. **Tool-level configuration** (Viper): User preferences for tool behavior (flags, output format)
2. **Project-level configuration** (`bd config`): Integration data and project-specific settings
## Tool-Level Configuration (Viper)
### Overview
Tool preferences control how `bd` behaves globally or per-user. These are stored in config files or environment variables and managed by [Viper](https://github.com/spf13/viper).
**Configuration precedence** (highest to lowest):
1. Command-line flags (`--json`, `--no-daemon`, etc.)
2. Environment variables (`BD_JSON`, `BD_NO_DAEMON`, etc.)
3. Config file (`~/.config/bd/config.yaml` or `.beads/config.yaml`)
4. Defaults
### Config File Locations
Viper searches for `config.yaml` in these locations (in order):
1. `.beads/config.yaml` - Project-specific tool settings (version-controlled)
2. `~/.config/bd/config.yaml` - User-specific tool settings
3. `~/.beads/config.yaml` - Legacy user settings
### Supported Settings
Tool-level settings you can configure:
| Setting | Flag | Environment Variable | Default | Description |
|---------|------|---------------------|---------|-------------|
| `json` | `--json` | `BD_JSON` | `false` | Output in JSON format |
| `no-daemon` | `--no-daemon` | `BD_NO_DAEMON` | `false` | Force direct mode, bypass daemon |
| `no-auto-flush` | `--no-auto-flush` | `BD_NO_AUTO_FLUSH` | `false` | Disable auto JSONL export |
| `no-auto-import` | `--no-auto-import` | `BD_NO_AUTO_IMPORT` | `false` | Disable auto JSONL import |
| `db` | `--db` | `BD_DB` | (auto-discover) | Database path |
| `actor` | `--actor` | `BD_ACTOR` | `$USER` | Actor name for audit trail |
| `flush-debounce` | - | `BEADS_FLUSH_DEBOUNCE` | `5s` | Debounce time for auto-flush |
| `auto-start-daemon` | - | `BEADS_AUTO_START_DAEMON` | `true` | Auto-start daemon if not running |
### Example Config File
`~/.config/bd/config.yaml`:
```yaml
# Default to JSON output for scripting
json: true
# Disable daemon for single-user workflows
no-daemon: true
# Custom debounce for auto-flush (default 5s)
flush-debounce: 10s
# Auto-start daemon (default true)
auto-start-daemon: true
```
`.beads/config.yaml` (project-specific):
```yaml
# Project team prefers longer flush delay
flush-debounce: 15s
```
### Why Two Systems?
**Tool settings (Viper)** are user preferences:
- How should I see output? (`--json`)
- Should I use the daemon? (`--no-daemon`)
- How should the CLI behave?
**Project config (`bd config`)** is project data:
- What's our Jira URL?
- What are our Linear tokens?
- How do we map statuses?
This separation is correct: **tool settings are user-specific, project config is team-shared**.
Agents benefit from `bd config`'s structured CLI interface over manual YAML editing.
## Project-Level Configuration (`bd config`)
### Overview
Project configuration is:
- **Per-project**: Isolated to each `.beads/*.db` database
- **Version-control-friendly**: Stored in SQLite, queryable and scriptable
- **Machine-readable**: JSON output for automation
- **Namespace-based**: Organized by integration or purpose
## Commands
### Set Configuration
```bash
bd config set <key> <value>
bd config set --json <key> <value> # JSON output
```
Examples:
```bash
bd config set jira.url "https://company.atlassian.net"
bd config set jira.project "PROJ"
bd config set jira.status_map.todo "open"
```
### Get Configuration
```bash
bd config get <key>
bd config get --json <key> # JSON output
```
Examples:
```bash
bd config get jira.url
# Output: https://company.atlassian.net
bd config get --json jira.url
# Output: {"key":"jira.url","value":"https://company.atlassian.net"}
```
### List All Configuration
```bash
bd config list
bd config list --json # JSON output
```
Example output:
```
Configuration:
compact_tier1_days = 90
compact_tier1_dep_levels = 2
jira.project = PROJ
jira.url = https://company.atlassian.net
```
JSON output:
```json
{
"compact_tier1_days": "90",
"compact_tier1_dep_levels": "2",
"jira.project": "PROJ",
"jira.url": "https://company.atlassian.net"
}
```
### Unset Configuration
```bash
bd config unset <key>
bd config unset --json <key> # JSON output
```
Example:
```bash
bd config unset jira.url
```
## Namespace Convention
Configuration keys use dot-notation namespaces to organize settings:
### Core Namespaces
- `compact_*` - Compaction settings (see EXTENDING.md)
- `issue_prefix` - Issue ID prefix (managed by `bd init`)
- `max_collision_prob` - Maximum collision probability for adaptive hash IDs (default: 0.25)
- `min_hash_length` - Minimum hash ID length (default: 4)
- `max_hash_length` - Maximum hash ID length (default: 8)
- `import.orphan_handling` - How to handle hierarchical issues with missing parents during import (default: `allow`)
### Integration Namespaces
Use these namespaces for external integrations:
- `jira.*` - Jira integration settings
- `linear.*` - Linear integration settings
- `github.*` - GitHub integration settings
- `custom.*` - Custom integration settings
### Example: Adaptive Hash ID Configuration
```bash
# Configure adaptive ID lengths (see docs/ADAPTIVE_IDS.md)
# Default: 25% max collision probability
bd config set max_collision_prob "0.25"
# Start with 4-char IDs, scale up as database grows
bd config set min_hash_length "4"
bd config set max_hash_length "8"
# Stricter collision tolerance (1%)
bd config set max_collision_prob "0.01"
# Force minimum 5-char IDs for consistency
bd config set min_hash_length "5"
```
See [ADAPTIVE_IDS.md](ADAPTIVE_IDS.md) for detailed documentation.
### Example: Import Orphan Handling
Controls how imports handle hierarchical child issues when their parent is missing from the database:
```bash
# Strictest: Fail import if parent is missing (safest, prevents orphans)
bd config set import.orphan_handling "strict"
# Auto-resurrect: Search JSONL history and recreate missing parents as tombstones
bd config set import.orphan_handling "resurrect"
# Skip: Skip orphaned issues with warning (partial import)
bd config set import.orphan_handling "skip"
# Allow: Import orphans without validation (default, most permissive)
bd config set import.orphan_handling "allow"
```
**Mode details:**
- **`strict`** - Import fails immediately if a child's parent is missing. Use when database integrity is critical.
- **`resurrect`** - Searches the full JSONL file for missing parents and recreates them as tombstones (Status=Closed, Priority=4). Preserves hierarchy with minimal data. Dependencies are also resurrected on a best-effort basis.
- **`skip`** - Skips orphaned children with a warning. Partial import succeeds but some issues are excluded.
- **`allow`** - Imports orphans without parent validation. Most permissive, works around import bugs. **This is the default** because it ensures all data is imported even if hierarchy is temporarily broken.
**Override per command:**
```bash
# Override config for a single import
bd import -i issues.jsonl --orphan-handling strict
# Auto-import (sync) uses config value
bd sync # Respects import.orphan_handling setting
```
**When to use each mode:**
- Use `allow` (default) for daily imports and auto-sync - ensures no data loss
- Use `resurrect` when importing from another database that had parent deletions
- Use `strict` only for controlled imports where you need to guarantee parent existence
- Use `skip` rarely - only when you want to selectively import a subset
### Example: Jira Integration
```bash
# Configure Jira connection
bd config set jira.url "https://company.atlassian.net"
bd config set jira.project "PROJ"
bd config set jira.api_token "YOUR_TOKEN"
# Map bd statuses to Jira statuses
bd config set jira.status_map.open "To Do"
bd config set jira.status_map.in_progress "In Progress"
bd config set jira.status_map.closed "Done"
# Map bd issue types to Jira issue types
bd config set jira.type_map.bug "Bug"
bd config set jira.type_map.feature "Story"
bd config set jira.type_map.task "Task"
```
### Example: Linear Integration
```bash
# Configure Linear connection
bd config set linear.api_token "YOUR_TOKEN"
bd config set linear.team_id "team-123"
# Map statuses
bd config set linear.status_map.open "Backlog"
bd config set linear.status_map.in_progress "In Progress"
bd config set linear.status_map.closed "Done"
```
### Example: GitHub Integration
```bash
# Configure GitHub connection
bd config set github.org "myorg"
bd config set github.repo "myrepo"
bd config set github.token "YOUR_TOKEN"
# Map bd labels to GitHub labels
bd config set github.label_map.bug "bug"
bd config set github.label_map.feature "enhancement"
```
## Use in Scripts
Configuration is designed for scripting. Use `--json` for machine-readable output:
```bash
#!/bin/bash
# Get Jira URL
JIRA_URL=$(bd config get --json jira.url | jq -r '.value')
# Get all config and extract multiple values
bd config list --json | jq -r '.["jira.project"]'
```
Example Python script:
```python
import json
import subprocess
def get_config(key):
result = subprocess.run(
["bd", "config", "get", "--json", key],
capture_output=True,
text=True
)
data = json.loads(result.stdout)
return data["value"]
def list_config():
result = subprocess.run(
["bd", "config", "list", "--json"],
capture_output=True,
text=True
)
return json.loads(result.stdout)
# Use in integration
jira_url = get_config("jira.url")
jira_project = get_config("jira.project")
```
## Best Practices
1. **Use namespaces**: Prefix keys with integration name (e.g., `jira.*`, `linear.*`)
2. **Hierarchical keys**: Use dots for structure (e.g., `jira.status_map.open`)
3. **Document your keys**: Add comments in integration scripts
4. **Security**: Store tokens in config, but add `.beads/*.db` to `.gitignore` (bd does this automatically)
5. **Per-project**: Configuration is project-specific, so each repo can have different settings
## Integration with bd Commands
Some bd commands automatically use configuration:
- `bd compact` uses `compact_tier1_days`, `compact_tier1_dep_levels`, etc.
- `bd init` sets `issue_prefix`
External integration scripts can read configuration to sync with Jira, Linear, GitHub, etc.
## See Also
- [README.md](README.md) - Main documentation
- [EXTENDING.md](EXTENDING.md) - Database schema and compaction config
- [examples/integrations/](examples/integrations/) - Integration examples

docs/EXCLUSIVE_LOCK.md Normal file

@@ -0,0 +1,229 @@
# Exclusive Lock Protocol
The exclusive lock protocol allows external tools to claim exclusive management of a beads database, preventing the bd daemon from interfering with their operations.
## Use Cases
- **Deterministic execution systems** (e.g., VibeCoder) that need full control over database state
- **CI/CD pipelines** that perform atomic issue updates without daemon interference
- **Custom automation tools** that manage their own git sync workflow
## How It Works
### Lock File Format
The lock file is located at `.beads/.exclusive-lock` and contains JSON:
```json
{
"holder": "vc-executor",
"pid": 12345,
"hostname": "dev-machine",
"started_at": "2025-10-25T12:00:00Z",
"version": "1.0.0"
}
```
**Fields:**
- `holder` (string, required): Name of the tool holding the lock (e.g., "vc-executor", "ci-runner")
- `pid` (int, required): Process ID of the lock holder
- `hostname` (string, required): Hostname where the process is running
- `started_at` (RFC3339 timestamp, required): When the lock was acquired
- `version` (string, optional): Version of the lock holder
### Daemon Behavior
The bd daemon checks for exclusive locks at the start of each sync cycle:
1. **No lock file**: Daemon proceeds normally with sync operations
2. **Valid lock (process alive)**: Daemon skips all operations for this database
3. **Stale lock (process dead)**: Daemon removes the lock and proceeds
4. **Malformed lock**: Daemon fails safe and skips the database
### Stale Lock Detection
A lock is considered stale if:
- The hostname matches the current machine (case-insensitive) AND
- The PID does not exist on the local system (returns ESRCH)
**Important:** The daemon only removes locks when it can definitively determine the process is dead (ESRCH error). If the daemon lacks permission to signal a PID (EPERM), it treats the lock as valid and skips the database. This fail-safe approach prevents accidentally removing locks owned by other users.
**Remote locks** (different hostname) are always assumed to be valid since the daemon cannot verify remote processes.
When a stale lock is successfully removed, the daemon logs: `Removed stale lock (holder-name), proceeding with sync`
## Usage Examples
### Creating a Lock (Go)
```go
import (
"encoding/json"
"os"
"path/filepath"
"github.com/steveyegge/beads/internal/types"
)
func acquireLock(beadsDir, holder, version string) error {
lock, err := types.NewExclusiveLock(holder, version)
if err != nil {
return err
}
data, err := json.MarshalIndent(lock, "", " ")
if err != nil {
return err
}
lockPath := filepath.Join(beadsDir, ".exclusive-lock")
return os.WriteFile(lockPath, data, 0644)
}
```
### Releasing a Lock (Go)
```go
func releaseLock(beadsDir string) error {
lockPath := filepath.Join(beadsDir, ".exclusive-lock")
return os.Remove(lockPath)
}
```
### Creating a Lock (Shell)
```bash
#!/bin/bash
BEADS_DIR=".beads"
LOCK_FILE="$BEADS_DIR/.exclusive-lock"
# Create lock
cat > "$LOCK_FILE" <<EOF
{
"holder": "my-tool",
"pid": $$,
"hostname": "$(hostname)",
"started_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"version": "1.0.0"
}
EOF
# Do work...
bd create "My issue" -p 1
bd update bd-42 --status in_progress
# Release lock
rm "$LOCK_FILE"
```
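The shell example above can get the same guarantee with a `trap`, so the lock is released even if the script fails midway (a sketch; "my-tool" and the paths follow the example above):

```shell
#!/bin/bash
BEADS_DIR=".beads"
LOCK_FILE="$BEADS_DIR/.exclusive-lock"
mkdir -p "$BEADS_DIR"

release_lock() { rm -f "$LOCK_FILE"; }
trap release_lock EXIT INT TERM   # fires on normal exit, Ctrl-C, or kill

printf '{"holder":"my-tool","pid":%d,"hostname":"%s","started_at":"%s","version":"1.0.0"}\n' \
  "$$" "$(hostname)" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$LOCK_FILE"

# Do work... the lock is released automatically on any exit path.
```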
### Recommended Pattern
Always use cleanup handlers to ensure locks are released:
```go
func main() {
beadsDir := ".beads"
// Acquire lock
if err := acquireLock(beadsDir, "my-tool", "1.0.0"); err != nil {
log.Fatal(err)
}
// Ensure lock is released on exit
defer func() {
if err := releaseLock(beadsDir); err != nil {
log.Printf("Warning: failed to release lock: %v", err)
}
}()
// Do work with beads database...
}
```
## Edge Cases and Limitations
### Multiple Writers Without Daemon
The exclusive lock protocol **only prevents daemon interference**. It does NOT provide:
- ❌ Mutual exclusion between multiple external tools
- ❌ Transaction isolation or ACID guarantees
- ❌ Protection against direct file system manipulation
If you need coordination between multiple tools, implement your own locking mechanism.
### Git Worktrees
The daemon already has issues with git worktrees (see AGENTS.md). The exclusive lock protocol doesn't solve this—use `--no-daemon` mode in worktrees instead.
### Remote Hosts
Locks from remote hosts are always assumed valid because the daemon cannot verify remote PIDs. This means:
- Stale locks from remote hosts will **not** be automatically cleaned up
- You must manually remove stale remote locks
### Lock File Corruption
If the lock file becomes corrupted (invalid JSON), the daemon **fails safe** and skips the database. You must manually fix or remove the lock file.
## Daemon Logging
The daemon logs lock-related events:
```
Skipping database (locked by vc-executor)
Removed stale lock (vc-executor), proceeding with sync
Skipping database (lock check failed: malformed lock file: unexpected EOF)
```
Check daemon logs (default: `.beads/daemon.log`) to troubleshoot lock issues.
**Note:** The daemon checks for locks at the start of each sync cycle. If a lock is created during a sync cycle, that cycle will complete, but subsequent cycles will skip the database.
## Testing Your Integration
1. **Start the daemon**: `bd daemon --interval 1m`
2. **Create a lock**: Use your tool to create `.beads/.exclusive-lock`
3. **Verify daemon skips**: Check daemon logs for "Skipping database" message
4. **Release lock**: Remove `.beads/.exclusive-lock`
5. **Verify daemon resumes**: Check daemon logs for normal sync cycle
## Security Considerations
- Lock files are **not secure**. Any process can create, modify, or delete them.
- PID reuse could theoretically cause issues (very rare, especially with the hostname check)
- This is a **cooperative** protocol, not a security mechanism
## API Reference
### Go Types
```go
// ExclusiveLock represents the lock file format
type ExclusiveLock struct {
Holder string `json:"holder"`
PID int `json:"pid"`
Hostname string `json:"hostname"`
StartedAt time.Time `json:"started_at"`
Version string `json:"version"`
}
// NewExclusiveLock creates a lock for the current process
func NewExclusiveLock(holder, version string) (*ExclusiveLock, error)
// Validate checks if the lock has valid field values
func (e *ExclusiveLock) Validate() error
// ShouldSkipDatabase checks if database should be skipped due to lock
func ShouldSkipDatabase(beadsDir string) (skip bool, holder string, err error)
// IsProcessAlive checks if a process is running
func IsProcessAlive(pid int, hostname string) bool
```
## Questions?
For integration help, see:
- **AGENTS.md** - General workflow guidance
- **README.md** - Daemon configuration
- **examples/** - Sample integrations
File issues at: https://github.com/steveyegge/beads/issues

<!-- docs/EXTENDING.md -->
# Extending bd with Custom Tables
bd is designed to be extended by applications that need more than basic issue tracking. The recommended pattern is to add your own tables to the same SQLite database that bd uses.
## Philosophy
**bd is focused** - It tracks issues, dependencies, and ready work. That's it.
**Your application adds orchestration** - Execution state, agent assignments, retry logic, etc.
**Shared database = simple queries** - Join `issues` with your tables for powerful queries.
This is the same pattern used by tools like Temporal (workflow + activity tables) and Metabase (core + plugin tables).
## Quick Example
```sql
-- Create your application's tables in the same database
CREATE TABLE myapp_executions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
issue_id TEXT NOT NULL,
status TEXT NOT NULL, -- pending, running, failed, completed
agent_id TEXT,
started_at DATETIME,
completed_at DATETIME,
error TEXT,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
CREATE INDEX idx_executions_issue ON myapp_executions(issue_id);
CREATE INDEX idx_executions_status ON myapp_executions(status);
-- Query across layers
SELECT
i.id,
i.title,
i.priority,
e.status as execution_status,
e.agent_id,
e.started_at
FROM issues i
LEFT JOIN myapp_executions e ON i.id = e.issue_id
WHERE i.status = 'in_progress'
ORDER BY i.priority ASC;
```
## Integration Pattern
### 1. Initialize Your Database Schema
```go
package main
import (
"database/sql"
_ "modernc.org/sqlite"
)
const myAppSchema = `
-- Your application's tables
CREATE TABLE IF NOT EXISTS myapp_executions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
issue_id TEXT NOT NULL,
status TEXT NOT NULL,
agent_id TEXT,
started_at DATETIME,
completed_at DATETIME,
error TEXT,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS myapp_checkpoints (
id INTEGER PRIMARY KEY AUTOINCREMENT,
execution_id INTEGER NOT NULL,
step_name TEXT NOT NULL,
step_data TEXT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (execution_id) REFERENCES myapp_executions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_executions_issue ON myapp_executions(issue_id);
CREATE INDEX IF NOT EXISTS idx_executions_status ON myapp_executions(status);
CREATE INDEX IF NOT EXISTS idx_checkpoints_execution ON myapp_checkpoints(execution_id);
`
func InitializeMyAppSchema(dbPath string) error {
db, err := sql.Open("sqlite", dbPath)
if err != nil {
return err
}
defer db.Close()
_, err = db.Exec(myAppSchema)
return err
}
```
### 2. Use bd for Issue Management
```go
import (
"github.com/steveyegge/beads"
)
// Open bd's storage
store, err := beads.NewSQLiteStorage(dbPath)
if err != nil {
log.Fatal(err)
}
// Initialize your schema
if err := InitializeMyAppSchema(dbPath); err != nil {
log.Fatal(err)
}
// Use bd to find ready work
readyIssues, err := store.GetReadyWork(ctx, beads.WorkFilter{Limit: 10})
if err != nil {
log.Fatal(err)
}
// Use your tables for orchestration
for _, issue := range readyIssues {
execution := &Execution{
IssueID: issue.ID,
Status: "pending",
AgentID: selectAgent(),
StartedAt: time.Now(),
}
if err := createExecution(store.UnderlyingDB(), execution); err != nil {
log.Printf("Failed to create execution: %v", err)
}
}
```
### 3. Query Across Layers
```go
// Complex query joining bd's issues with your execution data
query := `
SELECT
i.id,
i.title,
i.priority,
i.status as issue_status,
e.id as execution_id,
e.status as execution_status,
e.agent_id,
e.error,
COUNT(c.id) as checkpoint_count
FROM issues i
INNER JOIN myapp_executions e ON i.id = e.issue_id
LEFT JOIN myapp_checkpoints c ON e.id = c.execution_id
WHERE e.status = 'running'
GROUP BY i.id, e.id
ORDER BY i.priority ASC, e.started_at ASC
`
rows, err := db.Query(query)
// Process results...
```
## Real-World Example: VC Orchestrator
Here's how the VC (VibeCoder) orchestrator extends bd using `UnderlyingDB()`:
```go
package vc
import (
"database/sql"
"github.com/steveyegge/beads"
_ "modernc.org/sqlite"
)
type VCStorage struct {
beads.Storage // Embed bd's storage
db *sql.DB // Cache the underlying DB
}
func NewVCStorage(dbPath string) (*VCStorage, error) {
// Open bd's storage
store, err := beads.NewSQLiteStorage(dbPath)
if err != nil {
return nil, err
}
vc := &VCStorage{
Storage: store,
db: store.UnderlyingDB(),
}
// Create VC-specific tables
if err := vc.initSchema(); err != nil {
return nil, err
}
return vc, nil
}
func (vc *VCStorage) initSchema() error {
schema := `
-- VC's orchestration layer
CREATE TABLE IF NOT EXISTS vc_executor_instances (
id TEXT PRIMARY KEY,
issue_id TEXT NOT NULL,
executor_type TEXT NOT NULL,
status TEXT NOT NULL, -- pending, assessing, executing, analyzing, completed, failed
agent_name TEXT,
created_at DATETIME NOT NULL,
claimed_at DATETIME,
completed_at DATETIME,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
CREATE TABLE IF NOT EXISTS vc_execution_state (
id INTEGER PRIMARY KEY AUTOINCREMENT,
executor_id TEXT NOT NULL,
phase TEXT NOT NULL, -- assessment, execution, analysis
state_data TEXT NOT NULL, -- JSON checkpoint data
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (executor_id) REFERENCES vc_executor_instances(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_vc_executor_issue ON vc_executor_instances(issue_id);
CREATE INDEX IF NOT EXISTS idx_vc_executor_status ON vc_executor_instances(status);
CREATE INDEX IF NOT EXISTS idx_vc_execution_executor ON vc_execution_state(executor_id);
`
_, err := vc.db.Exec(schema)
return err
}
// ClaimReadyWork atomically claims the highest priority ready work
func (vc *VCStorage) ClaimReadyWork(agentName string) (*ExecutorInstance, error) {
query := `
UPDATE vc_executor_instances
SET status = 'executing', claimed_at = CURRENT_TIMESTAMP, agent_name = ?
WHERE id = (
SELECT ei.id
FROM vc_executor_instances ei
JOIN issues i ON ei.issue_id = i.id
WHERE ei.status = 'pending'
AND NOT EXISTS (
SELECT 1 FROM dependencies d
JOIN issues blocked ON d.depends_on_id = blocked.id
WHERE d.issue_id = i.id
AND d.type = 'blocks'
AND blocked.status IN ('open', 'in_progress', 'blocked')
)
ORDER BY i.priority ASC
LIMIT 1
)
RETURNING id, issue_id, executor_type, status, agent_name, claimed_at
`
var ei ExecutorInstance
err := vc.db.QueryRow(query, agentName).Scan(
&ei.ID, &ei.IssueID, &ei.ExecutorType,
&ei.Status, &ei.AgentName, &ei.ClaimedAt,
)
return &ei, err
}
```
**Key benefits of this approach:**
- ✅ VC extends bd without forking or modifying it
- ✅ Single database = simple JOINs across layers
- ✅ Foreign keys ensure referential integrity
- ✅ bd handles issue tracking, VC handles orchestration
- ✅ Can use bd's CLI alongside VC's custom operations
## Best Practices
### 1. Namespace Your Tables
Prefix your tables with your application name to avoid conflicts:
```sql
-- Good
CREATE TABLE vc_executions (...);
CREATE TABLE myapp_checkpoints (...);
-- Bad
CREATE TABLE executions (...); -- Could conflict with other apps
CREATE TABLE state (...); -- Too generic
```
### 2. Use Foreign Keys
Always link your tables to `issues` with foreign keys:
```sql
CREATE TABLE myapp_executions (
issue_id TEXT NOT NULL,
-- ...
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
```
This ensures:
- Referential integrity
- Automatic cleanup when issues are deleted
- Ability to join with `issues` table
### 3. Index Your Query Patterns
Add indexes for common queries:
```sql
-- If you query by status frequently
CREATE INDEX idx_executions_status ON myapp_executions(status);
-- If you join on issue_id
CREATE INDEX idx_executions_issue ON myapp_executions(issue_id);
-- Composite index for complex queries
CREATE INDEX idx_executions_status_issue
ON myapp_executions(status, issue_id);
```
### 4. Don't Duplicate bd's Data
Don't copy fields from `issues` into your tables. Instead, join:
```sql
-- Bad: Duplicating data
CREATE TABLE myapp_executions (
issue_id TEXT NOT NULL,
issue_title TEXT, -- Don't do this!
issue_priority INTEGER, -- Don't do this!
-- ...
);
-- Good: Join when querying
SELECT i.title, i.priority, e.status
FROM myapp_executions e
JOIN issues i ON e.issue_id = i.id;
```
### 5. Use JSON for Flexible State
SQLite supports JSON functions, great for checkpoint data:
```sql
CREATE TABLE myapp_checkpoints (
id INTEGER PRIMARY KEY,
execution_id INTEGER NOT NULL,
step_name TEXT NOT NULL,
step_data TEXT, -- Store as JSON
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
-- Query JSON fields
SELECT
id,
json_extract(step_data, '$.completed') as completed,
json_extract(step_data, '$.error') as error
FROM myapp_checkpoints
WHERE step_name = 'assessment';
```
## Common Patterns
### Pattern 1: Execution Tracking
Track which agent is working on which issue:
```sql
CREATE TABLE myapp_executions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
issue_id TEXT NOT NULL UNIQUE, -- One execution per issue
agent_id TEXT NOT NULL,
status TEXT NOT NULL,
started_at DATETIME NOT NULL,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
-- Claim an issue for execution
INSERT INTO myapp_executions (issue_id, agent_id, status, started_at)
VALUES (?, ?, 'running', CURRENT_TIMESTAMP)
ON CONFLICT (issue_id) DO UPDATE
SET agent_id = excluded.agent_id, started_at = CURRENT_TIMESTAMP;
```
### Pattern 2: Checkpoint/Resume
Store execution checkpoints for crash recovery:
```sql
CREATE TABLE myapp_checkpoints (
id INTEGER PRIMARY KEY AUTOINCREMENT,
execution_id INTEGER NOT NULL,
phase TEXT NOT NULL,
checkpoint_data TEXT NOT NULL, -- JSON
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (execution_id) REFERENCES myapp_executions(id) ON DELETE CASCADE
);
-- Latest checkpoint for an execution
SELECT checkpoint_data
FROM myapp_checkpoints
WHERE execution_id = ?
ORDER BY created_at DESC
LIMIT 1;
```
### Pattern 3: Result Storage
Store execution results linked to issues:
```sql
CREATE TABLE myapp_results (
id INTEGER PRIMARY KEY AUTOINCREMENT,
issue_id TEXT NOT NULL,
result_type TEXT NOT NULL, -- success, partial, failed
output_data TEXT, -- JSON: files changed, tests run, etc.
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
-- Get all results for an issue
SELECT result_type, output_data, created_at
FROM myapp_results
WHERE issue_id = ?
ORDER BY created_at DESC;
```
## Programmatic Access
Use bd's `--json` flags for scripting:
```bash
#!/bin/bash
# Find ready work
READY=$(bd ready --limit 1 --json)
ISSUE_ID=$(echo "$READY" | jq -r '.[0].id')
if [ "$ISSUE_ID" = "null" ]; then
echo "No ready work"
exit 0
fi
# Create execution record in your table
sqlite3 .beads/issues.db <<SQL
INSERT INTO myapp_executions (issue_id, agent_id, status, started_at)
VALUES ('$ISSUE_ID', 'agent-1', 'running', datetime('now'));
SQL
# Claim issue in bd
bd update $ISSUE_ID --status in_progress
# Execute work...
echo "Working on $ISSUE_ID"
# Mark complete
bd close $ISSUE_ID --reason "Completed by agent-1"
sqlite3 .beads/issues.db <<SQL
UPDATE myapp_executions
SET status = 'completed', completed_at = datetime('now')
WHERE issue_id = '$ISSUE_ID';
SQL
```
## Direct Database Access
### Using UnderlyingDB() (Recommended)
The recommended way to extend bd is using the `UnderlyingDB()` method on the storage instance. This gives you access to the same database connection that bd uses, ensuring consistency and avoiding connection overhead:
```go
import (
"database/sql"
"github.com/steveyegge/beads"
_ "modernc.org/sqlite"
)
// Open bd's storage
store, err := beads.NewSQLiteStorage(".beads/issues.db")
if err != nil {
log.Fatal(err)
}
defer store.Close()
// Get the underlying database connection
db := store.UnderlyingDB()
// Create your extension tables using the same connection
schema := `
CREATE TABLE IF NOT EXISTS myapp_executions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
issue_id TEXT NOT NULL,
status TEXT NOT NULL,
agent_id TEXT,
started_at DATETIME,
FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE
);
`
if _, err := db.Exec(schema); err != nil {
log.Fatal(err)
}
// Query bd's tables
var title string
var priority int
err = db.QueryRow(`
SELECT title, priority FROM issues WHERE id = ?
`, issueID).Scan(&title, &priority)
// Update your tables
_, err = db.Exec(`
INSERT INTO myapp_executions (issue_id, status, agent_id, started_at)
VALUES (?, ?, ?, CURRENT_TIMESTAMP)
`, issueID, "running", "agent-1")
// Join across layers
rows, err := db.Query(`
SELECT i.id, i.title, e.status, e.agent_id
FROM issues i
JOIN myapp_executions e ON i.id = e.issue_id
WHERE e.status = 'running'
`)
```
**Safety warnings when using UnderlyingDB():**
⚠️ **NEVER** close the database connection returned by `UnderlyingDB()`. The storage instance owns this connection.
⚠️ **DO NOT** modify database pool settings (SetMaxOpenConns, SetConnMaxIdleTime) or SQLite PRAGMAs (WAL mode, journal settings) as this affects bd's core operations.
⚠️ **Keep transactions short** - Long write transactions will block bd's core operations. Use read transactions when possible.
⚠️ **Expect errors after Close()** - Once you call `store.Close()`, operations on the underlying DB will fail. Use context cancellation to coordinate shutdown.
**DO** use foreign keys to reference bd's tables for referential integrity.
**DO** namespace your tables with your app name (e.g., `myapp_executions`).
**DO** create indexes for your query patterns.
### When to use UnderlyingDB() vs sql.Open()
**Use `UnderlyingDB()`:**
- ✅ When you want to share the storage connection
- ✅ When you need tables in the same database as bd
- ✅ When you want automatic lifecycle management
- ✅ For most extension use cases (like VC)
**Use `sql.Open()` separately:**
- When you need independent connection pool settings
- When you need different timeout/retry behavior
- When you're managing multiple databases
- When you need fine-grained connection control
### Alternative: Independent Connection
If you need independent connection management, you can still open the database directly:
```go
import (
"database/sql"
_ "modernc.org/sqlite"
"github.com/steveyegge/beads"
)
// Auto-discover bd's database path
dbPath := beads.FindDatabasePath()
if dbPath == "" {
log.Fatal("No bd database found. Run 'bd init' first.")
}
// Open your own connection to the same database
db, err := sql.Open("sqlite", dbPath)
if err != nil {
log.Fatal(err)
}
defer db.Close()
// Configure your connection independently
db.SetMaxOpenConns(10)
db.SetConnMaxIdleTime(time.Minute)
// Query bd's tables
var title string
var priority int
err = db.QueryRow(`
SELECT title, priority FROM issues WHERE id = ?
`, issueID).Scan(&title, &priority)
// Find corresponding JSONL path (for git hooks, monitoring, etc.)
jsonlPath := beads.FindJSONLPath(dbPath)
fmt.Printf("BD exports to: %s\n", jsonlPath)
```
## Batch Operations for Performance
When creating many issues at once (e.g., bulk imports, batch processing), use `CreateIssues` for significantly better performance:
```go
import (
	"context"
	"log"

	// internal/... packages are importable only from within the beads module;
	// external code should use the public beads package instead.
	"github.com/steveyegge/beads/internal/storage/sqlite"
	"github.com/steveyegge/beads/internal/types"
)
// Open bd's storage
store, err := sqlite.New(".beads/issues.db")
if err != nil {
log.Fatal(err)
}
ctx := context.Background()
// Prepare batch of issues to create
issues := make([]*types.Issue, 0, 1000)
for _, item := range externalData {
issue := &types.Issue{
Title: item.Title,
Description: item.Description,
Status: types.StatusOpen,
Priority: item.Priority,
IssueType: types.TypeTask,
}
issues = append(issues, issue)
}
// Create all issues in a single atomic transaction (roughly 5-30x faster, per the benchmarks below)
if err := store.CreateIssues(ctx, issues, "import"); err != nil {
log.Fatal(err)
}
// REMOVED (bd-c7af): SyncAllCounters - no longer needed with hash IDs
```
### Performance Comparison
| Operation | CreateIssue Loop | CreateIssues Batch | Speedup |
|-----------|------------------|---------------------|---------|
| 100 issues | ~900ms | ~30ms | 30x |
| 500 issues | ~4.5s | ~800ms | 5.6x |
| 1000 issues | ~9s | ~950ms | 9.5x |
### When to Use Each Method
**Use `CreateIssue` (single issue):**
- Interactive CLI commands (`bd create`)
- Single issue creation in your app
- User-facing operations
**Use `CreateIssues` (batch):**
- Bulk imports from external systems
- Batch processing workflows
- Creating multiple related issues at once
- Agent workflows that generate many issues
### Batch Import Pattern
```go
// Example: Import from external issue tracker
func ImportFromExternal(externalIssues []ExternalIssue) error {
store, err := sqlite.New(".beads/issues.db")
if err != nil {
return err
}
ctx := context.Background()
// Convert external format to bd format
issues := make([]*types.Issue, 0, len(externalIssues))
for _, ext := range externalIssues {
issue := &types.Issue{
ID: fmt.Sprintf("import-%d", ext.ID), // Explicit IDs
Title: ext.Title,
Description: ext.Description,
Status: convertStatus(ext.Status),
Priority: ext.Priority,
IssueType: convertType(ext.Type),
}
// Normalize closed_at for closed issues
if issue.Status == types.StatusClosed {
closedAt := ext.ClosedAt
issue.ClosedAt = &closedAt
}
issues = append(issues, issue)
}
// Atomic batch create
if err := store.CreateIssues(ctx, issues, "external-import"); err != nil {
return fmt.Errorf("batch create failed: %w", err)
}
// REMOVED (bd-c7af): SyncAllCounters - no longer needed with hash IDs
return nil
}
```
## Summary
The key insight: **bd is a focused issue tracker, not a framework**.
By extending the database:
- You get powerful issue tracking for free
- Your app adds orchestration logic
- Simple SQL joins give you full visibility
- No tight coupling or version conflicts
This pattern scales from simple scripts to complex orchestrators like VC.
## See Also
- [README.md](README.md) - Complete bd documentation
- Run `bd quickstart` - Interactive tutorial
- Check out VC's implementation at `github.com/steveyegge/vc` for a real-world example

<!-- docs/FAQ.md -->
# Frequently Asked Questions
Common questions about bd (beads) and how to use it effectively.
## General Questions
### What is bd?
bd is a lightweight, git-based issue tracker designed for AI coding agents. It provides dependency-aware task management with automatic sync across machines via git.
### Why not just use GitHub Issues?
GitHub Issues + gh CLI can approximate some features, but fundamentally cannot replicate what AI agents need:
**Key Differentiators:**
1. **Typed Dependencies with Semantics**
- bd: Four types (`blocks`, `related`, `parent-child`, `discovered-from`) with different behaviors
- GH: Only "blocks/blocked by" links, no semantic enforcement, no `discovered-from` for agent work discovery
2. **Deterministic Ready-Work Detection**
- bd: `bd ready` computes transitive blocking offline in ~10ms, no network required
- GH: No built-in "ready" concept; would require custom GraphQL + sync service + ongoing maintenance
3. **Git-First, Offline, Branch-Scoped Task Memory**
- bd: Works offline, issues live on branches, mergeable with code via `bd import --resolve-collisions`
- GH: Cloud-first, requires network/auth, global per-repo, no branch-scoped task state
4. **AI-Resolvable Conflicts & Duplicate Merge**
- bd: Automatic collision resolution, duplicate merge with dependency consolidation and reference rewriting
- GH: Manual close-as-duplicate, no safe bulk merge, no cross-reference updates
5. **Extensible Local Database**
- bd: Add SQL tables and join with issue data locally (see [EXTENDING.md](EXTENDING.md))
- GH: No local database; would need to mirror data externally
6. **Agent-Native APIs**
- bd: Consistent `--json` on all commands, dedicated MCP server with auto workspace detection
- GH: Mixed JSON/text output, GraphQL requires custom queries, no agent-focused MCP layer
**When to use each:** GitHub Issues excels for human teams in web UI with cross-repo dashboards and integrations. bd excels for AI agents needing offline, git-synchronized task memory with graph semantics and deterministic queries.
See [GitHub issue #125](https://github.com/steveyegge/beads/issues/125) for detailed comparison.
### How is this different from Taskwarrior?
Taskwarrior is excellent for personal task management, but bd is built for AI agents:
- **Explicit agent semantics**: `discovered-from` dependency type, `bd ready` for queue management
- **JSON-first design**: Every command has `--json` output
- **Git-native sync**: No sync server setup required
- **Merge-friendly JSONL**: One issue per line, AI-resolvable conflicts
- **Extensible SQLite**: Add your own tables without forking
### Can I use bd without AI agents?
Absolutely! bd is a great CLI issue tracker for humans too. The `bd ready` command is useful for anyone managing dependencies. Think of it as "Taskwarrior meets git."
### Is this production-ready?
**Current status: Alpha (v0.9.11)**
bd is in active development and being dogfooded on real projects. The core functionality (create, update, dependencies, ready work, collision resolution) is stable and well-tested. However:
- ⚠️ **Alpha software** - No 1.0 release yet
- ⚠️ **API may change** - Command flags and JSONL format may evolve before 1.0
- ✅ **Safe for development** - Use for development/internal projects
- ✅ **Data is portable** - JSONL format is human-readable and easy to migrate
- 📈 **Rapid iteration** - Expect frequent updates and improvements
**When to use bd:**
- ✅ AI-assisted development workflows
- ✅ Internal team projects
- ✅ Personal productivity with dependency tracking
- ✅ Experimenting with agent-first tools
**When to wait:**
- ❌ Mission-critical production systems (wait for 1.0)
- ❌ Large enterprise deployments (wait for stability guarantees)
- ❌ Long-term archival (though JSONL makes migration easy)
Follow the repo for updates and the path to 1.0!
## Usage Questions
### Why hash-based IDs? Why not sequential?
**Hash IDs eliminate collisions** when multiple agents or branches create issues concurrently.
**The problem with sequential IDs:**
```bash
# Branch A creates bd-10
git checkout -b feature-auth
bd create "Add OAuth" # Sequential ID: bd-10
# Branch B also creates bd-10
git checkout -b feature-payments
bd create "Add Stripe" # Collision! Same sequential ID: bd-10
# Merge conflict!
git merge feature-auth # Two different issues, same ID
```
**Hash IDs solve this:**
```bash
# Branch A
bd create "Add OAuth" # Hash ID: bd-a1b2 (from random UUID)
# Branch B
bd create "Add Stripe" # Hash ID: bd-f14c (different UUID, different hash)
# Clean merge!
git merge feature-auth # No collision, different IDs
```
**Progressive length scaling:**
- 4 chars (0-500 issues): `bd-a1b2`
- 5 chars (500-1,500 issues): `bd-f14c3`
- 6 chars (1,500+ issues): `bd-3e7a5b`
bd automatically extends hash length as your database grows to maintain low collision probability.
### What are hierarchical child IDs?
**Hierarchical IDs** (e.g., `bd-a3f8e9.1`, `bd-a3f8e9.2`) provide human-readable structure for epics and their subtasks.
**Example:**
```bash
# Create epic (generates parent hash)
bd create "Auth System" -t epic -p 1
# Returns: bd-a3f8e9
# Create children (auto-numbered .1, .2, .3)
bd create "Login UI" -p 1 # bd-a3f8e9.1
bd create "Validation" -p 1 # bd-a3f8e9.2
bd create "Tests" -p 1 # bd-a3f8e9.3
```
**Benefits:**
- Parent hash ensures unique namespace (no cross-epic collisions)
- Sequential child IDs are human-friendly
- Up to 3 levels of nesting supported
- Clear visual grouping in issue lists
**When to use:**
- Epics with multiple related tasks
- Large features with sub-features
- Work breakdown structures
**When NOT to use:**
- Simple one-off tasks (use regular hash IDs)
- Cross-cutting dependencies (use `bd dep add` instead)
### Should I run bd init or have my agent do it?
**Either works!** But use the right flag:
**Humans:**
```bash
bd init # Interactive - prompts for git hooks
```
**Agents:**
```bash
bd init --quiet # Non-interactive - auto-installs hooks, no prompts
```
**Workflow for humans:**
```bash
# Clone existing project with bd:
git clone <repo>
cd <repo>
bd init # Auto-imports from .beads/issues.jsonl
# Or initialize new project:
cd ~/my-project
bd init # Creates .beads/, sets up daemon
git add .beads/
git commit -m "Initialize beads"
```
**Workflow for agents setting up repos:**
```bash
git clone <repo>
cd <repo>
bd init --quiet # No prompts, auto-installs hooks
bd ready --json # Start using bd normally
```
### Do I need to run export/import manually?
**No! Sync is automatic by default.**
bd automatically:
- **Exports** to JSONL after CRUD operations (5-second debounce)
- **Imports** from JSONL when it's newer than DB (e.g., after `git pull`)
**How auto-import works:** The first bd command after `git pull` detects that `.beads/issues.jsonl` is newer than the database and automatically imports it. There's no background daemon watching for changes - the check happens when you run a bd command.
**Optional**: For immediate export (no 5-second wait) and guaranteed import after git operations, install the git hooks:
```bash
cd examples/git-hooks && ./install.sh
```
**Disable auto-sync** if needed:
```bash
bd --no-auto-flush create "Issue" # Disable auto-export
bd --no-auto-import list # Disable auto-import check
```
### What if my database feels stale after git pull?
Just run any bd command - it will auto-import:
```bash
git pull
bd ready # Automatically imports fresh data from git
bd list # Also triggers auto-import if needed
bd sync # Explicit sync command for manual control
```
The auto-import check is fast (<5ms) and only imports when the JSONL file is newer than the database. If you want guaranteed immediate sync without waiting for the next command, use the git hooks (see `examples/git-hooks/`).
### Can I track issues for multiple projects?
**Yes! Each project is completely isolated.** bd uses project-local databases:
```bash
cd ~/project1 && bd init --prefix proj1
cd ~/project2 && bd init --prefix proj2
```
Each project gets its own `.beads/` directory with its own database and JSONL file. bd auto-discovers the correct database based on your current directory (walks up like git).
**Multi-project scenarios work seamlessly:**
- Multiple agents working on different projects simultaneously → No conflicts
- Same machine, different repos → Each finds its own `.beads/*.db` automatically
- Agents in subdirectories → bd walks up to find the project root (like git)
- **Per-project daemons** → Each project gets its own daemon at `.beads/bd.sock` (LSP model)
**Limitation:** Issues cannot reference issues in other projects. Each database is isolated by design. If you need cross-project tracking, initialize bd in a parent directory that contains both projects.
**Example:** Multiple agents, multiple projects, same machine:
```bash
# Agent 1 working on web app
cd ~/work/webapp && bd ready --json # Uses ~/work/webapp/.beads/webapp.db
# Agent 2 working on API
cd ~/work/api && bd ready --json # Uses ~/work/api/.beads/api.db
# No conflicts! Completely isolated databases and daemons.
```
**Architecture:** bd uses per-project daemons (like LSP/language servers) for complete database isolation. See [ADVANCED.md#architecture-daemon-vs-mcp-vs-beads](ADVANCED.md#architecture-daemon-vs-mcp-vs-beads).
### What happens if two agents work on the same issue?
The last agent to export/commit wins. This is the same as any git-based workflow. To prevent conflicts:
- Have agents claim work with `bd update <id> --status in_progress`
- Query by assignee: `bd ready --assignee agent-name`
- Review git diffs before merging
For true multi-agent coordination, you'd need additional tooling (like locks or a coordination server). bd handles the simpler case: multiple humans/agents working on different tasks, syncing via git.
### Why JSONL instead of JSON?
- ✅ **Git-friendly**: One line per issue = clean diffs
- ✅ **Mergeable**: Concurrent appends rarely conflict
- ✅ **Human-readable**: Easy to review changes
- ✅ **Scriptable**: Use `jq`, `grep`, or any text tools
- ✅ **Portable**: Export/import between databases
See [ADVANCED.md](ADVANCED.md) for detailed analysis.
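For example, one line per issue means ordinary text tools work directly on the export. The data below is made up, and the field names are an assumption based on bd's JSONL format:

```shell
# Hypothetical issues.jsonl content; field names assumed from bd's export format.
printf '%s\n' \
  '{"id":"bd-a1b2","title":"Add OAuth","status":"open","priority":1}' \
  '{"id":"bd-f14c","title":"Add Stripe","status":"closed","priority":2}' > issues.jsonl
grep -c '"status":"open"' issues.jsonl   # counts open issues
```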
### How do I handle merge conflicts?
When two developers create new issues:
```diff
{"id":"bd-1","title":"First issue",...}
{"id":"bd-2","title":"Second issue",...}
+{"id":"bd-3","title":"From branch A",...}
+{"id":"bd-4","title":"From branch B",...}
```
Git may show a conflict, but resolution is simple: **keep both lines** (both changes are compatible).
**With hash-based IDs (v0.20.1+), same-ID scenarios are updates, not collisions:**
If you import an issue with the same ID but different fields, bd treats it as an update to the existing issue. This is normal behavior - hash IDs remain stable, so same ID = same issue being updated.
For git conflicts where the same issue was modified on both branches, manually resolve the JSONL conflict (usually keeping the newer `updated_at` timestamp), then `bd import` will apply the update.
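That resolution rule can be sketched in a few lines, assuming each conflicting side is a complete JSONL line and `updated_at` is ISO-8601 (which compares correctly as a plain string):

```python
import json

# Made-up conflict: the same issue modified on both branches.
ours   = '{"id":"bd-a1b2","title":"Add OAuth","updated_at":"2025-01-02T00:00:00Z"}'
theirs = '{"id":"bd-a1b2","title":"Add OAuth + PKCE","updated_at":"2025-01-03T00:00:00Z"}'

def resolve(a: str, b: str) -> str:
    """Keep whichever line has the later updated_at timestamp."""
    ja, jb = json.loads(a), json.loads(b)
    return a if ja["updated_at"] >= jb["updated_at"] else b

print(json.loads(resolve(ours, theirs))["title"])  # Add OAuth + PKCE
```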
## Migration Questions
### How do I migrate from GitHub Issues / Jira / Linear?
We don't have automated migration tools yet, but you can:
1. Export issues from your current tracker (usually CSV or JSON)
2. Write a simple script to convert to bd's JSONL format
3. Import with `bd import -i issues.jsonl`
See [examples/](examples/) for scripting patterns. Contributions welcome!
### Can I export back to GitHub Issues / Jira?
Not yet built-in, but you can:
1. Export from bd: `bd export -o issues.jsonl --json`
2. Write a script to convert JSONL to your target format
3. Use the target system's API to import
The [CONFIG.md](CONFIG.md) guide shows how to store integration settings. Contributions for standard exporters welcome!
## Performance Questions
### How does bd handle scale?
bd uses SQLite, which handles millions of rows efficiently. For a typical project with thousands of issues:
- Commands complete in <100ms
- Full-text search is instant
- Dependency graphs traverse quickly
- JSONL files stay small (one line per issue)
For extremely large projects (100k+ issues), you might want to filter exports or use multiple databases per component.
### What if my JSONL file gets too large?
Use compaction to remove old closed issues:
```bash
# Preview what would be compacted
bd compact --dry-run --all
# Compact issues closed more than 90 days ago
bd compact --days 90
```
Or split your project into multiple databases:
```bash
cd ~/project/frontend && bd init --prefix fe
cd ~/project/backend && bd init --prefix be
```
## Use Case Questions
### Can I use bd for non-code projects?
Sure! bd is just an issue tracker. Use it for:
- Writing projects (chapters as issues, dependencies as outlines)
- Research projects (papers, experiments, dependencies)
- Home projects (renovations with blocking tasks)
- Any workflow with dependencies
The agent-friendly design works for any AI-assisted workflow.
### Can I use bd with multiple AI agents simultaneously?
Yes! Each agent can:
1. Query ready work: `bd ready --assignee agent-name`
2. Claim issues: `bd update <id> --status in_progress --assignee agent-name`
3. Create discovered work: `bd create "Found issue" --deps discovered-from:<parent-id>`
4. Sync via git commits
bd's git-based sync means agents work independently and merge their changes like developers do.
### Does bd work offline?
Yes! bd is designed for offline-first operation:
- All queries run against local SQLite database
- No network required for any commands
- Sync happens via git push/pull when you're online
- Full functionality available without internet
This makes bd ideal for:
- Working on planes/trains
- Unstable network connections
- Air-gapped environments
- Privacy-sensitive projects
## Technical Questions
### What dependencies does bd have?
bd is a single static binary with no runtime dependencies:
- **Language**: Go 1.24+
- **Database**: SQLite (embedded, pure Go driver)
- **Optional**: Git (for sync across machines)
That's it! No PostgreSQL, no Redis, no Docker, no node_modules.
### Can I extend bd's database?
Yes! See [EXTENDING.md](EXTENDING.md) for how to:
- Add custom tables to the SQLite database
- Join with issue data
- Build custom queries
- Create integrations
### Does bd support Windows?
Yes! bd has native Windows support (v0.9.0+):
- No MSYS or MinGW required
- PowerShell install script
- Works with Windows paths and filesystem
- Daemon uses TCP instead of Unix sockets
See [INSTALLING.md](INSTALLING.md#windows-11) for details.
### Can I use bd with git worktrees?
Yes, but with limitations. The daemon doesn't work correctly with worktrees, so use `--no-daemon` mode:
```bash
export BEADS_NO_DAEMON=1
bd ready
bd create "Fix bug" -p 1
```
See [ADVANCED.md#git-worktrees](ADVANCED.md#git-worktrees) for details.
### What's the difference between SQLite corruption and ID collisions?
bd handles two distinct types of integrity issues:
**1. Logical Consistency (Collision Resolution)**
The hash/fingerprint/collision architecture prevents:
- **ID collisions**: Same ID assigned to different issues (e.g., from parallel workers or branch merges)
- **Wrong prefix bugs**: Issues created with incorrect prefix due to config mismatch
- **Merge conflicts**: Branch divergence creating conflicting JSONL content
**Solution**: `bd import --resolve-collisions` automatically remaps colliding IDs and updates all references.
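For example, after a branch merge that produced colliding IDs in the JSONL file:

```shell
# Remap colliding IDs and update all references in one pass
bd import -i .beads/issues.jsonl --resolve-collisions
```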
**2. Physical SQLite Corruption**
SQLite database file corruption can occur from:
- **Disk/hardware failures**: Power loss, disk errors, filesystem corruption
- **Concurrent writes**: Multiple processes writing to the same database file simultaneously
- **Container scenarios**: Shared database volumes with multiple containers
**Solution**: Reimport from JSONL (which survives in git history):
```bash
mv .beads/*.db .beads/*.db.backup
bd init
bd import -i .beads/issues.jsonl
```
**Key Difference**: Collision resolution fixes logical issues in the data. Physical corruption requires restoring from the JSONL source of truth.
**When to use in-memory mode (`--no-db`)**: For multi-process/container scenarios where SQLite's file locking isn't sufficient. The in-memory backend loads from JSONL at startup and writes back after each command, avoiding shared database state entirely.
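For instance, a container entrypoint could run every command in in-memory mode (flag placement here is a sketch; check `bd --help` for the exact syntax):

```shell
# Each invocation loads .beads/issues.jsonl, runs the command,
# and writes the result back — no shared .db file between containers
bd --no-db ready
bd --no-db create "Investigate cache miss rate" -p 2
```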
## Getting Help
### Where can I get more help?
- **Documentation**: [README.md](README.md), [QUICKSTART.md](QUICKSTART.md), [ADVANCED.md](ADVANCED.md)
- **Troubleshooting**: [TROUBLESHOOTING.md](TROUBLESHOOTING.md)
- **Examples**: [examples/](examples/)
- **GitHub Issues**: [Report bugs or request features](https://github.com/steveyegge/beads/issues)
- **GitHub Discussions**: [Ask questions](https://github.com/steveyegge/beads/discussions)
### How can I contribute?
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for:
- Code contribution guidelines
- How to run tests
- Development workflow
- Issue and PR templates
### Where's the roadmap?
The roadmap lives in bd itself! Run:
```bash
bd list --priority 0 --priority 1 --json
```
Or check the GitHub Issues for feature requests and planned improvements.

---

**docs/INSTALLING.md** (new file)
# Installing bd
Complete installation guide for all platforms.
## Quick Install (Recommended)
### Homebrew (macOS/Linux)
```bash
brew tap steveyegge/beads
brew install bd
```
**Why Homebrew?**
- ✅ Simple one-command install
- ✅ Automatic updates via `brew upgrade`
- ✅ No need to install Go
- ✅ Handles PATH setup automatically
### Quick Install Script (All Platforms)
```bash
curl -fsSL https://raw.githubusercontent.com/steveyegge/beads/main/scripts/install.sh | bash
```
The installer will:
- Detect your platform (macOS/Linux, amd64/arm64)
- Install via `go install` if Go is available
- Fall back to building from source if needed
- Guide you through PATH setup if necessary
## Platform-Specific Installation
### macOS
**Via Homebrew** (recommended):
```bash
brew tap steveyegge/beads
brew install bd
```
**Via go install**:
```bash
go install github.com/steveyegge/beads/cmd/bd@latest
```
**From source**:
```bash
git clone https://github.com/steveyegge/beads
cd beads
go build -o bd ./cmd/bd
sudo mv bd /usr/local/bin/
```
### Linux
**Via Homebrew** (works on Linux too):
```bash
brew tap steveyegge/beads
brew install bd
```
**Arch Linux** (AUR):
```bash
# Install from AUR
yay -S beads-git
# or
paru -S beads-git
```
Thanks to [@v4rgas](https://github.com/v4rgas) for maintaining the AUR package!
**Via go install**:
```bash
go install github.com/steveyegge/beads/cmd/bd@latest
```
**From source**:
```bash
git clone https://github.com/steveyegge/beads
cd beads
go build -o bd ./cmd/bd
sudo mv bd /usr/local/bin/
```
### Windows 11
Beads now ships with native Windows support—no MSYS or MinGW required.
**Prerequisites:**
- [Go 1.24+](https://go.dev/dl/) installed (add `%USERPROFILE%\go\bin` to your `PATH`)
- Git for Windows
**Via PowerShell script**:
```pwsh
irm https://raw.githubusercontent.com/steveyegge/beads/main/install.ps1 | iex
```
**Via go install**:
```pwsh
go install github.com/steveyegge/beads/cmd/bd@latest
```
**From source**:
```pwsh
git clone https://github.com/steveyegge/beads
cd beads
go build -o bd.exe ./cmd/bd
Move-Item bd.exe $env:USERPROFILE\AppData\Local\Microsoft\WindowsApps\
```
**Verify installation**:
```pwsh
bd version
```
**Windows notes:**
- The background daemon listens on a loopback TCP endpoint recorded in `.beads\bd.sock`
- Keep that metadata file intact
- Allow `bd.exe` loopback traffic through any host firewall
## IDE and Editor Integrations
### Claude Code Plugin
For Claude Code users, the beads plugin provides slash commands and MCP tools.
**Prerequisites:**
1. First, install the bd CLI (see above)
2. Then install the plugin:
```bash
# In Claude Code
/plugin marketplace add steveyegge/beads
/plugin install beads
# Restart Claude Code
```
The plugin includes:
- Slash commands: `/bd-ready`, `/bd-create`, `/bd-show`, `/bd-update`, `/bd-close`, etc.
- Full MCP server with all bd tools
- Task agent for autonomous execution
See [PLUGIN.md](PLUGIN.md) for complete plugin documentation.
### MCP Server (For Sourcegraph Amp, Claude Desktop, and other MCP clients)
If you're using an MCP-compatible tool other than Claude Code:
```bash
# Using uv (recommended)
uv tool install beads-mcp
# Or using pip
pip install beads-mcp
```
**Configuration for Claude Desktop** (macOS):
Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
"mcpServers": {
"beads": {
"command": "beads-mcp"
}
}
}
```
**Configuration for Sourcegraph Amp**:
Add to your MCP settings:
```json
{
"beads": {
"command": "beads-mcp",
"args": []
}
}
```
**What you get:**
- Full bd functionality exposed via MCP protocol
- Tools for creating, updating, listing, and closing issues
- Ready work detection and dependency management
- All without requiring Bash commands
See [integrations/beads-mcp/README.md](integrations/beads-mcp/README.md) for detailed MCP server documentation.
## Verifying Installation
After installing, verify bd is working:
```bash
bd version
bd help
```
## Troubleshooting Installation
### `bd: command not found`
bd is not in your PATH. Try one of the following:
```bash
# Check if installed
go list -f '{{.Target}}' github.com/steveyegge/beads/cmd/bd
# Add Go bin to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH="$PATH:$(go env GOPATH)/bin"
# Or reinstall
go install github.com/steveyegge/beads/cmd/bd@latest
```
### `zsh: killed bd` or crashes on macOS
Some users report crashes when running `bd init` or other commands on macOS. This is typically caused by CGO/SQLite compatibility issues.
**Workaround:**
```bash
# Build with CGO enabled
CGO_ENABLED=1 go install github.com/steveyegge/beads/cmd/bd@latest
# Or if building from source
git clone https://github.com/steveyegge/beads
cd beads
CGO_ENABLED=1 go build -o bd ./cmd/bd
sudo mv bd /usr/local/bin/
```
If you installed via Homebrew, this shouldn't be necessary as the formula already enables CGO. If you're still seeing crashes with the Homebrew version, please [file an issue](https://github.com/steveyegge/beads/issues).
## Next Steps
After installation:
1. **Initialize a project**: `cd your-project && bd init`
2. **Configure your agent**: Add bd instructions to `AGENTS.md` (see [README.md](README.md#quick-start))
3. **Learn the basics**: Run `bd quickstart` for an interactive tutorial
4. **Explore examples**: Check out the [examples/](examples/) directory
## Updating bd
### Homebrew
```bash
brew upgrade bd
```
### go install
```bash
go install github.com/steveyegge/beads/cmd/bd@latest
```
### From source
```bash
cd beads
git pull
go build -o bd ./cmd/bd
sudo mv bd /usr/local/bin/
```

---

**docs/LABELS.md** (new file)
# Labels in Beads
Labels provide flexible, multi-dimensional categorization for issues beyond the structured fields (status, priority, type). Use labels for cross-cutting concerns, technical metadata, and contextual tagging without schema changes.
## Design Philosophy
**When to use labels vs. structured fields:**
- **Structured fields** (status, priority, type) → Core workflow state
- Status: Where the issue is in the workflow (`open`, `in_progress`, `blocked`, `closed`)
- Priority: How urgent (0-4)
- Type: What kind of work (`bug`, `feature`, `task`, `epic`, `chore`)
- **Labels** → Everything else
- Technical metadata (`backend`, `frontend`, `api`, `database`)
- Domain/scope (`auth`, `payments`, `search`, `analytics`)
- Effort estimates (`small`, `medium`, `large`)
- Quality gates (`needs-review`, `needs-tests`, `breaking-change`)
- Team/ownership (`team-infra`, `team-product`)
- Release tracking (`v1.0`, `v2.0`, `backport-candidate`)
## Quick Start
```bash
# Add labels when creating issues
bd create "Fix auth bug" -t bug -p 1 -l auth,backend,urgent
# Add labels to existing issues
bd label add bd-42 security
bd label add bd-42 breaking-change
# List issue labels
bd label list bd-42
# Remove a label
bd label remove bd-42 urgent
# List all labels in use
bd label list-all
# Filter by labels (AND - must have ALL)
bd list --label backend,auth
# Filter by labels (OR - must have AT LEAST ONE)
bd list --label-any frontend,backend
# Combine filters
bd list --status open --priority 1 --label security
```
## Common Label Patterns
### 1. Technical Component Labels
Identify which part of the system:
```bash
backend
frontend
api
database
infrastructure
cli
ui
mobile
```
**Example:**
```bash
bd create "Add GraphQL endpoint" -t feature -p 2 -l backend,api
bd create "Update login form" -t task -p 2 -l frontend,auth,ui
```
### 2. Domain/Feature Area
Group by business domain:
```bash
auth
payments
search
analytics
billing
notifications
reporting
admin
```
**Example:**
```bash
bd list --label payments --status open # All open payment issues
bd list --label-any auth,security # Security-related work
```
### 3. Size/Effort Estimates
Quick effort indicators:
```bash
small # < 1 day
medium # 1-3 days
large # > 3 days
```
**Example:**
```bash
# Find small quick wins
bd ready --json | jq '.[] | select(.labels[] == "small")'
```
### 4. Quality Gates
Track what's needed before closing:
```bash
needs-review
needs-tests
needs-docs
breaking-change
```
**Example:**
```bash
bd label add bd-42 needs-review
bd list --label needs-review --status in_progress
```
### 5. Release Management
Track release targeting:
```bash
v1.0
v2.0
backport-candidate
release-blocker
```
**Example:**
```bash
bd list --label v1.0 --status open # What's left for v1.0?
bd label add bd-42 release-blocker
```
### 6. Team/Ownership
Indicate ownership or interest:
```bash
team-infra
team-product
team-mobile
needs-triage
help-wanted
```
**Example:**
```bash
bd list --assignee alice --label team-infra
bd create "Memory leak in cache" -t bug -p 1 -l team-infra,help-wanted
```
### 7. Special Markers
Process or workflow flags:
```bash
auto-generated # Created by automation
discovered-from # Found during other work (also a dep type)
technical-debt
good-first-issue
duplicate
wontfix
```
**Example:**
```bash
bd create "TODO: Refactor parser" -t chore -p 3 -l technical-debt,auto-generated
```
## Filtering by Labels
### AND Filtering (--label)
All specified labels must be present:
```bash
# Issues that are BOTH backend AND urgent
bd list --label backend,urgent
# Open bugs that need review AND tests
bd list --status open --type bug --label needs-review,needs-tests
```
### OR Filtering (--label-any)
At least one specified label must be present:
```bash
# Issues in frontend OR backend
bd list --label-any frontend,backend
# Security or auth related
bd list --label-any security,auth
```
### Combining AND/OR
Mix both filters for complex queries:
```bash
# Backend issues that are EITHER urgent OR a blocker
bd list --label backend --label-any urgent,release-blocker
# Frontend work that needs BOTH review and tests, but in any component
bd list --label needs-review,needs-tests --label-any frontend,ui,mobile
```
## Workflow Examples
### Triage Workflow
```bash
# Create untriaged issue
bd create "Crash on login" -t bug -p 1 -l needs-triage
# During triage, add context
bd label add bd-42 auth
bd label add bd-42 backend
bd label add bd-42 urgent
bd label remove bd-42 needs-triage
# Find untriaged issues
bd list --label needs-triage
```
### Quality Gate Workflow
```bash
# Start work
bd update bd-42 --status in_progress
# Mark quality requirements
bd label add bd-42 needs-tests
bd label add bd-42 needs-docs
# Before closing, verify
bd label list bd-42
# ... write tests and docs ...
bd label remove bd-42 needs-tests
bd label remove bd-42 needs-docs
# Close when gates satisfied
bd close bd-42
```
### Release Planning
```bash
# Tag issues for v1.0
bd label add bd-42 v1.0
bd label add bd-43 v1.0
bd label add bd-44 v1.0
# Track v1.0 progress
bd list --label v1.0 --status closed # Done
bd list --label v1.0 --status open # Remaining
bd stats # Overall progress
# Mark critical items
bd label add bd-45 v1.0
bd label add bd-45 release-blocker
```
### Component-Based Work Distribution
```bash
# Backend team picks up work
bd ready --json | jq '.[] | select(.labels[]? == "backend")'
# Frontend team finds small tasks
bd list --status open --label frontend,small
# Find help-wanted items for new contributors
bd list --label help-wanted,good-first-issue
```
## Label Management
### Listing Labels
```bash
# Labels on a specific issue
bd label list bd-42
# All labels in database with usage counts
bd label list-all
# JSON output for scripting
bd label list-all --json
```
Output:
```json
[
{"label": "auth", "count": 5},
{"label": "backend", "count": 12},
{"label": "frontend", "count": 8}
]
```
### Bulk Operations
Add labels in batch during creation:
```bash
bd create "Issue" -l label1,label2,label3
```
Script to add label to multiple issues:
```bash
# Add "needs-review" to all in_progress issues
bd list --status in_progress --json | jq -r '.[].id' | while read id; do
bd label add "$id" needs-review
done
```
Remove label from multiple issues:
```bash
# Remove "urgent" from closed issues
bd list --status closed --label urgent --json | jq -r '.[].id' | while read id; do
bd label remove "$id" urgent
done
```
## Integration with Git Workflow
Labels are automatically synced to `.beads/issues.jsonl` along with all issue data:
```bash
# Make changes
bd create "Fix bug" -l backend,urgent
bd label add bd-42 needs-review
# Auto-exported after 5 seconds (or use git hooks for immediate export)
git add .beads/issues.jsonl
git commit -m "Add backend issue"
# After git pull, labels are auto-imported
git pull
bd list --label backend # Fresh data including labels
```
## Markdown Import/Export
Labels are preserved when importing from markdown:
```markdown
# Fix Authentication Bug
### Type
bug
### Priority
1
### Labels
auth, backend, urgent, needs-review
### Description
Users can't log in after recent deployment.
```
```bash
bd create -f issue.md
# Creates issue with all four labels
```
## Best Practices
### 1. Establish Conventions Early
Document your team's label taxonomy:
```bash
# Add to project README or CONTRIBUTING.md
- Use lowercase, hyphen-separated (e.g., `good-first-issue`)
- Prefix team labels (e.g., `team-infra`, `team-product`)
- Use consistent size labels (`small`, `medium`, `large`)
```
### 2. Don't Overuse Labels
Labels are flexible, but too many can cause confusion. Prefer:
- 5-10 core technical labels (`backend`, `frontend`, `api`, etc.)
- 3-5 domain labels per project
- Standard process labels (`needs-review`, `needs-tests`)
- Release labels as needed
### 3. Clean Up Unused Labels
Periodically review:
```bash
bd label list-all
# Remove obsolete labels from issues
```
### 4. Use Labels for Filtering, Not Search
Labels are for categorization, not free-text search:
- ✅ Good: `backend`, `auth`, `urgent`
- ❌ Bad: `fix-the-login-bug`, `john-asked-for-this`
### 5. Combine with Dependencies
Labels + dependencies = powerful organization:
```bash
# Epic with labeled subtasks
bd create "Auth system rewrite" -t epic -p 1 -l auth,v2.0
bd create "Implement JWT" -t task -p 1 -l auth,backend --deps parent-child:bd-42
bd create "Update login UI" -t task -p 1 -l auth,frontend --deps parent-child:bd-42
# Find all v2.0 auth work
bd list --label auth,v2.0
```
## AI Agent Usage
Labels are especially useful for AI agents managing complex workflows:
```bash
# Auto-label discovered work
bd create "Found TODO in auth.go" -t task -p 2 -l auto-generated,technical-debt
# Filter for agent review
bd list --label needs-review --status in_progress --json
# Track automation metadata
bd label add bd-42 ai-generated
bd label add bd-42 needs-human-review
```
Example agent workflow:
```bash
# Agent discovers issues during refactor
bd create "Extract validateToken function" -t chore -p 2 \
-l technical-debt,backend,auth,small \
--deps discovered-from:bd-10
# Agent marks work for review
bd update bd-42 --status in_progress
# ... agent does work ...
bd label add bd-42 needs-review
bd label add bd-42 ai-generated
# Human reviews and approves
bd label remove bd-42 needs-review
bd label add bd-42 approved
bd close bd-42
```
## Advanced Patterns
### Component Matrix
Track issues across multiple dimensions:
```bash
# Backend + auth + high priority
bd list --label backend,auth --priority 1
# Any frontend work that's small
bd list --label-any frontend,ui --label small
# Critical issues across all components
bd list --priority 0 --label-any backend,frontend,infrastructure
```
### Sprint Planning
```bash
# Label issues for sprint
for id in bd-42 bd-43 bd-44 bd-45; do
bd label add "$id" sprint-12
done
# Track sprint progress
bd list --label sprint-12 --status closed # Velocity
bd list --label sprint-12 --status open # Remaining
bd stats | grep "In Progress" # Current WIP
```
### Technical Debt Tracking
```bash
# Mark debt
bd create "Refactor legacy parser" -t chore -p 3 -l technical-debt,large
# Find debt to tackle
bd list --label technical-debt --label small
bd list --label technical-debt --priority 1 # High-priority debt
```
### Breaking Change Coordination
```bash
# Identify breaking changes
bd label add bd-42 breaking-change
bd label add bd-42 v2.0
# Find all breaking changes for next major release
bd list --label breaking-change,v2.0
# Ensure they're documented
bd list --label breaking-change --label needs-docs
```
## Troubleshooting
### Labels Not Showing in List
Labels require explicit fetching: the human-readable output of `bd list` omits labels; they appear only in JSON output.
```bash
# See labels in JSON
bd list --json | jq '.[] | {id, labels}'
# See labels for specific issue
bd show bd-42 --json | jq '.labels'
bd label list bd-42
```
### Label Filtering Not Working
Check label names for exact matches (case-sensitive):
```bash
# These are different labels:
bd label add bd-42 Backend # Capital B
bd list --label backend # Won't match
# List all labels to see exact names
bd label list-all
```
### Syncing Labels with Git
Labels are included in `.beads/issues.jsonl` export. If labels seem out of sync:
```bash
# Force export
bd export -o .beads/issues.jsonl
# After pull, force import
bd import -i .beads/issues.jsonl
```
## See Also
- [README.md](README.md) - Main documentation
- [AGENTS.md](AGENTS.md) - AI agent integration and team workflow patterns
- [ADVANCED.md](ADVANCED.md) - JSONL format details

---

**docs/PLUGIN.md** (new file)
# Beads Claude Code Plugin
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple slash commands and MCP tools.
## What is Beads?
Beads (`bd`) is an issue tracker designed specifically for AI-supervised coding workflows. It helps AI agents and developers:
- Track work with a simple CLI
- Discover and link related tasks during development
- Maintain context across coding sessions
- Auto-sync issues via JSONL for git workflows
## Installation
### Prerequisites
1. Install beads CLI:
```bash
curl -sSL https://raw.githubusercontent.com/steveyegge/beads/main/install.sh | bash
```
2. Install Python and uv (for MCP server):
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
```
### Install Plugin
There are two ways to install the beads plugin:
#### Option 1: From GitHub (Recommended)
```bash
# In Claude Code
/plugin marketplace add steveyegge/beads
/plugin install beads
```
#### Option 2: Local Development
```bash
# Clone the repository
git clone https://github.com/steveyegge/beads
cd beads
# Add local marketplace
/plugin marketplace add .
# Install plugin
/plugin install beads
```
### Restart Claude Code
After installation, restart Claude Code to activate the MCP server.
## Quick Start
```bash
# Initialize beads in your project
/bd-init
# Create your first issue
/bd-create "Set up project structure" feature 1
# See what's ready to work on
/bd-ready
# Show full workflow guide
/bd-workflow
```
## Available Commands
### Version Management
- **`/bd-version`** - Check bd CLI, plugin, and MCP server versions
### Core Workflow Commands
- **`/bd-ready`** - Find tasks with no blockers, ready to work on
- **`/bd-create [title] [type] [priority]`** - Create a new issue interactively
- **`/bd-show [issue-id]`** - Show detailed information about an issue
- **`/bd-update [issue-id] [status]`** - Update issue status or other fields
- **`/bd-close [issue-id] [reason]`** - Close a completed issue
### Project Management
- **`/bd-init`** - Initialize beads in the current project
- **`/bd-workflow`** - Show the AI-supervised issue workflow guide
- **`/bd-stats`** - Show project statistics and progress
### Agents
- **`@task-agent`** - Autonomous agent that finds and completes ready tasks
## MCP Tools Available
The plugin includes a full-featured MCP server with these tools:
- **`init`** - Initialize bd in current directory
- **`create`** - Create new issue (bug, feature, task, epic, chore)
- **`list`** - List issues with filters (status, priority, type, assignee)
- **`ready`** - Find tasks with no blockers ready to work on
- **`show`** - Show detailed issue info including dependencies
- **`update`** - Update issue (status, priority, design, notes, etc)
- **`close`** - Close completed issue
- **`dep`** - Add dependency (blocks, related, parent-child, discovered-from)
- **`blocked`** - Get blocked issues
- **`stats`** - Get project statistics
### MCP Resources
- **`beads://quickstart`** - Interactive quickstart guide
## Workflow
The beads workflow is designed for AI agents but works great for humans too:
1. **Find ready work**: `/bd-ready`
2. **Claim your task**: `/bd-update <id> in_progress`
3. **Work on it**: Implement, test, document
4. **Discover new work**: Create issues for bugs/TODOs found during work
5. **Complete**: `/bd-close <id> "Done: <summary>"`
6. **Repeat**: Check for newly unblocked tasks
## Issue Types
- **`bug`** - Something broken that needs fixing
- **`feature`** - New functionality
- **`task`** - Work item (tests, docs, refactoring)
- **`epic`** - Large feature composed of multiple issues
- **`chore`** - Maintenance work (dependencies, tooling)
## Priority Levels
- **`0`** - Critical (security, data loss, broken builds)
- **`1`** - High (major features, important bugs)
- **`2`** - Medium (nice-to-have features, minor bugs)
- **`3`** - Low (polish, optimization)
- **`4`** - Backlog (future ideas)
## Dependency Types
- **`blocks`** - Hard dependency (issue X blocks issue Y from starting)
- **`related`** - Soft relationship (issues are connected)
- **`parent-child`** - Epic/subtask relationship
- **`discovered-from`** - Track issues discovered during work
Only `blocks` dependencies affect the ready work queue.
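As a sketch using the underlying bd CLI directly, only a `blocks` relationship changes what `ready` returns:

```shell
bd dep add bd-11 bd-10   # bd-11 is blocked by bd-10 (blocking dependency)
bd ready                 # bd-11 stays hidden until bd-10 is closed
```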
## Configuration
### Auto-Approval Configuration
By default, Claude Code asks for confirmation every time the beads MCP server wants to run a command. This is a security feature, but it can disrupt workflow during active development.
**Available Options:**
#### 1. Auto-Approve All Beads Tools (Recommended for Trusted Projects)
Add to your Claude Code `settings.json`:
```json
{
"enabledMcpjsonServers": ["beads"]
}
```
This auto-approves all beads commands without prompting.
#### 2. Auto-Approve Project MCP Servers
Add to your Claude Code `settings.json`:
```json
{
"enableAllProjectMcpServers": true
}
```
This auto-approves all MCP servers defined in your project's `.mcp.json` file. Useful when working across multiple projects with different MCP requirements.
#### 3. Manual Approval (Default)
No configuration needed. Claude Code will prompt for approval on each MCP tool invocation.
**Security Trade-offs:**
- **Manual approval (default)**: Maximum safety, but interrupts workflow frequently
- **Server-level auto-approval**: Convenient for trusted projects, but allows any beads operation without confirmation
- **Project-level auto-approval**: Good balance for multi-project workflows with project-specific trust levels
**Limitation:** Claude Code doesn't currently support per-tool approval granularity. You cannot auto-approve only read operations (like `bd ready`, `bd show`) while requiring confirmation for mutations (like `bd create`, `bd update`). It's all-or-nothing at the server level.
**Recommended Configuration:**
For active development on trusted projects where you're frequently using beads:
```json
{
"enabledMcpjsonServers": ["beads"]
}
```
For more information, see the [Claude Code settings documentation](https://docs.claude.com/en/docs/claude-code/settings).
### Environment Variables
The MCP server supports these environment variables:
- **`BEADS_PATH`** - Path to bd executable (default: `bd` in PATH)
- **`BEADS_DB`** - Path to beads database file (default: auto-discover from cwd)
- **`BEADS_ACTOR`** - Actor name for audit trail (default: `$USER`)
- **`BEADS_NO_AUTO_FLUSH`** - Disable automatic JSONL sync (default: `false`)
- **`BEADS_NO_AUTO_IMPORT`** - Disable automatic JSONL import (default: `false`)
To customize, edit your Claude Code MCP settings or the plugin configuration.
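For example, a shell profile or MCP launch wrapper might set (values are illustrative):

```shell
export BEADS_PATH="$HOME/go/bin/bd"   # explicit path to the bd executable
export BEADS_ACTOR="claude-mcp"       # actor name recorded in the audit trail
export BEADS_NO_AUTO_FLUSH=false      # keep automatic JSONL sync enabled
```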
## Examples
### Basic Task Management
```bash
# Create a high-priority bug
/bd-create "Fix authentication" bug 1
# See ready work
/bd-ready
# Start working on bd-10
/bd-update bd-10 in_progress
# Complete the task
/bd-close bd-10 "Fixed auth token validation"
```
### Discovering Work During Development
```bash
# Working on bd-10, found a related bug
/bd-create "Add rate limiting to API" feature 2
# Link it to current work (via MCP tool)
# Use `dep` tool: issue="bd-11", depends_on="bd-10", type="discovered-from"
# Close original task
/bd-close bd-10 "Done, discovered bd-11 for rate limiting"
```
### Using the Task Agent
```bash
# Let the agent find and complete ready work
@task-agent
# The agent will:
# 1. Find ready work with `ready` tool
# 2. Claim a task by updating status
# 3. Execute the work
# 4. Create issues for discoveries
# 5. Close when complete
# 6. Repeat
```
## Auto-Sync with Git
Beads automatically syncs issues to `.beads/issues.jsonl`:
- **Export**: After any CRUD operation (5-second debounce)
- **Import**: When JSONL is newer than DB (e.g., after `git pull`)
This enables seamless collaboration:
```bash
# Make changes
bd create "Add feature" -p 1
# Changes auto-export after 5 seconds
# Commit when ready
git add .beads/issues.jsonl
git commit -m "Add feature tracking"
# After pull, JSONL auto-imports
git pull
bd ready # Fresh data from git!
```
## Updating
The beads plugin has three components that may need updating:
### 1. Plugin Updates
Check for plugin updates:
```bash
/plugin update beads
```
Claude Code will pull the latest version from GitHub. After updating, **restart Claude Code** to apply MCP server changes.
### 2. bd CLI Updates
The plugin requires the `bd` CLI to be installed. Update it separately:
```bash
# Quick update
curl -fsSL https://raw.githubusercontent.com/steveyegge/beads/main/install.sh | bash
# Or with go
go install github.com/steveyegge/beads/cmd/bd@latest
```
### 3. Version Compatibility
The MCP server **automatically checks** bd CLI version on startup and will fail with a clear error if your version is too old.
Check version compatibility manually:
```bash
/bd-version
```
This will show:
- bd CLI version
- Plugin version
- MCP server status
- Compatibility warnings if versions mismatch
**Recommended update workflow:**
1. Check versions: `/bd-version`
2. Update bd CLI if needed (see above)
3. Update plugin: `/plugin update beads`
4. Restart Claude Code
5. Verify: `/bd-version`
### Version Numbering
Beads follows semantic versioning. The plugin version tracks the bd CLI version:
- Plugin 0.9.2 requires bd CLI >= 0.9.0 (checked automatically at startup)
- Major version bumps may introduce breaking changes
- Check CHANGELOG.md for release notes
## Troubleshooting
### Plugin not appearing
1. Check installation: `/plugin list`
2. Restart Claude Code
3. Verify `bd` is in PATH: `which bd`
4. Check uv is installed: `which uv`
### MCP server not connecting
1. Check MCP server list: `/mcp`
2. Look for "beads" server with plugin indicator
3. Restart Claude Code to reload MCP servers
4. Check logs for errors
### Commands not working
1. Make sure you're in a project with beads initialized: `/bd-init`
2. Check if database exists: `ls -la .beads/`
3. Try direct MCP tool access instead of slash commands
4. Check the beads CLI works: `bd --help`
### MCP tool errors
1. Verify `bd` executable location: `BEADS_PATH` env var
2. Check `bd` works in terminal: `bd stats`
3. Review MCP server logs in Claude Code
4. Try reinitializing: `/bd-init`
## Learn More
- **GitHub**: https://github.com/steveyegge/beads
- **Documentation**: See README.md in the repository
- **Examples**: Check `examples/` directory for integration patterns
- **MCP Server**: See `integrations/beads-mcp/` for server details
## Contributing
Found a bug or have a feature idea? Create an issue in the beads repository!
## License
MIT License - see LICENSE file in the repository.

---

**docs/QUICKSTART.md** (new file)
# Beads Quickstart
Get up and running with Beads in 2 minutes.
## Installation
```bash
cd ~/src/beads
go build -o bd ./cmd/bd
./bd --help
```
## Initialize
First time in a repository:
```bash
# Basic setup
bd init
# OSS contributor (fork workflow with separate planning repo)
bd init --contributor
# Team member (branch workflow for collaboration)
bd init --team
# Protected main branch (GitHub/GitLab)
bd init --branch beads-metadata
```
The wizard will:
- Create `.beads/` directory and database
- Import existing issues from git (if any)
- Prompt to install git hooks (recommended)
- Prompt to configure git merge driver (recommended)
- Auto-start daemon for sync
## Your First Issues
```bash
# Create a few issues
./bd create "Set up database" -p 1 -t task
./bd create "Create API" -p 2 -t feature
./bd create "Add authentication" -p 2 -t feature
# List them
./bd list
```
**Note:** Issue IDs are hash-based (e.g., `bd-a1b2`, `bd-f14c`) to prevent collisions when multiple agents/branches work concurrently.
## Hierarchical Issues (Epics)
For large features, use hierarchical IDs to organize work:
```bash
# Create epic (generates parent hash ID)
./bd create "Auth System" -t epic -p 1
# Returns: bd-a3f8e9
# Create child tasks (automatically get .1, .2, .3 suffixes)
./bd create "Design login UI" -p 1 # bd-a3f8e9.1
./bd create "Backend validation" -p 1 # bd-a3f8e9.2
./bd create "Integration tests" -p 1 # bd-a3f8e9.3
# View hierarchy
./bd dep tree bd-a3f8e9
```
Output:
```
🌲 Dependency tree for bd-a3f8e9:
→ bd-a3f8e9: Auth System [epic] [P1] (open)
→ bd-a3f8e9.1: Design login UI [P1] (open)
→ bd-a3f8e9.2: Backend validation [P1] (open)
→ bd-a3f8e9.3: Integration tests [P1] (open)
```
## Add Dependencies
```bash
# API depends on database
./bd dep add bd-2 bd-1
# Auth depends on API
./bd dep add bd-3 bd-2
# View the tree
./bd dep tree bd-3
```
Output:
```
🌲 Dependency tree for bd-3:
→ bd-3: Add authentication [P2] (open)
→ bd-2: Create API [P2] (open)
→ bd-1: Set up database [P1] (open)
```
## Find Ready Work
```bash
./bd ready
```
Output:
```
📋 Ready work (1 issues with no blockers):
1. [P1] bd-1: Set up database
```
Only bd-1 is ready because bd-2 and bd-3 are blocked!
## Work the Queue
```bash
# Start working on bd-1
./bd update bd-1 --status in_progress
# Complete it
./bd close bd-1 --reason "Database setup complete"
# Check ready work again
./bd ready
```
Now bd-2 is ready! 🎉
## Track Progress
```bash
# See blocked issues
./bd blocked
# View statistics
./bd stats
```
## Database Location
By default: `~/.beads/default.db`
You can use project-specific databases:
```bash
./bd --db ./my-project.db create "Task"
```
## Migrating Databases
After upgrading bd, use `bd migrate` to check for and migrate old database files:
```bash
# Inspect migration plan (AI agents)
./bd migrate --inspect --json
# Check schema and config
./bd info --schema --json
# Preview migration changes
./bd migrate --dry-run
# Migrate old databases to beads.db
./bd migrate
# Migrate and clean up old files
./bd migrate --cleanup --yes
```
**AI agents:** Use `--inspect` to analyze migration safety before running. The system verifies required config keys and data integrity invariants.
## Next Steps
- Add labels: `./bd create "Task" -l "backend,urgent"`
- Filter ready work: `./bd ready --priority 1`
- Search issues: `./bd list --status open`
- Detect cycles: `./bd dep cycles`
See [README.md](README.md) for full documentation.

docs/README_TESTING.md Normal file
@@ -0,0 +1,107 @@
# Testing Strategy
This project uses a two-tier testing approach to balance speed and thoroughness.
## Test Categories
### Fast Tests (Unit Tests)
- Run on every commit and PR
- Complete in ~2 seconds
- No build tags required
- Located throughout the codebase
```bash
go test -short ./...
```
### Integration Tests
- Marked with `//go:build integration` tag
- Include slow git operations and multi-clone scenarios
- Run nightly and before releases
- Located in:
- `beads_hash_multiclone_test.go` - Multi-clone convergence tests (~13s)
- `beads_integration_test.go` - End-to-end scenarios
- `beads_multidb_test.go` - Multi-database tests
```bash
go test -tags=integration ./...
```
## CI Strategy
**PR Checks** (fast, runs on every PR):
```bash
go test -short -race ./...
```
**Nightly** (comprehensive, runs overnight):
```bash
go test -tags=integration -race ./...
```
## Adding New Tests
### For Fast Tests
No special setup required. Just write the test normally.
### For Integration Tests
Add build tags at the top of the file:
```go
//go:build integration
// +build integration

package yourpackage_test
```
Mark slow operations with `testing.Short()` check:
```go
func TestSomethingSlow(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping integration test")
    }
    // ... slow test code
}
```
## Local Development
During development, run fast tests frequently:
```bash
go test -short ./...
```
Before committing, run full suite:
```bash
go test -tags=integration ./...
```
## Performance Optimization
### In-Memory Filesystems for Git Tests
Git-heavy integration tests use `testutil.TempDirInMemory()` to reduce I/O overhead:
```go
import (
    "testing"

    "github.com/steveyegge/beads/internal/testutil"
)

func TestWithGitOps(t *testing.T) {
    tmpDir := testutil.TempDirInMemory(t)
    // ... test code using tmpDir
}
```
**Platform behavior:**
- **Linux**: Uses `/dev/shm` (tmpfs ramdisk) if available - provides 20-30% speedup
- **macOS**: Uses standard `/tmp` (APFS is already fast)
- **Windows**: Uses standard temp directory
**For CI (GitHub Actions):**
Linux runners automatically have `/dev/shm` available, so no configuration needed.
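A quick local check of the same platform behavior (a sketch; the output depends on your machine):

```shell
# Check whether a tmpfs ramdisk is available for test scratch space
if [ -d /dev/shm ] && [ -w /dev/shm ]; then
  echo "using /dev/shm"
else
  echo "falling back to default temp dir"
fi
```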
## Performance Targets
- **Fast tests**: < 3 seconds total
- **Integration tests**: < 15 seconds total
- **Full suite**: < 18 seconds total

docs/TEST_OPTIMIZATION.md Normal file
@@ -0,0 +1,85 @@
# Test Suite Optimization - November 2025
## Problem
Test suite was timing out after 5+ minutes, making development workflow painful.
## Root Cause
Slow integration tests were running during normal `go test ./...`:
- **Daemon tests**: 7 files with git operations and time.Sleep calls
- **Multi-clone convergence tests**: 2 tests creating multiple git repos
- **Concurrent import test**: 30-second timeout for deadlock detection
## Solution
Tagged slow integration tests with `//go:build integration` so they're excluded from normal runs:
### Files moved to integration-only:
1. `cmd/bd/daemon_test.go` (862 lines, 15 tests)
2. `cmd/bd/daemon_sync_branch_test.go` (1235 lines, 11 tests)
3. `cmd/bd/daemon_autoimport_test.go` (408 lines, 2 tests)
4. `cmd/bd/daemon_watcher_test.go` (7 tests)
5. `cmd/bd/daemon_watcher_platform_test.go`
6. `cmd/bd/daemon_lock_test.go`
7. `cmd/bd/git_sync_test.go`
8. `beads_hash_multiclone_test.go` (already tagged)
9. `internal/importer/importer_integration_test.go` (concurrent test)
### Fix for build error:
- Added `const windowsOS = "windows"` to `test_helpers_test.go` (was in daemon_test.go)
## Results
### Before:
```
$ go test ./...
> 300 seconds (timeout)
```
### After:
```
$ go test ./...
real 0m1.668s ✅
user 0m2.075s
sys 0m1.586s
```
**99.4% faster!** From 5+ minutes to under 2 seconds.
## Running Integration Tests
### Normal development (fast):
```bash
go test ./...
```
### Full test suite including integration (slow):
```bash
go test -tags=integration ./...
```
### CI/CD:
```yaml
# Fast feedback on PRs
- run: go test ./...
# Full suite on merge to main
- run: go test -tags=integration ./...
```
## Benefits
1. ✅ Fast feedback loop for developers (<2s vs 5+ min)
2. ✅ Agents won't timeout on test runs
3. ✅ Integration tests still available when needed
4. ✅ CI can run both fast and comprehensive tests
5. ✅ No tests deleted - just separated by speed
## What Tests Remain in Fast Suite?
- All unit tests (~300+ tests)
- Quick integration tests (<100ms each)
- In-memory database tests
- Logic/validation tests
- Fast import/export tests
## Notes
- Integration tests still have `testing.Short()` checks for double safety
- The `integration` build tag is opt-in (must explicitly request with `-tags=integration`)
- All slow git/daemon operations are now integration-only

docs/TROUBLESHOOTING.md Normal file
@@ -0,0 +1,542 @@
# Troubleshooting bd
Common issues and solutions for bd users.
## Table of Contents
- [Installation Issues](#installation-issues)
- [Database Issues](#database-issues)
- [Git and Sync Issues](#git-and-sync-issues)
- [Ready Work and Dependencies](#ready-work-and-dependencies)
- [Performance Issues](#performance-issues)
- [Agent-Specific Issues](#agent-specific-issues)
- [Platform-Specific Issues](#platform-specific-issues)
## Installation Issues
### `bd: command not found`
bd is not in your PATH. Check the install location, fix your PATH, or reinstall:
```bash
# Check if installed
go list -f '{{.Target}}' github.com/steveyegge/beads/cmd/bd
# Add Go bin to PATH (add to ~/.bashrc or ~/.zshrc)
export PATH="$PATH:$(go env GOPATH)/bin"
# Or reinstall
go install github.com/steveyegge/beads/cmd/bd@latest
```
### Wrong version of bd running / Multiple bd binaries in PATH
If `bd version` shows an unexpected version (e.g., older than what you just installed), you likely have multiple `bd` binaries in your PATH.
**Diagnosis:**
```bash
# Check all bd binaries in PATH
which -a bd
# Example output showing conflict:
# /Users/you/go/bin/bd <- From go install (older)
# /opt/homebrew/bin/bd <- From Homebrew (newer)
```
**Solution:**
```bash
# Remove old go install version
rm ~/go/bin/bd
# Or remove mise-managed Go installs
rm ~/.local/share/mise/installs/go/*/bin/bd
# Verify you're using the correct version
which bd # Should show /opt/homebrew/bin/bd or your package manager path
bd version # Should show the expected version
```
**Why this happens:** If you previously installed bd via `go install`, the binary was placed in `~/go/bin/`. When you later install via Homebrew or another package manager, the old `~/go/bin/bd` may appear earlier in your PATH, causing the wrong version to run.
**Recommendation:** Choose one installation method (Homebrew recommended) and stick with it. Avoid mixing `go install` with package managers.
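To see why PATH ordering matters, here's a self-contained sketch using throwaway stub scripts (not real bd binaries) to show that the first match in PATH wins:

```shell
# Two stub "bd" scripts standing in for an old and a new install
mkdir -p /tmp/pathdemo/old /tmp/pathdemo/new
printf '#!/bin/sh\necho "bd 0.9.0"\n' > /tmp/pathdemo/old/bd
printf '#!/bin/sh\necho "bd 0.20.1"\n' > /tmp/pathdemo/new/bd
chmod +x /tmp/pathdemo/old/bd /tmp/pathdemo/new/bd

# The directory listed first shadows the one listed later
PATH="/tmp/pathdemo/old:/tmp/pathdemo/new" bd
# prints "bd 0.9.0" - the stale binary wins
```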
### `zsh: killed bd` or crashes on macOS
Some users report crashes when running `bd init` or other commands on macOS. This is typically caused by CGO/SQLite compatibility issues.
**Workaround:**
```bash
# Build with CGO enabled
CGO_ENABLED=1 go install github.com/steveyegge/beads/cmd/bd@latest
# Or if building from source
git clone https://github.com/steveyegge/beads
cd beads
CGO_ENABLED=1 go build -o bd ./cmd/bd
sudo mv bd /usr/local/bin/
```
If you installed via Homebrew, this shouldn't be necessary as the formula already enables CGO. If you're still seeing crashes with the Homebrew version, please [file an issue](https://github.com/steveyegge/beads/issues).
## Database Issues
### `database is locked`
Another bd process is accessing the database, or SQLite didn't close properly. Solutions:
```bash
# Find and kill hanging processes
ps aux | grep bd
kill <pid>
# Remove lock files (safe if no bd processes running)
rm .beads/*.db-journal .beads/*.db-wal .beads/*.db-shm
```
**Note**: bd uses a pure Go SQLite driver (`modernc.org/sqlite`) for better portability. Under extreme concurrent load (100+ simultaneous operations), you may see "database is locked" errors. This is a known limitation of the pure Go implementation and does not affect normal usage. For very high concurrency scenarios, consider using the CGO-enabled driver or PostgreSQL (planned for future release).
### `bd init` fails with "directory not empty"
`.beads/` already exists. Options:
```bash
# Use existing database
bd list # Should work if already initialized
# Or remove and reinitialize (DESTROYS DATA!)
rm -rf .beads/
bd init
```
### `failed to import: issue already exists`
You're trying to import issues that conflict with existing ones. Options:
```bash
# Skip existing issues (only import new ones)
bd import -i issues.jsonl --skip-existing
# Or clear database and re-import everything
rm .beads/*.db
bd import -i .beads/issues.jsonl
```
### Database corruption
**Important**: Distinguish between **logical consistency issues** (ID collisions, wrong prefixes) and **physical SQLite corruption**.
For **physical database corruption** (disk failures, power loss, filesystem errors):
```bash
# Check database integrity
sqlite3 .beads/*.db "PRAGMA integrity_check;"
# If corrupted, reimport from JSONL (source of truth in git)
mv .beads/*.db .beads/*.db.backup
bd init
bd import -i .beads/issues.jsonl
```
For **logical consistency issues** (ID collisions from branch merges, parallel workers):
```bash
# This is NOT corruption - use collision resolution instead
bd import -i .beads/issues.jsonl --resolve-collisions
```
See [FAQ](FAQ.md#whats-the-difference-between-sqlite-corruption-and-id-collisions) for the distinction.
### Multiple databases detected warning
If you see a warning about multiple `.beads` databases in the directory hierarchy:
```
╔══════════════════════════════════════════════════════════════════════════╗
║ WARNING: 2 beads databases detected in directory hierarchy ║
╠══════════════════════════════════════════════════════════════════════════╣
║ Multiple databases can cause confusion and database pollution. ║
║ ║
║ ▶ /path/to/project/.beads (15 issues) ║
║ /path/to/parent/.beads (32 issues) ║
║ ║
║ Currently using the closest database (▶). This is usually correct. ║
║ ║
║ RECOMMENDED: Consolidate or remove unused databases to avoid confusion. ║
╚══════════════════════════════════════════════════════════════════════════╝
```
This means bd found multiple `.beads` directories in your directory hierarchy. The `▶` marker shows which database is actively being used (usually the closest one to your current directory).
**Why this matters:**
- Can cause confusion about which database contains your work
- Easy to accidentally work in the wrong database
- May lead to duplicate tracking of the same work
**Solutions:**
1. **If you have nested projects** (intentional):
- This is fine! bd is designed to support this
- Just be aware which database you're using
- Set `BEADS_DIR` environment variable to point to your `.beads` directory if you want to override the default selection
- Or use `BEADS_DB` (deprecated) to point directly to the database file
2. **If you have accidental duplicates** (unintentional):
- Decide which database to keep
- Export issues from the unwanted database: `cd <unwanted-dir> && bd export -o backup.jsonl`
- Remove the unwanted `.beads` directory: `rm -rf <unwanted-dir>/.beads`
- Optionally import issues into the main database if needed
3. **Override database selection**:
```bash
# Temporarily use specific .beads directory (recommended)
BEADS_DIR=/path/to/.beads bd list
# Or add to shell config for permanent override
export BEADS_DIR=/path/to/.beads
# Legacy method (deprecated, points to database file directly)
BEADS_DB=/path/to/.beads/issues.db bd list
export BEADS_DB=/path/to/.beads/issues.db
```
**Note**: The warning only appears when bd detects multiple databases. As long as the `▶` marker points at the database you intend to use, the warning is informational and safe to ignore.
## Git and Sync Issues
### Git merge conflict in `issues.jsonl`
When both sides add issues, you'll get conflicts. Resolution:
1. Open `.beads/issues.jsonl`
2. Look for `<<<<<<< HEAD` markers
3. Most conflicts can be resolved by **keeping both sides**
4. Each line is independent unless IDs conflict
5. For same-ID conflicts, keep the newest (check `updated_at`)
Example resolution:
```bash
# After resolving conflicts manually
git add .beads/issues.jsonl
git commit
bd import -i .beads/issues.jsonl # Sync to SQLite
```
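When deciding which side of a same-ID conflict is newer, a small sketch like this (on hypothetical sample data, using plain `grep`/`sed` with no extra tools) pulls out the `updated_at` timestamps to compare:

```shell
# Sample of two conflicting versions of the same issue (hypothetical data)
cat > /tmp/conflicted.jsonl <<'EOF'
{"id":"bd-a1b2","title":"Old wording","updated_at":"2025-11-01T10:00:00Z"}
{"id":"bd-a1b2","title":"New wording","updated_at":"2025-11-04T09:30:00Z"}
EOF

# ISO 8601 timestamps sort lexically, so the newest appears last
grep '"id":"bd-a1b2"' /tmp/conflicted.jsonl \
  | sed -n 's/.*"updated_at":"\([^"]*\)".*/\1/p' \
  | sort \
  | tail -n 1
# prints the newest timestamp: 2025-11-04T09:30:00Z
```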
See [ADVANCED.md](ADVANCED.md) for detailed merge strategies.
### Git merge conflicts in JSONL
**With hash-based IDs (v0.20.1+), ID collisions don't occur.** Different issues get different hash IDs.
If git shows a conflict in `.beads/issues.jsonl`, it's because the same issue was modified on both branches:
```bash
# Preview what will be updated
bd import -i .beads/issues.jsonl --dry-run
# Resolve git conflict (keep newer version or manually merge)
git checkout --theirs .beads/issues.jsonl # Or --ours, or edit manually
# Import updates the database
bd import -i .beads/issues.jsonl
```
See [ADVANCED.md#handling-git-merge-conflicts](ADVANCED.md#handling-git-merge-conflicts) for details.
### Permission denied on git hooks
Git hooks need execute permissions:
```bash
chmod +x .git/hooks/pre-commit
chmod +x .git/hooks/post-merge
chmod +x .git/hooks/post-checkout
```
Or use the installer: `cd examples/git-hooks && ./install.sh`
### Auto-sync not working
Check if auto-sync is enabled:
```bash
# Check if daemon is running
ps aux | grep "bd daemon"
# Manually export/import
bd export -o .beads/issues.jsonl
bd import -i .beads/issues.jsonl
# Install git hooks for guaranteed sync
cd examples/git-hooks && ./install.sh
```
If you disabled auto-sync with `--no-auto-flush` or `--no-auto-import`, remove those flags or use `bd sync` manually.
## Ready Work and Dependencies
### `bd ready` shows nothing but I have open issues
Those issues probably have open blockers. Check:
```bash
# See blocked issues
bd blocked
# Show dependency tree (default max depth: 50)
bd dep tree <issue-id>
# Limit tree depth to prevent deep traversals
bd dep tree <issue-id> --max-depth 10
# Remove blocking dependency if needed
bd dep remove <from-id> <to-id>
```
Remember: Only `blocks` dependencies affect ready work.
### Circular dependency errors
bd prevents dependency cycles, which break ready work detection. To fix:
```bash
# Detect all cycles
bd dep cycles
# Remove the dependency causing the cycle
bd dep remove <from-id> <to-id>
# Or redesign your dependency structure
```
### Dependencies not showing up
Check the dependency type:
```bash
# Show full issue details including dependencies
bd show <issue-id>
# Visualize the dependency tree
bd dep tree <issue-id>
```
Remember: Different dependency types have different meanings:
- `blocks` - Hard blocker, affects ready work
- `related` - Soft relationship, doesn't block
- `parent-child` - Hierarchical (child depends on parent)
- `discovered-from` - Work discovered during another issue
## Performance Issues
### Export/import is slow
For large databases (10k+ issues):
```bash
# Export only open issues
bd export --format=jsonl --status=open -o .beads/issues.jsonl
# Or filter by priority
bd export --format=jsonl --priority=0 --priority=1 -o critical.jsonl
```
Consider splitting large projects into multiple databases.
### Commands are slow
Check database size and consider compaction:
```bash
# Check database stats
bd stats
# Preview compaction candidates
bd compact --dry-run --all
# Compact old closed issues
bd compact --days 90
```
### Large JSONL files
If `.beads/issues.jsonl` is very large:
```bash
# Check file size
ls -lh .beads/issues.jsonl
# Remove old closed issues
bd compact --days 90
# Or split into multiple projects
cd ~/project/component1 && bd init --prefix comp1
cd ~/project/component2 && bd init --prefix comp2
```
## Agent-Specific Issues
### Agent creates duplicate issues
Agents may not realize an issue already exists. Prevention strategies:
- Have agents search first: `bd list --json | grep "title"`
- Use labels to mark auto-created issues: `bd create "..." -l auto-generated`
- Review and deduplicate periodically: `bd list | sort`
- Use `bd merge` to consolidate duplicates: `bd merge bd-2 --into bd-1`
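As a rough sketch of periodic deduplication (assuming `bd list --json` emits one JSON object per issue with a `title` field; the sample data below is hypothetical, and plain `sed`/`uniq` stand in for a real JSON parser):

```shell
# Hypothetical capture of `bd list --json` output, one issue per line
cat > /tmp/issues.jsonl <<'EOF'
{"id":"bd-a1b2","title":"Fix login bug"}
{"id":"bd-f14c","title":"Fix login bug"}
{"id":"bd-9d3e","title":"Add API docs"}
EOF

# Print any title that appears more than once
sed -n 's/.*"title":"\([^"]*\)".*/\1/p' /tmp/issues.jsonl | sort | uniq -d
# prints: Fix login bug
```

Titles flagged here are candidates for `bd merge`.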
### Agent gets confused by complex dependencies
Simplify the dependency structure:
```bash
# Check for overly complex trees
bd dep tree <issue-id>
# Remove unnecessary dependencies
bd dep remove <from-id> <to-id>
# Use labels instead of dependencies for loose relationships
bd label add <issue-id> related-to-feature-X
```
### Agent can't find ready work
Check if issues are blocked:
```bash
# See what's blocked
bd blocked
# See what's actually ready
bd ready --json
# Check specific issue
bd show <issue-id>
bd dep tree <issue-id>
```
### MCP server not working
Check installation and configuration:
```bash
# Verify MCP server is installed
pip list | grep beads-mcp
# Check MCP configuration
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json
# Test CLI works
bd version
bd ready
# Check for daemon
ps aux | grep "bd daemon"
```
See [integrations/beads-mcp/README.md](integrations/beads-mcp/README.md) for MCP-specific troubleshooting.
### Claude Code sandbox mode
**Issue:** Claude Code's sandbox restricts network access to a single socket, conflicting with bd's daemon and git operations.
**Solution:** Use the `--sandbox` flag:
```bash
# Sandbox mode disables daemon and auto-sync
bd --sandbox ready
bd --sandbox create "Fix bug" -p 1
bd --sandbox update bd-42 --status in_progress
# Or set individual flags
bd --no-daemon --no-auto-flush --no-auto-import <command>
```
**What sandbox mode does:**
- Disables daemon (uses direct SQLite mode)
- Disables auto-export to JSONL
- Disables auto-import from JSONL
- Allows bd to work in network-restricted environments
**Note:** You'll need to manually sync when outside the sandbox:
```bash
# After leaving sandbox, sync manually
bd sync
```
**Related:** See [Claude Code sandboxing documentation](https://www.anthropic.com/engineering/claude-code-sandboxing) for more about sandbox restrictions.
## Platform-Specific Issues
### Windows: Path issues
```pwsh
# Check if bd.exe is in PATH
where.exe bd
# Add Go bin to PATH (permanently)
[Environment]::SetEnvironmentVariable(
"Path",
$env:Path + ";$env:USERPROFILE\go\bin",
[EnvironmentVariableTarget]::User
)
# Reload PATH in current session
$env:Path = [Environment]::GetEnvironmentVariable("Path", "User")
```
### Windows: Firewall blocking daemon
The daemon listens on loopback TCP. Allow `bd.exe` through Windows Firewall:
1. Open Windows Security → Firewall & network protection
2. Click "Allow an app through firewall"
3. Add `bd.exe` and enable for Private networks
4. Or disable firewall temporarily for testing
### macOS: Gatekeeper blocking execution
If macOS blocks bd:
```bash
# Remove quarantine attribute
xattr -d com.apple.quarantine /usr/local/bin/bd
# Or allow in System Preferences
# System Preferences → Security & Privacy → General → "Allow anyway"
```
### Linux: Permission denied
If you get permission errors:
```bash
# Make bd executable
chmod +x /usr/local/bin/bd
# Or install to user directory
mkdir -p ~/.local/bin
mv bd ~/.local/bin/
export PATH="$HOME/.local/bin:$PATH"
```
## Getting Help
If none of these solutions work:
1. **Check existing issues**: [GitHub Issues](https://github.com/steveyegge/beads/issues)
2. **Enable debug logging**: `bd --verbose <command>`
3. **File a bug report**: Include:
- bd version: `bd version`
- OS and architecture: `uname -a`
- Error message and full command
- Steps to reproduce
4. **Join discussions**: [GitHub Discussions](https://github.com/steveyegge/beads/discussions)
## Related Documentation
- **[README.md](README.md)** - Core features and quick start
- **[ADVANCED.md](ADVANCED.md)** - Advanced features, JSONL format, and merge strategies
- **[FAQ.md](FAQ.md)** - Frequently asked questions
- **[INSTALLING.md](INSTALLING.md)** - Installation guide