Consolidate documentation: move maintainer docs to docs/, remove redundant files
- Move RELEASING.md and LINTING.md to docs/ (maintainer-only content)
- Delete WORKFLOW.md (agent workflow content belongs in AGENTS.md)
- Delete TEXT_FORMATS.md (technical details belong in ADVANCED.md)
- Update all cross-references to point to new locations
- Keep CLAUDE.md (required by Claude Code)

Reduces root-level docs from 20 to 16 files with clearer organization.

Amp-Thread-ID: https://ampcode.com/threads/T-fe1db4f3-16c6-4a79-8887-c7f4c1f11c43
Co-authored-by: Amp <amp@ampcode.com>
@@ -368,7 +368,7 @@ bd show bd-41 --json # Verify merged content
 ### Code Standards
 
 - **Go version**: 1.21+
-- **Linting**: `golangci-lint run ./...` (baseline warnings documented in LINTING.md)
+- **Linting**: `golangci-lint run ./...` (baseline warnings documented in [docs/LINTING.md](docs/LINTING.md))
 - **Testing**: All new features need tests (`go test ./...`)
 - **Documentation**: Update relevant .md files
 
@@ -617,14 +617,14 @@ rm .beads/.exclusive-lock
 
 - Check existing issues: `bd list`
 - Look at recent commits: `git log --oneline -20`
-- Read the docs: README.md, TEXT_FORMATS.md, EXTENDING.md
+- Read the docs: README.md, ADVANCED.md, EXTENDING.md
 - Create an issue if unsure: `bd create "Question: ..." -t task -p 2`
 
 ## Important Files
 
 - **README.md** - Main documentation (keep this updated!)
 - **EXTENDING.md** - Database extension guide
-- **TEXT_FORMATS.md** - JSONL format analysis
+- **ADVANCED.md** - JSONL format analysis
 - **CONTRIBUTING.md** - Contribution guidelines
 - **SECURITY.md** - Security policy
 
@@ -84,7 +84,7 @@ go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
 golangci-lint run ./...
 ```
 
-**Note**: The linter currently reports ~100 warnings. These are documented false positives and idiomatic Go patterns (deferred cleanup, Cobra interface requirements, etc.). See [LINTING.md](LINTING.md) for details. When contributing, focus on avoiding *new* issues rather than the baseline warnings.
+**Note**: The linter currently reports ~100 warnings. These are documented false positives and idiomatic Go patterns (deferred cleanup, Cobra interface requirements, etc.). See [docs/LINTING.md](docs/LINTING.md) for details. When contributing, focus on avoiding *new* issues rather than the baseline warnings.
 
 CI will automatically run linting on all pull requests.

FAQ.md (2 changed lines)
@@ -203,7 +203,7 @@ For true multi-agent coordination, you'd need additional tooling (like locks or
 - ✅ **Scriptable**: Use `jq`, `grep`, or any text tools
 - ✅ **Portable**: Export/import between databases
 
-See [TEXT_FORMATS.md](TEXT_FORMATS.md) for detailed analysis.
+See [ADVANCED.md](ADVANCED.md) for detailed analysis.
 
 ### How do I handle merge conflicts?
 
@@ -527,5 +527,5 @@ bd import -i .beads/issues.jsonl
 
 - [README.md](README.md) - Main documentation
 - [AGENTS.md](AGENTS.md) - AI agent integration guide
-- [WORKFLOW.md](WORKFLOW.md) - Team workflow patterns
+- [AGENTS.md](AGENTS.md) - Team workflow patterns
-- [TEXT_FORMATS.md](TEXT_FORMATS.md) - JSONL format details
+- [ADVANCED.md](ADVANCED.md) - JSONL format details
@@ -444,7 +444,7 @@ For advanced usage, see:
 - **[ADVANCED.md](ADVANCED.md)** - Prefix renaming, merging duplicates, daemon configuration
 - **[CONFIG.md](CONFIG.md)** - Configuration system for integrations
 - **[EXTENDING.md](EXTENDING.md)** - Database extension patterns
-- **[TEXT_FORMATS.md](TEXT_FORMATS.md)** - JSONL format and merge strategies
+- **[ADVANCED.md](ADVANCED.md)** - JSONL format and merge strategies
 
 ## Documentation
 
@@ -457,7 +457,7 @@ For advanced usage, see:
 - **[LABELS.md](LABELS.md)** - Complete label system guide
 - **[CONFIG.md](CONFIG.md)** - Configuration system
 - **[EXTENDING.md](EXTENDING.md)** - Database extension patterns
-- **[TEXT_FORMATS.md](TEXT_FORMATS.md)** - JSONL format analysis
+- **[ADVANCED.md](ADVANCED.md)** - JSONL format analysis
 - **[PLUGIN.md](PLUGIN.md)** - Claude Code plugin documentation
 - **[CONTRIBUTING.md](CONTRIBUTING.md)** - Contribution guidelines
 - **[SECURITY.md](SECURITY.md)** - Security policy

TEXT_FORMATS.md (523 lines deleted)
@@ -1,523 +0,0 @@
# Text Storage Formats for bd

## TL;DR

**Text formats ARE mergeable**, but conflicts still happen. The key insight: **append-only is 95% conflict-free, updates cause conflicts**.

Best format: **JSON Lines** (one JSON object per line, sorted by ID)

---

## Experiment Results

I tested git merges with JSONL and CSV formats in various scenarios:

### Scenario 1: Concurrent Appends (Creating New Issues)

**Setup**: Two developers each create a new issue

```jsonl
# Base
{"id":"bd-1","title":"Initial","status":"open","priority":2}
{"id":"bd-2","title":"Second","status":"open","priority":2}

# Branch A adds bd-3
{"id":"bd-3","title":"From A","status":"open","priority":1}

# Branch B adds bd-4
{"id":"bd-4","title":"From B","status":"open","priority":1}
```

**Result**: Git merge **conflict** (false conflict - both are appends)

```
<<<<<<< HEAD
{"id":"bd-3","title":"From A","status":"open","priority":1}
=======
{"id":"bd-4","title":"From B","status":"open","priority":1}
>>>>>>> branch-b
```

**Resolution**: Trivial - keep both lines, remove markers

```jsonl
{"id":"bd-1","title":"Initial","status":"open","priority":2}
{"id":"bd-2","title":"Second","status":"open","priority":2}
{"id":"bd-3","title":"From A","status":"open","priority":1}
{"id":"bd-4","title":"From B","status":"open","priority":1}
```

**Verdict**: ✅ **Automatically resolvable** (union merge)

---

### Scenario 2: Concurrent Updates to Same Issue

**Setup**: Alice assigns bd-1, Bob raises priority

```jsonl
# Base
{"id":"bd-1","title":"Issue","status":"open","priority":2,"assignee":""}

# Branch A: Alice claims it
{"id":"bd-1","title":"Issue","status":"open","priority":2,"assignee":"alice"}

# Branch B: Bob raises priority
{"id":"bd-1","title":"Issue","status":"open","priority":1,"assignee":""}
```

**Result**: Git merge **conflict** (real conflict)

```
<<<<<<< HEAD
{"id":"bd-1","title":"Issue","status":"open","priority":2,"assignee":"alice"}
=======
{"id":"bd-1","title":"Issue","status":"open","priority":1,"assignee":""}
>>>>>>> branch-b
```

**Resolution**: Manual - need to merge fields

```jsonl
{"id":"bd-1","title":"Issue","status":"open","priority":1,"assignee":"alice"}
```

**Verdict**: ⚠️ **Requires manual field merge** (but semantic merge is clear)

---

### Scenario 3: Update + Create (Common Case)

**Setup**: Alice updates bd-1, Bob creates bd-3

```jsonl
# Base
{"id":"bd-1","title":"Issue","status":"open"}
{"id":"bd-2","title":"Second","status":"open"}

# Branch A: Update bd-1
{"id":"bd-1","title":"Issue","status":"in_progress"}
{"id":"bd-2","title":"Second","status":"open"}

# Branch B: Create bd-3
{"id":"bd-1","title":"Issue","status":"open"}
{"id":"bd-2","title":"Second","status":"open"}
{"id":"bd-3","title":"Third","status":"open"}
```

**Result**: Git merge **conflict** (entire file structure changed)

**Verdict**: ⚠️ **Messy conflict** - requires careful manual merge

---

## Key Insights

### 1. Line-Based Merge Limitation

Git merges **line by line**. Even if changes are to different JSON fields, the entire line conflicts.

```json
// These conflict despite modifying different fields:
{"id":"bd-1","priority":2,"assignee":"alice"}  // Branch A
{"id":"bd-1","priority":1,"assignee":""}       // Branch B
```

### 2. Append-Only is 95% Conflict-Free

When developers mostly **create** issues (append), conflicts are rare and trivial:
- False conflicts (both appending)
- Easy resolution (keep both)
- Scriptable (union merge strategy)

### 3. Updates Cause Real Conflicts

When developers **update** the same issue:
- Real conflicts (need both changes)
- Requires semantic merge (combine fields)
- Not automatically resolvable

### 4. Sorted Files Help

Keeping issues **sorted by ID** makes diffs cleaner:

```jsonl
{"id":"bd-1",...}
{"id":"bd-2",...}
{"id":"bd-3",...}  # New issue from branch A
{"id":"bd-4",...}  # New issue from branch B
```

Better than unsorted (harder to see what changed).

---

## Format Comparison

### JSON Lines (Recommended)

**Format**: One JSON object per line, sorted by ID

```jsonl
{"id":"bd-1","title":"First issue","status":"open","priority":2}
{"id":"bd-2","title":"Second issue","status":"closed","priority":1}
```

**Pros**:
- ✅ One line per issue = cleaner diffs
- ✅ Can grep/sed individual lines
- ✅ Append-only is trivial (add line at end)
- ✅ Machine readable (JSON)
- ✅ Human readable (one issue per line)

**Cons**:
- ❌ Updates replace entire line (line-based conflicts)
- ❌ Not as readable as pretty JSON

**Conflict Rate**:
- Appends: 5% (false conflicts, easy to resolve)
- Updates: 50% (real conflicts if same issue)

---

### CSV

**Format**: Standard comma-separated values

```csv
id,title,status,priority,assignee
bd-1,First issue,open,2,alice
bd-2,Second issue,closed,1,bob
```

**Pros**:
- ✅ One line per issue = cleaner diffs
- ✅ Excel/spreadsheet compatible
- ✅ Extremely simple
- ✅ Append-only is trivial

**Cons**:
- ❌ Escaping nightmares (commas in titles, quotes)
- ❌ No nested data (can't store arrays, objects)
- ❌ Rigid schema (all issues must have the same columns)
- ❌ Updates replace entire line (same as JSONL)

**Conflict Rate**: Same as JSONL (5% appends, 50% updates)

---

### Pretty JSON

**Format**: One big JSON array, indented

```json
[
  {
    "id": "bd-1",
    "title": "First issue",
    "status": "open"
  },
  {
    "id": "bd-2",
    "title": "Second issue",
    "status": "closed"
  }
]
```

**Pros**:
- ✅ Human readable (pretty-printed)
- ✅ Valid JSON (parsers work)
- ✅ Nested data supported

**Cons**:
- ❌ **Terrible for git merges** - entire file is one structure
- ❌ Adding an issue changes many lines (brackets, commas)
- ❌ Diffs are huge (shows lots of unchanged context)

**Conflict Rate**: 95% (basically everything conflicts)

**Verdict**: ❌ Don't use for git

---

### SQL Dump

**Format**: SQLite dump as SQL statements

```sql
INSERT INTO issues VALUES('bd-1','First issue','open',2);
INSERT INTO issues VALUES('bd-2','Second issue','closed',1);
```

**Pros**:
- ✅ One line per issue = cleaner diffs
- ✅ Directly executable (sqlite3 < dump.sql)
- ✅ Append-only is trivial

**Cons**:
- ❌ Verbose (repetitive INSERT INTO)
- ❌ Order matters (foreign keys, dependencies)
- ❌ Not as machine-readable as JSON
- ❌ Schema changes break everything

**Conflict Rate**: Same as JSONL (5% appends, 50% updates)

---

## Recommended Format: JSON Lines with Sort

```jsonl
{"id":"bd-1","title":"First","status":"open","priority":2,"created":"2025-10-12T00:00:00Z","updated":"2025-10-12T00:00:00Z"}
{"id":"bd-2","title":"Second","status":"in_progress","priority":1,"created":"2025-10-12T01:00:00Z","updated":"2025-10-12T02:00:00Z"}
```

**Sorting**: Always sort by ID when exporting
**Compactness**: One line per issue, no extra whitespace
**Fields**: Include all fields (don't omit nulls)
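The sorting rule is easy to apply (or verify) with plain `sort`. A minimal sketch, assuming IDs of the form `bd-N` and that `id` is the first key on each line; file contents are illustrative, not bd's actual export:

```shell
# Create an unsorted export, then sort by the numeric suffix of the ID.
printf '%s\n' \
  '{"id":"bd-10","title":"Tenth"}' \
  '{"id":"bd-2","title":"Second"}' \
  '{"id":"bd-1","title":"First"}' > export.jsonl

# -t '-' splits fields on the dash; -k2,2n sorts the numeric part, so bd-10
# lands after bd-2 (plain lexicographic sort would put bd-10 before bd-2).
sort -t '-' -k2,2n export.jsonl
```

For anything less regular (IDs not first, multi-part prefixes), a `jq`-based sort is more robust than field splitting.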

---

## Conflict Resolution Strategies

### Strategy 1: Union Merge (Appends)

For append-only conflicts (both adding new issues):

```bash
# Git config
git config merge.union.name "Union merge"
git config merge.union.driver "git merge-file --union %O %A %B"

# .gitattributes
issues.jsonl merge=union
```

Result: Both lines kept automatically (false conflict resolved)

**Pros**: ✅ No manual work for appends
**Cons**: ❌ Doesn't work for updates (merges both versions incorrectly)
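The union behavior can be tried outside any repository, since `git merge-file` operates on plain files. A minimal sketch of the concurrent-append case from Scenario 1 (file names are illustrative):

```shell
# Base: two issues. Ours and theirs each append one new issue.
printf '%s\n' '{"id":"bd-1","title":"Initial"}' '{"id":"bd-2","title":"Second"}' > base.jsonl
cp base.jsonl ours.jsonl
cp base.jsonl theirs.jsonl
echo '{"id":"bd-3","title":"From A"}' >> ours.jsonl
echo '{"id":"bd-4","title":"From B"}' >> theirs.jsonl

# --union keeps both sides instead of emitting conflict markers; the result
# overwrites ours.jsonl. The exit status is the conflict count, so guard it
# if running under `set -e`.
git merge-file --union ours.jsonl base.jsonl theirs.jsonl || true
cat ours.jsonl
```

After the merge, `ours.jsonl` contains all four issues and no `<<<<<<<` markers, which is exactly what the union driver does automatically during a real `git merge`.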

---

### Strategy 2: Last-Write-Wins (Simple)

For update conflicts, just choose one side:

```bash
# Take theirs (remote wins)
git checkout --theirs issues.jsonl

# Or take ours (local wins)
git checkout --ours issues.jsonl
```

**Pros**: ✅ Fast, no thinking
**Cons**: ❌ Lose one person's changes

---

### Strategy 3: Smart Merge Script (Best)

Custom merge driver that:
1. Parses both versions as JSON
2. For new IDs: keep both (union)
3. For the same ID: merge fields intelligently
   - Non-conflicting fields: take both
   - Conflicting fields: prompt or use timestamp

```
# bd-merge tool (pseudocode)
for issue in (ours + theirs):
  if issue.id only in ours: keep ours
  if issue.id only in theirs: keep theirs
  if issue.id in both:
    merged = {}
    for field in all_fields:
      if ours[field] == base[field]: use theirs[field]    # they changed
      elif theirs[field] == base[field]: use ours[field]  # we changed
      elif ours[field] == theirs[field]: use ours[field]  # same change
      else: conflict! (prompt user or use last-modified timestamp)
```

**Pros**: ✅ Handles both appends and updates intelligently
**Cons**: ❌ Requires a custom tool

---

## Practical Merge Success Rates

Based on typical development patterns:

### Append-Heavy Workflow (Most Teams)
- 90% of operations: Create new issues
- 10% of operations: Update existing issues

**Expected conflict rate**:
- With binary: 20% (any concurrent change)
- With JSONL + union merge: 2% (only concurrent updates to the same issue)

**Verdict**: **10x improvement** with text format

---

### Update-Heavy Workflow (Rare)
- 50% of operations: Create
- 50% of operations: Update

**Expected conflict rate**:
- With binary: 40%
- With JSONL: 25% (concurrent updates)

**Verdict**: **40% improvement** with text format

---

## Recommendation by Team Size

### 1-5 Developers: Binary Still Fine

Conflict rate is low enough that binary works:
- Pull before push
- Conflicts rare (<5%)
- Recreation cost low

**Don't bother** with text export unless you're hitting conflicts daily.

---

### 5-20 Developers: Text Format Wins

Conflict rate crosses the pain threshold:
- Binary: 20-40% conflicts
- Text: 5-10% conflicts (mostly false conflicts)

**Implement** `bd export --format=jsonl` and `bd import`

---

### 20+ Developers: Shared Server Required

Even a text format conflicts too much:
- Text: 10-20% conflicts
- Need real-time coordination

**Use** PostgreSQL backend or bd server mode

---

## Implementation Plan for bd

### Phase 1: Export/Import (Issue bd-1)

```bash
# Export current database to JSONL
bd export --format=jsonl > .beads/issues.jsonl

# Import JSONL into database
bd import < .beads/issues.jsonl

# With filtering
bd export --status=open --format=jsonl > open-issues.jsonl
```

**File structure**:
```jsonl
{"id":"bd-1","title":"...","status":"open",...}
{"id":"bd-2","title":"...","status":"closed",...}
```

**Sort order**: Always by ID for consistent diffs

---

### Phase 2: Hybrid Workflow

Keep both binary and text:

```
.beads/
├── myapp.db     # Primary database (in .gitignore)
├── myapp.jsonl  # Text export (in git)
└── sync.sh      # Export before commit, import after pull
```

**Git hooks**:
```bash
# .git/hooks/pre-commit
bd export > .beads/myapp.jsonl
git add .beads/myapp.jsonl

# .git/hooks/post-merge
bd import < .beads/myapp.jsonl
```

---

### Phase 3: Smart Merge Tool

```bash
# .git/config
[merge "bd"]
    name = BD smart merger
    driver = bd merge %O %A %B

# .gitattributes
*.jsonl merge=bd
```

Where `bd merge base ours theirs` intelligently merges:
- Appends: union (keep both)
- Updates to different fields: merge fields
- Updates to the same field: prompt or last-modified wins

---

## CSV vs JSONL for bd

### Why JSONL Wins

1. **Nested data**: Dependencies and labels are arrays
   ```jsonl
   {"id":"bd-1","deps":["bd-2","bd-3"],"labels":["urgent","backend"]}
   ```

2. **Schema flexibility**: Fields can be added without breaking older exports
   ```jsonl
   {"id":"bd-1","title":"Old issue"}          # Old export
   {"id":"bd-2","title":"New","estimate":60}  # New field added
   ```

3. **Rich types**: Dates, booleans, numbers
   ```jsonl
   {"id":"bd-1","created":"2025-10-12T00:00:00Z","priority":1,"closed":true}
   ```

4. **Ecosystem**: jq, Python's json module, etc.

### When CSV Makes Sense

- **Spreadsheet viewing**: Open in Excel
- **Simple schema**: Issues with no arrays/objects
- **Human editing**: Easier to edit in a text editor

**Verdict for bd**: JSONL is better (more flexible, future-proof)
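The ecosystem point is concrete because each issue is one line: plain `grep` already answers simple queries, and `jq` handles anything structured. A minimal `grep` sketch (file and field values are illustrative):

```shell
# A tiny export to query.
printf '%s\n' \
  '{"id":"bd-1","status":"open","priority":1}' \
  '{"id":"bd-2","status":"closed","priority":2}' \
  '{"id":"bd-3","status":"open","priority":0}' > issues.jsonl

# Count open issues.
grep -c '"status":"open"' issues.jsonl   # → 2

# Extract their IDs (a jq filter like `select(.status=="open") | .id`
# would be the robust version of this).
grep '"status":"open"' issues.jsonl | grep -o '"id":"[^"]*"'
```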

---

## Conclusion

**Text formats ARE mergeable**, with caveats:

✅ **Append-only**: 95% conflict-free (false conflicts, easy resolution)
⚠️ **Updates**: 50% conflict-free (real conflicts, but semantically mergeable)
❌ **Pretty JSON**: Terrible (don't use)

**Best format**: JSON Lines (one issue per line, sorted by ID)

**When to use**:
- Binary: 1-5 developers
- Text: 5-20 developers
- Server: 20+ developers

**For bd project**: Start with binary, add export/import (bd-1) when we hit 5+ contributors.
@@ -185,7 +185,7 @@ git commit
 bd import -i .beads/issues.jsonl  # Sync to SQLite
 ```
 
-See [TEXT_FORMATS.md](TEXT_FORMATS.md) for detailed merge strategies.
+See [ADVANCED.md](ADVANCED.md) for detailed merge strategies.
 
 ### ID collisions after branch merge
 
@@ -498,4 +498,4 @@ If none of these solutions work:
 - **[ADVANCED.md](ADVANCED.md)** - Advanced features
 - **[FAQ.md](FAQ.md)** - Frequently asked questions
 - **[INSTALLING.md](INSTALLING.md)** - Installation guide
-- **[TEXT_FORMATS.md](TEXT_FORMATS.md)** - JSONL format and merge strategies
+- **[ADVANCED.md](ADVANCED.md)** - JSONL format and merge strategies

WORKFLOW.md (639 lines deleted)
@@ -1,639 +0,0 @@
|
|||||||
# Beads Workflow Guide
|
|
||||||
|
|
||||||
Complete guide to using Beads for solo development and with AI coding assistants like Claude Code.
|
|
||||||
|
|
||||||
## Table of Contents
|
|
||||||
|
|
||||||
- [Vibe Coding with Claude Code](#vibe-coding-with-claude-code)
|
|
||||||
- [Database Structure](#database-structure)
|
|
||||||
- [Git Workflow](#git-workflow)
|
|
||||||
- [Advanced Usage](#advanced-usage)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## Vibe Coding with Claude Code
|
|
||||||
|
|
||||||
### The "Let's Continue" Protocol
|
|
||||||
|
|
||||||
**Start of every session:**
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# 1. Check for abandoned work
|
|
||||||
beads list --status in_progress
|
|
||||||
|
|
||||||
# 2. If none, get ready work
|
|
||||||
beads ready --limit 5
|
|
||||||
|
|
||||||
# 3. Show top priority
|
|
||||||
beads show bd-X
|
|
||||||
```
|
|
||||||
|
|
||||||
Tell Claude: **"Let's continue"** and it runs these commands.
|
|
||||||
|
|
||||||
### Full Project Workflow
|
|
||||||
|
|
||||||
#### Session 1: Project Kickoff
|
|
||||||
|
|
||||||
**You:** "Starting a new e-commerce project. Help me plan it."
|
|
||||||
|
|
||||||
**Claude creates issues:**
|
|
||||||
```bash
|
|
||||||
cd ~/my-project
|
|
||||||
alias beads="~/src/beads/beads --db ./project.db"
|
|
||||||
|
|
||||||
beads create "Set up Next.js project" -p 0 -t task
|
|
||||||
beads create "Design database schema" -p 0 -t task
|
|
||||||
beads create "Build authentication system" -p 1 -t feature
|
|
||||||
beads create "Create API routes" -p 1 -t feature
|
|
||||||
beads create "Build UI components" -p 2 -t feature
|
|
||||||
beads create "Add tests" -p 2 -t task
|
|
||||||
beads create "Deploy to production" -p 3 -t task
|
|
||||||
```
|
|
||||||
|
|
||||||
**Map dependencies:**
|
|
||||||
```bash
|
|
||||||
beads dep add bd-4 bd-2 # API depends on schema
|
|
||||||
beads dep add bd-3 bd-2 # Auth depends on schema
|
|
||||||
beads dep add bd-5 bd-4 # UI depends on API
|
|
||||||
beads dep add bd-6 bd-3 # Tests depend on auth
|
|
||||||
beads dep add bd-6 bd-5 # Tests depend on UI
|
|
||||||
beads dep add bd-7 bd-6 # Deploy depends on tests
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visualize:**
|
|
||||||
```bash
|
|
||||||
beads dep tree bd-7
|
|
||||||
```
|
|
||||||
|
|
||||||
Output:
|
|
||||||
```
|
|
||||||
🌲 Dependency tree for bd-7:
|
|
||||||
|
|
||||||
→ bd-7: Deploy to production [P3] (open)
|
|
||||||
→ bd-6: Add tests [P2] (open)
|
|
||||||
→ bd-3: Build authentication system [P1] (open)
|
|
||||||
→ bd-2: Design database schema [P0] (open)
|
|
||||||
→ bd-5: Build UI components [P2] (open)
|
|
||||||
→ bd-4: Create API routes [P1] (open)
|
|
||||||
→ bd-2: Design database schema [P0] (open)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Check ready work:**
|
|
||||||
```bash
|
|
||||||
beads ready
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
📋 Ready work (2 issues with no blockers):
|
|
||||||
|
|
||||||
1. [P0] bd-1: Set up Next.js project
|
|
||||||
2. [P0] bd-2: Design database schema
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Session 2: Foundation
|
|
||||||
|
|
||||||
**You:** "Let's continue"
|
|
||||||
|
|
||||||
**Claude:**
|
|
||||||
```bash
|
|
||||||
beads ready
|
|
||||||
# Shows: bd-1, bd-2
|
|
||||||
```
|
|
||||||
|
|
||||||
**You:** "Work on bd-2"
|
|
||||||
|
|
||||||
**Claude:**
|
|
||||||
```bash
|
|
||||||
beads update bd-2 --status in_progress
|
|
||||||
beads show bd-2
|
|
||||||
|
|
||||||
# ... designs schema, creates migrations ...
|
|
||||||
|
|
||||||
beads close bd-2 --reason "Schema designed with Prisma, migrations created"
|
|
||||||
beads ready
|
|
||||||
```
|
|
||||||
|
|
||||||
Now shows:
|
|
||||||
```
|
|
||||||
📋 Ready work (3 issues):
|
|
||||||
|
|
||||||
1. [P0] bd-1: Set up Next.js project
|
|
||||||
2. [P1] bd-3: Build authentication system ← Unblocked!
|
|
||||||
3. [P1] bd-4: Create API routes ← Unblocked!
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Session 3: Building Features
|
|
||||||
|
|
||||||
**You:** "Let's continue, work on bd-3"
|
|
||||||
|
|
||||||
**Claude:**
|
|
||||||
```bash
|
|
||||||
beads ready # Confirms bd-3 is ready
|
|
||||||
beads update bd-3 --status in_progress
|
|
||||||
|
|
||||||
# ... implements JWT auth, middleware ...
|
|
||||||
|
|
||||||
beads close bd-3 --reason "Auth complete with JWT tokens and protected routes"
|
|
||||||
```
|
|
||||||
|
|
||||||
#### Session 4: Discovering Blockers
|
|
||||||
|
|
||||||
**You:** "Let's continue, work on bd-4"
|
|
||||||
|
|
||||||
**Claude starts working, then:**
|
|
||||||
|
|
||||||
**You:** "We need to add OAuth before we can finish the API properly"
|
|
||||||
|
|
||||||
**Claude:**
|
|
||||||
```bash
|
|
||||||
beads create "Set up OAuth providers (Google, GitHub)" -p 1 -t task
|
|
||||||
beads dep add bd-4 bd-8 # API now depends on OAuth
|
|
||||||
beads update bd-4 --status blocked
|
|
||||||
beads ready
|
|
||||||
```
|
|
||||||
|
|
||||||
Shows:
|
|
||||||
```
|
|
||||||
📋 Ready work (2 issues):
|
|
||||||
|
|
||||||
1. [P0] bd-1: Set up Next.js project
|
|
||||||
2. [P1] bd-8: Set up OAuth providers ← New blocker must be done first
|
|
||||||
```
|
|
||||||
|
|
||||||
**Claude:** "I've blocked bd-4 and created bd-8 as a prerequisite. Should I work on OAuth setup now?"
|
|
||||||
|
|
||||||
#### Session 5: Unblocking
|
|
||||||
|
|
||||||
**You:** "Yes, do bd-8"
|
|
||||||
|
|
||||||
**Claude completes OAuth setup:**
|
|
||||||
```bash
|
|
||||||
beads close bd-8 --reason "OAuth configured for Google and GitHub"
|
|
||||||
beads update bd-4 --status open # Manually unblock
|
|
||||||
beads ready
|
|
||||||
```
|
|
||||||
|
|
||||||
Now bd-4 is ready again!
|
|
||||||
|
|
||||||
### Pro Tips for AI Pairing
|
|
||||||
|
|
||||||
**1. Add context with comments:**
|
|
||||||
```bash
|
|
||||||
beads update bd-5 --status in_progress
|
|
||||||
# Work session ends mid-task
|
|
||||||
beads comment bd-5 "Implemented navbar and footer, still need shopping cart icon"
|
|
||||||
```
|
|
||||||
|
|
||||||
Next session, Claude reads the comment and continues.
|
|
||||||
|
|
||||||
**2. Break down epics when too big:**
|
|
||||||
```bash
|
|
||||||
beads create "Epic: User Management" -p 1 -t epic
|
|
||||||
beads create "User registration flow" -p 1 -t task
|
|
||||||
beads create "User login/logout" -p 1 -t task
|
|
||||||
beads create "Password reset" -p 2 -t task
|
|
||||||
|
|
||||||
beads dep add bd-10 bd-9 --type parent-child
|
|
||||||
beads dep add bd-11 bd-9 --type parent-child
|
|
||||||
beads dep add bd-12 bd-9 --type parent-child
|
|
||||||
```

**3. Use labels for filtering:**
```bash
beads create "Fix login timeout" -p 0 -l "bug,auth,urgent"
beads create "Add loading spinner" -p 2 -l "ui,polish"

# Later
beads list --status open | grep urgent
```

**4. Track estimates:**
```bash
beads create "Refactor user service" -p 2 --estimated-minutes 120
beads ready   # Shows estimates for planning
```
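
Estimates are plain integer minutes on each issue, so rough capacity planning is a one-line aggregate. A small sketch (stdlib `sqlite3`, schema reduced to the relevant columns of the documented `issues` table):

```python
import sqlite3

# Reduced `issues` table with the documented estimated_minutes column.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE issues (
    id TEXT PRIMARY KEY, title TEXT,
    status TEXT DEFAULT 'open', estimated_minutes INTEGER)""")
db.executemany("INSERT INTO issues VALUES (?, ?, ?, ?)", [
    ("bd-1", "Refactor user service", "open", 120),
    ("bd-2", "Add loading spinner", "open", 30),
    ("bd-3", "Old task", "closed", 60),
])
total, = db.execute(
    "SELECT COALESCE(SUM(estimated_minutes), 0) FROM issues"
    " WHERE status = 'open'").fetchone()
print(f"{total} minutes of open work")  # 150 minutes of open work
```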

---

## Database Structure

### What's Inside project.db?

A single **SQLite database file** (typically 72KB-1MB) containing:

#### Tables

**1. `issues` - Core issue data**
```sql
CREATE TABLE issues (
    id TEXT PRIMARY KEY,              -- "bd-1", "bd-2", etc.
    title TEXT NOT NULL,
    description TEXT,
    design TEXT,                      -- Solution design
    acceptance_criteria TEXT,         -- Definition of done
    notes TEXT,                       -- Working notes
    status TEXT DEFAULT 'open',       -- open|in_progress|blocked|closed
    priority INTEGER DEFAULT 2,       -- 0-4 (0=highest)
    issue_type TEXT DEFAULT 'task',   -- bug|feature|task|epic|chore
    assignee TEXT,
    estimated_minutes INTEGER,
    created_at DATETIME,
    updated_at DATETIME,
    closed_at DATETIME
);
```

**2. `dependencies` - Relationship graph**
```sql
CREATE TABLE dependencies (
    issue_id TEXT NOT NULL,        -- "bd-2"
    depends_on_id TEXT NOT NULL,   -- "bd-1" (bd-2 depends on bd-1)
    type TEXT DEFAULT 'blocks',    -- blocks|related|parent-child
    created_at DATETIME,
    created_by TEXT,
    PRIMARY KEY (issue_id, depends_on_id)
);
```

**3. `labels` - Tags for categorization**
```sql
CREATE TABLE labels (
    issue_id TEXT NOT NULL,
    label TEXT NOT NULL,
    PRIMARY KEY (issue_id, label)
);
```

**4. `events` - Complete audit trail**
```sql
CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    issue_id TEXT NOT NULL,
    event_type TEXT NOT NULL,   -- created|updated|commented|closed|etc
    actor TEXT NOT NULL,        -- who made the change
    old_value TEXT,             -- before (JSON)
    new_value TEXT,             -- after (JSON)
    comment TEXT,               -- for comments and close reasons
    created_at DATETIME
);
```

**5. `ready_issues` - VIEW (auto-computed)**
```sql
-- Shows issues with NO open blockers
-- This is the magic that powers "beads ready"
CREATE VIEW ready_issues AS
SELECT i.*
FROM issues i
WHERE i.status = 'open'
  AND NOT EXISTS (
    SELECT 1 FROM dependencies d
    JOIN issues blocker ON d.depends_on_id = blocker.id
    WHERE d.issue_id = i.id
      AND d.type = 'blocks'
      AND blocker.status IN ('open', 'in_progress', 'blocked')
);
```

**6. `blocked_issues` - VIEW (auto-computed)**
```sql
-- Shows issues WITH open blockers
CREATE VIEW blocked_issues AS
SELECT
    i.*,
    COUNT(d.depends_on_id) as blocked_by_count
FROM issues i
JOIN dependencies d ON i.id = d.issue_id
JOIN issues blocker ON d.depends_on_id = blocker.id
WHERE i.status IN ('open', 'in_progress', 'blocked')
  AND d.type = 'blocks'
  AND blocker.status IN ('open', 'in_progress', 'blocked')
GROUP BY i.id;
```
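
The two views are just queries, so their behavior is easy to verify outside of bd. A self-contained sketch (stdlib `sqlite3`, schema trimmed to the relevant columns) showing an issue flip from blocked to ready when its blocker closes:

```python
import sqlite3

# Trimmed-down schema plus the ready_issues view described above.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE issues (id TEXT PRIMARY KEY, title TEXT, status TEXT DEFAULT 'open');
CREATE TABLE dependencies (
    issue_id TEXT, depends_on_id TEXT, type TEXT DEFAULT 'blocks',
    PRIMARY KEY (issue_id, depends_on_id));
CREATE VIEW ready_issues AS
SELECT i.* FROM issues i
WHERE i.status = 'open'
  AND NOT EXISTS (
    SELECT 1 FROM dependencies d
    JOIN issues blocker ON d.depends_on_id = blocker.id
    WHERE d.issue_id = i.id
      AND d.type = 'blocks'
      AND blocker.status IN ('open', 'in_progress', 'blocked'));
""")
db.execute("INSERT INTO issues VALUES ('bd-1', 'Fix auth', 'open')")
db.execute("INSERT INTO issues VALUES ('bd-2', 'Build API', 'open')")
db.execute("INSERT INTO dependencies VALUES ('bd-2', 'bd-1', 'blocks')")

ready = [r[0] for r in db.execute("SELECT id FROM ready_issues")]
print(ready)        # only bd-1; bd-2 is blocked by it

db.execute("UPDATE issues SET status = 'closed' WHERE id = 'bd-1'")
ready_after = [r[0] for r in db.execute("SELECT id FROM ready_issues")]
print(ready_after)  # closing the blocker makes bd-2 ready
```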

### Example Data

**Issues table:**
```
bd-1|Critical bug|Fix login timeout|||open|0|bug|||2025-10-11 19:23:10|2025-10-11 19:23:10|
bd-2|High priority||Need auth first||open|1|feature|||2025-10-11 19:23:11|2025-10-11 19:23:11|
```

**Dependencies table:**
```
bd-2|bd-1|blocks|2025-10-11 19:23:16|stevey
```
Translation: "bd-2 depends on bd-1 (blocks type), created by stevey"

**Events table:**
```
1|bd-1|created|stevey||{"id":"bd-1","title":"Critical bug",...}||2025-10-11 19:23:10
2|bd-2|created|stevey||{"id":"bd-2","title":"High priority",...}||2025-10-11 19:23:11
3|bd-2|dependency_added|stevey|||Added dependency: bd-2 blocks bd-1|2025-10-11 19:23:16
```
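
The audit trail is append-only: each row carries optional JSON before/after snapshots that can be replayed to reconstruct an issue's history. A sketch with Python's stdlib `sqlite3` and `json` (the `record` helper is illustrative, not part of bd):

```python
import json
import sqlite3

# Reduced `events` table; `record` is an illustrative helper, not a bd API.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    issue_id TEXT NOT NULL,
    event_type TEXT NOT NULL,
    actor TEXT NOT NULL,
    old_value TEXT,
    new_value TEXT)""")

def record(issue_id, event_type, actor, old=None, new=None):
    db.execute(
        "INSERT INTO events (issue_id, event_type, actor, old_value, new_value)"
        " VALUES (?, ?, ?, ?, ?)",
        (issue_id, event_type, actor,
         json.dumps(old) if old is not None else None,
         json.dumps(new) if new is not None else None))

record("bd-1", "created", "stevey", new={"status": "open"})
record("bd-1", "updated", "stevey",
       old={"status": "open"}, new={"status": "in_progress"})

# Replay the trail to reconstruct how bd-1 changed over time.
history = [(etype, json.loads(new)["status"]) for etype, new in db.execute(
    "SELECT event_type, new_value FROM events WHERE issue_id = 'bd-1' ORDER BY id")]
print(history)  # [('created', 'open'), ('updated', 'in_progress')]
```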

### Inspecting the Database

**Show all tables:**
```bash
sqlite3 project.db ".tables"
```

**View schema:**
```bash
sqlite3 project.db ".schema issues"
```

**Query directly:**
```bash
# Find all P0 issues
sqlite3 project.db "SELECT id, title FROM issues WHERE priority = 0;"

# See dependency graph
sqlite3 project.db "SELECT issue_id, depends_on_id FROM dependencies;"

# View audit trail for an issue
sqlite3 project.db "SELECT * FROM events WHERE issue_id = 'bd-5' ORDER BY created_at;"

# Who's working on what?
sqlite3 project.db "SELECT assignee, COUNT(*) FROM issues WHERE status = 'in_progress' GROUP BY assignee;"

# See what's ready (same as beads ready)
sqlite3 project.db "SELECT id, title, priority FROM ready_issues ORDER BY priority;"
```

**Export to CSV:**
```bash
sqlite3 project.db -header -csv "SELECT * FROM issues;" > issues.csv
```

**Database size:**
```bash
ls -lh project.db
# Typically: 72KB (empty) to ~1MB (1000 issues)
```

---

## Git Workflow

### Auto-Sync Behavior

bd automatically keeps your database and git in sync:

**Making changes:**
```bash
bd create "New task" -p 1
bd update bd-5 --status in_progress
# bd automatically exports to .beads/issues.jsonl after 5 seconds

git add .beads/issues.jsonl
git commit -m "Started working on bd-5"
git push
```

**After git pull:**
```bash
git pull
# bd automatically detects JSONL is newer on next command

bd ready                       # Auto-imports fresh data from git!
bd list --status in_progress   # See what you were working on
```

### Multi-Machine Workflow

**Machine 1:**
```bash
bd create "New task" -p 1
bd update bd-5 --status in_progress
# Wait 5 seconds for auto-export, or run: bd sync

git add .beads/issues.jsonl
git commit -m "Started working on bd-5"
git push
```

**Machine 2:**
```bash
git pull
bd ready   # Auto-imports, sees bd-5 is in progress
```

### Zero-Lag Sync (Optional)

Install git hooks for immediate sync:
```bash
cd examples/git-hooks && ./install.sh
```

This eliminates the 5-second debounce and guarantees import after `git pull`.

### Team Workflow

**Each developer has their own database:**
```bash
# Alice's machine
beads --db alice.db create "Fix bug"

# Bob's machine
beads --db bob.db create "Add feature"

# Merge by convention:
# - Alice handles backend issues (bd-1 to bd-50)
# - Bob handles frontend issues (bd-51 to bd-100)
```

Or use **PostgreSQL** for shared state (future feature).

### Branching Strategy

**Option 1: Database per branch**
```bash
git checkout -b feature/auth
cp main.db auth.db
beads --db auth.db create "Add OAuth" -p 1
# Work on branch...
git add auth.db
git commit -m "Auth implementation progress"
```

**Option 2: Single database, label by branch**
```bash
beads create "Add OAuth" -p 1 -l "branch:feature/auth"
beads list | grep "branch:feature/auth"
```

---

## Advanced Usage

### Alias Setup

Add to `~/.bashrc` or `~/.zshrc`:

```bash
# Project-specific
alias b="~/src/beads/beads --db ./project.db"

# Usage
b create "Task" -p 1
b ready
b show bd-5
```

### Scripting Beads

**Find all unassigned P0 issues:**
```bash
#!/bin/bash
beads list --priority 0 --status open | grep -v "Assignee:"
```
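
Grepping formatted output works but breaks if the list format changes; querying the database directly is sturdier. A sketch of the same report against the documented schema (stdlib `sqlite3`, columns reduced to what the query needs):

```python
import sqlite3

# Reduced schema; a NULL assignee means unassigned.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE issues (
    id TEXT PRIMARY KEY, title TEXT, status TEXT,
    priority INTEGER, assignee TEXT)""")
db.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?)", [
    ("bd-1", "Fix login timeout", "open", 0, None),
    ("bd-2", "Ship release", "open", 0, "alice"),
    ("bd-3", "Polish UI", "open", 2, None),
])
unassigned_p0 = [r[0] for r in db.execute(
    "SELECT id FROM issues"
    " WHERE priority = 0 AND status = 'open' AND assignee IS NULL")]
print(unassigned_p0)  # ['bd-1']
```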

**Auto-close issues from git commits:**
```bash
#!/bin/bash
# In git hook: .git/hooks/post-commit (runs after the commit exists,
# so HEAD is the commit that mentions the issue)

COMMIT_MSG=$(git log -1 --pretty=%B)
if [[ $COMMIT_MSG =~ bd-([0-9]+) ]]; then
    ISSUE_ID="bd-${BASH_REMATCH[1]}"
    ~/src/beads/beads --db ./project.db close "$ISSUE_ID" \
        --reason "Auto-closed from commit: $(git rev-parse --short HEAD)"
fi
```
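
The regex above stops at the first `bd-N` it finds; extracting every referenced issue is a one-liner. A Python sketch of the same pattern (the `issue_ids` helper is illustrative):

```python
import re

# Finds every bd-N reference in a commit message, not just the first.
def issue_ids(commit_msg):
    return ["bd-" + n for n in re.findall(r"\bbd-(\d+)\b", commit_msg)]

print(issue_ids("Fix timeout (bd-12); also closes bd-7"))  # ['bd-12', 'bd-7']
```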

**Weekly report:**
```bash
#!/bin/bash
echo "Issues closed this week:"
sqlite3 project.db "
  SELECT id, title, closed_at
  FROM issues
  WHERE closed_at > date('now', '-7 days')
  ORDER BY closed_at DESC;
"
```

### Multi-Project Management

**Use different databases:**
```bash
# Personal projects
beads --db ~/personal.db create "Task"

# Work projects
beads --db ~/work.db create "Task"

# Client A
beads --db ~/clients/client-a.db create "Task"
```

**Or use labels:**
```bash
beads create "Task" -l "project:website"
beads create "Task" -l "project:mobile-app"

# Filter by project
sqlite3 ~/.beads/beads.db "
  SELECT i.id, i.title
  FROM issues i
  JOIN labels l ON i.id = l.issue_id
  WHERE l.label = 'project:website';
"
```

### Export/Import

**Export issues to JSON:**
```bash
sqlite3 project.db -json "SELECT * FROM issues;" > backup.json
```

**Export dependency graph:**
```bash
# DOT format for Graphviz
sqlite3 project.db "
  SELECT 'digraph G {'
  UNION ALL
  SELECT '  \"' || issue_id || '\" -> \"' || depends_on_id || '\";'
  FROM dependencies
  UNION ALL
  SELECT '}';
" > graph.dot

dot -Tpng graph.dot -o graph.png
```
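
The same export is easy to script in Python, where adding node labels or colors later is simpler than extending the `UNION ALL` query. A sketch with the stdlib `sqlite3` module and two sample edges:

```python
import sqlite3

# Reduced dependencies table with two sample edges.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dependencies (issue_id TEXT, depends_on_id TEXT)")
db.executemany("INSERT INTO dependencies VALUES (?, ?)",
               [("bd-2", "bd-1"), ("bd-3", "bd-1")])

edges = db.execute("SELECT issue_id, depends_on_id FROM dependencies").fetchall()
dot = "digraph G {\n" + "".join(f'  "{a}" -> "{b}";\n' for a, b in edges) + "}\n"
print(dot)
```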

### Performance Tips

**Vacuum regularly for large databases:**
```bash
sqlite3 project.db "VACUUM;"
```

**Add custom indexes:**
```bash
sqlite3 project.db "CREATE INDEX idx_labels_custom ON labels(label) WHERE label LIKE 'project:%';"
```

**Archive old issues:**
```bash
sqlite3 project.db "
  DELETE FROM issues
  WHERE status = 'closed'
    AND closed_at < date('now', '-6 months');
"
```
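
Note that the `DELETE` above leaves orphaned rows behind in `dependencies`, `labels`, and `events`. A sketch that removes related rows alongside each archived issue (stdlib `sqlite3`, schema reduced to the relevant columns; `events` omitted for brevity):

```python
import sqlite3

# Reduced versions of the documented tables.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE issues (id TEXT PRIMARY KEY, status TEXT, closed_at TEXT);
CREATE TABLE dependencies (issue_id TEXT, depends_on_id TEXT);
CREATE TABLE labels (issue_id TEXT, label TEXT);
""")
db.execute("INSERT INTO issues VALUES ('bd-1', 'closed', '2000-01-01')")
db.execute("INSERT INTO labels VALUES ('bd-1', 'bug')")
db.execute("INSERT INTO dependencies VALUES ('bd-2', 'bd-1')")

# Archive closed issues older than 6 months, together with related rows.
old = [r[0] for r in db.execute(
    "SELECT id FROM issues WHERE status = 'closed'"
    " AND closed_at < date('now', '-6 months')")]
for issue_id in old:
    db.execute("DELETE FROM labels WHERE issue_id = ?", (issue_id,))
    db.execute("DELETE FROM dependencies WHERE issue_id = ? OR depends_on_id = ?",
               (issue_id, issue_id))
    db.execute("DELETE FROM issues WHERE id = ?", (issue_id,))

remaining_issues = db.execute("SELECT COUNT(*) FROM issues").fetchone()[0]
remaining_deps = db.execute("SELECT COUNT(*) FROM dependencies").fetchone()[0]
print(remaining_issues, remaining_deps)  # 0 0
```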

---

## Troubleshooting

**Database locked:**
```bash
# Another process is using it
lsof project.db
# Kill the process or wait for it to finish
```

**Corrupted database:**
```bash
# Check integrity
sqlite3 project.db "PRAGMA integrity_check;"

# Recover
sqlite3 project.db ".dump" | sqlite3 recovered.db
```

**Reset everything:**
```bash
rm ~/.beads/beads.db
beads create "Fresh start" -p 1
```

---

## Summary

**Beads is:**
- A single binary
- A single database file
- Simple commands
- Powerful dependency tracking
- Perfect for solo dev or AI pairing

**The workflow:**
1. Brain dump all tasks → `beads create`
2. Map dependencies → `beads dep add`
3. Find ready work → `beads ready`
4. Work on it → `beads update --status in_progress`
5. Complete it → `beads close`
6. Commit the exported JSONL → `git add .beads/issues.jsonl`
7. Repeat

**The magic:**
- Database knows what's ready
- Git tracks your progress
- AI can query and update
- You never lose track of "what's next"

@@ -1,815 +0,0 @@
# Code Health Cleanup Epic

## Epic: Code Health & Technical Debt Cleanup

**Type:** epic
**Priority:** 2
**Status:** open

**Description:**

Comprehensive codebase cleanup to remove dead code, refactor monolithic files, deduplicate utilities, and improve maintainability. Based on ultrathink code health analysis conducted 2025-10-27.

**Goals:**
- Remove ~1,500 LOC of dead/unreachable code
- Split 2 monolithic files (server.go 2,273 LOC, sqlite.go 2,136 LOC) into focused modules
- Deduplicate scattered utility functions (normalizeLabels, BD_DEBUG checks)
- Consolidate test coverage (2,019 LOC of collision tests)
- Improve code navigation and reduce merge conflicts

**Impact:** Reduces codebase by ~6-8%, improves maintainability, faster CI/CD

**Acceptance Criteria:**
- All unreachable code identified by `deadcode` analyzer is removed
- RPC server split into <500 LOC files with clear responsibilities
- Duplicate utility functions centralized
- Test coverage maintained or improved
- All tests passing
- Documentation updated

**Estimated Effort:** 11 days across 4 phases

---

## Phase 1: Dead Code Removal

### Issue: Delete cmd/bd/import_phases.go - entire file is dead code

**Type:** task
**Priority:** 1
**Dependencies:** parent-child:epic
**Labels:** cleanup, dead-code, phase-1

**Description:**

The file `cmd/bd/import_phases.go` (377 LOC) contains 7 functions that are all unreachable according to the deadcode analyzer. This appears to be an abandoned import refactoring that was never completed or has been replaced by the current implementation in `import.go`.

**Unreachable functions:**
- `getOrCreateStore` (line 15)
- `handlePrefixMismatch` (line 43)
- `handleCollisions` (line 87)
- `upsertIssues` (line 155)
- `importDependencies` (line 240)
- `importLabels` (line 281)
- `importComments` (line 316)

**Evidence:**
```bash
go run golang.org/x/tools/cmd/deadcode@latest -test ./...
# Shows all 7 functions as unreachable
```

No external callers found via:
```bash
grep -r "getOrCreateStore\|handlePrefixMismatch\|handleCollisions\|upsertIssues" cmd/bd/*.go
# Only matches within import_phases.go itself
```

**Acceptance Criteria:**
- Delete `cmd/bd/import_phases.go`
- Verify all tests still pass: `go test ./cmd/bd/...`
- Verify import functionality works: test `bd import` command
- Run deadcode analyzer to confirm no new unreachable code

**Impact:** Removes 377 LOC, simplifies import logic

---

### Issue: Remove deprecated rename functions from import_shared.go

**Type:** task
**Priority:** 1
**Dependencies:** parent-child:epic
**Labels:** cleanup, dead-code, phase-1

**Description:**

The file `cmd/bd/import_shared.go` contains deprecated and unreachable rename functions (~100 LOC) that are no longer used. The active implementation has moved to `internal/importer/importer.go`.

**Functions to remove:**
- `renameImportedIssuePrefixes` (line 262) - wrapper function, unused
- `renameImportedIssuePrefixesOld` (line 267) - marked Deprecated, 70 LOC
- `replaceIDReferences` (line 332) - only called by deprecated function

**Evidence:**
```bash
go run golang.org/x/tools/cmd/deadcode@latest -test ./...
# Shows these as unreachable
```

The actual implementation is in:
- `internal/importer/importer.go` - `RenameImportedIssuePrefixes`

**Acceptance Criteria:**
- Remove lines 262-340 from `cmd/bd/import_shared.go`
- Verify no callers exist: `grep -r "renameImportedIssuePrefixes\|replaceIDReferences" cmd/bd/`
- All tests pass: `go test ./cmd/bd/...`
- Import with rename works: `bd import --rename-on-import`

**Impact:** Removes ~100 LOC of deprecated code

---

### Issue: Delete skipped tests for "old buggy behavior"

**Type:** task
**Priority:** 1
**Dependencies:** parent-child:epic
**Labels:** cleanup, dead-code, test-cleanup, phase-1

**Description:**

Three test functions are permanently skipped with comments indicating they test behavior that was fixed in GH#120. These tests will never run again and should be deleted.

**Test functions to remove:**

1. `cmd/bd/import_collision_test.go:228`
```go
t.Skip("Test expects old buggy behavior - needs rewrite for GH#120 fix")
```

2. `cmd/bd/import_collision_test.go:505`
```go
t.Skip("Test expects old buggy behavior - needs rewrite for GH#120 fix")
```

3. `internal/storage/sqlite/collision_test.go:919`
```go
t.Skip("Test expects old buggy behavior - needs rewrite for GH#120 fix")
```

**Acceptance Criteria:**
- Delete the 3 test functions entirely (~150 LOC total)
- Update test file comments to reference GH#120 fix if needed
- All remaining tests pass: `go test ./...`
- No reduction in meaningful test coverage (these test fixed bugs)

**Impact:** Removes ~150 LOC of permanently skipped tests

---

### Issue: Remove unreachable RPC methods

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** cleanup, dead-code, rpc, phase-1

**Description:**

Several RPC server and client methods are unreachable and should be removed:

**Server methods (internal/rpc/server.go):**
- `Server.GetLastImportTime` (line 2116)
- `Server.SetLastImportTime` (line 2123)
- `Server.findJSONLPath` (line 2255)

**Client methods (internal/rpc/client.go):**
- `Client.Import` (line 311) - RPC import not used (daemon uses autoimport)

**Evidence:**
```bash
go run golang.org/x/tools/cmd/deadcode@latest -test ./...
```

**Acceptance Criteria:**
- Remove the 4 unreachable methods (~80 LOC total)
- Verify no callers: `grep -r "GetLastImportTime\|SetLastImportTime\|findJSONLPath" .`
- All tests pass: `go test ./internal/rpc/...`
- Daemon functionality works: test daemon start/stop/operations

**Impact:** Removes ~80 LOC of unused RPC code

---

### Issue: Remove unreachable utility functions

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** cleanup, dead-code, phase-1

**Description:**

Several small utility functions are unreachable:

**Files to clean:**
1. `internal/storage/sqlite/hash.go` - `computeIssueContentHash` (line 17)
   - Check if entire file can be deleted if only contains this function

2. `internal/config/config.go` - `FileUsed` (line 151)
   - Delete unused config helper

3. `cmd/bd/git_sync_test.go` - `verifyIssueOpen` (line 300)
   - Delete dead test helper

4. `internal/compact/haiku.go` - `HaikuClient.SummarizeTier2` (line 81)
   - Tier 2 summarization not implemented
   - Options: implement feature OR delete method

**Acceptance Criteria:**
- Remove unreachable functions
- If entire files can be deleted (like hash.go), delete them
- For SummarizeTier2: decide to implement or delete, document decision
- All tests pass: `go test ./...`
- Verify no callers exist for each function

**Impact:** Removes 50-100 LOC depending on decisions

---

## Phase 2: Refactor Monolithic Files

### Issue: Split internal/rpc/server.go into focused modules

**Type:** task
**Priority:** 1
**Dependencies:** parent-child:epic
**Labels:** refactor, architecture, phase-2

**Description:**

The file `internal/rpc/server.go` is 2,273 lines with 50+ methods, making it difficult to navigate and prone to merge conflicts. Split into 8 focused files with clear responsibilities.

**Current structure:** Single 2,273-line file with:
- Connection handling
- Request routing
- All 40+ RPC method implementations
- Storage caching
- Health checks & metrics
- Cleanup loops

**Target structure:**
```
internal/rpc/
├── server.go            # Core server, connection handling (~300 lines)
│                        # - NewServer, Start, Stop, WaitReady
│                        # - handleConnection, handleRequest
│                        # - Signal handling
│
├── methods_issue.go     # Issue operations (~400 lines)
│                        # - handleCreate, handleUpdate, handleClose
│                        # - handleList, handleShow
│
├── methods_deps.go      # Dependency operations (~200 lines)
│                        # - handleDepAdd, handleDepRemove
│
├── methods_labels.go    # Label operations (~150 lines)
│                        # - handleLabelAdd, handleLabelRemove
│
├── methods_ready.go     # Ready work queries (~150 lines)
│                        # - handleReady, handleStats
│
├── methods_compact.go   # Compaction operations (~200 lines)
│                        # - handleCompact
│
├── methods_comments.go  # Comment operations (~150 lines)
│                        # - handleCommentAdd, handleCommentList
│
├── storage_cache.go     # Storage caching logic (~300 lines)
│                        # - getStorageForRequest
│                        # - findDatabaseForCwd
│                        # - Storage cache management
│                        # - evictStaleStorage, aggressiveEviction
│
├── health.go            # Health & metrics (~200 lines)
│                        # - handleHealth, handleMetrics
│                        # - handlePing, handleStatus
│                        # - checkMemoryPressure
│
├── protocol.go          # (already separate - no change)
└── client.go            # (already separate - no change)
```

**Acceptance Criteria:**
- All 50 methods split into appropriate files
- Each file <500 LOC
- All methods remain on `*Server` receiver (no behavior change)
- All tests pass: `go test ./internal/rpc/...`
- Verify daemon works: start daemon, run operations, check health
- Update internal documentation if needed
- No change to public API

**Migration strategy:**
1. Create new files with appropriate methods
2. Keep `server.go` as main file with core server logic
3. Test incrementally after each file split
4. Final verification with full test suite

**Impact:**
- Better code navigation
- Reduced merge conflicts
- Easier to find specific RPC operations
- Improved testability (can test method groups independently)

---

### Issue: Extract SQLite migrations into separate files

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** refactor, database, phase-2

**Description:**

The file `internal/storage/sqlite/sqlite.go` is 2,136 lines and contains 11 sequential migrations alongside core storage logic. Extract migrations into a versioned system.

**Current issues:**
- 11 migration functions mixed with core logic
- Hard to see migration history
- Sequential migrations slow database open
- No clear migration versioning

**Migration functions to extract:**
- `migrateDirtyIssuesTable()`
- `migrateIssueCountersTable()`
- `migrateExternalRefColumn()`
- `migrateCompositeIndexes()`
- `migrateClosedAtConstraint()`
- `migrateCompactionColumns()`
- `migrateSnapshotsTable()`
- `migrateCompactionConfig()`
- `migrateCompactedAtCommitColumn()`
- `migrateExportHashesTable()`
- Plus 1 more (11 total)

**Target structure:**
```
internal/storage/sqlite/
├── sqlite.go        # Core storage (~800 lines)
│                    # - CRUD operations
│                    # - Database connection
│                    # - Main Storage interface implementation
│
├── schema.go        # Table definitions (~200 lines)
│                    # - CREATE TABLE statements
│                    # - Initial schema
│
├── migrations.go    # Migration orchestration (~200 lines)
│                    # - runMigrations()
│                    # - Migration version tracking
│                    # - Migration registry
│
└── migrations/      # Individual migrations
    ├── 001_initial_schema.go
    ├── 002_dirty_issues.go
    ├── 003_issue_counters.go
    ├── 004_external_ref.go
    ├── 005_composite_indexes.go
    ├── 006_closed_at_constraint.go
    ├── 007_compaction_columns.go
    ├── 008_snapshots_table.go
    ├── 009_compaction_config.go
    ├── 010_compacted_at_commit.go
    └── 011_export_hashes.go
```

**Each migration file format:**
```go
package migrations

import "database/sql"

func Migration_001_InitialSchema(db *sql.DB) error {
    // Migration logic
}

func init() {
    Register(1, "initial_schema", Migration_001_InitialSchema)
}
```

**Acceptance Criteria:**
- All 11 migrations extracted to separate files
- Migration version tracking in database
- Migrations run in order on fresh database
- Existing databases upgrade correctly
- All tests pass: `go test ./internal/storage/sqlite/...`
- Database initialization time unchanged or improved
- Add migration rollback capability (optional, nice-to-have)

**Benefits:**
- Clear migration history
- Each migration self-contained
- Easier to review migration changes in PRs
- Future migrations easier to add

**Impact:** Reduces sqlite.go from 2,136 to ~1,000 lines, improves maintainability

---

## Phase 3: Deduplicate Code
|
|
||||||
|
|
||||||
### Issue: Extract normalizeLabels to shared utility package

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** refactor, deduplication, phase-3

**Description:**

The `normalizeLabels` function appears in multiple locations with an identical implementation. Extract it to a shared utility package.

**Current locations:**
- `internal/rpc/server.go:37` (53 lines) - full implementation
- `cmd/bd/list.go:50-52` - uses the server version (needs to use new shared version)

**Function purpose:**
- Trims whitespace from labels
- Removes empty strings
- Deduplicates labels
- Preserves order

**Target structure:**
```
internal/util/
├── strings.go   # String utilities
└── NormalizeLabels([]string) []string
```

**Implementation:**
```go
package util

import "strings"

// NormalizeLabels trims whitespace, removes empty strings, and deduplicates labels
// while preserving order.
func NormalizeLabels(ss []string) []string {
    seen := make(map[string]struct{})
    out := make([]string, 0, len(ss))
    for _, s := range ss {
        s = strings.TrimSpace(s)
        if s == "" {
            continue
        }
        if _, ok := seen[s]; ok {
            continue
        }
        seen[s] = struct{}{}
        out = append(out, s)
    }
    return out
}
```

**Acceptance Criteria:**
- Create `internal/util/strings.go` with `NormalizeLabels`
- Add comprehensive unit tests in `internal/util/strings_test.go`
- Update `internal/rpc/server.go` to import and use `util.NormalizeLabels`
- Update `cmd/bd/list.go` to import and use `util.NormalizeLabels`
- Remove duplicate implementations
- All tests pass: `go test ./...`
- Verify label normalization works: test `bd list --label` commands

**Impact:** DRY principle, single source of truth, easier to test

---

### Issue: Centralize BD_DEBUG logging into debug package

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** refactor, deduplication, logging, phase-3

**Description:**

The codebase has 43 scattered instances of `if os.Getenv("BD_DEBUG") != ""` debug checks across 6 files. Centralize into a debug logging package.

**Current locations:**
- `cmd/bd/main.go` - 15 checks
- `cmd/bd/autoflush.go` - 6 checks
- `cmd/bd/nodb.go` - 4 checks
- `internal/rpc/server.go` - 2 checks
- `internal/rpc/client.go` - 5 checks
- `cmd/bd/daemon_autostart.go` - 11 checks

**Current pattern:**
```go
if os.Getenv("BD_DEBUG") != "" {
    fmt.Fprintf(os.Stderr, "Debug: %s\n", msg)
}
```

**Target structure:**
```
internal/debug/
└── debug.go
```

**Implementation:**
```go
package debug

import (
    "fmt"
    "os"
)

// Enabled is true if BD_DEBUG environment variable is set
var Enabled = os.Getenv("BD_DEBUG") != ""

// Logf prints a debug message to stderr if debug mode is enabled
func Logf(format string, args ...interface{}) {
    if Enabled {
        fmt.Fprintf(os.Stderr, "Debug: "+format+"\n", args...)
    }
}

// Printf prints a debug message to stderr without "Debug:" prefix
func Printf(format string, args ...interface{}) {
    if Enabled {
        fmt.Fprintf(os.Stderr, format, args...)
    }
}
```

**Acceptance Criteria:**
- Create `internal/debug/debug.go` with `Enabled`, `Logf`, `Printf`
- Add unit tests in `internal/debug/debug_test.go` (test with/without BD_DEBUG)
- Replace all 43 instances of `os.Getenv("BD_DEBUG")` checks with `debug.Logf()`
- Verify debug output works: run with `BD_DEBUG=1 bd status`
- All tests pass: `go test ./...`
- No behavior change (output identical to before)

**Migration example:**
```go
// Before:
if os.Getenv("BD_DEBUG") != "" {
    fmt.Fprintf(os.Stderr, "Debug: connected to daemon at %s\n", socketPath)
}

// After:
debug.Logf("connected to daemon at %s", socketPath)
```

**Benefits:**
- Centralized debug logging
- Easier to add structured logging later
- Testable (can mock debug output)
- Consistent debug message format

**Impact:** Removes 43 scattered checks, improves code clarity

---

### Issue: Consider central serialization package for JSON handling

**Type:** task
**Priority:** 3
**Dependencies:** parent-child:epic
**Labels:** refactor, deduplication, serialization, phase-3, optional

**Description:**

Multiple parts of the codebase handle JSON serialization of issues with slightly different approaches. Consider creating a centralized serialization package to ensure consistency.

**Current serialization locations:**
- `cmd/bd/export.go` - JSONL export (issues to file)
- `cmd/bd/import.go` - JSONL import (file to issues)
- `internal/rpc/protocol.go` - RPC JSON marshaling
- `internal/storage/memory/memory.go` - In-memory marshaling

**Potential benefits:**
- Single source of truth for JSON format
- Consistent field naming
- Easier to add new fields
- Centralized validation

**Potential structure:**
```
internal/serialization/
├── issue.go        # Issue JSON serialization
├── dependency.go   # Dependency serialization
├── jsonl.go        # JSONL file operations
└── jsonl_test.go   # Tests
```

**Note:** This is marked **optional** because:
- Current serialization mostly works
- It may not provide enough benefit to justify the refactor
- There is a risk of breaking compatibility

**Acceptance Criteria (if implemented):**
- Create serialization package with documented JSON format
- Migrate export/import to use centralized serialization
- All existing JSONL files can be read with new code
- All tests pass: `go test ./...`
- Export/import round-trip works perfectly
- RPC protocol unchanged (or backwards compatible)

**Decision point:** Evaluate whether the benefits outweigh the refactoring cost

**Impact:** TBD based on investigation - may defer to future work

---

## Phase 4: Test Cleanup

### Issue: Audit and consolidate collision test coverage

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** test-cleanup, phase-4

**Description:**

The codebase has 2,019 LOC of collision detection tests across 3 files. Run coverage analysis to identify redundant test cases and consolidate.

**Test files:**
- `cmd/bd/import_collision_test.go` - 974 LOC
- `cmd/bd/autoimport_collision_test.go` - 750 LOC
- `cmd/bd/import_collision_regression_test.go` - 295 LOC

**Total:** 2,019 LOC of collision tests

**Analysis steps:**

1. **Run coverage analysis:**
   ```bash
   go test -cover -coverprofile=coverage.out ./cmd/bd/
   go tool cover -func=coverage.out | grep collision
   go tool cover -html=coverage.out -o coverage.html
   ```

2. **Identify redundant tests:**
   - Look for tests covering identical code paths
   - Check for overlapping table-driven test cases
   - Find tests made obsolete by later tests

3. **Document findings:**
   - Which tests are essential?
   - Which tests duplicate coverage?
   - What is the minimum set of tests for full coverage?

**Consolidation strategy:**
- Keep regression tests for critical bugs
- Merge overlapping table-driven tests
- Remove redundant edge case tests covered elsewhere
- Ensure all collision scenarios remain tested

**Acceptance Criteria:**
- Coverage analysis completed and documented
- Redundant tests identified (~800 LOC estimated)
- Consolidated test suite maintains or improves coverage
- All remaining tests pass: `go test ./cmd/bd/...`
- Test run time unchanged or faster
- Document which tests were removed and why
- Coverage percentage maintained: `go test -cover ./cmd/bd/` shows same %

**Expected outcome:** Reduce to ~1,200 LOC (save ~800 lines) while maintaining coverage

**Impact:** Faster test runs, easier maintenance, clearer test intent

---

## Documentation & Validation

### Issue: Update documentation after code health cleanup

**Type:** task
**Priority:** 2
**Dependencies:** parent-child:epic
**Labels:** documentation, phase-4

**Description:**

Update all documentation to reflect code structure changes after the cleanup phases complete.

**Documentation to update:**

1. **AGENTS.md** - Update file structure references
2. **CONTRIBUTING.md** (if it exists) - Update build/test instructions
3. **Code comments** - Update any outdated references
4. **Package documentation** - Update godoc for reorganized packages

**New documentation to add:**

1. **internal/util/README.md** - Document shared utilities
2. **internal/debug/README.md** - Document debug logging
3. **internal/rpc/README.md** - Document new file organization
4. **internal/storage/sqlite/migrations/README.md** - Migration system docs

**Acceptance Criteria:**
- All documentation references to deleted files removed
- New package READMEs written
- Code comments updated for reorganized code
- Migration guide for developers (if needed)
- Architecture diagrams updated (if they exist)

**Impact:** Keeps documentation in sync with code

---

### Issue: Run final validation and cleanup checks

**Type:** task
**Priority:** 1
**Dependencies:** parent-child:epic
**Labels:** validation, phase-4

**Description:**

Final validation pass to ensure all cleanup objectives are met and no regressions were introduced.

**Validation checklist:**

**1. Dead code verification:**
```bash
go run golang.org/x/tools/cmd/deadcode@latest -test ./...
# Should show zero unreachable functions
```

**2. Test coverage:**
```bash
go test -cover ./...
# Coverage should be maintained or improved
```

**3. Build verification:**
```bash
go build ./cmd/bd/
# Should build without errors
```

**4. Linting:**
```bash
golangci-lint run
# Should show improvements in code quality metrics
```

**5. Integration tests:**
```bash
# Test key workflows:
bd init
bd create "Test issue"
bd list
bd daemon
bd export
bd import
bd compact --dry-run
```

**6. Metrics verification:**
- Count LOC before/after: `find . -name "*.go" | xargs wc -l`
- Verify the ~2,500 LOC reduction was achieved
- Count files before/after

**7. Git clean check:**
```bash
go mod tidy
go fmt ./...
git status  # Should show only intended changes
```

**Acceptance Criteria:**
- Zero unreachable functions per the deadcode analyzer
- All tests pass: `go test ./...`
- Test coverage maintained or improved
- Builds cleanly: `go build ./...`
- Linting shows improvements
- Integration tests all pass
- LOC reduction target achieved (~2,500 LOC)
- No unintended behavior changes
- Git commit messages document all changes

**Final metrics to report:**
- LOC removed: ~____
- Files deleted: ____
- Files created: ____
- Test coverage: ____%
- Build time: ____ (before/after)
- Test run time: ____ (before/after)

**Impact:** Confirms all cleanup objectives achieved successfully

---

## Notes

**Analysis date:** 2025-10-27
**Analysis tool:** `deadcode` analyzer + manual code review
**Total estimated LOC reduction:** ~2,500 (6.8% of 37K codebase)
**Total estimated effort:** 11 days across 4 phases

**Risk assessment:**
- **Low risk:** Dead code removal (Phase 1)
- **Medium risk:** File splits (Phase 2) - requires careful testing
- **Low risk:** Deduplication (Phase 3)
- **Low risk:** Test cleanup (Phase 4)

**Dependencies:**
- All child issues depend on the epic
- Phase 2+ should wait for Phase 1 completion
- Within that constraint, each phase can be worked independently
- Recommend completing in order: 1 → 2 → 3 → 4

**Success metrics:**
- ✅ All deadcode warnings eliminated
- ✅ No files >1,000 LOC
- ✅ Zero duplicate utility functions
- ✅ Test coverage maintained at 85%+
- ✅ All integration tests passing
- ✅ Documentation updated
@@ -1,788 +0,0 @@
# Event-Driven Daemon Architecture

**Status:** Design Proposal
**Author:** AI Assistant
**Date:** 2025-10-28
**Context:** Post-cache removal, per-project daemon model established

## Executive Summary

Replace the current 5-second polling sync loop with an event-driven architecture that reacts instantly to changes. This eliminates stale-data issues while reducing CPU usage and improving the user experience.

**Key metrics:**
- Latency improvement: 5000ms → <500ms
- CPU reduction: ~60% (no polling)
- Code complexity: +300 LOC (event handling), but cleaner semantics
- User impact: instant feedback, no stale-cache pain

## Problem Statement

### Current Architecture Issues

**Polling-based sync** (`cmd/bd/daemon.go:1010-1120`):
```go
ticker := time.NewTicker(5 * time.Second)
for {
    select {
    case <-ticker.C:
        doSync() // Export, pull, import, push
    }
}
```

**Pain points:**
1. **Stale data window**: Changes are invisible for up to 5 seconds
2. **CPU waste**: Daemon wakes every 5s even if nothing changed
3. **Unnecessary work**: Sync cycle runs even when no mutations occurred
4. **Cache confusion**: (Now removed) Cache staleness compounded the delay

### What Cache Removal Enables

The recent cache removal (Oct 27-28, 964 LOC removed) creates ideal conditions for an event-driven architecture:

✅ **One daemon = one database**: No cache eviction, no cross-workspace confusion
✅ **Simpler state**: Daemon state is just `s.storage`, no cache maps
✅ **Clear ownership**: Each daemon owns exactly one JSONL + SQLite pair
✅ **No invalidation complexity**: Events can directly trigger actions

## Proposed Architecture

### High-Level Flow

```
┌─────────────────────────────────────────────────────────┐
│                  Event-Driven Daemon                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  Event Sources              Event Handler               │
│  ┌──────────────┐          ┌──────────────┐             │
│  │ FS Watcher   │─────────→│              │             │
│  │ (JSONL file) │          │  Debouncer   │             │
│  └──────────────┘          │  (500ms)     │             │
│                            │              │             │
│  ┌──────────────┐          └──────────────┘             │
│  │ RPC Mutation │─────────→        │                    │
│  │ Events       │                  │                    │
│  └──────────────┘                  ↓                    │
│                            ┌──────────────┐             │
│  ┌──────────────┐          │ Sync Action  │             │
│  │ Git Hooks    │─────────→│ - Export     │             │
│  │ (optional)   │          │ - Import     │             │
│  └──────────────┘          └──────────────┘             │
│                                                         │
└─────────────────────────────────────────────────────────┘
```

### Components

#### 1. File System Watcher

**Purpose:** Detect JSONL changes from external sources (git pull, manual edits)

**Implementation:**
```go
// cmd/bd/daemon_watcher.go (new file)
package main

import (
    "context"
    "path/filepath"
    "time"

    "github.com/fsnotify/fsnotify"
)

type FileWatcher struct {
    watcher   *fsnotify.Watcher
    debouncer *Debouncer
    jsonlPath string
}

func NewFileWatcher(jsonlPath string, onChanged func()) (*FileWatcher, error) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return nil, err
    }

    fw := &FileWatcher{
        watcher:   watcher,
        jsonlPath: jsonlPath,
        debouncer: NewDebouncer(500*time.Millisecond, onChanged),
    }

    // Watch JSONL file
    if err := watcher.Add(jsonlPath); err != nil {
        watcher.Close()
        return nil, err
    }

    // Also watch .git/refs/heads for branch changes
    gitRefsPath := filepath.Join(filepath.Dir(jsonlPath), "..", ".git", "refs", "heads")
    _ = watcher.Add(gitRefsPath) // Best effort

    return fw, nil
}

func (fw *FileWatcher) Start(ctx context.Context, log daemonLogger) {
    go func() {
        for {
            select {
            case event, ok := <-fw.watcher.Events:
                if !ok {
                    return
                }

                // Only care about writes to JSONL or ref changes
                if event.Name == fw.jsonlPath && event.Op&fsnotify.Write != 0 {
                    log.log("File change detected: %s", event.Name)
                    fw.debouncer.Trigger()
                } else if event.Op&fsnotify.Write != 0 {
                    log.log("Git ref change detected: %s", event.Name)
                    fw.debouncer.Trigger()
                }

            case err, ok := <-fw.watcher.Errors:
                if !ok {
                    return
                }
                log.log("Watcher error: %v", err)

            case <-ctx.Done():
                return
            }
        }
    }()
}

func (fw *FileWatcher) Close() error {
    return fw.watcher.Close()
}
```

**Platform support:**
- **Linux**: inotify (built into fsnotify)
- **macOS**: FSEvents (built into fsnotify)
- **Windows**: ReadDirectoryChangesW (built into fsnotify)

**Edge cases handled:**
- File rename (git atomic write via temp file): watch the directory, not just the file
- Event storm (rapid git writes): debouncer batches into a single action
- Watcher failure: fall back to polling with a warning

#### 2. Debouncer

**Purpose:** Batch rapid events into a single action

**Implementation:**
```go
// cmd/bd/daemon_debouncer.go (new file)
package main

import (
    "sync"
    "time"
)

type Debouncer struct {
    mu       sync.Mutex
    timer    *time.Timer
    duration time.Duration
    action   func()
}

func NewDebouncer(duration time.Duration, action func()) *Debouncer {
    return &Debouncer{
        duration: duration,
        action:   action,
    }
}

func (d *Debouncer) Trigger() {
    d.mu.Lock()
    defer d.mu.Unlock()

    if d.timer != nil {
        d.timer.Stop()
    }

    d.timer = time.AfterFunc(d.duration, func() {
        d.action()
        d.mu.Lock()
        d.timer = nil
        d.mu.Unlock()
    })
}

func (d *Debouncer) Cancel() {
    d.mu.Lock()
    defer d.mu.Unlock()

    if d.timer != nil {
        d.timer.Stop()
        d.timer = nil
    }
}
```

**Tuning:**
- Default: 500ms (a balance between responsiveness and batching)
- Configurable via `BEADS_DEBOUNCE_MS` env var
- Could use adaptive timing based on event frequency

#### 3. RPC Mutation Events

**Purpose:** Trigger export immediately after DB changes (not up to 5s later)

**Implementation:**
```go
// internal/rpc/server.go (modifications)
type Server struct {
    // ... existing fields
    mutationChan chan MutationEvent
}

type MutationEvent struct {
    Type      string    // "create", "update", "delete"
    IssueID   string    // e.g., "bd-42"
    Timestamp time.Time
}

func (s *Server) CreateIssue(req *CreateRequest) (*Issue, error) {
    issue, err := s.storage.CreateIssue(req)
    if err != nil {
        return nil, err
    }

    // Notify mutation channel (non-blocking)
    select {
    case s.mutationChan <- MutationEvent{
        Type:      "create",
        IssueID:   issue.ID,
        Timestamp: time.Now(),
    }:
    default:
        // Channel full, event dropped (sync will happen eventually)
    }

    return issue, nil
}

// Similar for UpdateIssue, DeleteIssue, AddComment, etc.
```

**Handler in daemon:**
```go
// cmd/bd/daemon.go (modification)
func handleMutationEvents(ctx context.Context, events <-chan rpc.MutationEvent,
    debouncer *Debouncer, log daemonLogger) {
    go func() {
        for {
            select {
            case event := <-events:
                log.log("Mutation detected: %s %s", event.Type, event.IssueID)
                debouncer.Trigger() // Schedule export

            case <-ctx.Done():
                return
            }
        }
    }()
}
```

#### 4. Git Hook Integration (Optional)

**Purpose:** Explicit notifications from git operations

**Implementation:**
```bash
# .git/hooks/post-merge (installed by bd init --quiet)
#!/bin/bash
# Notify daemon of merge completion
if command -v bd &> /dev/null; then
    bd daemon-event import-needed &
fi
```

```go
// cmd/bd/daemon_event.go (new file)
package main

// Called by git hooks to notify daemon
func handleDaemonEvent() {
    if len(os.Args) < 3 {
        fmt.Fprintln(os.Stderr, "Usage: bd daemon-event <event-type>")
        os.Exit(1)
    }

    eventType := os.Args[2]
    socketPath := getSocketPath()

    client := rpc.NewClient(socketPath)
    ctx := context.Background()

    switch eventType {
    case "import-needed":
        // Git hook says "JSONL changed, please import"
        if err := client.TriggerImport(ctx); err != nil {
            // Ignore error - daemon might not be running
            os.Exit(0)
        }
    case "export-needed":
        if err := client.TriggerExport(ctx); err != nil {
            os.Exit(0)
        }
    default:
        fmt.Fprintf(os.Stderr, "Unknown event type: %s\n", eventType)
        os.Exit(1)
    }
}
```

**Note:** Git hooks are an **optional enhancement**, not required. The file watcher is the primary mechanism.

### Complete Daemon Loop

**Current implementation** (`cmd/bd/daemon.go:1123-1161`):
```go
func runEventLoop(ctx context.Context, cancel context.CancelFunc, ticker *time.Ticker,
    doSync func(), server *rpc.Server, serverErrChan chan error,
    log daemonLogger) {
    for {
        select {
        case <-ticker.C: // ← Every 5 seconds
            doSync()
        case sig := <-sigChan:
            // ... shutdown
        }
    }
}
```

**Proposed implementation:**
```go
// cmd/bd/daemon_event_loop.go (new file)
func runEventDrivenLoop(ctx context.Context, cancel context.CancelFunc,
    server *rpc.Server, serverErrChan chan error,
    watcher *FileWatcher, mutationChan <-chan rpc.MutationEvent,
    log daemonLogger) {

    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, daemonSignals...)
    defer signal.Stop(sigChan)

    // Debounced export; imports are debounced inside the FileWatcher,
    // whose onChanged callback runs autoImportIfNewer.
    exportDebouncer := NewDebouncer(500*time.Millisecond, func() {
        log.log("Export triggered by mutation events")
        exportToJSONL()
    })

    // Start file watcher (triggers import)
    watcher.Start(ctx, log)

    // Start mutation handler (triggers export)
    handleMutationEvents(ctx, mutationChan, exportDebouncer, log)

    // Optional: periodic health check (every 60s, not a sync)
    healthTicker := time.NewTicker(60 * time.Second)
    defer healthTicker.Stop()

    for {
        select {
        case <-healthTicker.C:
            // Periodic health check (validate DB, check disk space, etc.)
            checkDaemonHealth(ctx, store, log)

        case sig := <-sigChan:
            if isReloadSignal(sig) {
                log.log("Received reload signal, ignoring")
                continue
            }
            log.log("Received signal %v, shutting down...", sig)
            cancel()
            if err := server.Stop(); err != nil {
                log.log("Error stopping server: %v", err)
            }
            return

        case <-ctx.Done():
            log.log("Context canceled, shutting down")
            watcher.Close()
            if err := server.Stop(); err != nil {
                log.log("Error stopping server: %v", err)
            }
            return

        case err := <-serverErrChan:
            log.log("RPC server failed: %v", err)
            cancel()
            watcher.Close()
            return
        }
    }
}
```
|
|
||||||
|
|
||||||
## Migration Strategy
|
|
||||||
|
|
||||||
### Phase 1: Parallel Implementation (2-3 weeks)
|
|
||||||
|
|
||||||
**Goal:** Event-driven as opt-in alongside polling
|
|
||||||
|
|
||||||
**Changes:**
|
|
||||||
1. Add `fsnotify` dependency to `go.mod`
|
|
||||||
2. Create new files:
|
|
||||||
- `cmd/bd/daemon_watcher.go` (~150 LOC)
|
|
||||||
- `cmd/bd/daemon_debouncer.go` (~60 LOC)
|
|
||||||
- `cmd/bd/daemon_event_loop.go` (~200 LOC)
|
|
||||||
3. Add flag `BEADS_DAEMON_MODE=events` to enable
|
|
||||||
4. Keep existing `runEventLoop` as fallback
|
|
||||||
|
|
||||||
**Testing:**
|
|
||||||
- Unit tests for debouncer
|
|
||||||
- Integration tests for file watcher
|
|
||||||
- Stress test with event storm (rapid git operations)
|
|
||||||
- Test on Linux, macOS, Windows
|
|
||||||
|
|
||||||
**Rollout:**
|
|
||||||
- Default: `BEADS_DAEMON_MODE=poll` (current behavior)
|
|
||||||
- Opt-in: `BEADS_DAEMON_MODE=events` (new behavior)
|
|
||||||
- Documentation: Add to AGENTS.md
|
|
||||||
|
|
||||||
### Phase 2: Battle Testing (4-6 weeks)
|
|
||||||
|
|
||||||
**Goal:** Real-world validation with dogfooding
|
|
||||||
|
|
||||||
**Metrics to track:**
|
|
||||||
- CPU usage (before/after comparison)
|
|
||||||
- Latency (time from mutation to JSONL update)
|
|
||||||
- Memory usage (fsnotify overhead)
|
|
||||||
- Event storm handling (git pull with 100+ file changes)
|
|
||||||
- Edge case frequency (watcher failures, debounce races)
|
|
||||||
|
|
||||||
**Success criteria:**
|
|
||||||
- CPU usage <40% of polling mode
|
|
||||||
- Latency <500ms (vs 5000ms in polling)
|
|
||||||
- Zero data loss or corruption
|
|
||||||
- Zero daemon crashes from event handling
|
|
||||||
|
|
||||||
**Issue tracking:**
|
|
||||||
- Create `bd-XXX: Event-driven daemon stabilization` issue
|
|
||||||
- Track bugs as sub-issues
|
|
||||||
- Weekly review of metrics
|
|
||||||
|
|
||||||
### Phase 3: Default Switchover (1 week)
|
|
||||||
|
|
||||||
**Goal:** Make event-driven the default
|
|
||||||
|
|
||||||
**Changes:**
|
|
||||||
1. Flip default: `BEADS_DAEMON_MODE=events`
|
|
||||||
2. Keep polling as fallback: `BEADS_DAEMON_MODE=poll`
|
|
||||||
3. Update documentation
|
|
||||||
4. Add release note
|
|
||||||
|
|
||||||
**Communication:**
|
|
||||||
- Blog post: "Beads daemon now event-driven"
|
|
||||||
- Changelog entry with before/after metrics
|
|
||||||
- Migration guide for users who hit issues
|
|
||||||
|
|
||||||
### Phase 4: Deprecation (6+ months later)
|
|
||||||
|
|
||||||
**Goal:** Remove polling mode entirely
|
|
||||||
|
|
||||||
**Changes:**
|
|
||||||
1. Remove `runEventLoop` with ticker
|
|
||||||
2. Remove `BEADS_DAEMON_MODE` flag
|
|
||||||
3. Simplify daemon startup code
|
|
||||||
|
|
||||||
**Only if:**
|
|
||||||
- Event-driven stable for 6+ months
|
|
||||||
- No unresolved critical issues
|
|
||||||
- Community feedback positive
|
|
||||||
|
|
||||||
## Performance Analysis
|
|
||||||
|
|
||||||
### CPU Usage
|
|
||||||
|
|
||||||
**Current (polling):**
|
|
||||||
```
|
|
||||||
Every 5 seconds:
|
|
||||||
- Wake daemon
|
|
||||||
- Check git status
|
|
||||||
- Check JSONL hash
|
|
||||||
- Check dirty flags
|
|
||||||
- Sleep
|
|
||||||
|
|
||||||
Estimated: ~5-10% CPU (depends on repo size)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Event-driven:**
|
|
||||||
```
|
|
||||||
Daemon sleeps until:
|
|
||||||
- File system event (rare)
|
|
||||||
- RPC mutation (user-triggered)
|
|
||||||
- Signal
|
|
||||||
|
|
||||||
Estimated: ~1-2% CPU (mostly idle)
|
|
||||||
```
|
|
||||||
|
|
||||||
**Savings:** ~60-80% CPU reduction
|
|
||||||
|
|
||||||
### Latency
|
|
||||||
|
|
||||||
**Current (polling):**
|
|
||||||
```
|
|
||||||
User runs: bd create "Fix bug"
|
|
||||||
→ RPC call → DB write → (wait up to 5s) → Export → Git commit
|
|
||||||
Average: 2.5s delay
|
|
||||||
Worst: 5s delay
|
|
||||||
```
|
|
||||||
|
|
||||||
**Event-driven:**
|
|
||||||
```
|
|
||||||
User runs: bd create "Fix bug"
|
|
||||||
→ RPC call → DB write → Mutation event → (500ms debounce) → Export → Git commit
|
|
||||||
Average: 250ms delay
|
|
||||||
Worst: 500ms delay
|
|
||||||
```
|
|
||||||
|
|
||||||
**Improvement:** 5-10x faster
|
|
||||||
|
|
||||||

### Memory Usage

**fsnotify overhead:**
- Linux (inotify): ~1-2 MB per watched directory
- macOS (FSEvents): ~500 KB per watched directory
- Windows: ~1 MB per watched directory

**With 1 JSONL + 1 git refs directory = ~2-4 MB**

**Negligible compared to SQLite cache (10-50 MB for typical database)**

## Edge Cases & Error Handling

### 1. File Watcher Failure

**Scenario:** `inotify` limit exceeded (Linux), permissions issue, or filesystem doesn't support watching

**Detection:**
```go
watcher, err := fsnotify.NewWatcher()
if err != nil {
    log.Printf("WARNING: File watcher unavailable (%v), falling back to polling", err)
    useFallbackPolling = true
}
```

**Fallback:** Automatic switch to 5s polling with warning

### 2. Event Storm

**Scenario:** Git pull modifies JSONL 50 times in rapid succession

**Mitigation:** Debouncer batches into single action after 500ms quiet period

**Stress test:**
```bash
# Simulate event storm
for i in {1..100}; do
  echo '{"id":"bd-'$i'"}' >> beads.jsonl
done
# Should trigger exactly 1 import after 500ms
```

### 3. Watcher Detached from File

**Scenario:** JSONL replaced by `git checkout` (different inode)

**Detection:** fsnotify sends `RENAME` or `REMOVE` event

**Recovery:**
```go
case event.Op&fsnotify.Remove != 0:
    log.Printf("JSONL removed, re-establishing watch")
    watcher.Remove(jsonlPath) // error ignored: the old watch may already be gone
    time.Sleep(100 * time.Millisecond)
    watcher.Add(jsonlPath)
```

### 4. Debounce Race Condition

**Scenario:** Event A triggers debounce, event B arrives during wait, action fires for A before B seen

**Solution:** Debouncer restarts timer on each trigger (standard debounce behavior)

**Test:**
```go
func TestDebouncerBatchesMultipleEvents(t *testing.T) {
    callCount := 0
    d := NewDebouncer(100*time.Millisecond, func() { callCount++ })

    d.Trigger() // t=0ms
    time.Sleep(50 * time.Millisecond)
    d.Trigger() // t=50ms (resets timer)
    time.Sleep(50 * time.Millisecond)
    d.Trigger() // t=100ms (resets timer)

    time.Sleep(150 * time.Millisecond) // t=250ms (timer fired at t=200ms)

    assert.Equal(t, 1, callCount) // Only 1 action despite 3 triggers
}
```

### 5. Daemon Restart During Debounce

**Scenario:** Daemon receives SIGTERM while debouncer waiting

**Solution:** Cancel debouncer on shutdown

```go
func (d *Debouncer) Cancel() {
    d.mu.Lock()
    defer d.mu.Unlock()
    if d.timer != nil {
        d.timer.Stop()
    }
}

// In shutdown handler
defer exportDebouncer.Cancel()
defer importDebouncer.Cancel()
```

## Configuration

### Environment Variables

```bash
# Enable event-driven mode (default: events after Phase 3)
BEADS_DAEMON_MODE=events

# Debounce duration in milliseconds (default: 500)
BEADS_DEBOUNCE_MS=500

# Fall back to polling if watcher fails (default: true)
BEADS_WATCHER_FALLBACK=true

# Polling interval if fallback used (default: 5s)
BEADS_POLL_INTERVAL=5s
```

### Daemon Status

**New command:** `bd daemon status --verbose`

```bash
$ bd daemon status --verbose
Daemon running: yes
PID: 12345
Mode: event-driven
Uptime: 3h 42m
Last sync: 2s ago

Event statistics:
  File changes: 23
  Mutations: 156
  Exports: 12 (batched from 156 mutations)
  Imports: 4 (batched from 23 file changes)

Watcher status: active
  Watching: /Users/steve/beads/.beads/beads.jsonl
  Events received: 23
  Errors: 0
```

## What This Doesn't Solve

Event-driven architecture improves **responsiveness** but doesn't eliminate **repair cycles** caused by:

1. **Git merge conflicts** - Still need manual/AI resolution
2. **Semantic duplication** - Still need deduplication logic
3. **Test pollution** - Still need better isolation (separate issue)
4. **Worktree confusion** - Still need per-worktree branch tracking (separate design)

**These require separate solutions** (see repair commands design doc)

## Success Metrics

### Must-Have (P0)
- ✅ Zero data loss or corruption
- ✅ Zero regressions in sync reliability
- ✅ Works on Linux, macOS, Windows

### Should-Have (P1)
- ✅ Latency <500ms (vs 5000ms today)
- ✅ CPU usage <40% of polling mode
- ✅ Graceful fallback to polling if watcher fails

### Nice-to-Have (P2)
- ✅ Configurable debounce timing
- ✅ Detailed event statistics in `bd daemon status`
- ✅ Real-time dashboard of events (debug mode)

## Implementation Checklist

### Code Changes
- [ ] Add `fsnotify` to `go.mod`
- [ ] Create `cmd/bd/daemon_watcher.go`
- [ ] Create `cmd/bd/daemon_debouncer.go`
- [ ] Create `cmd/bd/daemon_event_loop.go`
- [ ] Modify `internal/rpc/server.go` (add mutation channel)
- [ ] Add `BEADS_DAEMON_MODE` flag handling
- [ ] Add fallback to polling on watcher failure

### Tests
- [ ] Unit tests for Debouncer
- [ ] Unit tests for FileWatcher
- [ ] Integration test: mutation → export latency
- [ ] Integration test: file change → import latency
- [ ] Stress test: event storm (100+ rapid changes)
- [ ] Platform tests: Linux, macOS, Windows
- [ ] Edge case test: watcher failure recovery
- [ ] Edge case test: file inode change (git checkout)

### Documentation
- [ ] Update AGENTS.md (event-driven mode)
- [ ] Add `docs/architecture/event_driven.md` (this doc)
- [ ] Update `bd daemon --help` (add --mode flag)
- [ ] Add troubleshooting guide (watcher failures)
- [ ] Write migration guide (for users hitting issues)

### Rollout
- [ ] Phase 1: Parallel implementation (opt-in)
- [ ] Phase 2: Dogfooding (beads repo itself)
- [ ] Phase 3: Default switchover
- [ ] Phase 4: Announce in release notes

## Open Questions

1. **Should git hooks be required or optional?**
   - Recommendation: Optional (file watcher is sufficient)

2. **What debounce duration is optimal?**
   - Recommendation: 500ms default, configurable
   - Could use adaptive timing based on event frequency

3. **Should we track event statistics permanently?**
   - Recommendation: In-memory only (reset on daemon restart)
   - Could add `bd daemon stats --export` for debugging

4. **What happens if fsnotify doesn't support the filesystem?**
   - Recommendation: Automatic fallback to polling with warning

5. **Should mutation events be buffered or dropped if channel full?**
   - Recommendation: Buffered (size 100), then drop oldest
   - Worst case: Export delayed by 500ms, but no data loss

## Conclusion

Event-driven architecture is a **natural evolution** after cache removal:

- ✅ Eliminates stale data issues
- ✅ Reduces CPU usage significantly
- ✅ Improves user experience with instant feedback
- ✅ Builds on simplified per-project daemon model

**Recommended:** Proceed with Phase 1 implementation, targeting a 2-3 week timeline for opt-in release.