bd sync: 2025-11-25 11:35:26
@@ -32,6 +32,7 @@
|
||||
{"id":"bd-b0c8","title":"Research hooks/skills approach for enforcing issue descriptions","description":"## Solution: Hooks/Skills (mentioned in discussion)\n\nResearch the hooks/skills system mentioned by riordanpawley in discussion #366:\nhttps://www.reddit.com/r/ClaudeAI/s/wrn2tjkMHX\n\n## Investigation Tasks\n\n1. Review the Reddit post to understand the approach\n2. Determine if beads hooks can enforce validation\n3. Check if Claude Code skills/hooks can intercept MCP calls\n4. Assess feasibility and user burden\n\n## Notes\n\nFrom discussion #366:\n\u003e I'm using a skills/hooks system to get Claude to do that kind of thing right similar to https://www.reddit.com/r/ClaudeAI/s/wrn2tjkMHX\n\nThis might be a client-side solution rather than server-side.\n\n## Deliverable\n\nDocument findings in issue notes, with recommendation on whether to pursue this approach.","status":"closed","priority":3,"issue_type":"task","created_at":"2025-11-23T14:01:01.57079-08:00","updated_at":"2025-11-24T01:25:22.237007-08:00","closed_at":"2025-11-24T01:15:03.961995-08:00"}
|
||||
{"id":"bd-b2c","title":"Clarify metadata handling in init.go (config vs tracking)","description":"Review init.go:207-228 to ensure configuration metadata (issue_prefix, sync.branch) uses Pattern A consistently, while tracking metadata (bd_version, repo_id, etc.) uses Pattern B. Add comments explaining the distinction.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-24T00:28:53.199481-08:00","updated_at":"2025-11-24T00:33:24.654007-08:00","closed_at":"2025-11-24T00:33:24.654007-08:00","dependencies":[{"issue_id":"bd-b2c","depends_on_id":"bd-1qwo","type":"blocks","created_at":"2025-11-24T00:28:53.200299-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-b55e2ac2","title":"Fix autoimport tests for content-hash collision scoring","description":"## Overview\nThree autoimport tests are failing after bd-cbed9619.4 because they expect behavior based on the old reference-counting collision resolution, but the system now uses deterministic content-hash scoring.\n\n## Failing Tests\n1. `TestAutoImportMultipleCollisionsRemapped` - expects local versions preserved\n2. `TestAutoImportAllCollisionsRemapped` - expects local versions preserved \n3. `TestAutoImportCollisionRemapMultipleFields` - expects specific collision resolution behavior\n\n## Root Cause\nThese tests were written when ScoreCollisions used reference counting to determine which version to keep. Now it uses content-hash comparison (introduced in commit 2e87329), which produces different but deterministic results.\n\n## Example\nOld behavior: Issue with more references would be kept\nNew behavior: Issue with lexicographically lower content hash is kept\n\n## Solution\nUpdate each test to:\n1. Verify the new content-hash based behavior is correct\n2. Check that the remapped issue (not necessarily local/remote) has the expected content\n3. Ensure dependencies are preserved on the correct remapped issue\n\n## Acceptance Criteria\n- All three autoimport tests pass\n- Tests verify content-hash determinism (same collision always resolves the same way)\n- Tests check dependency preservation on remapped issues\n- Test documentation explains content-hash scoring expectations\n\n## Files to Modify\n- `cmd/bd/autoimport_collision_test.go`\n\n## Testing\nRun: `go test ./cmd/bd -run \"TestAutoImport.*Collision\" -v`","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-28T19:17:28.358028-07:00","updated_at":"2025-11-24T00:01:27.558153-08:00","closed_at":"2025-11-23T23:20:43.995595-08:00"}
|
||||
{"id":"bd-bh1","title":"Rewrite ARCHITECTURE.md with proper data model overview","description":"Issue #376 feedback: Current ARCHITECTURE.md is misleading - it's about FlushManager and Blocked Cache internals, not overall architecture.\n\nNeeded:\n1. Rename current ARCHITECTURE.md to INTERNALS.md or FLUSH_ARCHITECTURE.md\n2. Write new ARCHITECTURE.md covering:\n - Git ↔ JSONL ↔ SQLite data model (the 'magic')\n - Write path: CLI → SQLite → JSONL export → git commit\n - Read path: git pull → JSONL import → SQLite → CLI\n - Hash-based collision prevention\n - Daemon architecture overview\n3. Add TROUBLESHOOTING.md or section with recovery procedures:\n - Merge conflicts in JSONL\n - Locked database\n - Worktree sync issues\n - Multiple databases created\n\nReference: CLAUDE.md already has decent 'Three-Layer Design' and 'Distributed Database Pattern' sections to build from.\n\nGitHub issue: #376","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-25T11:30:52.57061-08:00","updated_at":"2025-11-25T11:35:07.125512-08:00","closed_at":"2025-11-25T11:35:07.125512-08:00"}
|
||||
{"id":"bd-bt6y","title":"Improve compact/daemon/merge documentation and UX","description":"Multiple documentation and UX issues encountered:\n1. \"bd compact --analyze\" fails with misleading \"requires SQLite storage\" error when daemon is running. Needs --no-daemon or better error.\n2. \"bd merge\" help text is outdated (refers to 3-way merge instead of issue merging).\n3. Daemon mode purpose isn't clear to local-only users.\n4. Compact/cleanup commands are hard to discover.\n\nProposed fixes:\n- Fix compact+daemon interaction or error message.\n- Update \"bd merge\" help text.\n- Add \"when to use daemon\" section to docs.\n- Add maintenance section to quickstart.\n","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-20T18:55:43.637047-05:00","updated_at":"2025-11-20T18:55:43.637047-05:00"}
|
||||
{"id":"bd-bwk2","title":"Centralize error handling patterns in storage layer","description":"80+ instances of inconsistent error handling across sqlite.go with mix of %w, %v, and no wrapping.\n\nLocation: internal/storage/sqlite/sqlite.go (throughout)\n\nProblem:\n- Some use fmt.Errorf(\"op failed: %w\", err) - correct wrapping\n- Some use fmt.Errorf(\"op failed: %v\", err) - loses error chain\n- Some return err directly - no context\n- Hard to debug production issues\n- Can't distinguish error types\n\nSolution: Create internal/storage/sqlite/errors.go:\n- Define sentinel errors (ErrNotFound, ErrInvalidID, etc.)\n- Create wrapDBError(op string, err error) helper\n- Convert sql.ErrNoRows to ErrNotFound\n- Always wrap with operation context\n\nImpact: Lost error context; inconsistent messages; hard to debug\n\nEffort: 5-7 hours","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-16T14:51:54.974909-08:00","updated_at":"2025-11-24T01:08:18.298536-08:00","closed_at":"2025-11-24T00:57:47.359519-08:00","dependencies":[{"issue_id":"bd-bwk2","depends_on_id":"bd-vfe","type":"blocks","created_at":"2025-11-24T00:53:28.831021-08:00","created_by":"daemon"},{"issue_id":"bd-bwk2","depends_on_id":"bd-z8z","type":"blocks","created_at":"2025-11-24T00:53:28.897593-08:00","created_by":"daemon"},{"issue_id":"bd-bwk2","depends_on_id":"bd-w8h","type":"blocks","created_at":"2025-11-24T00:53:28.957487-08:00","created_by":"daemon"},{"issue_id":"bd-bwk2","depends_on_id":"bd-r71","type":"blocks","created_at":"2025-11-24T00:53:29.023262-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-c362","title":"Extract database search logic into helper function","description":"The logic for finding a database in a beads directory is duplicated:\n- FindDatabasePath() BEADS_DIR section (beads.go:141-169)\n- findDatabaseInTree() (beads.go:248-280)\n\nBoth implement the same search order:\n1. Check config.json first (single source of truth)\n2. Fall back to canonical beads.db\n3. Search for *.db files, filtering backups and vc.db\n\nRefactoring suggestion:\nExtract to a helper function like:\n func findDatabaseInBeadsDir(beadsDir string) string\n\nBenefits:\n- Single source of truth for database search logic\n- Easier to maintain and update search order\n- Reduces code duplication\n\nRelated to [deleted:bd-e16b] implementation.","status":"open","priority":3,"issue_type":"chore","created_at":"2025-11-02T18:34:02.831543-08:00","updated_at":"2025-11-23T22:38:46.782725-08:00"}
|
||||
@@ -45,8 +46,8 @@
|
||||
{"id":"bd-gfu","title":"Add --start flag to bd daemon, show help with no args","description":"Currently `bd daemon` with no args immediately starts the daemon. This is inconsistent with other daemon management commands like `--stop`, `--status`, etc.\n\n## Proposed Changes\n\n1. Add `--start` flag to explicitly start daemon\n2. With no args or flags, print help text showing available options\n3. Maintain backward compatibility where feasible\n\n## Current Consumers\n\n- **Auto-start logic** (`daemon_autostart.go:106, 270`): Calls `bd daemon` programmatically - needs update\n- **MCP docs** (SETUP_DAEMON.md:111): Already incorrectly shows `bd daemon start` - will be fixed by this change\n- **Python daemon client**: Suggests `bd daemon` in error messages - needs doc update\n\n## Implementation Notes\n\n- Default behavior (no args/flags) should show help\n- `--start` should start daemon (current no-args behavior)\n- Auto-start code needs to pass `--start` flag explicitly\n- Update all documentation to use `bd daemon --start`\n- Ensure backward compat doesn't break existing workflows","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-23T23:54:49.906553-08:00","updated_at":"2025-11-24T00:03:15.231345-08:00","closed_at":"2025-11-24T00:03:15.231345-08:00"}
|
||||
{"id":"bd-gqo","title":"Implement health checks in daemon event loop","description":"Add health checks to checkDaemonHealth() function in daemon_event_loop.go:170:\n- Database integrity check\n- Disk space check\n- Memory usage check\n\nCurrently it's just a no-op placeholder.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-11-21T18:55:07.534304-05:00","updated_at":"2025-11-21T18:55:07.534304-05:00"}
|
||||
{"id":"bd-hdt","title":"Implement auto-merge functionality in duplicates command","description":"The duplicates.go file has a TODO at line 95 to implement the performMerge function for automatic duplicate merging. Currently it just prints a warning message. This would automate the merge process instead of just suggesting commands.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-11-21T18:55:02.828619-05:00","updated_at":"2025-11-21T18:55:02.828619-05:00"}
|
||||
{"id":"bd-ibg","title":"Integrate deletions manifest into import logic","description":"Parent: bd-imj\n\n## Task\nModify import to check deletions manifest and delete stale DB issues.\n\n## Implementation\n\nIn `internal/importer/importer.go` (or equivalent):\n\n```go\nfunc (i *Importer) Import(jsonlPath string) error {\n jsonlIDs := i.parseJSONL(jsonlPath)\n deletions, _ := deletions.Load(\".beads/deletions.jsonl\")\n \n for _, dbID := range i.db.AllIDs() {\n if jsonlIDs.Has(dbID) {\n continue // In sync\n }\n if del, ok := deletions[dbID]; ok {\n i.db.Delete(dbID)\n log.Printf(\"Purged %s (deleted %s by %s)\", dbID, del.Timestamp, del.Actor)\n continue\n }\n // Not deleted, keep as local work\n }\n \n // Normal create/update logic...\n}\n```\n\n## Acceptance Criteria\n- [ ] Import loads deletions manifest\n- [ ] DB issues not in JSONL but in manifest are deleted\n- [ ] Deletion logged with metadata (timestamp, actor)\n- [ ] Handles missing/empty deletions file gracefully\n- [ ] Integration test: delete in clone A propagates to clone B","status":"open","priority":0,"issue_type":"task","created_at":"2025-11-25T09:56:25.168983-08:00","updated_at":"2025-11-25T09:56:25.168983-08:00","dependencies":[{"issue_id":"bd-ibg","depends_on_id":"bd-2bl","type":"blocks","created_at":"2025-11-25T09:57:49.904629-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-ieh","title":"Record deletions in bd delete and bd cleanup","description":"Parent: bd-imj\n\n## Task\nModify delete operations to append to deletions manifest.\n\n## Implementation\n\n### Order of Operations (IMPORTANT)\n1. Append to deletions.jsonl first (can fail safely, no side effects)\n2. Delete from DB\n3. Export to JSONL\n\nThis order ensures we never lose deletion records.\n\n### bd delete\n```go\nfunc (s *Storage) DeleteIssue(id, actor, reason string) error {\n // NEW: Record in manifest FIRST\n if err := deletions.Append(\".beads/deletions.jsonl\", DeletionRecord{\n ID: id,\n Timestamp: time.Now().UTC(),\n Actor: actor,\n Reason: reason,\n }); err != nil {\n return fmt.Errorf(\"failed to record deletion: %w\", err)\n }\n \n // Then delete from DB\n s.db.Delete(id)\n \n // Then export\n // ...\n}\n```\n\n### bd cleanup\nSame pattern, with reason=\"cleanup\"\n\n### Actor sourcing (in order)\n1. `--actor` flag if provided\n2. `$BD_ACTOR` env var if set\n3. `git config user.name` (preferred default)\n4. `$USER` as last resort\n\n## Acceptance Criteria\n- [ ] `bd delete` appends to deletions.jsonl BEFORE deleting\n- [ ] `bd cleanup` appends to deletions.jsonl (one entry per deleted issue)\n- [ ] Actor sourced from flag \u003e env \u003e git config \u003e $USER\n- [ ] Reason field populated appropriately\n- [ ] Test: delete creates manifest entry with correct metadata\n- [ ] Test: failed manifest append prevents deletion","status":"open","priority":0,"issue_type":"task","created_at":"2025-11-25T09:56:25.721739-08:00","updated_at":"2025-11-25T10:51:34.633728-08:00","dependencies":[{"issue_id":"bd-ieh","depends_on_id":"bd-2bl","type":"blocks","created_at":"2025-11-25T09:57:49.940751-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-ibg","title":"Integrate deletions manifest into import logic","description":"Parent: bd-imj\n\n## Task\nModify import to check deletions manifest and delete stale DB issues.\n\n## Implementation\n\nIn `internal/importer/importer.go` (or equivalent):\n\n```go\nfunc (i *Importer) Import(jsonlPath string) error {\n jsonlIDs := i.parseJSONL(jsonlPath)\n deletions, _ := deletions.Load(\".beads/deletions.jsonl\")\n \n for _, dbID := range i.db.AllIDs() {\n if jsonlIDs.Has(dbID) {\n continue // In sync\n }\n if del, ok := deletions[dbID]; ok {\n i.db.Delete(dbID)\n log.Printf(\"Purged %s (deleted %s by %s)\", dbID, del.Timestamp, del.Actor)\n continue\n }\n // Not deleted, keep as local work\n }\n \n // Normal create/update logic...\n}\n```\n\n## Acceptance Criteria\n- [ ] Import loads deletions manifest\n- [ ] DB issues not in JSONL but in manifest are deleted\n- [ ] Deletion logged with metadata (timestamp, actor)\n- [ ] Handles missing/empty deletions file gracefully\n- [ ] Integration test: delete in clone A propagates to clone B","status":"in_progress","priority":0,"issue_type":"task","created_at":"2025-11-25T09:56:25.168983-08:00","updated_at":"2025-11-25T11:32:42.188441-08:00","dependencies":[{"issue_id":"bd-ibg","depends_on_id":"bd-2bl","type":"blocks","created_at":"2025-11-25T09:57:49.904629-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-ieh","title":"Record deletions in bd delete and bd cleanup","description":"Parent: bd-imj\n\n## Task\nModify delete operations to append to deletions manifest.\n\n## Implementation\n\n### Order of Operations (IMPORTANT)\n1. Append to deletions.jsonl first (can fail safely, no side effects)\n2. Delete from DB\n3. Export to JSONL\n\nThis order ensures we never lose deletion records.\n\n### bd delete\n```go\nfunc (s *Storage) DeleteIssue(id, actor, reason string) error {\n // NEW: Record in manifest FIRST\n if err := deletions.Append(\".beads/deletions.jsonl\", DeletionRecord{\n ID: id,\n Timestamp: time.Now().UTC(),\n Actor: actor,\n Reason: reason,\n }); err != nil {\n return fmt.Errorf(\"failed to record deletion: %w\", err)\n }\n \n // Then delete from DB\n s.db.Delete(id)\n \n // Then export\n // ...\n}\n```\n\n### bd cleanup\nSame pattern, with reason=\"cleanup\"\n\n### Actor sourcing (in order)\n1. `--actor` flag if provided\n2. `$BD_ACTOR` env var if set\n3. `git config user.name` (preferred default)\n4. `$USER` as last resort\n\n## Acceptance Criteria\n- [ ] `bd delete` appends to deletions.jsonl BEFORE deleting\n- [ ] `bd cleanup` appends to deletions.jsonl (one entry per deleted issue)\n- [ ] Actor sourced from flag \u003e env \u003e git config \u003e $USER\n- [ ] Reason field populated appropriately\n- [ ] Test: delete creates manifest entry with correct metadata\n- [ ] Test: failed manifest append prevents deletion","status":"in_progress","priority":0,"issue_type":"task","created_at":"2025-11-25T09:56:25.721739-08:00","updated_at":"2025-11-25T11:32:42.128789-08:00","dependencies":[{"issue_id":"bd-ieh","depends_on_id":"bd-2bl","type":"blocks","created_at":"2025-11-25T09:57:49.940751-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-imj","title":"Deletion propagation via deletions manifest","description":"## Problem\n\nWhen `bd cleanup -f` or `bd delete` removes issues in one clone, those deletions don't propagate to other clones. The import logic only creates/updates, never deletes. This causes \"resurrection\" where deleted issues reappear.\n\n## Root Cause\n\nImport sees DB issues not in JSONL and assumes they're \"local unpushed work\" rather than \"intentionally deleted upstream.\"\n\n## Solution: Deletions Manifest\n\nAdd `.beads/deletions.jsonl` - an append-only log of deleted issue IDs with metadata.\n\n### Format\n```jsonl\n{\"id\":\"bd-xxx\",\"ts\":\"2025-11-25T10:00:00Z\",\"by\":\"stevey\"}\n{\"id\":\"bd-yyy\",\"ts\":\"2025-11-25T10:05:00Z\",\"by\":\"claude\",\"reason\":\"duplicate of bd-zzz\"}\n```\n\n### Fields\n- `id`: Issue ID (required)\n- `ts`: ISO 8601 UTC timestamp (required)\n- `by`: Actor who deleted (required)\n- `reason`: Optional context (\"cleanup\", \"duplicate of X\", etc.)\n\n### Import Logic\n```\nFor each DB issue not in JSONL:\n 1. Check deletions manifest → if found, delete from DB\n 2. Fallback: check git history → if found, delete + backfill manifest\n 3. Neither → keep (local unpushed work)\n```\n\n### Conflict Resolution\nSimultaneous deletions from multiple clones are handled naturally:\n- Append-only design means both clones append their deletion records\n- On merge, file contains duplicate entries (same ID, different timestamps)\n- `LoadDeletions` deduplicates by ID (keeps any/first entry)\n- Result: deletion propagates correctly, duplicates are harmless\n\n### Pruning Policy\n- Default retention: 7 days (configurable via `deletions.retention_days`)\n- Auto-compact during `bd sync` is **opt-in** (disabled by default)\n- Hard cap: `deletions.max_entries` (default 50000)\n- Git fallback handles pruned entries (self-healing)\n\n### Self-Healing\nWhen git fallback catches a resurrection (pruned entry), it backfills the manifest. One-time git scan cost, then fast again.\n\n### Size Estimates\n- ~80 bytes/entry\n- 7-day retention with 100 deletions/day = ~56KB\n- Git compressed: ~10KB\n\n## Benefits\n- ✅ Deletions propagate across clones\n- ✅ O(1) lookup (no git scan in normal case)\n- ✅ Works in shallow clones\n- ✅ Survives history rewrite\n- ✅ Audit trail (who deleted what when)\n- ✅ Self-healing via git fallback\n- ✅ Bounded size via time-based pruning\n\n## References\n- Investigation session: 2025-11-25\n- Related: bd-2q6d (stale database warnings)","status":"open","priority":0,"issue_type":"epic","created_at":"2025-11-25T09:56:01.98027-08:00","updated_at":"2025-11-25T10:52:28.738368-08:00","dependencies":[{"issue_id":"bd-imj","depends_on_id":"bd-2bl","type":"blocks","created_at":"2025-11-25T09:57:37.845868-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-ibg","type":"blocks","created_at":"2025-11-25T09:57:42.694905-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-ieh","type":"blocks","created_at":"2025-11-25T09:57:42.724272-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-pnm","type":"blocks","created_at":"2025-11-25T09:57:42.755707-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-v2x","type":"blocks","created_at":"2025-11-25T09:57:42.790453-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-qsm","type":"blocks","created_at":"2025-11-25T09:57:42.821911-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-x2i","type":"blocks","created_at":"2025-11-25T09:57:42.851712-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-44e","type":"blocks","created_at":"2025-11-25T09:57:42.88154-08:00","created_by":"daemon"},{"issue_id":"bd-imj","depends_on_id":"bd-kzo","type":"blocks","created_at":"2025-11-25T10:52:16.319402-08:00","created_by":"daemon"}]}
|
||||
{"id":"bd-j3zt","title":"Fix mypy errors in beads-mcp","description":"Running `mypy .` in `integrations/beads-mcp` reports 287 errors. These should be addressed to improve type safety and code quality.","status":"open","priority":3,"issue_type":"task","created_at":"2025-11-20T18:53:28.557708-05:00","updated_at":"2025-11-20T18:53:28.557708-05:00"}
|
||||
{"id":"bd-koab","title":"Import should continue on FOREIGN KEY constraint violations from deletions","description":"# Problem\n\nWhen importing JSONL after a merge that includes deletions, we may encounter FOREIGN KEY constraint violations if:\n- Issue A was deleted in one branch\n- Issue B (that depends on A) was modified in another branch \n- The merge keeps the deletion of A and the modification of B\n- Import tries to import B with a dependency/reference to deleted A\n\nCurrently import fails completely on such constraint violations, requiring manual intervention.\n\n# Solution\n\nAdd IsForeignKeyConstraintError() helper similar to IsUniqueConstraintError()\n\nUpdate import code to:\n1. Detect FOREIGN KEY constraint violations\n2. Log a warning with the issue ID and constraint\n3. Continue importing remaining issues\n4. Report summary of skipped issues at the end\n\n# Implementation Notes\n\n- Add to internal/storage/sqlite/util.go\n- Pattern: strings.Contains(err.Error(), \"FOREIGN KEY constraint failed\")\n- Update importer to handle these errors gracefully\n- Keep track of skipped issues for summary reporting","notes":"## Implementation Complete\n\nAdded FOREIGN KEY constraint violation handling to the importer:\n\n**Changes made:**\n\n1. **internal/importer/importer.go**\n - Added SkippedDependencies field to Result struct\n - Updated importDependencies() to accept result parameter\n - Added FK constraint detection using sqlite.IsForeignKeyConstraintError()\n - Log warning for each skipped dependency\n - Track skipped dependencies in result\n\n2. **cmd/bd/import_shared.go**\n - Added SkippedDependencies field to ImportResult struct\n - Updated result conversion to include skipped dependencies\n\n3. **cmd/bd/import.go**\n - Added summary reporting for skipped dependencies\n - Displays warning with list of skipped dependencies and helpful context\n\n**Behavior:**\n- When a FOREIGN KEY constraint violation is encountered during dependency import:\n - A warning is logged: 'Warning: Skipping dependency due to missing reference: issue-a → issue-b (blocks)'\n - The dependency is tracked in result.SkippedDependencies\n - Import continues with remaining dependencies\n - Summary at end lists all skipped dependencies with context message\n\n**Testing:**\n- All existing importer tests pass\n- Build succeeds\n- Ready for real-world testing when FK constraint violations are encountered","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-23T21:37:02.811665-08:00","updated_at":"2025-11-24T00:01:27.559495-08:00","closed_at":"2025-11-23T23:31:04.325337-08:00"}
|
||||
|
||||
@@ -1,357 +1,275 @@
|
||||
# Architecture
|
||||
|
||||
This document describes the internal architecture of the `bd` issue tracker, with particular focus on concurrency guarantees and data consistency.
|
||||
This document describes bd's overall architecture - the data model, sync mechanism, and how components fit together. For internal implementation details (FlushManager, Blocked Cache), see [INTERNALS.md](INTERNALS.md).
|
||||
|
||||
## Auto-Flush Architecture
|
||||
## The Three-Layer Data Model
|
||||
|
||||
### Problem Statement (Issue bd-52)
|
||||
|
||||
The original auto-flush implementation suffered from a critical race condition when multiple concurrent operations accessed shared state:
|
||||
|
||||
- **Concurrent access points:**
|
||||
- Auto-flush timer goroutine (5s debounce)
|
||||
- Daemon sync goroutine
|
||||
- Concurrent CLI commands
|
||||
- Git hook execution
|
||||
- PersistentPostRun cleanup
|
||||
|
||||
- **Shared mutable state:**
|
||||
- `isDirty` flag
|
||||
- `needsFullExport` flag
|
||||
- `flushTimer` instance
|
||||
- `storeActive` flag
|
||||
|
||||
- **Impact:**
|
||||
- Potential data loss under concurrent load
|
||||
- Corruption when multiple agents/commands run simultaneously
|
||||
- Race conditions during rapid commits
|
||||
- Flush operations could access closed storage
|
||||
|
||||
### Solution: Event-Driven FlushManager
|
||||
|
||||
The race condition was eliminated by replacing timer-based shared state with an event-driven architecture using a single-owner pattern.
|
||||
|
||||
#### Architecture
|
||||
bd's core design enables a distributed, git-backed issue tracker that feels like a centralized database. The "magic" comes from three synchronized layers:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────┐
|
||||
│ Command/Agent │
|
||||
│ │
|
||||
│ markDirtyAndScheduleFlush() ─┐ │
|
||||
│ markDirtyAndScheduleFullExport() ─┐ │
|
||||
└────────────────────────────────────┼───┼────────────────┘
|
||||
│ │
|
||||
v v
|
||||
┌────────────────────────────────────┐
|
||||
│ FlushManager │
|
||||
│ (Single-Owner Pattern) │
|
||||
│ │
|
||||
│ Channels (buffered): │
|
||||
│ - markDirtyCh │
|
||||
│ - timerFiredCh │
|
||||
│ - flushNowCh │
|
||||
│ - shutdownCh │
|
||||
│ │
|
||||
│ State (owned by run() goroutine): │
|
||||
│ - isDirty │
|
||||
│ - needsFullExport │
|
||||
│ - debounceTimer │
|
||||
└────────────────────────────────────┘
|
||||
│
|
||||
v
|
||||
┌────────────────────────────────────┐
|
||||
│ flushToJSONLWithState() │
|
||||
│ │
|
||||
│ - Validates store is active │
|
||||
│ - Checks JSONL integrity │
|
||||
│ - Performs incremental/full export│
|
||||
│ - Updates export hashes │
|
||||
└────────────────────────────────────┘
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ CLI Layer │
|
||||
│ │
|
||||
│ bd create, list, update, close, ready, show, dep, sync, ... │
|
||||
│ - Cobra commands in cmd/bd/ │
|
||||
│ - All commands support --json for programmatic use │
|
||||
│ - Tries daemon RPC first, falls back to direct DB access │
|
||||
└──────────────────────────────┬──────────────────────────────────┘
|
||||
│
|
||||
v
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ SQLite Database │
|
||||
│ (.beads/beads.db) │
|
||||
│ │
|
||||
│ - Local working copy (gitignored) │
|
||||
│ - Fast queries, indexes, foreign keys │
|
||||
│ - Issues, dependencies, labels, comments, events │
|
||||
│ - Each machine has its own copy │
|
||||
└──────────────────────────────┬──────────────────────────────────┘
|
||||
│
|
||||
auto-sync
|
||||
(5s debounce)
|
||||
│
|
||||
v
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ JSONL File │
|
||||
│ (.beads/beads.jsonl) │
|
||||
│ │
|
||||
│ - Git-tracked source of truth │
|
||||
│ - One JSON line per entity (issue, dep, label, comment) │
|
||||
│ - Merge-friendly: additions rarely conflict │
|
||||
│ - Shared across machines via git push/pull │
|
||||
└──────────────────────────────┬──────────────────────────────────┘
|
||||
│
|
||||
git push/pull
|
||||
│
|
||||
v
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Remote Repository │
|
||||
│ (GitHub, GitLab, etc.) │
|
||||
│ │
|
||||
│ - Stores JSONL as part of normal repo history │
|
||||
│ - All collaborators share the same issue database │
|
||||
│ - Protected branch support via separate sync branch │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
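As a concrete illustration of the JSONL layer, a single entity line looks like this (field names taken from this repository's own exports; the illustrative values are made up, and the exact field set varies by entity type):
|
||||
```jsonl
{"id":"bd-a1b2","title":"Example issue","description":"One entity per line keeps diffs readable.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-25T10:00:00Z","updated_at":"2025-11-25T10:00:00Z"}
```
|
||||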
|
||||
#### Key Design Principles
|
||||
### Why This Design?
|
||||
|
||||
**1. Single Owner Pattern**
|
||||
**SQLite for speed:** Local queries complete in milliseconds. Complex dependency graphs, full-text search, and joins are fast.
|
||||
|
||||
All flush state (`isDirty`, `needsFullExport`, `debounceTimer`) is owned by a single background goroutine (`FlushManager.run()`). This eliminates the need for mutexes to protect this state.
|
||||
**JSONL for git:** One entity per line means git diffs are readable and merges usually succeed automatically. No binary database files in version control.
|
||||
|
||||
**2. Channel-Based Communication**
|
||||
**Git for distribution:** No special sync server needed. Issues travel with your code. Offline work just works.
|
||||
|
||||
External code communicates with FlushManager via buffered channels:
|
||||
- `markDirtyCh`: Request to mark DB dirty (incremental or full export)
|
||||
- `timerFiredCh`: Debounce timer expired notification
|
||||
- `flushNowCh`: Synchronous flush request (returns error)
|
||||
- `shutdownCh`: Graceful shutdown with final flush
|
||||
## Write Path
|
||||
|
||||
**3. No Shared Mutable State**
|
||||
|
||||
Flush state is never shared between goroutines; all communication happens through channel sends and receives. The `storeActive` flag and `store` pointer still use a mutex, but only to coordinate with the store lifecycle, not flush logic.
|
||||
|
||||
**4. Debouncing Without Locks**
|
||||
|
||||
The timer callback sends to `timerFiredCh` instead of directly manipulating state. The run() goroutine processes timer events in its select loop, eliminating timer-related races.
|
||||
|
||||
#### Concurrency Guarantees
|
||||
|
||||
**Thread-Safety:**
|
||||
- `MarkDirty(fullExport bool)` - Safe from any goroutine, non-blocking
|
||||
- `FlushNow() error` - Safe from any goroutine, blocks until flush completes
|
||||
- `Shutdown() error` - Idempotent, safe to call multiple times
|
||||
|
||||
**Debouncing Guarantees:**
|
||||
- Multiple `MarkDirty()` calls within the debounce window → single flush
|
||||
- Timer resets on each mark; the flush occurs after the last modification in the window
|
||||
- `FlushNow()` bypasses the debounce window and forces an immediate flush
|
||||
|
||||
**Shutdown Guarantees:**
|
||||
- Final flush performed if database is dirty
|
||||
- Background goroutine cleanly exits
|
||||
- Idempotent via `sync.Once` - safe for multiple calls
|
||||
- Subsequent operations after shutdown are no-ops
|
||||
|
||||
**Store Lifecycle:**
|
||||
- FlushManager checks `storeActive` flag before every flush
|
||||
- Store closure is coordinated via `storeMutex`
|
||||
- Flush safely aborts if store closes mid-operation
|
||||
|
||||
#### Migration Path
|
||||
|
||||
The implementation maintains backward compatibility:
|
||||
|
||||
1. **Legacy path (tests):** If `flushManager == nil`, falls back to old timer-based logic
|
||||
2. **New path (production):** Uses FlushManager event-driven architecture
|
||||
3. **Wrapper functions:** `markDirtyAndScheduleFlush()` and `markDirtyAndScheduleFullExport()` delegate to FlushManager when available
|
||||
|
||||
This allows existing tests to pass without modification while fixing the race condition in production.
|
||||
|
||||
## Testing
|
||||
|
||||
### Race Detection
|
||||
|
||||
Comprehensive race detector tests ensure concurrency safety:
|
||||
|
||||
- `TestFlushManagerConcurrentMarkDirty` - Many goroutines marking dirty
|
||||
- `TestFlushManagerConcurrentFlushNow` - Concurrent immediate flushes
|
||||
- `TestFlushManagerMarkDirtyDuringFlush` - Interleaved mark/flush operations
|
||||
- `TestFlushManagerShutdownDuringOperation` - Shutdown while operations ongoing
|
||||
- `TestMarkDirtyAndScheduleFlushConcurrency` - Integration test with legacy API
|
||||
|
||||
Run with: `go test -race -run TestFlushManager ./cmd/bd`
|
||||
|
||||
### In-Process Test Compatibility
|
||||
|
||||
The FlushManager is designed to work correctly when commands run multiple times in the same process (common in tests):
|
||||
|
||||
- Each command execution in `PersistentPreRun` creates a new FlushManager
|
||||
- `PersistentPostRun` shuts down the manager
|
||||
- `Shutdown()` is idempotent via `sync.Once`
|
||||
- Old managers are garbage collected when replaced
|
||||
|
||||
## Related Subsystems
|
||||
|
||||
### Daemon Mode
|
||||
|
||||
When running with daemon mode (`--no-daemon=false`), the CLI delegates to an RPC server. The FlushManager is NOT used in daemon mode - the daemon process has its own flush coordination.
|
||||
|
||||
The `daemonClient != nil` check in `PersistentPostRun` ensures FlushManager shutdown only occurs in direct mode.
|
||||
|
||||
### Auto-Import
|
||||
|
||||
Auto-import runs in `PersistentPreRun` before FlushManager is used. It may call `markDirtyAndScheduleFlush()` or `markDirtyAndScheduleFullExport()` if JSONL changes are detected.
|
||||
|
||||
Hash-based comparison (not mtime) prevents git pull false positives (issue bd-84).
|
||||
|
||||
### JSONL Integrity
|
||||
|
||||
`flushToJSONLWithState()` validates JSONL file hash before flush:
|
||||
- Compares stored hash with actual file hash
|
||||
- If mismatch detected, clears export_hashes and forces full re-export (issue bd-160)
|
||||
- Prevents staleness when JSONL is modified outside bd
|
||||
|
||||
### Export Modes
|
||||
|
||||
**Incremental export (default):**
|
||||
- Exports only dirty issues (tracked in `dirty_issues` table)
|
||||
- Merges with existing JSONL file
|
||||
- Faster for small changesets
|
||||
|
||||
**Full export (after ID changes):**
|
||||
- Exports all issues from database
|
||||
- Rebuilds JSONL from scratch
|
||||
- Required after operations like `rename-prefix` that change issue IDs
|
||||
- Triggered by `markDirtyAndScheduleFullExport()`
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Debounce window:** Configurable via `getDebounceDuration()` (default 5s)
|
||||
- **Channel buffer sizes:**
|
||||
- markDirtyCh: 10 events (prevents blocking during bursts)
|
||||
- timerFiredCh: 1 event (timer notifications coalesce naturally)
|
||||
- flushNowCh: 1 request (synchronous, one at a time)
|
||||
- shutdownCh: 1 request (one-shot operation)
|
||||
- **Memory overhead:** One goroutine + minimal channel buffers per command execution
|
||||
- **Flush latency:** Debounce duration + JSONL write time (typically <100ms for incremental)
|
||||
|
||||
## Blocked Issues Cache (bd-5qim)
|
||||
|
||||
### Problem Statement
|
||||
|
||||
The `bd ready` command originally computed blocked issues using a recursive CTE on every query. On a 10K issue database, each query took ~752ms, making the command feel sluggish and impractical for large projects.
|
||||
|
||||
### Solution: Materialized Cache Table
|
||||
|
||||
The `blocked_issues_cache` table materializes the blocking computation, storing issue IDs for all currently blocked issues. Queries now use a simple `NOT EXISTS` check against this cache, completing in ~29ms (25x speedup).
|
||||
|
||||
## Write Path

When you create or modify an issue:

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   CLI Command   │───▶│  SQLite Write   │───▶│   Mark Dirty    │
│   (bd create)   │    │   (immediate)   │    │ (trigger sync)  │
└─────────────────┘    └─────────────────┘    └────────┬────────┘
                                                       │
                                               5-second debounce
                                                       │
                                                       v
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Git Commit    │◀───│  JSONL Export   │◀───│  FlushManager   │
│   (git hooks)   │    │  (incremental)  │    │  (background)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```

1. **Command executes:** `bd create "New feature"` writes to SQLite immediately
2. **Mark dirty:** The operation marks the database as needing export
3. **Debounce window:** Wait 5 seconds for batch operations (configurable)
4. **Export to JSONL:** Only changed entities are appended/updated
5. **Git commit:** If git hooks are installed, changes auto-commit

Key implementation:

- Export: `cmd/bd/export.go`, `cmd/bd/autoflush.go`
- FlushManager: `internal/flush/` (see [INTERNALS.md](INTERNALS.md))
- Dirty tracking: `internal/storage/sqlite/dirty_issues.go`

## Read Path

When you query issues after a `git pull`:

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    git pull     │───▶│   Auto-Import   │───▶│  SQLite Update  │
│   (new JSONL)   │    │  (on next cmd)  │    │  (merge logic)  │
└─────────────────┘    └─────────────────┘    └────────┬────────┘
                                                       │
                                                       v
                                              ┌─────────────────┐
                                              │    CLI Query    │
                                              │   (bd ready)    │
                                              └─────────────────┘
```

1. **Git pull:** Fetches updated JSONL from remote
2. **Auto-import detection:** First bd command checks if JSONL is newer than the DB
3. **Import to SQLite:** Parse JSONL, merge with local state using content hashes
4. **Query:** Commands read from fast local SQLite

Key implementation:

- Import: `cmd/bd/import.go`, `cmd/bd/autoimport.go`
- Auto-import logic: `internal/autoimport/autoimport.go`
- Collision detection: `internal/importer/importer.go`

## Hash-Based Collision Prevention

The key insight that enables distributed operation: **content-based hashing for deduplication**.

### The Problem

Sequential IDs (bd-1, bd-2, bd-3) cause collisions when multiple agents create issues concurrently:

```
Branch A: bd create "Add OAuth"  → bd-10
Branch B: bd create "Add Stripe" → bd-10 (collision!)
```

### The Solution

Hash-based IDs derived from random UUIDs ensure uniqueness:

```
Branch A: bd create "Add OAuth"  → bd-a1b2
Branch B: bd create "Add Stripe" → bd-f14c (no collision)
```
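A minimal sketch of this ID scheme, assuming 128 random bits (standing in for a UUIDv4) hashed down to a short hex prefix. The function name `newIssueID` and the exact derivation are illustrative, not bd's actual implementation:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// newIssueID draws 128 random bits, hashes them, and keeps a short
// hex prefix as the issue ID (hypothetical helper for illustration).
func newIssueID(prefix string, hashLen int) (string, error) {
	buf := make([]byte, 16) // 128 bits of randomness, like a UUIDv4
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	sum := sha256.Sum256(buf)
	return prefix + "-" + hex.EncodeToString(sum[:])[:hashLen], nil
}

func main() {
	a, _ := newIssueID("bd", 4)
	b, _ := newIssueID("bd", 4)
	fmt.Println(a, b) // two independent draws, e.g. bd-3f0c and bd-9a21
}
```

Because each ID is an independent random draw, two branches creating issues concurrently almost never produce the same ID at these hash lengths.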
### How It Works

1. **Issue creation:** Generate a random UUID, derive a short hash as the ID
2. **Progressive scaling:** IDs start at 4 chars and grow to 5-6 chars as the database grows
3. **Content hashing:** Each issue has a content hash for change detection
4. **Import merge:** Same ID + different content = update; same ID + same content = skip

```
┌─────────────────────────────────────────────────────────────────┐
│                          Import Logic                           │
│                                                                 │
│   For each issue in JSONL:                                      │
│     1. Compute content hash                                     │
│     2. Look up existing issue by ID                             │
│     3. Compare hashes:                                          │
│        - Same hash      → skip   (already imported)             │
│        - Different hash → update (newer version)                │
│        - No match       → create (new issue)                    │
└─────────────────────────────────────────────────────────────────┘
```
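The decision table above can be sketched in a few lines of Go. The `issue` struct, `contentHash` derivation, and `mergeAction` helper are illustrative assumptions, not the real importer in `internal/importer/`:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// issue is a stripped-down stand-in for bd's real issue struct.
type issue struct {
	ID, Title, Description string
}

// contentHash hashes the fields that define an issue's content.
func contentHash(is issue) string {
	sum := sha256.Sum256([]byte(is.Title + "\x00" + is.Description))
	return hex.EncodeToString(sum[:])
}

// mergeAction applies the import rules: unknown ID → create,
// same ID + same hash → skip, same ID + different hash → update.
func mergeAction(local map[string]issue, incoming issue) string {
	existing, ok := local[incoming.ID]
	switch {
	case !ok:
		return "create"
	case contentHash(existing) == contentHash(incoming):
		return "skip"
	default:
		return "update"
	}
}

func main() {
	local := map[string]issue{"bd-a1b2": {"bd-a1b2", "Add OAuth", ""}}
	fmt.Println(mergeAction(local, issue{"bd-a1b2", "Add OAuth", ""}))    // skip
	fmt.Println(mergeAction(local, issue{"bd-a1b2", "Add OAuth v2", ""})) // update
	fmt.Println(mergeAction(local, issue{"bd-f14c", "Add Stripe", ""}))   // create
}
```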
This eliminates the need for central coordination while ensuring all machines converge to the same state.

See [COLLISION_MATH.md](COLLISION_MATH.md) for birthday paradox calculations on hash length vs. collision probability.

## Daemon Architecture

Each workspace runs its own background daemon for auto-sync:

```
┌─────────────────────────────────────────────────────────────────┐
│                       Per-Workspace Daemon                      │
│                                                                 │
│   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐         │
│   │ RPC Server  │    │  Auto-Sync  │    │ Background  │         │
│   │  (bd.sock)  │    │   Manager   │    │    Tasks    │         │
│   └──────┬──────┘    └──────┬──────┘    └──────┬──────┘         │
│          │                  │                  │                │
│          └──────────────────┴──────────────────┘                │
│                             │                                   │
│                             v                                   │
│                      ┌─────────────┐                            │
│                      │   SQLite    │                            │
│                      │  Database   │                            │
│                      └─────────────┘                            │
└─────────────────────────────────────────────────────────────────┘

CLI commands ───RPC───▶ Daemon ───SQL───▶ Database
    or
CLI commands ───SQL───▶ Database (if daemon unavailable)
```

**Why daemons?**

- Batches multiple operations before export
- Holds the database connection open (faster queries)
- Coordinates auto-sync timing
- One daemon per workspace (LSP-like model)

**Communication:**

- Unix domain socket at `.beads/bd.sock` (Windows: named pipes)
- Protocol defined in `internal/rpc/protocol.go`
- CLI tries the daemon first, falls back to direct DB access
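The try-daemon-then-fall-back behavior can be sketched with a socket dial. The `dialDaemon` helper and its timeout are illustrative assumptions; the real client lives behind `internal/rpc/`:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// dialDaemon tries the workspace's Unix socket; on failure the caller
// falls back to opening SQLite directly (hypothetical helper).
func dialDaemon(socketPath string) (net.Conn, bool) {
	conn, err := net.DialTimeout("unix", socketPath, 100*time.Millisecond)
	if err != nil {
		return nil, false // daemon absent: use direct DB access
	}
	return conn, true
}

func main() {
	if _, ok := dialDaemon(".beads/bd.sock"); !ok {
		fmt.Println("daemon unavailable, using direct DB access")
	}
}
```

This mirrors the LSP-like model: the socket path is per-workspace, so each project gets its own daemon and a missing daemon is never fatal.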
**Lifecycle:**

- Auto-starts on first bd command (unless `BEADS_NO_DAEMON=1`)
- Auto-restarts after version upgrades
- Managed via the `bd daemons` command

See [DAEMON.md](DAEMON.md) for operational details.

## Data Types

Core types in `internal/types/types.go`:

| Type | Description | Key Fields |
|------|-------------|------------|
| **Issue** | Work item | ID, Title, Description, Status, Priority, Type |
| **Dependency** | Relationship | FromID, ToID, Type (blocks/related/parent-child/discovered-from) |
| **Label** | Tag | Name, Color, Description |
| **Comment** | Discussion | IssueID, Author, Content, Timestamp |
| **Event** | Audit trail | IssueID, Type, Data, Timestamp |
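The table maps naturally onto Go structs. This is a hypothetical sketch whose field sets follow the table, not necessarily `internal/types/types.go` exactly:

```go
package main

import (
	"fmt"
	"time"
)

// Issue is a work item (fields taken from the table above).
type Issue struct {
	ID, Title, Description string
	Status                 string
	Priority               int
	Type                   string
}

// Dependency links two issues by ID.
type Dependency struct {
	FromID, ToID string
	Type         string // blocks | related | parent-child | discovered-from
}

// Comment records discussion on an issue.
type Comment struct {
	IssueID, Author, Content string
	Timestamp                time.Time
}

func main() {
	oauth := Issue{ID: "bd-a1b2", Title: "Add OAuth", Status: "open", Priority: 2, Type: "task"}
	dep := Dependency{FromID: "bd-f14c", ToID: oauth.ID, Type: "blocks"}
	fmt.Println(oauth.ID, dep.Type)
}
```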
### Dependency Types

| Type | Semantic | Affects `bd ready`? |
|------|----------|---------------------|
| `blocks` | Issue X must close before Y starts | Yes |
| `parent-child` | Hierarchical (epic/subtask) | Yes (children blocked if parent blocked) |
| `related` | Soft link for reference | No |
| `discovered-from` | Found during work on parent | No |

### Status Flow

```
open ──▶ in_progress ──▶ closed
  │                        │
  └────────────────────────┘
           (reopen)
```

## Directory Structure

```
.beads/
├── beads.db          # SQLite database (gitignored)
├── beads.jsonl       # JSONL source of truth (git-tracked)
├── bd.sock           # Daemon socket (gitignored)
├── daemon.log        # Daemon logs (gitignored)
├── config.yaml       # Project config (optional)
└── export_hashes.db  # Export tracking (gitignored)
```

## Key Code Paths

| Area | Files |
|------|-------|
| CLI entry | `cmd/bd/main.go` |
| Storage interface | `internal/storage/storage.go` |
| SQLite implementation | `internal/storage/sqlite/` |
| RPC protocol | `internal/rpc/protocol.go`, `server_*.go` |
| Export logic | `cmd/bd/export.go`, `autoflush.go` |
| Import logic | `cmd/bd/import.go`, `internal/importer/` |
| Auto-sync | `internal/autoimport/`, `internal/flush/` |

## Related Documentation

- [INTERNALS.md](INTERNALS.md) - FlushManager and Blocked Cache implementation details
- [DAEMON.md](DAEMON.md) - Daemon management and configuration
- [EXTENDING.md](EXTENDING.md) - Adding custom tables to SQLite
- [TROUBLESHOOTING.md](TROUBLESHOOTING.md) - Recovery procedures and common issues
- [FAQ.md](FAQ.md) - Common questions about the architecture
- [COLLISION_MATH.md](COLLISION_MATH.md) - Hash collision probability analysis
---

*New file: docs/INTERNALS.md (359 lines)*
# Internals

This document describes internal implementation details of bd, with particular focus on concurrency guarantees and data consistency.

For the overall architecture (data model, sync mechanism, component overview), see [ARCHITECTURE.md](ARCHITECTURE.md).

## Auto-Flush Architecture

### Problem Statement (Issue bd-52)

The original auto-flush implementation suffered from a critical race condition when multiple concurrent operations accessed shared state:

- **Concurrent access points:**
  - Auto-flush timer goroutine (5s debounce)
  - Daemon sync goroutine
  - Concurrent CLI commands
  - Git hook execution
  - PersistentPostRun cleanup

- **Shared mutable state:**
  - `isDirty` flag
  - `needsFullExport` flag
  - `flushTimer` instance
  - `storeActive` flag

- **Impact:**
  - Potential data loss under concurrent load
  - Corruption when multiple agents/commands run simultaneously
  - Race conditions during rapid commits
  - Flush operations could access closed storage

### Solution: Event-Driven FlushManager

The race condition was eliminated by replacing timer-based shared state with an event-driven architecture using a single-owner pattern.

#### Architecture

```
┌─────────────────────────────────────────────────────────┐
│                      Command/Agent                      │
│                                                         │
│   markDirtyAndScheduleFlush() ──────────┐               │
│   markDirtyAndScheduleFullExport() ──┐  │               │
└──────────────────────────────────────┼──┼───────────────┘
                                       │  │
                                       v  v
                    ┌────────────────────────────────────┐
                    │            FlushManager            │
                    │       (Single-Owner Pattern)       │
                    │                                    │
                    │  Channels (buffered):              │
                    │    - markDirtyCh                   │
                    │    - timerFiredCh                  │
                    │    - flushNowCh                    │
                    │    - shutdownCh                    │
                    │                                    │
                    │  State (owned by run() goroutine): │
                    │    - isDirty                       │
                    │    - needsFullExport               │
                    │    - debounceTimer                 │
                    └────────────────────────────────────┘
                                     │
                                     v
                    ┌────────────────────────────────────┐
                    │      flushToJSONLWithState()       │
                    │                                    │
                    │  - Validates store is active       │
                    │  - Checks JSONL integrity          │
                    │  - Performs incremental/full export│
                    │  - Updates export hashes           │
                    └────────────────────────────────────┘
```
#### Key Design Principles

**1. Single Owner Pattern**

All flush state (`isDirty`, `needsFullExport`, `debounceTimer`) is owned by a single background goroutine (`FlushManager.run()`). This eliminates the need for mutexes to protect this state.

**2. Channel-Based Communication**

External code communicates with the FlushManager via buffered channels:

- `markDirtyCh`: Request to mark the DB dirty (incremental or full export)
- `timerFiredCh`: Debounce timer expired notification
- `flushNowCh`: Synchronous flush request (returns error)
- `shutdownCh`: Graceful shutdown with final flush

**3. No Shared Mutable State**

The only shared state is accessed via atomic operations (channel sends/receives). The `storeActive` flag and `store` pointer still use a mutex, but only to coordinate with the store lifecycle, not flush logic.

**4. Debouncing Without Locks**

The timer callback sends to `timerFiredCh` instead of directly manipulating state. The `run()` goroutine processes timer events in its select loop, eliminating timer-related races.
#### Concurrency Guarantees

**Thread-Safety:**

- `MarkDirty(fullExport bool)` - Safe from any goroutine, non-blocking
- `FlushNow() error` - Safe from any goroutine, blocks until the flush completes
- `Shutdown() error` - Idempotent, safe to call multiple times

**Debouncing Guarantees:**

- Multiple `MarkDirty()` calls within the debounce window → single flush
- Timer resets on each mark; the flush occurs after the last modification
- `FlushNow()` bypasses the debounce and forces an immediate flush

**Shutdown Guarantees:**

- Final flush performed if the database is dirty
- Background goroutine cleanly exits
- Idempotent via `sync.Once` - safe for multiple calls
- Subsequent operations after shutdown are no-ops

**Store Lifecycle:**

- FlushManager checks the `storeActive` flag before every flush
- Store closure is coordinated via `storeMutex`
- A flush safely aborts if the store closes mid-operation
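Two of these guarantees have short, idiomatic Go expressions: a non-blocking send gives `MarkDirty` its never-blocks property, and `sync.Once` makes `Shutdown` idempotent. A sketch with illustrative names (see `internal/flush` for the real methods):

```go
package main

import (
	"fmt"
	"sync"
)

// manager sketches the public surface implied by the guarantees above.
type manager struct {
	markDirtyCh  chan bool
	shutdownOnce sync.Once
	done         chan struct{}
}

// MarkDirty never blocks: if the buffered channel is full, the event is
// dropped, because a pending mark already guarantees an upcoming flush.
func (m *manager) MarkDirty(fullExport bool) {
	select {
	case m.markDirtyCh <- fullExport:
	default: // buffer full: safe to drop
	}
}

// Shutdown is idempotent: sync.Once closes the channel exactly once, so
// repeated calls (deferred cleanup, tests) are safe no-ops.
func (m *manager) Shutdown() {
	m.shutdownOnce.Do(func() { close(m.done) })
}

func main() {
	m := &manager{markDirtyCh: make(chan bool, 1), done: make(chan struct{})}
	m.MarkDirty(false)
	m.MarkDirty(true) // buffer full → dropped, still returns immediately
	m.Shutdown()
	m.Shutdown() // second call is a no-op, no double-close panic
	fmt.Println("ok")
}
```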
#### Migration Path

The implementation maintains backward compatibility:

1. **Legacy path (tests):** If `flushManager == nil`, falls back to the old timer-based logic
2. **New path (production):** Uses the FlushManager event-driven architecture
3. **Wrapper functions:** `markDirtyAndScheduleFlush()` and `markDirtyAndScheduleFullExport()` delegate to the FlushManager when available

This allows existing tests to pass without modification while fixing the race condition in production.

## Testing

### Race Detection

Comprehensive race detector tests ensure concurrency safety:

- `TestFlushManagerConcurrentMarkDirty` - Many goroutines marking dirty
- `TestFlushManagerConcurrentFlushNow` - Concurrent immediate flushes
- `TestFlushManagerMarkDirtyDuringFlush` - Interleaved mark/flush operations
- `TestFlushManagerShutdownDuringOperation` - Shutdown while operations are ongoing
- `TestMarkDirtyAndScheduleFlushConcurrency` - Integration test with the legacy API

Run with: `go test -race -run TestFlushManager ./cmd/bd`
### In-Process Test Compatibility

The FlushManager is designed to work correctly when commands run multiple times in the same process (common in tests):

- Each command execution in `PersistentPreRun` creates a new FlushManager
- `PersistentPostRun` shuts down the manager
- `Shutdown()` is idempotent via `sync.Once`
- Old managers are garbage collected when replaced

## Related Subsystems

### Daemon Mode

When running with daemon mode (`--no-daemon=false`), the CLI delegates to an RPC server. The FlushManager is NOT used in daemon mode - the daemon process has its own flush coordination.

The `daemonClient != nil` check in `PersistentPostRun` ensures FlushManager shutdown only occurs in direct mode.

### Auto-Import

Auto-import runs in `PersistentPreRun` before the FlushManager is used. It may call `markDirtyAndScheduleFlush()` or `markDirtyAndScheduleFullExport()` if JSONL changes are detected.

Hash-based comparison (not mtime) prevents git pull false positives (issue bd-84).

### JSONL Integrity

`flushToJSONLWithState()` validates the JSONL file hash before each flush:

- Compares the stored hash with the actual file hash
- If a mismatch is detected, clears export_hashes and forces a full re-export (issue bd-160)
- Prevents staleness when the JSONL is modified outside bd

### Export Modes

**Incremental export (default):**

- Exports only dirty issues (tracked in the `dirty_issues` table)
- Merges with the existing JSONL file
- Faster for small changesets

**Full export (after ID changes):**

- Exports all issues from the database
- Rebuilds the JSONL from scratch
- Required after operations like `rename-prefix` that change issue IDs
- Triggered by `markDirtyAndScheduleFullExport()`
## Performance Characteristics

- **Debounce window:** Configurable via `getDebounceDuration()` (default 5s)
- **Channel buffer sizes:**
  - markDirtyCh: 10 events (prevents blocking during bursts)
  - timerFiredCh: 1 event (timer notifications coalesce naturally)
  - flushNowCh: 1 request (synchronous, one at a time)
  - shutdownCh: 1 request (one-shot operation)
- **Memory overhead:** One goroutine + minimal channel buffers per command execution
- **Flush latency:** Debounce duration + JSONL write time (typically <100ms for incremental)

## Blocked Issues Cache (bd-5qim)

### Problem Statement

The `bd ready` command originally computed blocked issues using a recursive CTE on every query. On a 10K issue database, each query took ~752ms, making the command feel sluggish and impractical for large projects.

### Solution: Materialized Cache Table

The `blocked_issues_cache` table materializes the blocking computation, storing issue IDs for all currently blocked issues. Queries now use a simple `NOT EXISTS` check against this cache, completing in ~29ms (a 25x speedup).

### Architecture

```
┌─────────────────────────────────────────────────────────┐
│                    GetReadyWork Query                   │
│                                                         │
│   SELECT ... FROM issues WHERE status IN (...)          │
│     AND NOT EXISTS (                                    │
│       SELECT 1 FROM blocked_issues_cache                │
│       WHERE issue_id = issues.id                        │
│     )                                                   │
│                                                         │
│   Performance: 29ms (was 752ms with recursive CTE)      │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                Cache Invalidation Triggers              │
│                                                         │
│   1. AddDependency    (blocks/parent-child only)        │
│   2. RemoveDependency (blocks/parent-child only)        │
│   3. UpdateIssue      (on any status change)            │
│   4. CloseIssue       (changes status to closed)        │
│                                                         │
│   NOT triggered by: related, discovered-from deps       │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                  Cache Rebuild Process                  │
│                                                         │
│   1. DELETE FROM blocked_issues_cache                   │
│   2. INSERT INTO blocked_issues_cache                   │
│      WITH RECURSIVE CTE:                                │
│        - Find directly blocked issues (blocks deps)     │
│        - Propagate to children (parent-child deps)      │
│   3. Happens in same transaction as triggering change   │
│                                                         │
│   Performance: <50ms full rebuild on 10K database       │
└─────────────────────────────────────────────────────────┘
```
### Blocking Semantics

An issue is blocked if:

1. **Direct blocking**: It has a `blocks` dependency on an open/in_progress/blocked issue
2. **Transitive blocking**: Its parent is blocked and the issue is connected via a `parent-child` dependency

Closed issues never block others. Related and discovered-from dependencies don't affect blocking.
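These two rules can be sketched in pure Go (the real computation is the recursive CTE shown above). The `dep` struct and `blockedSet` helper are illustrative; the depth-50 cap mirrors the safety limit mentioned under Edge Cases:

```go
package main

import "fmt"

// dep records that issue `from` has a dependency of kind `kind` on `to`.
type dep struct{ from, to, kind string }

// blockedSet applies the blocking semantics: direct `blocks` deps on
// non-closed issues, then transitive propagation to children via
// `parent-child` deps, capped at depth 50.
func blockedSet(status map[string]string, deps []dep) map[string]bool {
	blocked := map[string]bool{}
	for _, d := range deps {
		if d.kind == "blocks" && status[d.to] != "closed" {
			blocked[d.from] = true // direct blocking
		}
	}
	for depth := 0; depth < 50; depth++ {
		changed := false
		for _, d := range deps {
			// d.from is the child, d.to the parent: blocked parents
			// propagate downward.
			if d.kind == "parent-child" && blocked[d.to] && !blocked[d.from] {
				blocked[d.from] = true
				changed = true
			}
		}
		if !changed {
			break
		}
	}
	return blocked
}

func main() {
	status := map[string]string{"epic": "open", "task": "open", "sub": "open", "blk": "open"}
	deps := []dep{
		{"epic", "blk", "blocks"},        // epic is blocked by open issue blk
		{"task", "epic", "parent-child"}, // task is a child of epic
		{"sub", "task", "parent-child"},  // sub is a grandchild
	}
	b := blockedSet(status, deps)
	fmt.Println(b["epic"], b["task"], b["sub"], b["blk"]) // true true true false
}
```

Note that closing `blk` would empty the set, matching the "closing a blocker removes all blocked descendants" edge case below.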
### Cache Invalidation Strategy

**Full rebuild on every change**

Instead of incremental updates, the cache is completely rebuilt (DELETE + INSERT) on any triggering change. This approach is chosen because:

- Rebuild is fast (<50ms even on 10K issues) due to the optimized CTE
- Simpler implementation with no risk of partial/stale updates
- Dependency changes are rare compared to reads
- Guarantees consistency - the cache matches database state exactly

**Transaction safety**

All cache operations happen within the same transaction as the triggering change:

- Uses the transaction if provided, otherwise the direct db connection
- The cache can never be in an inconsistent state visible to queries
- Foreign key CASCADE ensures cache entries are deleted when issues are deleted

**Selective invalidation**

Only `blocks` and `parent-child` dependencies trigger rebuilds, since they affect blocking semantics. Related and discovered-from dependencies don't trigger invalidation, avoiding unnecessary work.

### Performance Characteristics

**Query performance (GetReadyWork):**

- Before cache: ~752ms (recursive CTE)
- With cache: ~29ms (NOT EXISTS)
- Speedup: 25x

**Write overhead:**

- Cache rebuild: <50ms
- Only triggered on dependency/status changes (rare operations)
- Trade-off: slower writes for much faster reads

### Edge Cases

1. **Parent-child transitive blocking**
   - Children of blocked parents are automatically marked as blocked
   - Propagates through arbitrary-depth hierarchies (limited to depth 50 for safety)

2. **Multiple blockers**
   - An issue blocked by multiple open issues stays blocked until all are closed
   - DISTINCT in the CTE ensures each issue appears once in the cache

3. **Status changes**
   - Closing a blocker removes all blocked descendants from the cache
   - Reopening a blocker adds them back

4. **Dependency removal**
   - Removing the last blocker unblocks the issue
   - Removing a parent-child link unblocks the orphaned subtree

5. **Foreign key cascades**
   - Cache entries are automatically deleted when an issue is deleted
   - No manual cleanup needed

### Testing

Comprehensive test coverage in `blocked_cache_test.go`:

- Cache invalidation on dependency add/remove
- Cache updates on status changes
- Multiple blockers
- Deep hierarchies
- Transitive blocking via parent-child
- Related dependencies (should NOT affect the cache)

Run tests: `go test -v ./internal/storage/sqlite -run TestCache`

### Implementation Files

- `internal/storage/sqlite/blocked_cache.go` - Cache rebuild and invalidation
- `internal/storage/sqlite/ready.go` - Uses the cache in GetReadyWork queries
- `internal/storage/sqlite/dependencies.go` - Invalidates on dependency changes
- `internal/storage/sqlite/queries.go` - Invalidates on status changes
- `internal/storage/sqlite/migrations/015_blocked_issues_cache.go` - Schema and initial population

### Future Optimizations

If rebuild becomes a bottleneck in very large databases (>100K issues):

- Consider incremental updates for specific dependency types
- Add indexes to the dependencies table for CTE performance
- Implement dirty tracking to avoid rebuilds when the cache is unchanged

However, current performance is excellent for realistic workloads.

## Future Improvements

Potential enhancements for multi-agent scenarios:

1. **Flush coordination across agents:**
   - Shared lock file to prevent concurrent JSONL writes
   - Detection of external JSONL modifications during flush

2. **Adaptive debounce window:**
   - Shorter debounce during interactive sessions
   - Longer debounce during batch operations

3. **Flush progress tracking:**
   - Expose flush queue depth via a status API
   - Allow clients to wait for flush completion

4. **Per-issue dirty tracking optimization:**
   - Currently tracks full vs. incremental
   - Could track specific issue IDs for surgical updates