From cabe5e8d9b73e7ca4adefa0bc5e9f7d997336404 Mon Sep 17 00:00:00 2001 From: Steve Yegge Date: Sun, 26 Oct 2025 21:38:55 -0700 Subject: [PATCH] bd sync: 2025-10-26 21:38:55 --- .beads/bd.jsonl | 47 +++++++++++++++++++++++------------------------ 1 file changed, 23 insertions(+), 24 deletions(-) diff --git a/.beads/bd.jsonl b/.beads/bd.jsonl index a08ca453..55bad754 100644 --- a/.beads/bd.jsonl +++ b/.beads/bd.jsonl @@ -49,35 +49,31 @@ {"id":"bd-142","title":"Local testing of bd init --quiet and auto-sync improvements","description":"Before doing official version bump, test the new changes locally across multiple agent projects:\n\n**Changes to test:**\n- bd init --quiet (non-interactive for agents)\n- Auto-sync documentation updates\n- Git hooks auto-install in quiet mode\n\n**Test scenarios:**\n1. Fresh repo clone with existing .beads/issues.jsonl (bd init --quiet)\n2. Agent setting up new project (bd init --quiet)\n3. Verify git hooks install automatically in quiet mode\n4. Test auto-import after git pull\n5. Verify no daemon conflicts across projects\n\n**Projects/agents to test:**\n- ~/src/vc (waiting on exclusive lock protocol)\n- Other agent projects with fresh clones\n- Multiple agents on same machine","design":"From RELEASING.md, for local testing:\n\n1. Build local binary: `go build -o bd ./cmd/bd`\n2. Use `./bd` directly (don't install via brew yet)\n3. Optional: `alias bd=\"$PWD/bd\"` to test across projects\n4. Kill all daemons first: `pkill -f \"bd.*daemon\"`\n\n**Testing workflow:**\n- Build latest bd binary in ~/src/beads\n- Create alias: `alias bd=\"$HOME/src/beads/bd\"`\n- Test with multiple agents in different repos\n- Verify version: `bd version` shows latest\n- Check daemon compatibility after restart","notes":"Found and fixed bug: bd init --quiet was returning before hooks installation.\n\nFixed by:\n1. Moving hooks check/install before quiet mode return\n2. Embedding hooks inline instead of using external install.sh\n3. 
This makes bd init --quiet fully self-contained\n\nTested successfully in /tmp/bd-test-quiet - hooks installed correctly.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T12:56:04.960569-07:00","updated_at":"2025-10-26T13:19:22.400117-07:00","closed_at":"2025-10-26T13:19:22.400117-07:00"} {"id":"bd-143","title":"Review and respond to new GitHub PRs","description":"Check for new pull requests on GitHub and review/respond to them.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T13:19:22.407693-07:00","updated_at":"2025-10-26T13:22:33.395599-07:00","closed_at":"2025-10-26T13:22:33.395599-07:00"} {"id":"bd-144","title":"Document bd edit command and verify MCP exclusion","description":"Follow-up from PR #152:\n1. Add \"bd edit\" to AGENTS.md with \"Humans only\" note\n2. Verify MCP server doesn't expose bd edit command\n3. Consider adding test for command registration","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-26T13:23:47.982295-07:00","updated_at":"2025-10-26T13:23:47.982295-07:00","dependencies":[{"issue_id":"bd-144","depends_on_id":"bd-143","type":"discovered-from","created_at":"2025-10-26T13:23:47.983557-07:00","created_by":"daemon"}]} -{"id":"bd-145","title":"Add \"bd daemons\" command for multi-daemon management","description":"Add a new \"bd daemons\" command with subcommands to manage daemon processes across all beads repositories/worktrees. 
Should show all running daemons with metadata (version, workspace, uptime, last sync), allow stopping/restarting individual daemons, auto-clean stale processes, view logs, and show exclusive lock status.","design":"Subcommands:\n- list: Show all running daemons with metadata (workspace, PID, version, socket path, uptime, last activity, exclusive lock status)\n- stop \u003cpath|pid\u003e: Gracefully stop a specific daemon\n- restart \u003cpath|pid\u003e: Stop and restart daemon\n- killall: Emergency stop all daemons\n- health: Verify each daemon responds to ping\n- logs \u003cpath\u003e: View daemon logs\n\nFeatures:\n- Auto-clean stale sockets/dead processes\n- Discovery: Scan for .beads/bd.sock files + running processes\n- Communication: Use existing socket protocol, add GET /status endpoint for metadata","status":"in_progress","priority":1,"issue_type":"epic","created_at":"2025-10-26T16:53:40.970042-07:00","updated_at":"2025-10-26T18:11:11.077613-07:00"} -{"id":"bd-146","title":"Implement daemon discovery mechanism","description":"Build the core discovery logic to find all running bd daemons. Scan filesystem for .beads/bd.sock files, check if processes are alive, and collect metadata.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.22163-07:00","updated_at":"2025-10-26T18:05:22.257361-07:00","closed_at":"2025-10-26T18:05:22.257361-07:00","dependencies":[{"issue_id":"bd-146","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.222398-07:00","created_by":"daemon"}]} -{"id":"bd-147","title":"Implement \"bd daemons list\" subcommand","description":"Create the \"bd daemons list\" command that displays all running daemons in a table with: workspace path, PID, version, socket path, uptime, last activity, exclusive lock status. 
Include --json flag.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.232956-07:00","updated_at":"2025-10-26T18:10:24.905516-07:00","closed_at":"2025-10-26T18:10:24.905516-07:00","dependencies":[{"issue_id":"bd-147","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:47:47.922208-07:00","created_by":"stevey"}]} -{"id":"bd-148","title":"Add GET /status endpoint to daemon HTTP server","description":"Add a new HTTP endpoint that returns daemon metadata: version, workspace path, PID, uptime, last activity timestamp, exclusive lock status.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.233084-07:00","updated_at":"2025-10-26T17:55:32.40399-07:00","closed_at":"2025-10-26T17:55:32.40399-07:00","dependencies":[{"issue_id":"bd-148","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.238293-07:00","created_by":"daemon"}]} -{"id":"bd-149","title":"Add auto-cleanup of stale sockets and dead processes","description":"When discovering daemons, automatically detect and clean up stale socket files (where process is dead) and orphaned PID files. Should be safe and only remove confirmed-dead processes.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.246629-07:00","updated_at":"2025-10-26T18:17:18.560526-07:00","closed_at":"2025-10-26T18:17:18.560526-07:00","dependencies":[{"issue_id":"bd-149","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.247788-07:00","created_by":"daemon"}]} +{"id":"bd-145","title":"Add \"bd daemons\" command for multi-daemon management","description":"Add a new \"bd daemons\" command with subcommands to manage daemon processes across all beads repositories/worktrees. 
Should show all running daemons with metadata (version, workspace, uptime, last sync), allow stopping/restarting individual daemons, auto-clean stale processes, view logs, and show exclusive lock status.","design":"Subcommands:\n- list: Show all running daemons with metadata (workspace, PID, version, socket path, uptime, last activity, exclusive lock status)\n- stop \u003cpath|pid\u003e: Gracefully stop a specific daemon\n- restart \u003cpath|pid\u003e: Stop and restart daemon\n- killall: Emergency stop all daemons\n- health: Verify each daemon responds to ping\n- logs \u003cpath\u003e: View daemon logs\n\nFeatures:\n- Auto-clean stale sockets/dead processes\n- Discovery: Scan for .beads/bd.sock files + running processes\n- Communication: Use existing socket protocol, add GET /status endpoint for metadata","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-26T16:53:40.970042-07:00","updated_at":"2025-10-26T19:26:29.045738-07:00","closed_at":"2025-10-26T19:26:29.045738-07:00"} +{"id":"bd-146","title":"Implement daemon discovery mechanism","description":"Build the core discovery logic to find all running bd daemons. Scan filesystem for .beads/bd.sock files, check if processes are alive, and collect metadata.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.22163-07:00","updated_at":"2025-10-26T19:26:29.048756-07:00","closed_at":"2025-10-26T19:26:29.048756-07:00","dependencies":[{"issue_id":"bd-146","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.222398-07:00","created_by":"daemon"}]} +{"id":"bd-147","title":"Implement \"bd daemons list\" subcommand","description":"Create the \"bd daemons list\" command that displays all running daemons in a table with: workspace path, PID, version, socket path, uptime, last activity, exclusive lock status. 
Include --json flag.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.232956-07:00","updated_at":"2025-10-26T19:26:29.061768-07:00","closed_at":"2025-10-26T19:26:29.061768-07:00"} +{"id":"bd-148","title":"Add GET /status endpoint to daemon HTTP server","description":"Add a new HTTP endpoint that returns daemon metadata: version, workspace path, PID, uptime, last activity timestamp, exclusive lock status.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.233084-07:00","updated_at":"2025-10-26T19:26:29.070048-07:00","closed_at":"2025-10-26T19:26:29.070048-07:00","dependencies":[{"issue_id":"bd-148","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.238293-07:00","created_by":"daemon"}]} +{"id":"bd-149","title":"Add auto-cleanup of stale sockets and dead processes","description":"When discovering daemons, automatically detect and clean up stale socket files (where process is dead) and orphaned PID files. Should be safe and only remove confirmed-dead processes.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.246629-07:00","updated_at":"2025-10-26T19:26:29.077686-07:00","closed_at":"2025-10-26T19:26:29.077686-07:00","dependencies":[{"issue_id":"bd-149","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.247788-07:00","created_by":"daemon"}]} {"id":"bd-15","title":"Phase 4: Gradual Cutover \u0026 Production Rollout","description":"Replace SQLite implementation with Beads library in production and remove legacy code.\n\n**Goal:** Complete transition to Beads library, deprecate and remove custom SQLite implementation.\n\n**Key Tasks:**\n1. Run VC executor with Beads library in CI\n2. Dogfood: Use Beads library for VC's own development\n3. Monitor for regressions and performance issues\n4. Flip feature flag: VC_USE_BEADS_LIBRARY=true by default\n5. Monitor production logs for errors\n6. 
Collect user feedback\n7. Add deprecation notice to CLAUDE.md\n8. Provide migration guide for users\n9. Remove legacy code: internal/storage/sqlite/sqlite.go (~1500 lines)\n10. Remove migration framework: internal/storage/migrations/\n11. Remove manual transaction management code\n12. Update all documentation\n\n**Acceptance Criteria:**\n- Beads library enabled by default in production\n- Zero production incidents related to migration\n- Performance meets or exceeds SQLite implementation\n- All tests passing with Beads library\n- Legacy SQLite code removed\n- Documentation updated\n- Celebration documented šŸŽ‰\n\n**Rollout Strategy:**\n1. Week 1: Enable for CI/testing environments\n2. Week 2: Dogfood on VC development\n3. Week 3: Enable for 50% of production (canary)\n4. Week 4: Enable for 100% of production\n5. Week 5: Remove legacy code\n\n**Monitoring:**\n- Track error rates before/after cutover\n- Monitor database query performance\n- Track issue creation/update latency\n- Monitor executor claim performance\n\n**Rollback Plan:**\n- Keep VC_FORCE_SQLITE=true escape hatch for 2 weeks post-cutover\n- Keep legacy code for 1 sprint after cutover\n- Document rollback procedure\n\n**Success Metrics:**\n- Zero data loss\n- No performance regression (\u003c 5% latency increase acceptable)\n- Reduced maintenance burden (code LOC reduction)\n- Positive developer feedback\n\n**Dependencies:**\n- Blocked by Phase 3 (need migration tooling)\n\n**Estimated Effort:** 1 sprint","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-22T14:05:07.755107-07:00","updated_at":"2025-10-25T23:15:33.474948-07:00","closed_at":"2025-10-22T21:37:48.748919-07:00","dependencies":[{"issue_id":"bd-15","depends_on_id":"bd-11","type":"parent-child","created_at":"2025-10-24T13:17:40.324637-07:00","created_by":"renumber"},{"issue_id":"bd-15","depends_on_id":"bd-14","type":"blocks","created_at":"2025-10-24T13:17:40.324851-07:00","created_by":"renumber"}]} 
-{"id":"bd-150","title":"Update AGENTS.md and README.md with \"bd daemons\" documentation","description":"Document the new \"bd daemons\" command and all subcommands in AGENTS.md and README.md. Include examples and troubleshooting guidance.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-26T16:54:00.254006-07:00","updated_at":"2025-10-26T19:03:21.93015-07:00","closed_at":"2025-10-26T19:03:21.93015-07:00","dependencies":[{"issue_id":"bd-150","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.254862-07:00","created_by":"daemon"}]} -{"id":"bd-151","title":"Implement \"bd daemons health\" subcommand","description":"Add health check command that pings each daemon and reports responsiveness. Should detect and report stale sockets, version mismatches, unresponsive daemons.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.255444-07:00","updated_at":"2025-10-26T18:21:59.75853-07:00","closed_at":"2025-10-26T18:21:59.75853-07:00","dependencies":[{"issue_id":"bd-151","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:47:47.949848-07:00","created_by":"stevey"}]} -{"id":"bd-152","title":"Implement \"bd daemons logs\" subcommand","description":"Add command to view daemon logs for a specific workspace. Requires daemon logging to file (may need separate issue for log infrastructure).","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-26T16:54:00.256037-07:00","updated_at":"2025-10-26T19:03:21.929414-07:00","closed_at":"2025-10-26T19:03:21.929414-07:00","dependencies":[{"issue_id":"bd-152","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.256797-07:00","created_by":"daemon"}]} -{"id":"bd-153","title":"Implement \"bd daemons killall\" subcommand","description":"Add emergency command to stop all running bd daemons. 
Should discover all daemons and stop them gracefully (with timeout fallback to SIGKILL).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.258822-07:00","updated_at":"2025-10-26T19:03:21.928597-07:00","closed_at":"2025-10-26T19:03:21.928597-07:00","dependencies":[{"issue_id":"bd-153","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.259421-07:00","created_by":"daemon"}]} -{"id":"bd-154","title":"Implement \"bd daemons stop\" and \"bd daemons restart\" subcommands","description":"Add commands to stop and restart individual daemons by path or PID. Should send graceful shutdown signal via socket, with fallback to SIGTERM.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.259875-07:00","updated_at":"2025-10-26T18:35:28.407904-07:00","closed_at":"2025-10-26T18:35:28.407904-07:00","dependencies":[{"issue_id":"bd-154","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.260433-07:00","created_by":"daemon"}]} +{"id":"bd-150","title":"Update AGENTS.md and README.md with \"bd daemons\" documentation","description":"Document the new \"bd daemons\" command and all subcommands in AGENTS.md and README.md. Include examples and troubleshooting guidance.","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-26T16:54:00.254006-07:00","updated_at":"2025-10-26T16:54:00.254006-07:00","dependencies":[{"issue_id":"bd-150","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.254862-07:00","created_by":"daemon"}]} +{"id":"bd-151","title":"Implement \"bd daemons health\" subcommand","description":"Add health check command that pings each daemon and reports responsiveness. 
Should detect and report stale sockets, version mismatches, unresponsive daemons.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.255444-07:00","updated_at":"2025-10-26T19:26:29.085111-07:00","closed_at":"2025-10-26T19:26:29.085111-07:00"} +{"id":"bd-152","title":"Implement \"bd daemons logs\" subcommand","description":"Add command to view daemon logs for a specific workspace. Requires daemon logging to file (may need separate issue for log infrastructure).","status":"open","priority":2,"issue_type":"task","created_at":"2025-10-26T16:54:00.256037-07:00","updated_at":"2025-10-26T16:54:00.256037-07:00","dependencies":[{"issue_id":"bd-152","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.256797-07:00","created_by":"daemon"}]} +{"id":"bd-153","title":"Implement \"bd daemons killall\" subcommand","description":"Add emergency command to stop all running bd daemons. Should discover all daemons and stop them gracefully (with timeout fallback to SIGKILL).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.258822-07:00","updated_at":"2025-10-26T19:26:29.086026-07:00","closed_at":"2025-10-26T19:26:29.086026-07:00","dependencies":[{"issue_id":"bd-153","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.259421-07:00","created_by":"daemon"}]} +{"id":"bd-154","title":"Implement \"bd daemons stop\" and \"bd daemons restart\" subcommands","description":"Add commands to stop and restart individual daemons by path or PID. 
Should send graceful shutdown signal via socket, with fallback to SIGTERM.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T16:54:00.259875-07:00","updated_at":"2025-10-26T19:26:29.091742-07:00","closed_at":"2025-10-26T19:26:29.091742-07:00","dependencies":[{"issue_id":"bd-154","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T16:54:00.260433-07:00","created_by":"daemon"}]} {"id":"bd-155","title":"Daemon auto-import creates race condition with deletions","description":"When a user deletes an issue while the daemon is running, the daemon's periodic import from JSONL can immediately re-add the deleted issue before the deletion is flushed to the JSONL file.\n\n## Steps to reproduce:\n1. Start daemon: bd daemon --interval 30s --auto-commit --auto-push\n2. Delete an issue: bd delete wy-XX --force\n3. Wait for daemon sync cycle (observe daemon log showing \"Imported from JSONL\")\n4. Run bd show wy-XX - issue still exists despite successful deletion\n\n## Expected behavior:\nThe deletion should be immediately flushed to JSONL before the next import cycle, or imports should respect deletions in the database.\n\n## Actual behavior:\nThe daemon imports from JSONL and re-adds the deleted issue, overwriting the deletion. The user sees \"āœ“ Deleted wy-XX\" but the issue persists.\n\n## Workaround:\n1. Stop daemon: bd daemon --stop\n2. Delete issue: bd delete wy-XX --force\n3. Export to JSONL: bd export -o .beads/issues.jsonl\n4. Commit and push manually\n5. Restart daemon\n\n## Suggested fixes:\n1. Flush pending changes to JSONL before each import cycle\n2. Track deletions separately and don't re-import deleted issues\n3. Make delete operation immediately flush to JSONL when daemon is running\n4. 
Add a \"dirty\" flag that prevents import if there are pending exports","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-26T17:02:30.576489-07:00","updated_at":"2025-10-26T17:07:10.34777-07:00","closed_at":"2025-10-26T17:07:10.34777-07:00","labels":["daemon","data-integrity","race-condition"]} {"id":"bd-156","title":"bd create files issues in wrong project when multiple beads databases exist","description":"When working in a directory with a beads database (e.g., /Users/stevey/src/wyvern/.beads/wy.db), bd create can file issues in a different project's database instead of the current directory's database.\n\n## Steps to reproduce:\n1. Have multiple beads projects (e.g., ~/src/wyvern with wy.db, ~/vibecoder with vc.db)\n2. cd ~/src/wyvern\n3. Run bd create --title \"Test\" --type bug\n4. Observe issue created with wrong prefix (e.g., vc-1 instead of wy-1)\n\n## Expected behavior:\nbd create should respect the current working directory and use the beads database in that directory (.beads/ folder).\n\n## Actual behavior:\nbd create appears to use a different project's database, possibly the last accessed or a global default.\n\n## Impact:\nThis can cause issues to be filed in completely wrong projects, polluting unrelated issue trackers.\n\n## Suggested fix:\n- Always check for .beads/ directory in current working directory first\n- Add --project flag to explicitly specify which database to use\n- Show which project/database is being used in command output\n- Add validation/confirmation when creating issues if current directory doesn't match database project","status":"closed","priority":2,"issue_type":"bug","created_at":"2025-10-26T17:02:30.578817-07:00","updated_at":"2025-10-26T17:08:43.009159-07:00","closed_at":"2025-10-26T17:08:43.009159-07:00","labels":["cli","project-context"]} -{"id":"bd-157","title":"Implement \"bd daemons health\" subcommand","description":"Add health check command that pings each daemon and reports responsiveness. 
Should detect and report stale sockets, version mismatches, unresponsive daemons.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T17:09:51.138682-07:00","updated_at":"2025-10-26T17:47:47.958834-07:00","closed_at":"2025-10-26T17:47:47.958834-07:00","dependencies":[{"issue_id":"bd-157","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:09:51.140111-07:00","created_by":"daemon"}]} -{"id":"bd-158","title":"Implement \"bd daemons list\" subcommand","description":"Create the \"bd daemons list\" command that displays all running daemons in a table with: workspace path, PID, version, socket path, uptime, last activity, exclusive lock status. Include --json flag.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T17:09:51.140442-07:00","updated_at":"2025-10-26T17:47:47.929666-07:00","closed_at":"2025-10-26T17:47:47.929666-07:00","dependencies":[{"issue_id":"bd-158","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:09:51.150077-07:00","created_by":"daemon"}]} -{"id":"bd-159","title":"Timestamp-only changes still being exported despite dedup logic","description":"User observed timestamp-only changes in .beads/beads.jsonl causing dirty working tree. Example: bd-128's updated_at changed from 2025-10-25T23:51:09.811006-07:00 to 2025-10-26T14:12:45.207573-07:00 with no other field changes.\n\nThis should have been prevented by the export deduplication logic that's supposed to skip timestamp-only updates.\n\nNeed to investigate why timestamp-only changes are still being exported and fix the dedup logic.","status":"open","priority":1,"issue_type":"bug","created_at":"2025-10-26T17:58:15.41007-07:00","updated_at":"2025-10-26T17:58:15.41007-07:00"} +{"id":"bd-157","title":"Implement \"bd daemons health\" subcommand","description":"Add health check command that pings each daemon and reports responsiveness. 
Should detect and report stale sockets, version mismatches, unresponsive daemons.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T17:09:51.138682-07:00","updated_at":"2025-10-26T19:26:29.101216-07:00","closed_at":"2025-10-26T19:26:29.101216-07:00","dependencies":[{"issue_id":"bd-157","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:09:51.140111-07:00","created_by":"daemon"}]} +{"id":"bd-158","title":"Implement \"bd daemons list\" subcommand","description":"Create the \"bd daemons list\" command that displays all running daemons in a table with: workspace path, PID, version, socket path, uptime, last activity, exclusive lock status. Include --json flag.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T17:09:51.140442-07:00","updated_at":"2025-10-26T19:26:29.10179-07:00","closed_at":"2025-10-26T19:26:29.10179-07:00","dependencies":[{"issue_id":"bd-158","depends_on_id":"bd-145","type":"parent-child","created_at":"2025-10-26T17:09:51.150077-07:00","created_by":"daemon"}]} +{"id":"bd-159","title":"Improve database naming and version management robustness","description":"Make beads architecture more robust to prevent issues like accidental vc.db usage after version upgrades.\n\nKey improvements:\n\n1. **Canonical database name enforcement**\n - Always use beads.db, never auto-detect from multiple .db files\n - bd init migrates/renames any old databases (vc.db → beads.db, bd.db → beads.db)\n - Daemon refuses to start if multiple .db files exist (ambiguity error)\n\n2. **Database schema versioning**\n - Store beads version in SQLite (PRAGMA user_version or metadata table)\n - Daemon checks on startup: validate schema version matches\n - Auto-migrate or fail with clear instructions on version mismatch\n\n3. 
**Config file with database path**\n - .beads/config.json specifies {\"database\": \"beads.db\", \"version\": \"0.17.5\"}\n - Daemon and clients read config first (single source of truth)\n - No ambiguity about which file is active\n\n4. **Stricter daemon lock validation**\n - daemon.lock includes database path and beads version (JSON)\n - Client validates: lock says beads.db but I expect bd.db → hard error\n - Already partially implemented, make it stricter\n\n5. **Migration tooling**\n - bd init --migrate or auto-run on first command after upgrade\n - Detects old databases, prompts to migrate/clean up\n - Could be part of daemon auto-start logic\n\n**IMPORTANT**: Allow issues.jsonl to be renamed (users cycle through new names to avoid polluted git history). Only enforce database naming, not JSONL naming.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-26T18:01:42.356175-07:00","updated_at":"2025-10-26T19:04:07.843634-07:00","closed_at":"2025-10-26T19:04:07.843634-07:00"} {"id":"bd-16","title":"Add lifecycle safety docs and tests for UnderlyingDB() method","description":"The new UnderlyingDB() method exposes the raw *sql.DB connection for extensions like VC to create their own tables. While database/sql is concurrency-safe, there are lifecycle and misuse risks that need documentation and testing.\n\n**What needs to be done:**\n\n1. **Enhanced documentation** - Expand UnderlyingDB() comments to warn:\n - Callers MUST NOT call Close() on returned DB\n - Do NOT change pool/driver settings (SetMaxOpenConns, SetConnMaxIdleTime)\n - Do NOT modify SQLite PRAGMAs (WAL mode, journal, etc.)\n - Expect errors after Storage.Close() - use contexts\n - Keep write transactions short to avoid blocking core storage\n\n2. **Add lifecycle tracking** - Implement closed flag:\n - Add atomic.Bool closed field to SQLiteStorage\n - Set flag in Close(), clear in New()\n - Optional: Add IsClosed() bool method\n\n3. 
**Add safety tests** (run with -race):\n - TestUnderlyingDB_ConcurrentAccess - N goroutines using UnderlyingDB() during normal storage ops\n - TestUnderlyingDB_AfterClose - Verify operations fail cleanly after storage closed\n - TestUnderlyingDB_CreateExtensionTables - Create VC table with FK to issues, verify FK enforcement\n - TestUnderlyingDB_LongTxDoesNotCorrupt - Ensure long read tx doesn't block writes indefinitely\n\n**Why this matters:**\nVC will use this to create tables in the same database. Need to ensure production-ready safety without over-engineering.\n\n**Estimated effort:** S+S+S = M total (1-3h)","design":"Oracle recommends \"simple path\": enhanced docs + minimal guardrails + focused tests. See oracle output for detailed rationale on concurrency safety, lifecycle risks, and when to consider advanced path (wrapping interface).","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-22T17:07:56.812983-07:00","updated_at":"2025-10-25T23:15:33.476053-07:00","closed_at":"2025-10-22T20:10:52.636372-07:00"} -{"id":"bd-160","title":"Critical: Multi-clone sync is fundamentally broken","description":"## Problem\n\nTwo clones of the same beads repo working on non-overlapping issues cannot stay in sync. 
The JSONL export/import mechanism creates catastrophic divergence instead of keeping databases synchronized.\n\n## What Happened (2025-10-26)\n\nTwo repos working simultaneously:\n- ~/src/beads (bd.db, 159 issues) - worked on bd-153, bd-152, bd-150, closed them\n- ~/src/fred/beads (beads.db, 165 issues) - worked on bd-159, bd-160-164\n\nResult after attempting sync:\n- Databases completely diverged (159 vs 165 issues)\n- JSONL files contain conflicting state\n- Database corruption in fred/beads\n- bd-150/152/153 show as closed in one repo, open in the other\n- No clear recovery path without manual database copying\n- git pull + bd sync does NOT synchronize state\n\n## Root Cause Analysis\n\n### SMOKING GUN: Daemon Import is a NO-OP\n**Location**: cmd/bd/daemon.go:791-797\n\nThe daemon's importToJSONLWithStore() function returns nil without actually importing.\nThis means the daemon exports DB to JSONL, commits, pulls from remote, but NEVER imports remote changes back into the database.\n\nResult: Remote changes are pulled but never imported, daemon keeps exporting stale state.\n\n### Other Root Causes\n\n1. **Database naming inconsistency**: One repo uses bd.db, other uses beads.db - no enforcement\n2. **Daemon state divergence**: Each repo's daemon maintains separate state, never converges\n3. **JSONL import/export race conditions**: Auto-import can overwrite local changes before export\n4. **No conflict resolution**: When databases diverge, there's no merge strategy\n5. **Timestamp-only changes**: bd-159 - exports trigger even with no real changes\n6. **Multiple daemons**: No coordination between daemon instances\n\n## Impact\n\n**Beads is unusable for multi-developer or multi-agent workflows**. 
The core promise - git-based sync via JSONL - is broken.\n\n## Fix Strategy (Epic)\n\nThis issue is tracked as an EPIC with child issues:\n\n### Phase 1: Stop the Bleeding (P0)\n- Implement daemon JSONL import (fixes the NO-OP)\n- Add database integrity checks\n- Fix timestamp-only exports (bd-159)\n\n### Phase 2: Database Consistency (P0)\n- Enforce canonical database naming\n- Add database fingerprinting\n- Migration tooling\n\n### Phase 3: Conflict Resolution (P1)\n- Implement version tracking\n- Three-way merge detection\n- Interactive conflict resolution\n\n### Phase 4: Testing \u0026 Validation (P1)\n- Multi-clone integration tests\n- Stress tests\n- Documentation\n\n## Severity\n\nP0 - This breaks the fundamental use case of beads. Without reliable sync, the tool is unusable for any multi-agent or team scenario.","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-26T19:42:43.355244-07:00","updated_at":"2025-10-26T19:53:08.681645-07:00"} -{"id":"bd-161","title":"Implement daemon JSONL import (fix NO-OP stub)","description":"## Critical Bug\n\nThe daemon's sync loop calls importToJSONLWithStore() but this function is a NO-OP stub that returns nil without importing any changes.\n\n## Location\n\n**File**: cmd/bd/daemon.go:791-797\n\nCurrent implementation:\n```go\nfunc importToJSONLWithStore(ctx context.Context, store storage.Storage, jsonlPath string) error {\n // TODO Phase 4: Implement direct import for daemon\n // Currently a no-op - daemon doesn't import git changes into DB\n return nil\n}\n```\n\n## Impact\n\nThis is the PRIMARY cause of bd-160. When the daemon:\n1. Exports DB → JSONL\n2. Commits changes\n3. Pulls from remote (gets other clone's changes)\n4. Calls importToJSONLWithStore() ← **Does nothing!**\n5. 
Pushes commits (overwrites remote with stale state)\n\nResult: Perpetual divergence between clones.\n\n## Implementation Approach\n\nReplace the NO-OP with actual import logic:\n\n```go\nfunc importToJSONLWithStore(ctx context.Context, store storage.Storage, jsonlPath string) error {\n    // Read JSONL file\n    file, err := os.Open(jsonlPath)\n    if err != nil {\n        return fmt.Errorf(\"failed to open JSONL: %w\", err)\n    }\n    defer file.Close()\n    \n    // Parse all issues\n    var issues []*types.Issue\n    scanner := bufio.NewScanner(file)\n    // Issue records can exceed bufio.Scanner's default 64KB token limit;\n    // grow the buffer so long JSONL lines don't abort the import\n    scanner.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)\n    for scanner.Scan() {\n        var issue types.Issue\n        if err := json.Unmarshal(scanner.Bytes(), \u0026issue); err != nil {\n            return fmt.Errorf(\"failed to parse issue: %w\", err)\n        }\n        issues = append(issues, \u0026issue)\n    }\n    \n    if err := scanner.Err(); err != nil {\n        return fmt.Errorf(\"failed to read JSONL: %w\", err)\n    }\n    \n    // Use existing import logic with auto-conflict resolution\n    opts := ImportOptions{\n        ResolveCollisions: true, // Auto-resolve ID conflicts\n        DryRun: false,\n        SkipUpdate: false,\n        Strict: false,\n    }\n    \n    _, err = importIssuesCore(ctx, \"\", store, issues, opts)\n    return err\n}\n```\n\n## Testing\n\nAfter implementation, test with:\n```bash\n# Create two clones\ngit init repo1 \u0026\u0026 cd repo1 \u0026\u0026 bd init \u0026\u0026 bd daemon\nbd new \"Issue A\"\ngit add . \u0026\u0026 git commit -m \"init\"\n\ncd .. 
\u0026\u0026 git clone repo1 repo2 \u0026\u0026 cd repo2 \u0026\u0026 bd init \u0026\u0026 bd daemon\n\n# Make changes in repo1\ncd ../repo1 \u0026\u0026 bd new \"Issue B\"\n\n# Wait for daemon sync, then check repo2\nsleep 10\ncd ../repo2 \u0026\u0026 bd list # Should show both Issue A and B\n```\n\n## Success Criteria\n\n- Daemon imports remote changes after git pull\n- Issue count converges across clones within one sync cycle\n- No manual intervention needed\n- Existing collision resolution logic handles conflicts\n\n## Estimated Effort\n\n30-60 minutes\n\n## Priority\n\nP0 - This is the critical path fix for bd-160","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-26T19:53:55.313039-07:00","updated_at":"2025-10-26T20:04:41.902916-07:00","closed_at":"2025-10-26T20:04:41.902916-07:00","dependencies":[{"issue_id":"bd-161","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:53:55.3136-07:00","created_by":"daemon"}]} -{"id":"bd-162","title":"Add database integrity checks to sync operations","description":"## Problem\n\nWhen databases diverge (due to the import NO-OP bug or race conditions), there are no safety checks to detect or prevent catastrophic data loss.\n\nNeed integrity checks before/after sync operations to catch divergence early.\n\n## Implementation Locations\n\n**Pre-export checks** (cmd/bd/daemon.go:948, sync.go:108):\n- Before exportToJSONLWithStore()\n- Before exportToJSONL()\n\n**Post-import checks** (cmd/bd/daemon.go:985):\n- After importToJSONLWithStore()\n\n## Checks to Implement\n\n### 1. 
Database vs JSONL Count Divergence\n\nBefore export:\n```go\nfunc validatePreExport(store storage.Storage, jsonlPath string) error {\n    dbIssues, _ := store.SearchIssues(context.Background(), \"\", types.IssueFilter{})\n    dbCount := len(dbIssues)\n    \n    jsonlCount, _ := countIssuesInJSONL(jsonlPath)\n    \n    if dbCount == 0 \u0026\u0026 jsonlCount \u003e 0 {\n        return fmt.Errorf(\"refusing to export empty DB over %d issues in JSONL\", jsonlCount)\n    }\n    \n    // Guard against division by zero when the JSONL file is empty\n    if jsonlCount \u003e 0 {\n        divergencePercent := math.Abs(float64(dbCount-jsonlCount)) / float64(jsonlCount) * 100\n        if divergencePercent \u003e 50 {\n            log.Printf(\"WARNING: DB has %d issues, JSONL has %d (%.1f%% divergence)\",\n                dbCount, jsonlCount, divergencePercent)\n            log.Printf(\"This suggests sync failure - investigate before proceeding\")\n        }\n    }\n    \n    return nil\n}\n```\n\n### 2. Duplicate ID Detection\n\n```go\nfunc checkDuplicateIDs(store storage.Storage) error {\n    // Query for duplicate IDs\n    rows, err := db.Query(`\n        SELECT id, COUNT(*) as cnt\n        FROM issues\n        GROUP BY id\n        HAVING cnt \u003e 1\n    `)\n    if err != nil {\n        return err\n    }\n    defer rows.Close()\n    \n    var duplicates []string\n    for rows.Next() {\n        var id string\n        var count int\n        rows.Scan(\u0026id, \u0026count)\n        duplicates = append(duplicates, fmt.Sprintf(\"%s (x%d)\", id, count))\n    }\n    \n    if len(duplicates) \u003e 0 {\n        return fmt.Errorf(\"database corruption: duplicate IDs: %v\", duplicates)\n    }\n    return nil\n}\n```\n\n### 3. Orphaned Dependencies\n\n```go\nfunc checkOrphanedDeps(store storage.Storage) ([]string, error) {\n    // Find dependencies pointing to non-existent issues\n    rows, err := db.Query(`\n        SELECT DISTINCT d.depends_on_id\n        FROM dependencies d\n        LEFT JOIN issues i ON d.depends_on_id = i.id\n        WHERE i.id IS NULL\n    `)\n    if err != nil {\n        return nil, err\n    }\n    defer rows.Close()\n    \n    var orphaned []string\n    for rows.Next() {\n        var id string\n        rows.Scan(\u0026id)\n        orphaned = append(orphaned, id)\n    }\n    \n    if len(orphaned) \u003e 0 {\n        log.Printf(\"WARNING: Found %d orphaned dependencies: %v\", len(orphaned), orphaned)\n    }\n    \n    return orphaned, nil\n}\n```\n\n### 4. 
Post-Import Validation\n\nAfter import, verify:\n```go\nfunc validatePostImport(before, after int) error {\n if after \u003c before {\n return fmt.Errorf(\"import reduced issue count: %d → %d (data loss!)\", before, after)\n }\n if after == before {\n log.Printf(\"Import complete: no changes\")\n } else {\n log.Printf(\"Import complete: %d → %d issues (+%d)\", before, after, after-before)\n }\n return nil\n}\n```\n\n## Integration Points\n\nAdd to daemon sync loop (daemon.go:920-999):\n```go\n// Before export\nif err := validatePreExport(store, jsonlPath); err != nil {\n log.log(\"Pre-export validation failed: %v\", err)\n return\n}\n\n// Export...\n\n// Before import\nbeforeCount := countDBIssues(store)\n\n// Import...\n\n// After import\nafterCount := countDBIssues(store)\nif err := validatePostImport(beforeCount, afterCount); err != nil {\n log.log(\"Post-import validation failed: %v\", err)\n}\n```\n\n## Testing\n\nCreate test scenarios:\n1. Empty DB, non-empty JSONL → should error\n2. Duplicate IDs in DB → should error\n3. Orphaned dependencies → should warn\n4. Import reduces count → should error\n\n## Success Criteria\n\n- Catches divergence \u003e50% before export\n- Detects duplicate IDs\n- Reports orphaned dependencies\n- Validates import doesn't lose data\n- All checks logged clearly\n\n## Estimated Effort\n\n2-3 hours\n\n## Priority\n\nP0 - Safety checks prevent data loss during sync","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-26T19:54:22.558861-07:00","updated_at":"2025-10-26T20:17:37.981054-07:00","closed_at":"2025-10-26T20:17:37.981054-07:00","dependencies":[{"issue_id":"bd-162","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:54:22.55941-07:00","created_by":"daemon"}]} -{"id":"bd-164","title":"Fix timestamp-only export deduplication (bd-159)","description":"## Problem\n\nExport deduplication logic is supposed to skip timestamp-only changes, but it's not working. 
This causes:\n- Spurious git commits every sync cycle\n- Increased race condition window\n- Harder to detect real changes\n- Amplifies bd-160 sync issues\n\nRelated to bd-159.\n\n## Location\n\n**File**: cmd/bd/export.go:236-246\n\nCurrent code clears dirty flags for all exported issues:\n```go\nif output == \"\" || output == findJSONLPath() {\n if err := store.ClearDirtyIssuesByID(ctx, exportedIDs); err != nil {\n fmt.Fprintf(os.Stderr, \"Warning: failed to clear dirty issues: %v\\n\", err)\n }\n clearAutoFlushState()\n}\n```\n\nProblem: No check whether issue actually changed (beyond timestamps).\n\n## Root Cause\n\nIssues are marked dirty on ANY update, including:\n- Timestamp updates (UpdatedAt field)\n- No-op updates (same values written)\n- Database reopens (sqlite WAL journal replays)\n\n## Implementation Approach\n\n### 1. Add Content Hash to dirty_issues Table\n\n```sql\nALTER TABLE dirty_issues ADD COLUMN content_hash TEXT;\n```\n\nThe hash should exclude timestamp fields:\n```go\nfunc computeIssueContentHash(issue *types.Issue) string {\n // Clone issue and zero out timestamps\n normalized := *issue\n normalized.CreatedAt = time.Time{}\n normalized.UpdatedAt = time.Time{}\n \n // Serialize to JSON\n data, _ := json.Marshal(normalized)\n \n // SHA256 hash\n hash := sha256.Sum256(data)\n return hex.EncodeToString(hash[:])\n}\n```\n\n### 2. Track Previous Export State\n\nStore issue snapshots in issue_snapshots table (already exists):\n```go\nfunc saveExportSnapshot(ctx context.Context, store storage.Storage, issue *types.Issue) error {\n snapshot := \u0026types.IssueSnapshot{\n IssueID: issue.ID,\n SnapshotAt: time.Now(),\n Title: issue.Title,\n Description: issue.Description,\n Status: issue.Status,\n // ... all fields except timestamps\n }\n return store.SaveSnapshot(ctx, snapshot)\n}\n```\n\n### 3. 
Deduplicate During Export\n\nIn export.go:\n```go\n// Before encoding each issue\nif shouldSkipExport(ctx, store, issue) {\n skippedCount++\n continue\n}\n\nfunc shouldSkipExport(ctx context.Context, store storage.Storage, issue *types.Issue) bool {\n // Get last exported snapshot\n snapshot, err := store.GetLatestSnapshot(ctx, issue.ID)\n if err != nil || snapshot == nil {\n return false // No snapshot, must export\n }\n \n // Compare content hash\n currentHash := computeIssueContentHash(issue)\n snapshotHash := computeSnapshotHash(snapshot)\n \n if currentHash == snapshotHash {\n // Timestamp-only change, skip\n log.Printf(\"Skipping %s (timestamp-only change)\", issue.ID)\n return true\n }\n \n return false\n}\n```\n\n### 4. Update on Real Export\n\nOnly save snapshot when actually exporting:\n```go\nfor _, issue := range issues {\n if shouldSkipExport(ctx, store, issue) {\n continue\n }\n \n if err := encoder.Encode(issue); err != nil {\n return err\n }\n \n // Save snapshot of exported state\n saveExportSnapshot(ctx, store, issue)\n exportedIDs = append(exportedIDs, issue.ID)\n}\n```\n\n## Alternative: Simpler Approach\n\nIf snapshot complexity is too much, use a simpler hash:\n\n```go\n// In dirty_issues table, store hash when marking dirty\nfunc markIssueDirty(ctx context.Context, issueID string, issue *types.Issue) error {\n hash := computeIssueContentHash(issue)\n \n _, err := db.Exec(`\n INSERT INTO dirty_issues (issue_id, content_hash) \n VALUES (?, ?)\n ON CONFLICT(issue_id) DO UPDATE SET content_hash = ?\n `, issueID, hash, hash)\n \n return err\n}\n\n// During export, check if hash changed\nfunc hasRealChanges(ctx context.Context, store storage.Storage, issue *types.Issue) bool {\n var storedHash string\n err := db.QueryRow(\"SELECT content_hash FROM dirty_issues WHERE issue_id = ?\", issue.ID).Scan(\u0026storedHash)\n if err != nil {\n return true // No stored hash, export it\n }\n \n currentHash := computeIssueContentHash(issue)\n return currentHash 
!= storedHash\n}\n```\n\n## Testing\n\nTest cases:\n1. Update issue timestamp only → no export\n2. Update issue title → export\n3. Multiple timestamp updates → single export\n4. Database reopen → no spurious exports\n\nValidation:\n```bash\n# Start daemon, wait 1 hour\nbd daemon --interval 5s\nsleep 3600\n\n# Check git log - should be 0 commits\ngit log --since=\"1 hour ago\" --oneline | wc -l # expect: 0\n```\n\n## Success Criteria\n\n- Zero spurious exports for timestamp-only changes\n- Real changes still exported immediately\n- No performance regression\n- bd-159 resolved\n\n## Estimated Effort\n\n2-3 hours\n\n## Priority\n\nP0 - Prevents noise that amplifies bd-160 sync issues","notes":"## Implementation Complete (2025-10-26)\n\nSuccessfully implemented timestamp-only export deduplication using export_hashes table approach.\n\n### āœ… All Changes Completed:\n\n1. **Export hash tracking (bd-164):**\n - Created `export_hashes` table in schema (schema.go:123-129)\n - Added migration `migrateExportHashesTable()` in sqlite.go\n - Implemented `GetExportHash()` and `SetExportHash()` in hash.go\n - Added interface methods to storage.go\n\n2. **Export deduplication logic:**\n - Updated `shouldSkipExport()` to query export_hashes table\n - Modified export loop to call `SetExportHash()` after successful export\n - Export reports \"Skipped N issue(s) with timestamp-only changes\"\n\n3. **Reverted incorrect hash tracking in dirty_issues:**\n - Removed content_hash column from dirty_issues table (schema + migration)\n - Simplified `MarkIssueDirty()` - no longer fetches issues or computes hashes\n - Simplified `MarkIssuesDirty()` - no longer needs issue fetch\n - Simplified `markIssuesDirtyTx()` - removed store parameter\n - Updated all callers in dependencies.go to remove store parameter\n\n4. 
**Testing:**\n - Timestamp-only updates (updated_at change) → skipped āœ“\n - Real content changes (priority change) → exported āœ“\n - export_hashes table populated correctly āœ“\n - dirty_issues cleared after successful export āœ“\n\n### Design:\n- **dirty_issues table**: Tracks \"needs export\" flag (cleared after export)\n- **export_hashes table**: Tracks \"last exported state\" (persists for comparison)\n\nThis clean separation avoids lifecycle complexity and prevents spurious exports.\n\n### Files Modified:\n- cmd/bd/export.go (hash computation, skip logic, SetExportHash calls)\n- internal/storage/storage.go (GetExportHash/SetExportHash interface)\n- internal/storage/sqlite/schema.go (export_hashes table)\n- internal/storage/sqlite/hash.go (GetExportHash/SetExportHash implementation)\n- internal/storage/sqlite/dirty.go (simplified, hash tracking removed)\n- internal/storage/sqlite/sqlite.go (migration)\n- internal/storage/sqlite/dependencies.go (markIssuesDirtyTx calls updated)\n\nBuild successful, all tests passed.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T19:54:58.248715-07:00","updated_at":"2025-10-26T20:34:43.345317-07:00","closed_at":"2025-10-26T20:34:43.345317-07:00","dependencies":[{"issue_id":"bd-164","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:54:58.24935-07:00","created_by":"daemon"},{"issue_id":"bd-164","depends_on_id":"bd-159","type":"related","created_at":"2025-10-26T19:54:58.249718-07:00","created_by":"daemon"}]} -{"id":"bd-165","title":"Enforce canonical database naming (beads.db)","description":"## Problem\n\nCurrently, different clones can use different database filenames (bd.db, beads.db, issues.db), causing incompatibility when attempting to sync.\n\nExample from bd-160:\n- ~/src/beads uses bd.db\n- ~/src/fred/beads uses beads.db\n- Sync fails because they're fundamentally different databases\n\n## Solution\n\nEnforce a single canonical database name: **beads.db**\n\n## 
Implementation\n\n### 1. Define Canonical Name\n\n**File**: beads.go or constants.go (create if needed)\n\n```go\npackage beads\n\n// CanonicalDatabaseName is the required database filename for all beads repositories\nconst CanonicalDatabaseName = \"beads.db\"\n\n// LegacyDatabaseNames are old names that should be migrated\nvar LegacyDatabaseNames = []string{\"bd.db\", \"issues.db\", \"bugs.db\"}\n```\n\n### 2. Update bd init Command\n\n**File**: cmd/bd/init.go\n\n```go\nfunc runInit(cmd *cobra.Command, args []string) error {\n beadsDir := \".beads\"\n os.MkdirAll(beadsDir, 0755)\n \n dbPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName)\n \n // Check for legacy databases\n for _, legacy := range beads.LegacyDatabaseNames {\n legacyPath := filepath.Join(beadsDir, legacy)\n if exists(legacyPath) {\n fmt.Printf(\"Found legacy database: %s\\n\", legacy)\n fmt.Printf(\"Migrating to canonical name: %s\\n\", beads.CanonicalDatabaseName)\n \n // Rename to canonical\n if err := os.Rename(legacyPath, dbPath); err != nil {\n return fmt.Errorf(\"migration failed: %w\", err)\n }\n fmt.Printf(\"āœ“ Migrated %s → %s\\n\", legacy, beads.CanonicalDatabaseName)\n }\n }\n \n // Create new database if doesn't exist\n if !exists(dbPath) {\n store, err := sqlite.New(dbPath)\n if err != nil {\n return err\n }\n defer store.Close()\n \n // Initialize with version metadata\n store.SetMetadata(context.Background(), \"bd_version\", Version)\n store.SetMetadata(context.Background(), \"db_name\", beads.CanonicalDatabaseName)\n }\n \n // ... rest of init\n}\n```\n\n### 3. 
Validate on Daemon Start\n\n**File**: cmd/bd/daemon.go:1076-1095\n\nUpdate the existing multiple-DB check:\n```go\n// Check for multiple .db files\nbeadsDir := filepath.Dir(daemonDBPath)\nmatches, err := filepath.Glob(filepath.Join(beadsDir, \"*.db\"))\nif err == nil \u0026\u0026 len(matches) \u003e 1 {\n log.log(\"Error: Multiple database files found:\")\n for _, match := range matches {\n log.log(\" - %s\", filepath.Base(match))\n }\n log.log(\"\")\n log.log(\"Beads requires a single canonical database: %s\", beads.CanonicalDatabaseName)\n log.log(\"Run 'bd init' to migrate legacy databases\")\n os.Exit(1)\n}\n\n// Validate using canonical name\nif filepath.Base(daemonDBPath) != beads.CanonicalDatabaseName {\n log.log(\"Error: Non-canonical database name: %s\", filepath.Base(daemonDBPath))\n log.log(\"Expected: %s\", beads.CanonicalDatabaseName)\n log.log(\"Run 'bd init' to migrate to canonical name\")\n os.Exit(1)\n}\n```\n\n### 4. Add Migration Command\n\n**File**: cmd/bd/migrate.go (create new)\n\n```go\nvar migrateCmd = \u0026cobra.Command{\n Use: \"migrate\",\n Short: \"Migrate database to canonical naming and schema\",\n Run: func(cmd *cobra.Command, args []string) {\n beadsDir := \".beads\"\n \n // Find current database\n var currentDB string\n for _, name := range append(beads.LegacyDatabaseNames, beads.CanonicalDatabaseName) {\n path := filepath.Join(beadsDir, name)\n if exists(path) {\n currentDB = path\n break\n }\n }\n \n if currentDB == \"\" {\n fmt.Println(\"No database found\")\n return\n }\n \n targetPath := filepath.Join(beadsDir, beads.CanonicalDatabaseName)\n \n if currentDB == targetPath {\n fmt.Println(\"Database already using canonical name\")\n return\n }\n \n // Backup first\n backupPath := currentDB + \".backup\"\n copyFile(currentDB, backupPath)\n fmt.Printf(\"Created backup: %s\\n\", backupPath)\n \n // Rename\n if err := os.Rename(currentDB, targetPath); err != nil {\n fmt.Fprintf(os.Stderr, \"Migration failed: %v\\n\", err)\n 
os.Exit(1)\n }\n \n fmt.Printf(\"āœ“ Migrated: %s → %s\\n\", filepath.Base(currentDB), beads.CanonicalDatabaseName)\n \n // Update metadata\n store, _ := sqlite.New(targetPath)\n defer store.Close()\n store.SetMetadata(context.Background(), \"db_name\", beads.CanonicalDatabaseName)\n },\n}\n```\n\n### 5. Update FindDatabasePath\n\n**File**: beads.go (or wherever FindDatabasePath is defined)\n\n```go\nfunc FindDatabasePath() string {\n beadsDir := findBeadsDir()\n if beadsDir == \"\" {\n return \"\"\n }\n \n // First try canonical name\n canonical := filepath.Join(beadsDir, CanonicalDatabaseName)\n if exists(canonical) {\n return canonical\n }\n \n // Check for legacy names (warn user)\n for _, legacy := range LegacyDatabaseNames {\n path := filepath.Join(beadsDir, legacy)\n if exists(path) {\n fmt.Fprintf(os.Stderr, \"WARNING: Using legacy database name: %s\\n\", legacy)\n fmt.Fprintf(os.Stderr, \"Run 'bd migrate' to upgrade to canonical name: %s\\n\", CanonicalDatabaseName)\n return path\n }\n }\n \n return \"\"\n}\n```\n\n## Testing\n\n```bash\n# Test migration\nmkdir test-repo \u0026\u0026 cd test-repo \u0026\u0026 git init\nmkdir .beads\nsqlite3 .beads/bd.db \"CREATE TABLE test (id int);\"\n\nbd init # Should detect and migrate bd.db → beads.db\n\n# Verify\nls .beads/*.db # Should only show beads.db\n\n# Test daemon rejection\nsqlite3 .beads/old.db \"CREATE TABLE test (id int);\"\nbd daemon # Should error: multiple databases found\n\n# Test clean init\nrm -rf test-repo2 \u0026\u0026 mkdir test-repo2 \u0026\u0026 cd test-repo2\nbd init # Should create .beads/beads.db directly\n```\n\n## Rollout Strategy\n\n1. Add migration logic to bd init\n2. Update FindDatabasePath to warn on legacy names\n3. Add 'bd migrate' command for manual migration\n4. Update docs to specify canonical name\n5. 
Add daemon validation after 2 releases\n\n## Success Criteria\n\n- All new repositories use beads.db\n- bd init auto-migrates legacy names\n- bd daemon rejects non-canonical names\n- Clear migration path for existing users\n- No data loss during migration\n\n## Estimated Effort\n\n3-4 hours\n\n## Priority\n\nP0 - Critical for multi-clone compatibility","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-26T19:55:39.056716-07:00","updated_at":"2025-10-26T20:42:12.175028-07:00","closed_at":"2025-10-26T20:42:12.175028-07:00","dependencies":[{"issue_id":"bd-165","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:55:39.057336-07:00","created_by":"daemon"}]} -{"id":"bd-166","title":"Add database fingerprinting and validation","description":"## Problem\n\nWhen multiple clones exist, there's no validation that they're actually clones of the same repository. Different repos can accidentally share databases, causing data corruption.\n\nNeed database fingerprinting to ensure clones belong to the same logical repository.\n\n## Solution\n\nAdd repository fingerprint to database metadata and validate on daemon start.\n\n## Implementation\n\n### 1. 
Compute Repository ID\n\n**File**: pkg/fingerprint.go (create new)\n\n```go\npackage beads\n\nimport (\n \"crypto/sha256\"\n \"encoding/hex\"\n \"fmt\"\n \"os/exec\"\n)\n\n// ComputeRepoID generates a unique identifier for this git repository\nfunc ComputeRepoID() (string, error) {\n // Get git remote URL (canonical repo identifier)\n cmd := exec.Command(\"git\", \"config\", \"--get\", \"remote.origin.url\")\n output, err := cmd.Output()\n if err != nil {\n // No remote configured, use local path\n cmd = exec.Command(\"git\", \"rev-parse\", \"--show-toplevel\")\n output, err = cmd.Output()\n if err != nil {\n return \"\", fmt.Errorf(\"not a git repository\")\n }\n }\n \n repoURL := strings.TrimSpace(string(output))\n \n // Normalize URL (remove .git suffix, https vs git@, etc.)\n repoURL = normalizeGitURL(repoURL)\n \n // SHA256 hash for privacy (don't expose repo URL in database)\n hash := sha256.Sum256([]byte(repoURL))\n return hex.EncodeToString(hash[:16]), nil // Use first 16 bytes\n}\n\nfunc normalizeGitURL(url string) string {\n // Convert git@github.com:user/repo.git → github.com/user/repo\n // Convert https://github.com/user/repo.git → github.com/user/repo\n url = strings.TrimSuffix(url, \".git\")\n url = strings.ReplaceAll(url, \"git@\", \"\")\n url = strings.ReplaceAll(url, \"https://\", \"\")\n url = strings.ReplaceAll(url, \"http://\", \"\")\n url = strings.ReplaceAll(url, \":\", \"/\")\n return url\n}\n\n// GetCloneID generates a unique ID for this specific clone (not shared with other clones)\nfunc GetCloneID() string {\n // Use hostname + path for uniqueness\n hostname, _ := os.Hostname()\n path, _ := os.Getwd()\n hash := sha256.Sum256([]byte(hostname + \":\" + path))\n return hex.EncodeToString(hash[:8])\n}\n```\n\n### 2. Store Fingerprint on Init\n\n**File**: cmd/bd/init.go\n\n```go\nfunc runInit(cmd *cobra.Command, args []string) error {\n // ... 
create database ...\n \n // Compute and store repo ID\n repoID, err := beads.ComputeRepoID()\n if err != nil {\n fmt.Fprintf(os.Stderr, \"Warning: could not compute repo ID: %v\\n\", err)\n } else {\n if err := store.SetMetadata(ctx, \"repo_id\", repoID); err != nil {\n return fmt.Errorf(\"failed to set repo_id: %w\", err)\n }\n fmt.Printf(\"Repository ID: %s\\n\", repoID[:8])\n }\n \n // Store clone ID\n cloneID := beads.GetCloneID()\n if err := store.SetMetadata(ctx, \"clone_id\", cloneID); err != nil {\n return fmt.Errorf(\"failed to set clone_id: %w\", err)\n }\n fmt.Printf(\"Clone ID: %s\\n\", cloneID)\n \n // Store creation timestamp\n if err := store.SetMetadata(ctx, \"created_at\", time.Now().Format(time.RFC3339)); err != nil {\n return fmt.Errorf(\"failed to set created_at: %w\", err)\n }\n \n return nil\n}\n```\n\n### 3. Validate on Database Open\n\n**File**: cmd/bd/daemon.go (in runDaemonLoop)\n\n```go\nfunc validateDatabaseFingerprint(store storage.Storage) error {\n ctx := context.Background()\n \n // Get stored repo ID\n storedRepoID, err := store.GetMetadata(ctx, \"repo_id\")\n if err != nil \u0026\u0026 err.Error() != \"metadata key not found: repo_id\" {\n return fmt.Errorf(\"failed to read repo_id: %w\", err)\n }\n \n // If no repo_id, this is a legacy database - set it now\n if storedRepoID == \"\" {\n repoID, err := beads.ComputeRepoID()\n if err != nil {\n log.log(\"Warning: could not compute repo ID: %v\", err)\n return nil // Non-fatal for backward compat\n }\n \n log.log(\"Legacy database detected, setting repo_id: %s\", repoID[:8])\n if err := store.SetMetadata(ctx, \"repo_id\", repoID); err != nil {\n return fmt.Errorf(\"failed to set repo_id: %w\", err)\n }\n return nil\n }\n \n // Validate repo ID matches\n currentRepoID, err := beads.ComputeRepoID()\n if err != nil {\n log.log(\"Warning: could not compute current repo ID: %v\", err)\n return nil // Non-fatal\n }\n \n if storedRepoID != currentRepoID {\n return fmt.Errorf(`\nDATABASE 
MISMATCH DETECTED!\n\nThis database belongs to a different repository:\n Database repo ID: %s\n Current repo ID: %s\n\nThis usually means:\n 1. You copied a .beads directory from another repo (don't do this!)\n 2. Git remote URL changed (run 'bd migrate' to update)\n 3. Database corruption\n\nSolutions:\n - If remote URL changed: bd migrate --update-repo-id\n - If wrong database: rm -rf .beads \u0026\u0026 bd init\n - If correct database: BEADS_IGNORE_REPO_MISMATCH=1 bd daemon\n`, storedRepoID[:8], currentRepoID[:8])\n }\n \n return nil\n}\n\n// In runDaemonLoop, after opening database:\nif err := validateDatabaseFingerprint(store); err != nil {\n if os.Getenv(\"BEADS_IGNORE_REPO_MISMATCH\") != \"1\" {\n log.log(\"Error: %v\", err)\n os.Exit(1)\n }\n log.log(\"Warning: repo mismatch ignored (BEADS_IGNORE_REPO_MISMATCH=1)\")\n}\n```\n\n### 4. Add Update Command for Remote Changes\n\n**File**: cmd/bd/migrate.go\n\n```go\nvar updateRepoID bool\n\nfunc init() {\n migrateCmd.Flags().BoolVar(\u0026updateRepoID, \"update-repo-id\", false, \n \"Update repository ID (use after changing git remote)\")\n}\n\n// In migrate command:\nif updateRepoID {\n newRepoID, err := beads.ComputeRepoID()\n if err != nil {\n fmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\n os.Exit(1)\n }\n \n oldRepoID, _ := store.GetMetadata(ctx, \"repo_id\")\n \n fmt.Printf(\"Updating repository ID:\\n\")\n fmt.Printf(\" Old: %s\\n\", oldRepoID[:8])\n fmt.Printf(\" New: %s\\n\", newRepoID[:8])\n \n if err := store.SetMetadata(ctx, \"repo_id\", newRepoID); err != nil {\n fmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\n os.Exit(1)\n }\n \n fmt.Println(\"āœ“ Repository ID updated\")\n}\n```\n\n## Metadata Schema\n\nAdd to `metadata` table:\n\n| Key | Value | Description |\n|-----|-------|-------------|\n| repo_id | sha256(git_remote)[..16] | Repository fingerprint |\n| clone_id | sha256(hostname:path)[..8] | Clone-specific ID |\n| created_at | RFC3339 timestamp | Database creation time |\n| db_name | 
\"beads.db\" | Canonical database name |\n| bd_version | \"v0.x.x\" | Schema version |\n\n## Testing\n\n```bash\n# Test repo ID generation\ncd /tmp/test-repo \u0026\u0026 git init\ngit remote add origin https://github.com/user/repo.git\nbd init\nbd show-meta repo_id # Should show consistent hash\n\n# Test mismatch detection\ncd /tmp/other-repo \u0026\u0026 git init\ngit remote add origin https://github.com/other/repo.git\ncp -r /tmp/test-repo/.beads /tmp/other-repo/\nbd daemon # Should error: repo mismatch\n\n# Test migration\ngit remote set-url origin https://github.com/user/new-repo.git\nbd migrate --update-repo-id # Should update successfully\n```\n\n## Success Criteria\n\n- New databases automatically get repo_id\n- Daemon validates repo_id on start\n- Clear error messages on mismatch\n- Migration path for remote URL changes\n- Legacy databases automatically fingerprinted\n\n## Estimated Effort\n\n3-4 hours\n\n## Priority\n\nP0 - Prevents accidental database mixing across repos","status":"closed","priority":0,"issue_type":"task","created_at":"2025-10-26T19:56:18.53693-07:00","updated_at":"2025-10-26T20:55:35.983262-07:00","closed_at":"2025-10-26T20:55:35.983262-07:00","dependencies":[{"issue_id":"bd-166","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:56:18.537546-07:00","created_by":"daemon"}]} -{"id":"bd-167","title":"Implement version tracking for issues","description":"## Problem\n\nWhen two clones modify the same issue concurrently, there's no way to detect or handle the conflict properly. 
Last writer wins arbitrarily, losing data.\n\nNeed version tracking to implement proper conflict detection and resolution.\n\n## Solution\n\nAdd version counter and last-modified metadata to issues for Last-Writer-Wins (LWW) conflict resolution.\n\n## Database Schema Changes\n\n**File**: internal/storage/sqlite/schema.go\n\n```sql\n-- Add version tracking columns to issues table\nALTER TABLE issues ADD COLUMN version INTEGER DEFAULT 1 NOT NULL;\nALTER TABLE issues ADD COLUMN modified_by TEXT DEFAULT '' NOT NULL;\nALTER TABLE issues ADD COLUMN modified_at DATETIME;\n\n-- Create index for version-based queries\nCREATE INDEX IF NOT EXISTS idx_issues_version ON issues(id, version);\n\n-- Store modification history\nCREATE TABLE IF NOT EXISTS issue_versions (\n id INTEGER PRIMARY KEY AUTOINCREMENT,\n issue_id TEXT NOT NULL,\n version INTEGER NOT NULL,\n modified_by TEXT NOT NULL,\n modified_at DATETIME NOT NULL,\n snapshot BLOB NOT NULL, -- JSON snapshot of issue at this version\n FOREIGN KEY (issue_id) REFERENCES issues(id) ON DELETE CASCADE,\n UNIQUE(issue_id, version)\n);\n\nCREATE INDEX IF NOT EXISTS idx_issue_versions_lookup \n ON issue_versions(issue_id, version);\n```\n\n## Implementation\n\n### 1. Update Issue Type\n\n**File**: internal/types/issue.go\n\n```go\ntype Issue struct {\n ID string `json:\"id\"`\n Title string `json:\"title\"`\n Description string `json:\"description\"`\n Status Status `json:\"status\"`\n Priority int `json:\"priority\"`\n CreatedAt time.Time `json:\"created_at\"`\n UpdatedAt time.Time `json:\"updated_at\"`\n \n // Version tracking (new fields)\n Version int `json:\"version\"` // Incremented on each update\n ModifiedBy string `json:\"modified_by\"` // Clone ID that made the change\n ModifiedAt time.Time `json:\"modified_at\"` // When the change was made\n \n // ... rest of fields\n}\n```\n\n### 2. 
Increment Version on Update\n\n**File**: internal/storage/sqlite/sqlite.go\n\n```go\nfunc (s *SQLiteStorage) UpdateIssue(ctx context.Context, issue *types.Issue) error {\n // Get current version from database\n var currentVersion int\n var currentModifiedAt time.Time\n err := s.db.QueryRowContext(ctx, `\n SELECT version, modified_at \n FROM issues \n WHERE id = ?\n `, issue.ID).Scan(\u0026currentVersion, \u0026currentModifiedAt)\n \n if err != nil \u0026\u0026 err != sql.ErrNoRows {\n return fmt.Errorf(\"failed to get current version: %w\", err)\n }\n \n // Detect conflict: incoming version is stale\n if issue.Version \u003e 0 \u0026\u0026 issue.Version \u003c currentVersion {\n return \u0026ConflictError{\n IssueID: issue.ID,\n LocalVersion: currentVersion,\n RemoteVersion: issue.Version,\n LocalModified: currentModifiedAt,\n RemoteModified: issue.ModifiedAt,\n }\n }\n \n // No conflict or local is newer: increment version\n issue.Version = currentVersion + 1\n issue.ModifiedBy = getCloneID() // From fingerprinting\n issue.ModifiedAt = time.Now()\n \n // Save version snapshot before updating\n if err := s.saveVersionSnapshot(ctx, issue); err != nil {\n // Non-fatal warning\n log.Printf(\"Warning: failed to save version snapshot: %v\", err)\n }\n \n // Perform update\n _, err = s.db.ExecContext(ctx, `\n UPDATE issues SET\n title = ?,\n description = ?,\n status = ?,\n priority = ?,\n updated_at = ?,\n version = ?,\n modified_by = ?,\n modified_at = ?\n WHERE id = ?\n `, issue.Title, issue.Description, issue.Status, issue.Priority,\n issue.UpdatedAt, issue.Version, issue.ModifiedBy, issue.ModifiedAt,\n issue.ID)\n \n return err\n}\n\nfunc (s *SQLiteStorage) saveVersionSnapshot(ctx context.Context, issue *types.Issue) error {\n snapshot, _ := json.Marshal(issue)\n \n _, err := s.db.ExecContext(ctx, `\n INSERT INTO issue_versions (issue_id, version, modified_by, modified_at, snapshot)\n VALUES (?, ?, ?, ?, ?)\n `, issue.ID, issue.Version, issue.ModifiedBy, 
issue.ModifiedAt, snapshot)\n \n return err\n}\n```\n\n### 3. Conflict Detection on Import\n\n**File**: cmd/bd/import_core.go\n\n```go\ntype ConflictError struct {\n IssueID string\n LocalVersion int\n RemoteVersion int\n LocalModified time.Time\n RemoteModified time.Time\n LocalIssue *types.Issue\n RemoteIssue *types.Issue\n}\n\nfunc (e *ConflictError) Error() string {\n return fmt.Sprintf(\"conflict on %s: local v%d (modified %s) vs remote v%d (modified %s)\",\n e.IssueID, e.LocalVersion, e.LocalModified, e.RemoteVersion, e.RemoteModified)\n}\n\nfunc detectVersionConflict(local, remote *types.Issue) *ConflictError {\n // No conflict if same version\n if local.Version == remote.Version {\n return nil\n }\n \n // Remote is newer - no conflict\n if remote.Version \u003e local.Version {\n return nil\n }\n \n // Local is newer - remote is stale\n if remote.Version \u003c local.Version {\n // Check if concurrent modification (both diverged from same base)\n if local.ModifiedAt.Sub(remote.ModifiedAt).Abs() \u003c 1*time.Minute {\n return \u0026ConflictError{\n IssueID: local.ID,\n LocalVersion: local.Version,\n RemoteVersion: remote.Version,\n LocalModified: local.ModifiedAt,\n RemoteModified: remote.ModifiedAt,\n LocalIssue: local,\n RemoteIssue: remote,\n }\n }\n }\n \n return nil\n}\n```\n\n### 4. 
Conflict Resolution Strategies\n\n```go\ntype ConflictStrategy int\n\nconst (\n StrategyLWW ConflictStrategy = iota // Last Writer Wins (use newest modified_at)\n StrategyHighestVersion // Use highest version number\n StrategyInteractive // Prompt user\n StrategyMerge // Three-way merge (future)\n)\n\nfunc resolveConflict(conflict *ConflictError, strategy ConflictStrategy) (*types.Issue, error) {\n switch strategy {\n case StrategyLWW:\n if conflict.RemoteModified.After(conflict.LocalModified) {\n return conflict.RemoteIssue, nil\n }\n return conflict.LocalIssue, nil\n \n case StrategyHighestVersion:\n if conflict.RemoteVersion \u003e conflict.LocalVersion {\n return conflict.RemoteIssue, nil\n }\n return conflict.LocalIssue, nil\n \n case StrategyInteractive:\n return promptUserForResolution(conflict)\n \n default:\n return nil, fmt.Errorf(\"unknown conflict strategy: %v\", strategy)\n }\n}\n```\n\n## Migration for Existing Databases\n\n**File**: cmd/bd/migrate.go\n\n```go\nfunc migrateToVersionTracking(store storage.Storage) error {\n ctx := context.Background()\n db := store.UnderlyingDB()\n \n // SQLite does not support ADD COLUMN IF NOT EXISTS; run each ALTER\n // and ignore \"duplicate column\" errors so the migration is idempotent\n addColumn := func(stmt string) error {\n _, err := db.ExecContext(ctx, stmt)\n if err != nil \u0026\u0026 !strings.Contains(err.Error(), \"duplicate column\") {\n return err\n }\n return nil\n }\n \n if err := addColumn(`ALTER TABLE issues ADD COLUMN version INTEGER DEFAULT 1 NOT NULL`); err != nil {\n return err\n }\n if err := addColumn(`ALTER TABLE issues ADD COLUMN modified_by TEXT DEFAULT '' NOT NULL`); err != nil {\n return err\n }\n if err := addColumn(`ALTER TABLE issues ADD COLUMN modified_at DATETIME`); err != nil {\n return err\n }\n \n // Backfill modified_at from updated_at\n _, err := db.ExecContext(ctx, `\n UPDATE issues SET modified_at = updated_at WHERE modified_at IS NULL\n `)\n \n return err\n}\n```\n\n## Testing\n\n```bash\n# Test version increment\nbd create \"Test issue\"\nbd show bd-1 --json | jq .version # Should be 1\nbd update bd-1 --title \"Updated\"\nbd show bd-1 --json | jq .version # Should be 2\n\n# Test conflict detection\n# Clone A: modify bd-1\ncd repo-a \u0026\u0026 bd update bd-1 
--title \"A's version\"\n# Clone B: modify bd-1\ncd repo-b \u0026\u0026 bd update bd-1 --title \"B's version\"\n\n# Sync\ncd repo-a \u0026\u0026 bd sync # Should detect conflict\n```\n\n## Success Criteria\n\n- All issues have version numbers\n- Version increments on each update\n- Conflicts detected when importing stale versions\n- Version history preserved in issue_versions table\n- Migration works for existing databases\n\n## Estimated Effort\n\n4-5 hours\n\n## Priority\n\nP1 - Enables proper conflict detection (required before three-way merge)","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-26T19:57:01.745351-07:00","updated_at":"2025-10-26T19:57:01.745351-07:00","dependencies":[{"issue_id":"bd-167","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:57:01.746071-07:00","created_by":"daemon"}]}
-{"id":"bd-168","title":"Add three-way merge conflict detection and resolution","description":"## Problem\n\nWhen version tracking detects a conflict (two clones modified same issue), we need intelligent merge logic instead of just picking a winner.\n\nDepends on: bd-167 (version tracking)\n\n## Solution\n\nImplement a three-way merge algorithm using the git merge-base concept:\n- Base: Last common version\n- Local: Current database state\n- Remote: Incoming JSONL state\n\n## Three-Way Merge Algorithm\n\n### 1. Find Merge Base\n\n**File**: internal/merge/merge.go (create new)\n\n```go\npackage merge\n\nimport (\n \"github.com/steveyegge/beads/internal/storage\"\n \"github.com/steveyegge/beads/internal/types\"\n)\n\n// FindMergeBase finds the last common version between local and remote\nfunc FindMergeBase(store storage.Storage, issueID string, localVersion, remoteVersion int) (*types.Issue, error) {\n // Get version history from issue_versions table\n baseVersion := min(localVersion, remoteVersion)\n \n // Access the shared connection via the storage layer\n db := store.UnderlyingDB()\n \n var snapshot []byte\n err := db.QueryRow(`\n SELECT snapshot FROM issue_versions \n WHERE issue_id = ? 
AND version = ?\n ORDER BY version DESC LIMIT 1\n `, issueID, baseVersion).Scan(\u0026snapshot)\n \n if err != nil {\n return nil, fmt.Errorf(\"merge base not found: %w\", err)\n }\n \n var base types.Issue\n json.Unmarshal(snapshot, \u0026base)\n return \u0026base, nil\n}\n```\n\n### 2. Detect Conflict Type\n\n```go\ntype ConflictType int\n\nconst (\n NoConflict ConflictType = iota\n ModifyModify // Both modified same field\n ModifyDelete // One modified, one deleted\n CreateCreate // Both created same ID (ID collision)\n AutoMergeable // Different fields modified\n)\n\ntype FieldConflict struct {\n Field string\n BaseValue interface{}\n LocalValue interface{}\n RemoteValue interface{}\n}\n\ntype MergeResult struct {\n Type ConflictType\n Merged *types.Issue\n Conflicts []FieldConflict\n}\n\nfunc ThreeWayMerge(base, local, remote *types.Issue) (*MergeResult, error) {\n result := \u0026MergeResult{\n Type: NoConflict,\n Merged: \u0026types.Issue{},\n }\n \n // Copy base as starting point\n *result.Merged = *base\n \n // Check each field\n conflicts := []FieldConflict{}\n \n // Title\n if local.Title != base.Title \u0026\u0026 remote.Title != base.Title {\n if local.Title != remote.Title {\n conflicts = append(conflicts, FieldConflict{\n Field: \"title\",\n BaseValue: base.Title,\n LocalValue: local.Title,\n RemoteValue: remote.Title,\n })\n } else {\n result.Merged.Title = local.Title // Same change\n }\n } else if local.Title != base.Title {\n result.Merged.Title = local.Title // Only local changed\n } else if remote.Title != base.Title {\n result.Merged.Title = remote.Title // Only remote changed\n }\n \n // Description\n if local.Description != base.Description \u0026\u0026 remote.Description != base.Description {\n if local.Description != remote.Description {\n // Try smart merge for text fields\n merged, conflict := mergeText(base.Description, local.Description, remote.Description)\n if conflict {\n conflicts = append(conflicts, FieldConflict{\n Field: 
\"description\",\n BaseValue: base.Description,\n LocalValue: local.Description,\n RemoteValue: remote.Description,\n })\n } else {\n result.Merged.Description = merged\n }\n } else {\n result.Merged.Description = local.Description // Same change\n }\n } else if local.Description != base.Description {\n result.Merged.Description = local.Description\n } else if remote.Description != base.Description {\n result.Merged.Description = remote.Description\n }\n \n // Status\n if local.Status != base.Status \u0026\u0026 remote.Status != base.Status {\n if local.Status != remote.Status {\n conflicts = append(conflicts, FieldConflict{\n Field: \"status\",\n BaseValue: base.Status,\n LocalValue: local.Status,\n RemoteValue: remote.Status,\n })\n } else {\n result.Merged.Status = local.Status // Same change\n }\n } else if local.Status != base.Status {\n result.Merged.Status = local.Status\n } else if remote.Status != base.Status {\n result.Merged.Status = remote.Status\n }\n \n // Priority (numeric: take the higher priority)\n if local.Priority != base.Priority \u0026\u0026 remote.Priority != base.Priority {\n result.Merged.Priority = min(local.Priority, remote.Priority) // Lower number = higher priority\n } else if local.Priority != base.Priority {\n result.Merged.Priority = local.Priority\n } else if remote.Priority != base.Priority {\n result.Merged.Priority = remote.Priority\n }\n \n // Set conflict type\n if len(conflicts) \u003e 0 {\n result.Type = ModifyModify\n result.Conflicts = conflicts\n } else if hasChanges(base, result.Merged) {\n result.Type = AutoMergeable\n }\n \n return result, nil\n}\n```\n\n### 3. 
Smart Text Merging\n\n```go\n// mergeText attempts to merge text changes using line-based diff\nfunc mergeText(base, local, remote string) (string, bool) {\n // If one side didn't change, use the other\n if local == base {\n return remote, false\n }\n if remote == base {\n return local, false\n }\n \n // Both changed - try line-based merge\n baseLines := strings.Split(base, \"\\n\")\n localLines := strings.Split(local, \"\\n\")\n remoteLines := strings.Split(remote, \"\\n\")\n \n // Simple merge: if changes are in different lines, combine them\n merged, conflict := mergeLines(baseLines, localLines, remoteLines)\n return strings.Join(merged, \"\\n\"), conflict\n}\n\nfunc mergeLines(base, local, remote []string) ([]string, bool) {\n // Use Myers diff algorithm or simple LCS\n // For MVP, use simple strategy:\n // - If local added lines, keep them\n // - If remote added lines, keep them\n // - If both modified same line, conflict\n \n // This is a simplified implementation\n // Production would use a proper diff library\n \n if reflect.DeepEqual(local, remote) {\n return local, false // Same changes\n }\n \n // Different changes - conflict\n return local, true\n}\n```\n\n### 4. 
Conflict Resolution UI\n\n**File**: cmd/bd/import_core.go\n\n```go\nfunc handleMergeConflict(result *merge.MergeResult) (*types.Issue, error) {\n fmt.Fprintf(os.Stderr, \"\\n=== MERGE CONFLICT ===\\n\")\n fmt.Fprintf(os.Stderr, \"Issue: %s\\n\", result.Merged.ID)\n fmt.Fprintf(os.Stderr, \"Conflicts in %d field(s):\\n\\n\", len(result.Conflicts))\n \n for _, conflict := range result.Conflicts {\n fmt.Fprintf(os.Stderr, \"Field: %s\\n\", conflict.Field)\n fmt.Fprintf(os.Stderr, \" Base: %v\\n\", conflict.BaseValue)\n fmt.Fprintf(os.Stderr, \" Local: %v\\n\", conflict.LocalValue)\n fmt.Fprintf(os.Stderr, \" Remote: %v\\n\", conflict.RemoteValue)\n \n fmt.Fprintf(os.Stderr, \"\\nChoose resolution:\\n\")\n fmt.Fprintf(os.Stderr, \" 1) Use local\\n\")\n fmt.Fprintf(os.Stderr, \" 2) Use remote\\n\")\n fmt.Fprintf(os.Stderr, \" 3) Edit manually\\n\")\n fmt.Fprintf(os.Stderr, \"Choice: \")\n \n var choice int\n fmt.Scanln(\u0026choice)\n \n switch choice {\n case 1:\n setField(result.Merged, conflict.Field, conflict.LocalValue)\n case 2:\n setField(result.Merged, conflict.Field, conflict.RemoteValue)\n case 3:\n // Open editor with conflict markers\n edited := editWithConflictMarkers(conflict)\n setField(result.Merged, conflict.Field, edited)\n }\n }\n \n return result.Merged, nil\n}\n```\n\n### 5. 
Auto-Merge Strategy\n\nFor non-interactive mode (daemon):\n\n```go\n// localModifiedAt/remoteModifiedAt come from the two issues being merged\nfunc autoResolveConflict(result *merge.MergeResult, localModifiedAt, remoteModifiedAt time.Time, strategy string) *types.Issue {\n switch strategy {\n case \"local-wins\":\n for _, c := range result.Conflicts {\n setField(result.Merged, c.Field, c.LocalValue)\n }\n \n case \"remote-wins\":\n for _, c := range result.Conflicts {\n setField(result.Merged, c.Field, c.RemoteValue)\n }\n \n case \"newest-wins\":\n // Use the issues' modified_at timestamps\n for _, c := range result.Conflicts {\n if localModifiedAt.After(remoteModifiedAt) {\n setField(result.Merged, c.Field, c.LocalValue)\n } else {\n setField(result.Merged, c.Field, c.RemoteValue)\n }\n }\n }\n \n return result.Merged\n}\n```\n\n## Integration with Import\n\n**File**: cmd/bd/import_core.go\n\n```go\nfunc importIssuesCore(ctx context.Context, dbPath string, store storage.Storage, issues []*types.Issue, opts ImportOptions) (*ImportResult, error) {\n // ...\n \n for _, remoteIssue := range issues {\n localIssue, err := store.GetIssue(ctx, remoteIssue.ID)\n \n if err == nil {\n // Issue exists - check for conflict\n if localIssue.Version != remoteIssue.Version {\n // Get merge base\n base, err := merge.FindMergeBase(store, remoteIssue.ID, \n localIssue.Version, remoteIssue.Version)\n \n if err != nil {\n // No merge base - use LWW\n if localIssue.ModifiedAt.After(remoteIssue.ModifiedAt) {\n continue // Keep local\n } else {\n store.UpdateIssue(ctx, remoteIssue) // Use remote\n }\n } else {\n // Three-way merge (mres, not result, to avoid shadowing the import result)\n mres, err := merge.ThreeWayMerge(base, localIssue, remoteIssue)\n if err != nil {\n return nil, err\n }\n \n if mres.Type == merge.AutoMergeable {\n // Clean merge\n store.UpdateIssue(ctx, mres.Merged)\n result.Updated++\n } else if mres.Type == merge.ModifyModify {\n // Conflict - need resolution\n if opts.Interactive {\n merged, _ := handleMergeConflict(mres)\n store.UpdateIssue(ctx, merged)\n } else {\n // Auto-resolve\n merged := autoResolveConflict(mres, localIssue.ModifiedAt, remoteIssue.ModifiedAt, 
\"newest-wins\")\n store.UpdateIssue(ctx, merged)\n result.Conflicts++\n }\n }\n }\n }\n }\n }\n \n return result, nil\n}\n```\n\n## Testing\n\n```bash\n# Test auto-merge (different fields)\ncd repo-a \u0026\u0026 bd update bd-1 --title \"New title\"\ncd repo-b \u0026\u0026 bd update bd-1 --description \"New desc\"\ncd repo-a \u0026\u0026 bd sync \u0026\u0026 cd ../repo-b \u0026\u0026 bd sync\n# Expected: Both changes merged\n\n# Test conflict (same field)\ncd repo-a \u0026\u0026 bd update bd-2 --title \"A's title\"\ncd repo-b \u0026\u0026 bd update bd-2 --title \"B's title\"\ncd repo-a \u0026\u0026 bd sync \u0026\u0026 cd ../repo-b \u0026\u0026 bd sync\n# Expected: Conflict detected, newest wins\n```\n\n## Success Criteria\n\n- Different field changes auto-merge successfully\n- Same field conflicts detected\n- Interactive resolution for manual sync\n- Auto-resolution for daemon (newest-wins)\n- All merges preserve version history\n\n## Estimated Effort\n\n6-8 hours\n\n## Priority\n\nP1 - Enables intelligent conflict resolution","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-26T19:57:51.037146-07:00","updated_at":"2025-10-26T19:57:51.037146-07:00","dependencies":[{"issue_id":"bd-168","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:57:51.0378-07:00","created_by":"daemon"},{"issue_id":"bd-168","depends_on_id":"bd-167","type":"blocks","created_at":"2025-10-26T19:57:51.03819-07:00","created_by":"daemon"}]} -{"id":"bd-169","title":"Multi-clone integration tests and stress testing","description":"## Problem\n\nMulti-clone sync is complex with many race conditions and edge cases. Need comprehensive integration tests to validate all scenarios.\n\n## Test Scenarios\n\n### 1. 
Basic Two-Clone Sync\n\n**File**: test/integration/multi_clone_test.go\n\n```go\nfunc TestBasicTwoCloneSync(t *testing.T) {\n // Setup\n repo1 := setupTestRepo(t, \"repo1\")\n defer cleanup(repo1)\n \n // Create initial issue\n bd(repo1, \"create\", \"Issue A\")\n bd(repo1, \"sync\")\n \n // Clone to repo2\n repo2 := cloneRepo(t, repo1, \"repo2\")\n defer cleanup(repo2)\n \n // Verify issue synced\n issues := listIssues(repo2)\n assert.Len(t, issues, 1)\n assert.Equal(t, \"Issue A\", issues[0].Title)\n \n // Make change in repo1\n bd(repo1, \"create\", \"Issue B\")\n bd(repo1, \"sync\")\n \n // Pull in repo2\n bd(repo2, \"sync\")\n \n // Verify both issues present\n issues = listIssues(repo2)\n assert.Len(t, issues, 2)\n}\n```\n\n### 2. Concurrent Non-Overlapping Changes\n\n```go\nfunc TestConcurrentNonOverlappingChanges(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Repo1: Create bd-1\n bd(repo1, \"create\", \"Issue 1\")\n \n // Repo2: Create bd-2\n bd(repo2, \"create\", \"Issue 2\")\n \n // Both sync\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo1, \"sync\") // Pull repo2's changes\n \n // Verify both repos have both issues\n assert.Len(t, listIssues(repo1), 2)\n assert.Len(t, listIssues(repo2), 2)\n}\n```\n\n### 3. 
Concurrent Modification Same Issue (Different Fields)\n\n```go\nfunc TestConcurrentModificationDifferentFields(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Create initial issue\n bd(repo1, \"create\", \"Issue 1\", \"--description\", \"Base description\")\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n \n // Repo1: Update title\n bd(repo1, \"update\", \"bd-1\", \"--title\", \"New title\")\n \n // Repo2: Update description\n bd(repo2, \"update\", \"bd-1\", \"--description\", \"New description\")\n \n // Both sync\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo1, \"sync\")\n \n // Verify auto-merge happened\n issue1 := getIssue(repo1, \"bd-1\")\n issue2 := getIssue(repo2, \"bd-1\")\n \n assert.Equal(t, \"New title\", issue1.Title)\n assert.Equal(t, \"New description\", issue1.Description)\n assert.Equal(t, issue1, issue2) // Both repos converged\n}\n```\n\n### 4. Concurrent Modification Same Field (Conflict)\n\n```go\nfunc TestConcurrentModificationSameField(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Create initial issue\n bd(repo1, \"create\", \"Issue 1\")\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n \n // Both update title\n bd(repo1, \"update\", \"bd-1\", \"--title\", \"Repo1 title\")\n bd(repo2, \"update\", \"bd-1\", \"--title\", \"Repo2 title\")\n \n // Sync\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo1, \"sync\")\n \n // Verify conflict resolved (newest wins)\n issue1 := getIssue(repo1, \"bd-1\")\n issue2 := getIssue(repo2, \"bd-1\")\n assert.Equal(t, issue1.Title, issue2.Title) // Converged to same value\n}\n```\n\n### 5. 
Create-Create Collision\n\n```go\nfunc TestCreateCreateCollision(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Both create with same ID\n bd(repo1, \"create\", \"Issue from repo1\", \"--id\", \"bd-100\")\n bd(repo2, \"create\", \"Issue from repo2\", \"--id\", \"bd-100\")\n \n // Sync with collision resolution\n bd(repo1, \"sync\")\n bd(repo2, \"sync\", \"--rename-on-import\")\n bd(repo1, \"sync\")\n \n // Verify one was remapped\n issues := listIssues(repo1)\n assert.Len(t, issues, 2)\n \n ids := []string{issues[0].ID, issues[1].ID}\n assert.Contains(t, ids, \"bd-100\")\n assert.Contains(t, ids, \"bd-101\") // One remapped\n}\n```\n\n### 6. Delete-Modify Conflict\n\n```go\nfunc TestDeleteModifyConflict(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Create issue\n bd(repo1, \"create\", \"Issue 1\")\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n \n // Repo1: Delete\n bd(repo1, \"delete\", \"bd-1\")\n \n // Repo2: Modify\n bd(repo2, \"update\", \"bd-1\", \"--title\", \"Modified\")\n \n // Sync\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo1, \"sync\")\n \n // Verify resolution (modification wins over deletion)\n issues1 := listIssues(repo1)\n issues2 := listIssues(repo2)\n assert.Len(t, issues1, 1) // Deletion reverted\n assert.Len(t, issues2, 1)\n}\n```\n\n### 7. Three-Clone Convergence\n\n```go\nfunc TestThreeCloneConvergence(t *testing.T) {\n repo1, repo2, repo3 := setupThreeClones(t)\n \n // Each creates an issue\n bd(repo1, \"create\", \"Issue 1\")\n bd(repo2, \"create\", \"Issue 2\")\n bd(repo3, \"create\", \"Issue 3\")\n \n // Sync all (multiple rounds)\n for i := 0; i \u003c 3; i++ {\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo3, \"sync\")\n }\n \n // Verify all have all issues\n assert.Len(t, listIssues(repo1), 3)\n assert.Len(t, listIssues(repo2), 3)\n assert.Len(t, listIssues(repo3), 3)\n}\n```\n\n### 8. 
Daemon Auto-Sync Test\n\n```go\nfunc TestDaemonAutoSync(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Start daemons\n daemon1 := startDaemon(repo1, \"--auto-commit\", \"--auto-push\", \"--interval\", \"2s\")\n defer daemon1.Stop()\n \n daemon2 := startDaemon(repo2, \"--auto-commit\", \"--auto-push\", \"--interval\", \"2s\")\n defer daemon2.Stop()\n \n // Create issue in repo1\n bd(repo1, \"create\", \"Issue 1\")\n \n // Wait for sync\n time.Sleep(10 * time.Second)\n \n // Verify appeared in repo2\n issues := listIssues(repo2)\n assert.Len(t, issues, 1)\n assert.Equal(t, \"Issue 1\", issues[0].Title)\n}\n```\n\n### 9. Database Divergence Recovery\n\n```go\nfunc TestDatabaseDivergenceRecovery(t *testing.T) {\n repo1, repo2 := setupTwoClones(t)\n \n // Create divergence (simulate the bd-160 bug)\n for i := 0; i \u003c 10; i++ {\n bd(repo1, \"create\", fmt.Sprintf(\"Issue %d\", i))\n }\n \n // Force divergence (don't sync)\n for i := 0; i \u003c 10; i++ {\n bd(repo2, \"create\", fmt.Sprintf(\"Other %d\", i))\n }\n \n // Now sync and verify recovery\n bd(repo1, \"sync\")\n bd(repo2, \"sync\")\n bd(repo1, \"sync\")\n \n // Should converge\n count1 := len(listIssues(repo1))\n count2 := len(listIssues(repo2))\n assert.Equal(t, count1, count2)\n assert.Equal(t, 20, count1)\n}\n```\n\n### 10. 
Timestamp-Only Changes Don't Export\n\n```go\nfunc TestTimestampOnlyNoExport(t *testing.T) {\n repo := setupTestRepo(t)\n \n // Create issue\n bd(repo, \"create\", \"Issue 1\")\n bd(repo, \"sync\")\n \n // Get commit count\n commits1 := gitCommitCount(repo)\n \n // Touch database (simulate timestamp update)\n touchDatabase(repo)\n \n // Wait for daemon cycle\n time.Sleep(10 * time.Second)\n \n // Verify no new commit\n commits2 := gitCommitCount(repo)\n assert.Equal(t, commits1, commits2)\n}\n```\n\n## Test Infrastructure\n\n**File**: test/integration/helpers.go\n\n```go\nfunc setupTestRepo(t *testing.T, name string) string {\n dir := t.TempDir()\n run(dir, \"git\", \"init\")\n run(dir, \"bd\", \"init\")\n return dir\n}\n\nfunc cloneRepo(t *testing.T, source, name string) string {\n dir := t.TempDir()\n run(dir, \"git\", \"clone\", source, name)\n return filepath.Join(dir, name)\n}\n\nfunc bd(repo string, args ...string) {\n cmd := exec.Command(\"bd\", args...)\n cmd.Dir = repo\n output, err := cmd.CombinedOutput()\n if err != nil {\n panic(fmt.Sprintf(\"bd command failed: %v\\n%s\", err, output))\n }\n}\n\nfunc listIssues(repo string) []*types.Issue {\n cmd := exec.Command(\"bd\", \"list\", \"--json\")\n cmd.Dir = repo\n output, _ := cmd.Output()\n \n var issues []*types.Issue\n json.Unmarshal(output, \u0026issues)\n return issues\n}\n```\n\n## CI Integration\n\n**File**: .github/workflows/multi-clone-tests.yml\n\n```yaml\nname: Multi-Clone Integration Tests\n\non: [push, pull_request]\n\njobs:\n test:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v3\n - uses: actions/setup-go@v4\n with:\n go-version: '1.21'\n \n - name: Run multi-clone tests\n run: |\n go test -v ./test/integration/... -tags=integration\n \n - name: Run stress tests\n run: |\n go test -v ./test/stress/... 
-race -timeout=30m\n```\n\n## Success Criteria\n\n- All 10 test scenarios pass\n- Tests run in CI on every commit\n- Zero flaky tests\n- Coverage \u003e80% for sync-related code\n- Stress tests validate 100+ concurrent operations\n\n## Estimated Effort\n\n8-10 hours\n\n## Priority\n\nP1 - Required to validate bd-160 fixes","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-26T19:58:40.735773-07:00","updated_at":"2025-10-26T19:58:40.735773-07:00","dependencies":[{"issue_id":"bd-169","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:58:40.736466-07:00","created_by":"daemon"}]} +{"id":"bd-160","title":"Add database schema versioning","description":"Store beads version in SQLite database for version compatibility checking.\n\nImplementation:\n- Add metadata table with schema_version field (or use PRAGMA user_version)\n- Set on database creation (bd init)\n- Daemon validates on startup: schema version matches daemon version\n- Fail with clear error if mismatch: \"Database schema v0.17.5 but daemon is v0.18.0\"\n- Provide migration guidance in error message\n\nSchema version format:\n- Use semver (0.17.5)\n- Store in metadata table: CREATE TABLE metadata (key TEXT PRIMARY KEY, value TEXT)\n- Alternative: PRAGMA user_version (integer only)\n\nBenefits:\n- Detect version mismatches before corruption\n- Enable auto-migration in future\n- Clear error messages for users","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T18:06:07.568054-07:00","updated_at":"2025-10-26T18:27:48.294001-07:00","closed_at":"2025-10-26T18:27:48.294001-07:00","dependencies":[{"issue_id":"bd-160","depends_on_id":"bd-159","type":"parent-child","created_at":"2025-10-26T18:06:07.569191-07:00","created_by":"daemon"}]} +{"id":"bd-161","title":"Stricter daemon lock file validation","description":"Enhance daemon.lock to include database path and version, validate on client connection.\n\nCurrent: daemon.lock has PID\nProposed: JSON format with 
database path and version\n\nLock file format:\n{\n \"pid\": 12345,\n \"database\": \"/full/path/to/.beads/beads.db\",\n \"version\": \"0.17.5\",\n \"started_at\": \"2025-10-26T18:00:00Z\"\n}\n\nImplementation:\n- Daemon writes enhanced lock on startup\n- Client reads lock and validates:\n - Database path matches expected\n - Version compatible\n - Fail hard (not just warn) on mismatch\n- Update existing lock validation code (already partially implemented)\n\nBenefits:\n- Catch daemon/database mismatches early\n- Better error messages\n- More robust multi-workspace scenarios","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-26T18:06:07.570081-07:00","updated_at":"2025-10-26T18:36:09.975648-07:00","closed_at":"2025-10-26T18:36:09.975648-07:00","dependencies":[{"issue_id":"bd-161","depends_on_id":"bd-159","type":"parent-child","created_at":"2025-10-26T18:06:07.570687-07:00","created_by":"daemon"}]} +{"id":"bd-162","title":"Enforce canonical database name (beads.db)","description":"Always use beads.db as the canonical database name. 
Never auto-detect from multiple .db files.\n\nImplementation:\n- bd init always creates/uses beads.db\n- bd init detects and migrates old databases (vc.db → beads.db, bd.db → beads.db)\n- Daemon refuses to start if multiple .db files exist in .beads/ (exit with ambiguity error)\n- Update database discovery logic to prefer beads.db, error on ambiguity\n\nBenefits:\n- Prevents accidental use of stale databases\n- Clear single source of truth\n- Migration path for existing users","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T18:06:07.571076-07:00","updated_at":"2025-10-26T18:16:56.23769-07:00","closed_at":"2025-10-26T18:16:56.23769-07:00"} +{"id":"bd-163","title":"Add .beads/config.json for database path configuration","description":"Create config file to eliminate ambiguity about which database is active.\n\nConfig file format (.beads/config.json):\n{\n \"database\": \"beads.db\",\n \"version\": \"0.17.5\",\n \"jsonl_export\": \"beads.jsonl\" // Allow user to rename\n}\n\nImplementation:\n- bd init creates config.json\n- Daemon and clients read config first (single source of truth)\n- Fall back to beads.db if config missing (backward compat)\n- bd init --jsonl-name allows customizing export filename\n- Gitignore: do NOT ignore config.json (part of repo state)\n\nBenefits:\n- Explicit configuration over convention\n- Allows JSONL renaming for git history hygiene\n- Single source of truth for file paths","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-26T18:06:07.571909-07:00","updated_at":"2025-10-26T18:44:16.133085-07:00","closed_at":"2025-10-26T18:44:16.133085-07:00","dependencies":[{"issue_id":"bd-163","depends_on_id":"bd-159","type":"parent-child","created_at":"2025-10-26T18:06:07.572636-07:00","created_by":"daemon"}]} +{"id":"bd-164","title":"Add migration tooling for database upgrades","description":"Create bd migrate command and auto-migration logic for version upgrades.\n\nImplementation:\n- bd migrate 
command (or bd init --migrate)\n- Auto-run on first command after daemon version upgrade\n- Detection logic:\n - Find all .db files in .beads/\n - Check schema version in each\n - Prompt to migrate/rename/delete\n- Migration operations:\n - Rename old database to beads.db\n - Update schema version metadata\n - Remove stale databases (with confirmation)\n- Could be part of daemon auto-start logic\n\nUser experience:\n$ bd ready\nDatabase schema mismatch detected.\n Found: vc.db (schema v0.16.0)\n Expected: beads.db (schema v0.17.5)\n \nRun 'bd migrate' to migrate automatically.\n\nBenefits:\n- Smooth upgrade path\n- Prevents confusion on version changes\n- Clean up stale databases\n\nDepends on:\n- Canonical naming (bd-162)\n- Schema versioning (bd-160)","status":"closed","priority":2,"issue_type":"task","created_at":"2025-10-26T18:06:07.571855-07:00","updated_at":"2025-10-26T19:04:02.023089-07:00","closed_at":"2025-10-26T19:04:02.023089-07:00","dependencies":[{"issue_id":"bd-164","depends_on_id":"bd-159","type":"parent-child","created_at":"2025-10-26T18:06:07.573546-07:00","created_by":"daemon"},{"issue_id":"bd-164","depends_on_id":"bd-162","type":"blocks","created_at":"2025-10-26T18:06:17.327717-07:00","created_by":"daemon"},{"issue_id":"bd-164","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T18:06:17.351768-07:00","created_by":"daemon"}]}
+{"id":"bd-165","title":"Enforce canonical database name (beads.db)","description":"Always use beads.db as the canonical database name. 
Never auto-detect from multiple .db files.\n\nImplementation:\n- bd init always creates/uses beads.db\n- bd init detects and migrates old databases (vc.db → beads.db, bd.db → beads.db)\n- Daemon refuses to start if multiple .db files exist in .beads/ (exit with ambiguity error)\n- Update database discovery logic to prefer beads.db, error on ambiguity\n\nBenefits:\n- Prevents accidental use of stale databases\n- Clear single source of truth\n- Migration path for existing users","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-26T18:06:18.33827-07:00","updated_at":"2025-10-26T18:10:34.132537-07:00","closed_at":"2025-10-26T18:10:34.132537-07:00","dependencies":[{"issue_id":"bd-165","depends_on_id":"bd-159","type":"parent-child","created_at":"2025-10-26T18:06:18.339465-07:00","created_by":"daemon"}]} +{"id":"bd-166","title":"bd import/sync created 173 duplicate issues with wrong prefix","description":"## What Happened\nDuring corruption recovery investigation (beads-173), discovered the database contained 338 issues instead of expected 165:\n- 165 issues with correct `bd-` prefix \n- 173 duplicate issues with `beads-` prefix (bd-1 → beads-1, etc.)\n- Database config was set to `beads` prefix instead of `bd`\n\n## Root Cause\nSome bd operation (likely import or sync) created duplicate issues with the wrong prefix. The database should have rejected or warned about prefix mismatch, but instead:\n1. Silently created duplicates with wrong prefix\n2. 
Changed database prefix config from `bd` to `beads`\n\n## Impact\n- **Data integrity violation**: Duplicate issues with different IDs\n- **Silent corruption**: No error or warning during creation \n- **Wrong prefix**: Database config changed without user consent\n- **Confusion**: Users see double the issues, dependencies broken\n\n## Recovery \nHad to manually fix the `issue_prefix` config key (not `prefix` as initially thought):\n```bash\nsqlite3 .beads/beads.db \"UPDATE config SET value = 'bd' WHERE key = 'issue_prefix';\"\nsqlite3 .beads/beads.db \"DELETE FROM issues WHERE id LIKE 'beads-%';\"\n```\n\n## What Should Happen\n1. **Reject prefix mismatch**: If importing issues with different prefix than configured, error or require `--rename-on-import`\n2. **Never auto-change prefix**: Database prefix should only change via explicit `bd rename-prefix` command \n3. **Validate on import**: Check imported issue IDs match configured prefix before creating\n4. **Warn on duplicate IDs**: Even with different prefixes, detect potential duplicates\n\n## Related\n- Discovered during beads-173 (database corruption investigation)\n- Similar to existing prefix validation in `bd sync` (bd-21)","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-26T21:38:39.096165-07:00","updated_at":"2025-10-26T21:38:55.013079-07:00"} {"id":"bd-17","title":"Update EXTENDING.md with UnderlyingDB() usage and best practices","description":"EXTENDING.md currently shows how to use direct sql.Open() to access the database, but doesn't mention the new UnderlyingDB() method that's the recommended way for extensions.\n\n**Update needed:**\n1. Add section showing UnderlyingDB() usage:\n ```go\n store, err := beads.NewSQLiteStorage(dbPath)\n db := store.UnderlyingDB()\n // Create extension tables using db\n ```\n\n2. 
Document when to use UnderlyingDB() vs direct sql.Open():\n - Use UnderlyingDB() when you want to share the storage connection\n - Use sql.Open() when you need independent connection management\n\n3. Add safety warnings (cross-reference from UnderlyingDB() docs):\n - Don't close the DB\n - Don't modify pool settings\n - Keep transactions short\n\n4. Update the VC example to show UnderlyingDB() pattern\n\n5. Explain beads.Storage.UnderlyingDB() in the API section","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-22T17:07:56.820056-07:00","updated_at":"2025-10-25T23:15:33.478579-07:00","closed_at":"2025-10-22T19:41:19.895847-07:00","dependencies":[{"issue_id":"bd-17","depends_on_id":"bd-10","type":"discovered-from","created_at":"2025-10-24T13:17:40.32522-07:00","created_by":"renumber"}]} -{"id":"bd-170","title":"Documentation for multi-clone workflows and conflict resolution","description":"## Problem\n\nMulti-clone sync is now fixed (bd-160), but users need documentation on:\n- How to set up multi-clone workflows\n- What to do when conflicts occur\n- Best practices for team collaboration\n- Recovery procedures for divergence\n\n## Documentation Sections\n\n### 1. Multi-Clone Setup Guide\n\n**File**: docs/multi-clone-setup.md\n\n```markdown\n# Multi-Clone Workflow Setup\n\n## Prerequisites\n\n- Git repository with remote configured\n- Beads v0.x.x or later (includes bd-160 fixes)\n\n## Setup Steps\n\n### 1. Initialize First Clone\n\n```bash\ncd ~/projects/myapp\ngit init\nbd init\nbd create \"First issue\"\nbd sync\ngit remote add origin git@github.com:user/myapp.git\ngit push -u origin main\n```\n\n### 2. Clone to Another Location\n\n```bash\ncd ~/work/myapp\ngit clone git@github.com:user/myapp.git\ncd myapp\nbd init # Initializes database from JSONL\n```\n\n### 3. 
Start Daemons (Optional)\n\nFor automatic sync:\n\n```bash\n# In first clone\ncd ~/projects/myapp\nbd daemon --auto-commit --auto-push\n\n# In second clone\ncd ~/work/myapp\nbd daemon --auto-commit --auto-push\n```\n\n## Verification\n\nCheck that both clones have the same issues:\n\n```bash\ncd ~/projects/myapp \u0026\u0026 bd list\ncd ~/work/myapp \u0026\u0026 bd list\n# Should show identical output\n```\n\n## Troubleshooting\n\nIf clones diverge:\n1. Stop daemons: `bd daemon --stop`\n2. Force sync: `bd sync --force`\n3. Check diff: `git diff origin/main .beads/beads.jsonl`\n```\n\n### 2. Conflict Resolution Guide\n\n**File**: docs/conflict-resolution.md\n\n```markdown\n# Handling Sync Conflicts\n\n## Types of Conflicts\n\n### Auto-Mergeable (No Action Needed)\n\nDifferent fields modified:\n```\nClone A: Update title\nClone B: Update description\nResult: Both changes merged automatically\n```\n\n### Same-Field Conflict (Newest Wins)\n\nBoth modify the same field:\n```\nClone A: Update title to \"A\" at 14:00\nClone B: Update title to \"B\" at 14:05\nResult: \"B\" wins (newer timestamp)\n```\n\n### Interactive Conflict\n\nWhen using manual sync (`bd sync`):\n```\n=== MERGE CONFLICT ===\nIssue: bd-42\nField: title\n Local: \"Fix the bug\"\n Remote: \"Resolve the issue\"\n\nChoose resolution:\n 1) Use local\n 2) Use remote\n 3) Edit manually\nChoice: _\n```\n\n## Manual Resolution\n\n1. Stop daemon: `bd daemon --stop`\n2. Sync with interactive mode: `bd sync`\n3. Resolve conflicts as prompted\n4. Restart daemon: `bd daemon --auto-commit --auto-push`\n\n## Viewing Conflict History\n\n```bash\n# Show version history\nbd show bd-42 --versions\n\n# Show who modified\nbd show bd-42 --json | jq '.modified_by, .version'\n```\n```\n\n### 3. 
Best Practices\n\n**File**: docs/best-practices.md\n\n```markdown\n# Multi-Clone Best Practices\n\n## Do's\n\nāœ… **Always sync before starting work**\n```bash\nbd sync # Pull latest changes\n```\n\nāœ… **Use descriptive commit messages**\n```bash\nbd sync -m \"Close bd-123: Fix memory leak\"\n```\n\nāœ… **Enable daemon for automatic sync**\n```bash\nbd daemon --auto-commit --auto-push --interval 30s\n```\n\nāœ… **Use atomic operations**\n```bash\n# Good: One logical change\nbd update bd-42 --title \"New title\" --description \"New desc\"\n\n# Avoid: Multiple separate updates\nbd update bd-42 --title \"New title\"\nbd update bd-42 --description \"New desc\" # Creates conflict window\n```\n\n## Don'ts\n\nāŒ **Don't copy .beads directories between repos**\n```bash\n# WRONG - causes database mismatch errors\ncp -r ~/project1/.beads ~/project2/\n```\n\nāŒ **Don't manually edit JSONL files**\n```bash\n# WRONG - use bd commands instead\nvim .beads/beads.jsonl\n```\n\nāŒ **Don't use different database names**\n```bash\n# WRONG - all clones must use beads.db\nmv .beads/beads.db .beads/custom.db\n```\n\nāŒ **Don't force push**\n```bash\n# WRONG - can lose other clones' changes\ngit push --force\n```\n\n## Team Workflows\n\n### Option 1: Daemon Auto-Sync (Recommended)\n\nEveryone runs daemon with auto-commit/auto-push. Changes sync automatically within seconds.\n\n**Pros**: Zero manual intervention, always in sync\n**Cons**: Requires stable internet, more git noise\n\n### Option 2: Manual Sync\n\nEveryone runs `bd sync` before/after work sessions.\n\n**Pros**: Fewer git commits, works offline\n**Cons**: Must remember to sync, conflicts more likely\n\n### Option 3: Hybrid\n\nDaemon without auto-push, manual push at milestones.\n\n**Pros**: Auto-import, controlled pushes\n**Cons**: Requires both daemon and manual sync\n```\n\n### 4. 
Recovery Procedures\n\n**File**: docs/recovery.md\n\n```markdown\n# Recovery from Sync Issues\n\n## Scenario 1: Database Diverged (Different Issue Counts)\n\n```bash\n# Clone A: 100 issues\n# Clone B: 95 issues\n\n# Recovery:\ncd clone-a\nbd export -o /tmp/clone-a.jsonl\n\ncd clone-b\nbd export -o /tmp/clone-b.jsonl\n\n# Compare\ndiff \u003c(jq -r .id /tmp/clone-a.jsonl | sort) \\\n \u003c(jq -r .id /tmp/clone-b.jsonl | sort)\n\n# Import missing issues\ncd clone-b\nbd import -i /tmp/clone-a.jsonl --resolve-collisions\nbd sync\n```\n\n## Scenario 2: Database Corruption\n\n```bash\n# Rebuild from JSONL\nrm .beads/beads.db\nbd init # Recreates database from JSONL\nbd list # Verify issues restored\n```\n\n## Scenario 3: Accidental Force Push\n\n```bash\n# Restore from reflog\ngit reflog\ngit reset --hard HEAD@{1}\nbd import # Re-import from restored JSONL\n```\n\n## Scenario 4: Merge Conflict in JSONL\n\n```bash\n# After git pull conflict\ngit status # Shows .beads/beads.jsonl as conflicted\n\n# Resolve by importing both sides\ngit show :2:.beads/beads.jsonl \u003e /tmp/ours.jsonl\ngit show :3:.beads/beads.jsonl \u003e /tmp/theirs.jsonl\n\n# Import both (collision resolution will merge)\nbd import -i /tmp/ours.jsonl --resolve-collisions\nbd import -i /tmp/theirs.jsonl --resolve-collisions\n\n# Export merged state\nbd export -o .beads/beads.jsonl\ngit add .beads/beads.jsonl\ngit commit -m \"Resolve JSONL merge conflict\"\n```\n```\n\n### 5. 
Architecture Documentation\n\n**File**: docs/architecture/sync.md\n\n```markdown\n# Sync Architecture\n\n## Components\n\n```\nā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”\n│ Git JSONL │ ← Source of truth\nā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜\n ↕\nā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”\n│ bd sync │ ← Orchestrator\nā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜\n ↕\nā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”\n│ Database │ ← Local cache\nā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜\n ↕\nā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”\n│ Daemon │ ← Auto-sync (optional)\nā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜\n```\n\n## Sync Flow\n\n1. **Export**: DB → JSONL\n2. **Commit**: JSONL → Git\n3. **Pull**: Git remote → Local\n4. **Import**: JSONL → DB (with conflict resolution)\n5. **Push**: Local Git → Remote\n\n## Conflict Resolution\n\nUses three-way merge:\n- **Base**: Last common version (from issue_versions table)\n- **Local**: Current database state\n- **Remote**: Incoming JSONL state\n\nAuto-merge strategy:\n- Different fields changed → Merge both\n- Same field changed → Newest wins (by modified_at)\n- Unresolvable → Interactive prompt (manual sync only)\n\n## Version Tracking\n\nEach issue has:\n- `version`: Incremented on each update\n- `modified_by`: Clone ID that made change\n- `modified_at`: Timestamp of change\n\nEnables:\n- Conflict detection\n- History tracking\n- LWW resolution\n```\n\n## Update Existing Docs\n\n**Files to update**:\n- README.md - Add multi-clone section\n- docs/commands.md - Document sync flags\n- docs/daemon.md - Update with conflict resolution\n- TROUBLESHOOTING.md - Add sync issues section\n\n## Examples Repository\n\n**File**: examples/multi-clone-demo.sh\n\n```bash\n#!/bin/bash\n# Multi-clone demonstration script\n\nset -e\n\necho \"=== Multi-Clone Sync Demo ===\"\n\n# Setup\nmkdir -p /tmp/bd-demo \u0026\u0026 cd /tmp/bd-demo\nrm -rf clone1 clone2\n\n# Create first clone\nmkdir clone1 \u0026\u0026 cd clone1\ngit 
init\nbd init\nbd create \"First issue\"\nbd sync\ncd ..\n\n# Create second clone\ngit clone clone1 clone2\ncd clone2\nbd init\nbd list # Shows first issue\n\n# Make concurrent changes\ncd ../clone1\nbd create \"Issue from clone1\"\nbd sync\n\ncd ../clone2\nbd create \"Issue from clone2\"\nbd sync\n\n# Sync both\ncd ../clone1\nbd sync\n\n# Verify convergence\necho \"Clone 1 issues:\"\nbd list\n\ncd ../clone2\necho \"Clone 2 issues:\"\nbd list\n\necho \"āœ“ Demo complete - both clones have 3 issues\"\n```\n\n## Success Criteria\n\n- All documentation sections complete\n- Examples tested and working\n- Troubleshooting covers common scenarios\n- Clear diagrams for sync flow\n- Published to docs site\n\n## Estimated Effort\n\n4-6 hours\n\n## Priority\n\nP1 - Critical for user adoption of multi-clone workflows","status":"open","priority":1,"issue_type":"task","created_at":"2025-10-26T19:59:35.553665-07:00","updated_at":"2025-10-26T19:59:35.553665-07:00","dependencies":[{"issue_id":"bd-170","depends_on_id":"bd-160","type":"blocks","created_at":"2025-10-26T19:59:35.554368-07:00","created_by":"daemon"}]} -{"id":"bd-171","title":"Test issue for hash check","description":"","status":"open","priority":3,"issue_type":"task","created_at":"2025-10-26T20:26:00.708099-07:00","updated_at":"2025-10-26T20:26:20.511674-07:00"} {"id":"bd-18","title":"Consider adding UnderlyingConn(ctx) for safer scoped DB access","description":"Currently UnderlyingDB() returns *sql.DB which is correct for most uses, but for extension migrations/DDL, a scoped connection might be safer.\n\n**Proposal:** Add optional UnderlyingConn(ctx) (*sql.Conn, error) method that:\n- Returns a scoped connection via s.db.Conn(ctx)\n- Encourages lifetime-bounded usage\n- Reduces temptation to tune global pool settings\n- Better for one-time DDL operations like CREATE TABLE\n\n**Implementation:**\n```go\n// UnderlyingConn returns a single connection from the pool for scoped use\n// Useful for migrations and DDL. 
Close the connection when done.\nfunc (s *SQLiteStorage) UnderlyingConn(ctx context.Context) (*sql.Conn, error) {\n    return s.db.Conn(ctx)\n}\n```\n\n**Benefits:**\n- Safer for migrations (explicit scope)\n- Complements UnderlyingDB() for different use cases\n- Low implementation cost\n\n**Trade-off:** Adds another method to maintain, but Oracle considers this a balanced compromise between safety and flexibility.\n\n**Decision:** This is optional - evaluate based on VC's actual usage patterns.","status":"closed","priority":3,"issue_type":"feature","created_at":"2025-10-22T17:07:56.832638-07:00","updated_at":"2025-10-25T23:15:33.479496-07:00","closed_at":"2025-10-22T22:02:18.479512-07:00","dependencies":[{"issue_id":"bd-18","depends_on_id":"bd-10","type":"related","created_at":"2025-10-24T13:17:40.325463-07:00","created_by":"renumber"}]}
{"id":"bd-19","title":"MCP close tool method signature error - takes 1 positional argument but 2 were given","description":"The close approval routing fix in beads-mcp v0.11.0 works correctly and successfully routes update(status=\"closed\") calls to close() tool. However, the close() tool has a Python method signature bug that prevents execution.\n\nImpact: All MCP-based close operations are broken. Workaround: Use bd CLI directly.\n\nError: BdDaemonClient.close() takes 1 positional argument but 2 were given\n\nRoot cause: BdDaemonClient.close() only accepts self, but MCP tool passes issue_id and reason.\n\nAdditional issue: CLI close has FOREIGN KEY constraint error when recording reason parameter.\n\nSee GitHub issue #107 for full details.","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-22T17:25:34.67056-07:00","updated_at":"2025-10-25T23:15:33.480292-07:00","closed_at":"2025-10-22T17:36:55.463445-07:00"}
{"id":"bd-2","title":"Improve error handling in dependency removal during remapping","description":"In updateDependencyReferences(), RemoveDependency errors are caught and ignored with continue (line 392). 
Comment says 'if dependency doesn't exist' but this catches ALL errors including real failures. Should check error type with errors.Is(err, ErrDependencyNotFound) and only ignore not-found errors, returning other errors properly.","status":"closed","priority":3,"issue_type":"bug","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-25T23:15:33.462194-07:00","closed_at":"2025-10-18T09:41:18.209717-07:00"} @@ -168,3 +164,6 @@ {"id":"bd-97","title":"Counter not synced after import on existing DB with populated issue_counters table","description":"The counter sync fix in counter_sync_test.go only syncs during initial migration when issue_counters table is empty (migrateIssueCountersTable checks count==0). For existing databases with stale counters:\n\n- Import doesn't resync the counter\n- Delete doesn't update counter \n- Renumber doesn't fix counter\n- Counter remains stuck at old high value\n\nExample from today:\n- Had 49 issues after clean import\n- Counter stuck at 4106 from previous test pollution\n- Next issue would be bd-4107 instead of bd-12\n- Even after renumber, counter stayed at 4106\n\nRoot cause: Migration only syncs if table is empty (line 182 in sqlite.go). Once populated, never resyncs.\n\nFix needed: \n1. Sync counter after import operations (not just empty table)\n2. Add counter resync after renumber\n3. Daemon caches counter value - needs to reload after external changes\n\nRelated: bd-8 (original counter sync fix), bd-9 (daemon cache staleness)","notes":"## Investigation Results\n\nAfter thorough code review, all the fixes mentioned in the issue description have ALREADY been implemented:\n\n### āœ… Fixes Already in Place:\n\n1. **Import DOES resync counters**\n - `cmd/bd/import_shared.go:253` calls `SyncAllCounters()` after batch import\n - Verified with new test `TestCounterSyncAfterImport`\n\n2. 
**Delete DOES update counters**\n - `internal/storage/sqlite/sqlite.go:1424` calls `SyncAllCounters()` after deletion\n - Both single delete and batch delete sync properly\n - Verified with existing tests: `TestCounterSyncAfterDelete`, `TestCounterSyncAfterBatchDelete`\n\n3. **Renumber DOES fix counters**\n - `cmd/bd/renumber.go:298-304` calls `ResetCounter()` then `SyncAllCounters()`\n - Forces counter to actual max ID (not just MAX with stale value)\n\n4. **Daemon cache DOES detect external changes**\n - `internal/rpc/server.go:1466-1487` checks file mtime and evicts stale cache\n - When DB file changes externally, cached storage is evicted and reopened\n\n### Tests Added:\n\n- `TestCounterSyncAfterImport`: Confirms import syncs counters from stale value (4106) to actual max (49)\n- `TestCounterNotSyncedWithoutExplicitSync`: Documents what would happen without the fix (bd-4107 instead of bd-12)\n\n### Conclusion:\n\nThe issue described in bd-12 has been **fully resolved**. All operations (import, delete, renumber) now properly sync counters. The daemon correctly detects external DB changes via file modification time.\n\nThe root cause (migration only syncing empty tables) was fixed by adding explicit `SyncAllCounters()` calls after import, delete, and renumber operations.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-10-24T13:35:23.110118-07:00","updated_at":"2025-10-25T23:15:33.522231-07:00","closed_at":"2025-10-22T00:03:46.697918-07:00"} {"id":"bd-98","title":"Re-land TestDatabaseReinitialization after fixing Windows/Nix issues","description":"TestDatabaseReinitialization test was reverted due to CI failures:\n- Windows: JSON parse errors, missing files \n- Nix: git not available in build environment\n\nNeed to fix and re-land:\n1. Make test work on Windows (path separators, file handling)\n2. Skip test in Nix environment or mock git\n3. 
Fix JSON parsing issues","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-24T15:06:27.385396-07:00","updated_at":"2025-10-25T23:15:33.508271-07:00","closed_at":"2025-10-25T17:48:52.214323-07:00"} {"id":"bd-99","title":"Feature: Use external_ref as primary matching key for import updates","description":"Implement external_ref-based matching for imports to enable hybrid workflows with external systems (Jira, GitHub, Linear).\n\n## Problem\nCurrent import collision detection treats any content change as a collision, preventing users from syncing updates from external systems without creating duplicates.\n\n## Solution\nUse external_ref field as primary matching key during imports. When an incoming issue has external_ref set:\n- Search for existing issue with same external_ref\n- If found, UPDATE (not collision)\n- If not found, create new issue\n- Never match local issues (without external_ref)\n\n## Use Cases\n- Jira integration: Import backlog, add local tasks, re-sync updates\n- GitHub integration: Import issues, track with local subtasks, sync status\n- Linear integration: Team coordination with local breakdown\n\n## Reference\nGitHub issue #142: https://github.com/steveyegge/beads/issues/142","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-10-24T22:10:24.862547-07:00","updated_at":"2025-10-25T23:15:33.508456-07:00","closed_at":"2025-10-25T10:17:33.543504-07:00"} +{"id":"beads-2","title":"bd import/sync created 173 duplicate issues with wrong prefix (beads- instead of bd-)","description":"## What Happened\nDuring corruption recovery investigation (beads-173), discovered the database contained 338 issues instead of expected 165:\n- 165 issues with correct `bd-` prefix\n- 173 duplicate issues with `beads-` prefix (bd-1 → beads-1, etc.)\n- Database config was set to `beads-` prefix instead of `bd-`\n\n## Root Cause\nSome bd operation (likely import or sync) created duplicate issues with the wrong prefix. 
The database should have rejected or warned about prefix mismatch, but instead:\n1. Silently created duplicates with wrong prefix\n2. Changed database prefix config from `bd-` to `beads-`\n\n## Impact\n- **Data integrity violation**: Duplicate issues with different IDs\n- **Silent corruption**: No error or warning during creation\n- **Wrong prefix**: Database config changed without user consent\n- **Confusion**: Users see double the issues, dependencies broken\n\n## Recovery\nHad to manually:\n```bash\n# Delete duplicates\nsqlite3 .beads/beads.db \"DELETE FROM dependencies WHERE issue_id LIKE 'beads-%' OR depends_on_id LIKE 'beads-%'; DELETE FROM issues WHERE id LIKE 'beads-%';\"\n\n# Fix prefix config\nsqlite3 .beads/beads.db \"UPDATE config SET value = 'bd-' WHERE key = 'prefix';\"\n```\n\n## What Should Happen\n1. **Reject prefix mismatch**: If importing issues with different prefix than configured, error or require `--rename-on-import`\n2. **Never auto-change prefix**: Database prefix should only change via explicit `bd rename-prefix` command\n3. **Validate on import**: Check imported issue IDs match configured prefix before creating\n4. 
**Warn on duplicate IDs**: Even with different prefixes, detect potential duplicates\n\n## Related\n- Discovered during beads-173 (database corruption investigation)\n- Similar to existing prefix validation in `bd sync` (bd-21)","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-26T21:38:17.31308-07:00","updated_at":"2025-10-26T21:38:17.31308-07:00"} +{"id":"beads-3","title":"bd import/sync created 173 duplicate issues with wrong prefix (beads- instead of bd-)","description":"","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-26T21:38:24.300198-07:00","updated_at":"2025-10-26T21:38:24.300198-07:00"} +{"id":"beads-4","title":"bd import/sync created 173 duplicate issues with wrong prefix","description":"","status":"open","priority":0,"issue_type":"bug","created_at":"2025-10-26T21:38:33.52121-07:00","updated_at":"2025-10-26T21:38:33.52121-07:00"}