Fix bd-pq5k: merge conflicts now prefer closed>open and deletion>modification

CHANGES:
1. Merge logic (internal/merge/merge.go):
   - Added mergeStatus() enforcing closed ALWAYS wins over open
   - Fixed closed_at handling: only set when status='closed'
   - Changed deletion handling: deletion ALWAYS wins over modification

2. Deletion tracking (cmd/bd/snapshot_manager.go):
   - Updated ComputeAcceptedDeletions to accept all merge deletions
   - Removed "unchanged locally" check (deletion wins regardless)

3. FK constraint helper (internal/storage/sqlite/util.go):
   - Added IsForeignKeyConstraintError() for bd-koab
   - Detects FK violations for graceful import handling
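The status and closed_at rules from change 1 can be sketched as a minimal standalone program. This is a sketch, not the actual implementation: `mergeField` is simplified from the helper shown in the diff, `mergeClosedAt` is a hypothetical name for the closed_at normalization (the real code inlines it in `mergeIssue`), and comparing RFC 3339 timestamps lexically is an assumption of this sketch.

```go
package main

import "fmt"

// mergeField is the standard 3-way merge for a string field: if exactly
// one side changed relative to base, that side wins; otherwise keep left.
// (Simplified from the helper in internal/merge/merge.go.)
func mergeField(base, left, right string) string {
	if base == left && base != right {
		return right
	}
	return left
}

// mergeStatus applies RULE 1 from this commit: 'closed' on either side
// always wins over 'open', so a merge can never silently reopen an issue.
func mergeStatus(base, left, right string) string {
	if left == "closed" || right == "closed" {
		return "closed"
	}
	return mergeField(base, left, right)
}

// mergeClosedAt (hypothetical helper) keeps closed_at consistent with the
// merged status, so the merge can never emit the invalid state
// status=open with closed_at set.
func mergeClosedAt(status, leftClosedAt, rightClosedAt string) string {
	if status != "closed" {
		return ""
	}
	// Assumption: RFC 3339 timestamps in the same zone compare lexically.
	if leftClosedAt > rightClosedAt {
		return leftClosedAt
	}
	return rightClosedAt
}

func main() {
	status := mergeStatus("open", "closed", "open")
	fmt.Println(status) // closed: one side closed the issue, that wins
	fmt.Println(mergeClosedAt(status, "2025-11-23T21:34:47Z", ""))
	fmt.Println(mergeStatus("closed", "open", "open")) // open: both sides explicitly reopened
}
```

Note that when base is closed and both sides are open, the standard 3-way merge applies and the issue stays reopened; the closed-wins rule only fires when one side currently says closed.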

TESTS UPDATED:
- TestMergeStatus: comprehensive status merge tests
- TestIsForeignKeyConstraintError: FK constraint detection
- bd-pq5k test: validates no invalid state (status=open with closed_at)
- Deletion tests: reflect new deletion-wins behavior
- All tests pass ✓

This ensures issues never get stuck in invalid states and prevents
the insane situation where issues never die!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Steve Yegge
Date: 2025-11-23 21:42:43 -08:00
Parent: b428254f89
Commit: d4f9a05bb2
8 changed files with 926 additions and 55 deletions


@@ -533,6 +533,7 @@
{"id":"bd-keb","content_hash":"449e24b36012295a897592b083b8d730a93e08f0fb8195ebc5859a6b5263244a","title":"Add database maintenance commands section to QUICKSTART.md","description":"**Problem:**\nUsers don't discover `bd compact` or `bd cleanup` commands until their database grows large. These maintenance commands aren't mentioned in quickstart documentation.\n\nRelated to issue #349 item #4.\n\n**Current state:**\ndocs/QUICKSTART.md ends at line 217 with \"See README.md for full documentation\" but has no mention of maintenance operations.\n\n**Proposed addition:**\nAdd a \"Database Maintenance\" section after line 140 (before \"Advanced: Agent Mail\" section) covering:\n- When database grows (many closed issues)\n- How to view compaction statistics\n- How to compact old issues\n- How to delete closed issues\n- Warning about permanence\n\n**Example content:**\n```markdown\n## Database Maintenance\n\nAs your project accumulates closed issues, the database grows. Use these commands to manage size:\n\n```bash\n# View compaction statistics\nbd compact --stats\n\n# Preview compaction candidates (30+ days closed)\nbd compact --analyze --json\n\n# Apply agent-generated summary\nbd compact --apply --id bd-42 --summary summary.txt\n\n# Immediately delete closed issues (use with caution!)\nbd cleanup --force\n```\n\n**When to compact:**\n- Database file \u003e 10MB with many old closed issues\n- After major project milestones\n- Before archiving a project phase\n```\n\n**Files to modify:**\n- docs/QUICKSTART.md (add section after line 140)","status":"closed","priority":3,"issue_type":"task","created_at":"2025-11-20T20:48:40.488512-05:00","updated_at":"2025-11-23T10:31:59.661347-08:00","closed_at":"2025-11-20T20:59:13.439462-05:00","source_repo":".","labels":["documentation","onboarding"],"comments":[{"id":41,"issue_id":"bd-keb","author":"stevey","text":"Addresses GitHub issue #349 item 4: https://github.com/steveyegge/beads/issues/349\n\nUsers don't discover compact/cleanup 
commands until database grows large. Quickstart should mention maintenance operations.","created_at":"2025-11-22T07:53:00Z"}]}
{"id":"bd-khnb","content_hash":"4f55475e1150be5a2710768a06f3162510814d96ffb2145b0a1693f5972ca5ae","title":"bd migrate --update-repo-id triggers auto-import that resurrects deleted issues","description":"**Bug:** Running `bd migrate --update-repo-id` can resurrect previously deleted issues from git history.\n\n## What Happened\n\nUser deleted 490 closed issues:\n- Deletion committed successfully (06d655a) with JSONL at 48 lines\n- Database had 48 issues after deletion\n- User ran `bd migrate --update-repo-id` to fix legacy database\n- Migration triggered daemon auto-import\n- JSONL had been restored to 538 issues (from commit 6cd3a32 - before deletion)\n- Auto-import loaded the old JSONL over the cleaned database\n- Result: 490 deleted issues resurrected\n\n## Root Cause\n\nThe auto-import logic in `cmd/bd/sync.go:130-136`:\n```go\nif isJSONLNewer(jsonlPath) {\n fmt.Println(\"→ JSONL is newer than database, importing first...\")\n if err := importFromJSONL(ctx, jsonlPath, renameOnImport); err != nil {\n```\n\nThis checks if JSONL mtime is newer than database and auto-imports. The problem:\n1. Git operations (pull, merge, checkout) can restore old JSONL files\n2. Restored file has recent mtime (time of git operation)\n3. Auto-import sees \"newer\" JSONL and imports it\n4. 
Old data overwrites current database state\n\n## Timeline\n\n- 19:59: Commit 6cd3a32 restored JSONL to 538 issues from d99222d\n- 20:22: Commit 3520321 (bd sync)\n- 20:23: Commit 06d655a deleted 490 issues → JSONL now 48 lines\n- 20:23: User ran `bd migrate --update-repo-id`\n- Migration completed, daemon started\n- Daemon saw JSONL (restored earlier to 538) was \"newer\" than database\n- Auto-import resurrected 490 deleted issues\n\n## Impact\n\n- **Critical data loss bug** - user deletions can be undone silently\n- Affects any workflow that uses git branches, merges, or checkouts\n- Auto-import has no safety checks against importing older data\n- Users have no warning that old data will overwrite current state\n\n## Fix Options\n\n1. **Content-based staleness** (not mtime-based)\n - Compare JSONL content hash vs database content hash\n - Only import if content actually changed\n - Prevents re-importing old data with new mtime\n\n2. **Database timestamp check**\n - Store \"last export timestamp\" in database metadata\n - Only import JSONL if it's newer than last export\n - Prevents importing old JSONL after git operations\n\n3. **User confirmation**\n - Before auto-import, show diff of what will change\n - Require confirmation for large changes (\u003e10% issues affected)\n - Safety valve for destructive imports\n\n4. 
**Explicit sync mode**\n - Disable auto-import entirely\n - Require explicit `bd sync` or `bd import` commands\n - Trade convenience for safety\n\n## Recommended Solution\n\nCombination of #1 and #2:\n- Add `last_export_timestamp` to database metadata\n- Check JSONL mtime \u003e last_export_timestamp before importing\n- Add content hash check as additional safety\n- Show warning if importing would delete \u003e10 issues\n\nThis preserves auto-import convenience while preventing data loss.\n\n## Files Involved\n\n- `cmd/bd/sync.go:130-136` - Auto-import logic\n- `cmd/bd/daemon_sync.go` - Daemon export/import cycle\n- `internal/autoimport/autoimport.go` - Staleness detection\n\n## Reproduction Steps\n\n1. Create and delete some issues, commit to git\n2. Checkout an earlier commit (before deletion)\n3. Checkout back to current commit\n4. JSONL file now has recent mtime but old content\n5. Run any bd command that triggers auto-import\n6. Deleted issues are resurrected","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-20T20:44:35.235807-05:00","updated_at":"2025-11-20T21:51:31.806158-05:00","closed_at":"2025-11-20T21:51:31.806158-05:00","source_repo":"."}
{"id":"bd-kla1","content_hash":"825b411d37b412a1ee19e3ebc246b6725aca0f32b83e65c8b4680fa4ef2193ff","title":"Add bd init --contributor wizard","description":"Interactive wizard for OSS contributor setup. Guides user through: fork workflow setup, separate planning repo configuration, auto-detection of fork relationships, examples of common OSS workflows.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-05T18:04:29.958409-08:00","updated_at":"2025-11-05T19:27:33.07529-08:00","closed_at":"2025-11-05T18:53:51.267625-08:00","source_repo":".","dependencies":[{"issue_id":"bd-kla1","depends_on_id":"bd-8rd","type":"parent-child","created_at":"2025-11-05T18:04:39.120064-08:00","created_by":"daemon"}]}
{"id":"bd-koab","content_hash":"1b5e9a3a60f61472698af52ab6b2cbe46c839660900e7da3b562d9c3d5c608f6","title":"Import should continue on FOREIGN KEY constraint violations from deletions","description":"# Problem\n\nWhen importing JSONL after a merge that includes deletions, we may encounter FOREIGN KEY constraint violations if:\n- Issue A was deleted in one branch\n- Issue B (that depends on A) was modified in another branch \n- The merge keeps the deletion of A and the modification of B\n- Import tries to import B with a dependency/reference to deleted A\n\nCurrently import fails completely on such constraint violations, requiring manual intervention.\n\n# Solution\n\nAdd IsForeignKeyConstraintError() helper similar to IsUniqueConstraintError()\n\nUpdate import code to:\n1. Detect FOREIGN KEY constraint violations\n2. Log a warning with the issue ID and constraint\n3. Continue importing remaining issues\n4. Report summary of skipped issues at the end\n\n# Implementation Notes\n\n- Add to internal/storage/sqlite/util.go\n- Pattern: strings.Contains(err.Error(), \"FOREIGN KEY constraint failed\")\n- Update importer to handle these errors gracefully\n- Keep track of skipped issues for summary reporting","notes":"## Progress\n\nAdded IsForeignKeyConstraintError() helper function:\n- Located in internal/storage/sqlite/util.go \n- Detects both uppercase and lowercase variants\n- Full test coverage added to util_test.go\n- Tests pass ✓\n\n## Next Steps\n\nWhen FK constraint error is reproduced:\n1. Update importer.go to use IsForeignKeyConstraintError()\n2. Log warning with issue ID and constraint details\n3. Track skipped issues in Result struct\n4. Continue import instead of failing\n5. Report skipped issues in summary\n\nThe helper is ready to use when you encounter the actual constraint violation.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-11-23T21:37:02.811665-08:00","updated_at":"2025-11-23T21:37:50.739917-08:00","source_repo":"."}
{"id":"bd-ktng","content_hash":"0a09f3e1549a70817f23aa57444811aaf18683ff9336944ff6e8c277ac5684b4","title":"Optimize CLI test suite - eliminate redundant git init calls","description":"Current: Each of 13 CLI tests calls git init (31s total). Solution: Use single test binary built once in init(), skip git operations where possible, or use mock filesystem.","status":"open","priority":2,"issue_type":"task","created_at":"2025-11-04T11:23:13.660276-08:00","updated_at":"2025-11-04T11:23:13.660276-08:00","source_repo":".","dependencies":[{"issue_id":"bd-ktng","depends_on_id":"bd-l5gq","type":"discovered-from","created_at":"2025-11-04T11:23:13.662102-08:00","created_by":"daemon"}]}
{"id":"bd-ky74","content_hash":"d44651203d5d7996a09dbcfbadede992b6364b40ec6c79fa5efe98f0bb26daee","title":"Optimize cmd/bd long-mode tests by switching to in-process testing","description":"The long-mode CLI tests in cmd/bd are slow (1.4-4.4 seconds each) because they spawn full bd processes via exec.Command() for every operation.\n\nCurrent approach:\n- Each runBD() call spawns new bd process via exec.Command(testBD, args...)\n- Each process initializes Go runtime, loads SQLite, parses CLI flags\n- Tests run serially (create → update → show → close)\n- Even with --no-daemon flag, there's significant process spawn overhead\n\nExample timing from test run:\n- TestCLI_PriorityFormats: 2.21s\n- TestCLI_Show: 2.26s\n- TestCLI_Ready: 2.29s\n- TestCLI_Import: 4.42s\n\nOptimization strategy:\n1. Convert most tests to in-process testing (call bd functions directly)\n2. Reuse test databases across related operations instead of fresh init each time\n3. Keep a few exec.Command() tests that batch multiple operations to verify the CLI path works end-to-end\n\nThis should reduce test time from ~40s to ~5s for the affected tests.","design":"**Approach:**\n\n1. **In-process testing (majority of tests):**\n - Call bd command functions directly instead of exec.Command()\n - Create helper that invokes root command with test args\n - Capture stdout/stderr in-process\n - Benefits: ~10-20x faster, better stack traces, no process overhead\n\n2. **Database reuse:**\n - Share test database across related operations in same test\n - Only create fresh DB when isolation needed\n - Use subtests to share setup cost\n\n3. 
**Minimal exec.Command() coverage:**\n - Keep 2-3 tests that use exec.Command() for end-to-end validation\n - Batch multiple operations per test (e.g., TestCLI_EndToEnd runs create+update+close+export)\n - Just validates the binary works when executed normally\n\n**Files to change:**\n- cmd/bd/cli_fast_test.go - convert runBD() helper to in-process\n- cmd/bd/test_helpers_test.go - may need new helpers for in-process execution","acceptance_criteria":"- All tests in cli_fast_test.go still pass\n- Test suite runs in \u0026lt;10s (down from ~40s)\n- At least 1-2 tests still use exec.Command() for end-to-end validation\n- No daemon processes spawned during tests\n- Coverage maintained or improved","notes":"Converted all CLI tests in cli_fast_test.go to use in-process testing via rootCmd.Execute(). Created runBDInProcess() helper that:\n- Calls rootCmd.Execute() directly instead of spawning processes\n- Uses mutex to serialize execution (rootCmd/viper not thread-safe)\n- Properly cleans up global state (store, daemonClient)\n- Returns only stdout to avoid JSON parsing issues with stderr warnings\n\nPerformance results:\n- In-process tests: ~0.6s each (cached even faster)\n- exec.Command tests: ~3.7s each \n- Speedup: ~10x faster\n\nKept TestCLI_EndToEnd() that uses exec.Command for end-to-end validation of the actual binary.","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-08T18:40:27.358821-08:00","updated_at":"2025-11-08T18:47:11.107998-08:00","closed_at":"2025-11-08T18:47:11.107998-08:00","source_repo":"."}
{"id":"bd-l4b6","content_hash":"62f76d6f751783139b97ee4b08e1134f6154d0eb5696e0f78ce258f841c9738e","title":"Add tests for bd init --team wizard","description":"Write integration tests for the team wizard:\n- Test branch detection\n- Test sync branch creation\n- Test protected branch workflow\n- Test auto-sync configuration","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-05T18:58:18.192425-08:00","updated_at":"2025-11-06T20:06:49.22056-08:00","closed_at":"2025-11-06T19:55:39.687439-08:00","source_repo":"."}
@@ -591,7 +592,7 @@
{"id":"bd-pi7u","content_hash":"68afb41a73129782a2f0c22d98a1bb04ecaf1adf059dc183849a0b731e65fd8d","title":"Fix TestAddCommentUpdatesTimestamp timing flake on Windows","description":"The TestAddCommentUpdatesTimestamp test fails on Windows due to insufficient time resolution between creating an issue and adding a comment. The test expects updated_at to be strictly after the original timestamp, but on Windows both operations can complete within the same time unit, causing them to have identical timestamps.","status":"closed","priority":1,"issue_type":"bug","created_at":"2025-11-23T10:56:40.575897-08:00","updated_at":"2025-11-23T10:57:20.956211-08:00","closed_at":"2025-11-23T10:57:20.956211-08:00","source_repo":"."}
{"id":"bd-pmuu","content_hash":"5e55fb75f647ecdcf928497d05c0263a5db7baf1d1d47e8b4074ca02766672ba","title":"Create architecture decision record (ADR)","description":"Document why we chose Agent Mail, alternatives considered, and tradeoffs.\n\nAcceptance Criteria:\n- Problem statement (git traffic, no locks)\n- Alternatives considered (custom RPC, Redis, etc.)\n- Why Agent Mail fits Beads\n- Integration principles (optional, graceful degradation)\n- Future considerations\n\nFile: docs/adr/002-agent-mail-integration.md","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-07T22:42:51.420203-08:00","updated_at":"2025-11-08T00:06:01.816892-08:00","closed_at":"2025-11-08T00:06:01.816892-08:00","source_repo":".","dependencies":[{"issue_id":"bd-pmuu","depends_on_id":"bd-spmx","type":"parent-child","created_at":"2025-11-08T00:02:47.93119-08:00","created_by":"daemon"}]}
{"id":"bd-poh9","content_hash":"a8cb703915a08abe2c2123e49a46d768e5a6aefee0ab7c4d9174749d538cfa56","title":"Complete and commit beads-mcp type safety improvements","description":"Uncommitted type safety work in beads-mcp needs review and completion","status":"closed","priority":1,"issue_type":"task","created_at":"2025-11-20T19:23:32.8516-05:00","updated_at":"2025-11-20T19:27:00.88849-05:00","closed_at":"2025-11-20T19:27:00.88849-05:00","source_repo":"."}
{"id":"bd-pq5k","content_hash":"611f64a2ebf583b58753d86fbf8bc26cffe33beb20be511fc744d656b77f5d58","title":"CRITICAL: Merge conflicts must prefer 'closed' status over 'open'","description":"# Problem\n\nWhen merging JSONL files (e.g., during git pull), if an issue has:\n- Local: status=closed with closed_at timestamp\n- Remote: status=open without closed_at\n\nThe merge can produce INVALID data:\n- status=open BUT closed_at is set\n- This violates DB constraint: (status = 'closed') = (closed_at IS NOT NULL)\n\n# The Rule\n\n**CLOSED ALWAYS WINS OVER OPEN IN A MERGE. ALWAYS.**\n\nIf one side has status=closed (with closed_at), that MUST be preserved.\n\n# Why This Matters\n\n- Prevents import errors after merge\n- Prevents data corruption\n- Prevents lost work (someone closed the issue, that matters!)\n- Closing an issue is a definitive action; reopening needs to be explicit\n\n# Where to Fix\n\nNeed to implement custom merge driver or post-merge hook that:\n1. Detects conflicts where status differs\n2. If one side is 'closed' with closed_at, prefer that version\n3. Ensure closed_at is removed if status=open\n\n# Current Workaround\n\nManual fix required:\n- Find invalid records (status=open with closed_at)\n- Remove closed_at field from JSONL\n- Re-import\n\n# Acceptance Criteria\n\n- [ ] Merge conflicts between open/closed always resolve to closed\n- [ ] Test case: merge where local=closed, remote=open → result=closed\n- [ ] Test case: merge where local=open, remote=closed → result=closed\n- [ ] Documentation on merge conflict resolution rules","status":"open","priority":0,"issue_type":"bug","created_at":"2025-11-23T21:30:33.879796-08:00","updated_at":"2025-11-23T21:30:33.879796-08:00","source_repo":"."}
{"id":"bd-pq5k","content_hash":"972c5b6d564088ce0fc6c86f77802f74b4d121104c95565508150ec3f09a7308","title":"CRITICAL: Merge conflicts must prefer 'closed' status over 'open'","description":"# Problem\n\nWhen merging JSONL files (e.g., during git pull), if an issue has:\n- Local: status=closed with closed_at timestamp\n- Remote: status=open without closed_at\n\nThe merge can produce INVALID data:\n- status=open BUT closed_at is set\n- This violates DB constraint: (status = 'closed') = (closed_at IS NOT NULL)\n\n# The Rule\n\n**CLOSED ALWAYS WINS OVER OPEN IN A MERGE. ALWAYS.**\n\nIf one side has status=closed (with closed_at), that MUST be preserved.\n\n# Why This Matters\n\n- Prevents import errors after merge\n- Prevents data corruption\n- Prevents lost work (someone closed the issue, that matters!)\n- Closing an issue is a definitive action; reopening needs to be explicit\n\n# Where to Fix\n\nNeed to implement custom merge driver or post-merge hook that:\n1. Detects conflicts where status differs\n2. If one side is 'closed' with closed_at, prefer that version\n3. Ensure closed_at is removed if status=open\n\n# Current Workaround\n\nManual fix required:\n- Find invalid records (status=open with closed_at)\n- Remove closed_at field from JSONL\n- Re-import\n\n# Acceptance Criteria\n\n- [ ] Merge conflicts between open/closed always resolve to closed\n- [ ] Test case: merge where local=closed, remote=open → result=closed\n- [ ] Test case: merge where local=open, remote=closed → result=closed\n- [ ] Documentation on merge conflict resolution rules","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-11-23T21:30:33.879796-08:00","updated_at":"2025-11-23T21:34:47.316504-08:00","closed_at":"2025-11-23T21:34:47.316504-08:00","source_repo":"."}
{"id":"bd-q13h","content_hash":"ad443aa59b317e5900e1c0e3a083d693c699c44f8582a6bfdf6c0d93f909e40c","title":"Makefile Integration for Benchmarks","description":"Add a single 'bench' target to the Makefile for running performance benchmarks.\n\nTarget:\n.PHONY: bench\n\nbench:\n\tgo test -bench=. -benchtime=3s -tags=bench \\\n\t\t-cpuprofile=cpu.prof -memprofile=mem.prof \\\n\t\t./internal/storage/sqlite/\n\t@echo \"\"\n\t@echo \"Profiles generated. View flamegraph:\"\n\t@echo \" go tool pprof -http=:8080 cpu.prof\"\n\nFeatures:\n- Single simple target, no complexity\n- Always generates CPU and memory profiles\n- Clear output on how to view results\n- 3 second benchmark time for reliable results\n- Uses bench build tag for heavy benchmarks\n\nUsage:\n make bench # Run all benchmarks\n go test -bench=BenchmarkGetReadyWork... # Run specific benchmark","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-13T22:23:25.922916-08:00","updated_at":"2025-11-14T08:55:17.620824-08:00","closed_at":"2025-11-14T08:55:17.620824-08:00","source_repo":".","dependencies":[{"issue_id":"bd-q13h","depends_on_id":"bd-zj8e","type":"blocks","created_at":"2025-11-13T22:24:06.371947-08:00","created_by":"daemon"}]}
{"id":"bd-q2ri","content_hash":"472cf1c393423f4ec4a4e74a971be0f44fd4b8186ea276860fe0947d031e3eb1","title":"bd-hv01: Add comprehensive edge case tests for deletion tracking","description":"Need to add tests for: corrupted snapshot file, stale snapshot (\u003e 1 hour), concurrent sync operations (daemon + manual), partial deletion failure, empty remote JSONL, multi-repo mode with deletions, git worktree scenario.\n\nAlso refine TestDeletionWithLocalModification to check for specific conflict error instead of accepting any error.\n\nFiles: cmd/bd/deletion_tracking_test.go","status":"closed","priority":2,"issue_type":"task","created_at":"2025-11-06T18:16:26.849881-08:00","updated_at":"2025-11-06T20:06:49.221043-08:00","closed_at":"2025-11-06T19:55:39.700695-08:00","source_repo":".","dependencies":[{"issue_id":"bd-q2ri","depends_on_id":"bd-rbxi","type":"parent-child","created_at":"2025-11-06T18:19:15.104113-08:00","created_by":"daemon"}]}
{"id":"bd-q59i","content_hash":"807970859370452e8892779759b15ba2f52740d8d38ad1c1f8f47a364c898cc3","title":"User Diagnostics (bd doctor --perf)","description":"Extend cmd/bd/doctor.go to add --perf flag for user performance diagnostics.\n\nFunctionality:\n- Add --perf flag to existing bd doctor command\n- Collect system info (OS, arch, Go version, SQLite version)\n- Collect database stats (size, issue counts, dependency counts)\n- Time key operations on user's actual database:\n * bd ready\n * bd list --status=open\n * bd show \u003crandom-issue\u003e\n * bd create (with rollback)\n * Search with filters\n- Generate CPU profile automatically (timestamped filename)\n- Output simple report with platform info, timings, profile location\n\nOutput example:\n Beads Performance Diagnostics\n Platform: darwin/arm64\n Database: 8,234 issues (4,123 open)\n \n Operation Performance:\n bd ready 42ms\n bd list --status=open 15ms\n \n Profile saved: beads-perf-2025-11-13.prof\n View: go tool pprof -http=:8080 beads-perf-2025-11-13.prof\n\nImplementation:\n- Extend cmd/bd/doctor.go (~100 lines)\n- Use runtime/pprof for CPU profiling\n- Use time.Now()/time.Since() for timing\n- Rollback test operations (don't modify user's database)","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-11-13T22:23:11.988562-08:00","updated_at":"2025-11-13T22:45:57.26294-08:00","closed_at":"2025-11-13T22:45:57.26294-08:00","source_repo":".","dependencies":[{"issue_id":"bd-q59i","depends_on_id":"bd-zj8e","type":"blocks","created_at":"2025-11-13T22:24:06.336236-08:00","created_by":"daemon"}]}

.beads/beads.jsonl.bak: new file (683 lines changed)

File diff suppressed because one or more lines are too long


@@ -235,16 +235,17 @@ func TestDeletionWithLocalModification(t *testing.T) {
 t.Fatalf("Failed to simulate remote deletion: %v", err)
 }
-// Try to merge - this should detect a conflict (modified locally, deleted remotely)
+// Try to merge - deletion now wins over modification (bd-pq5k)
+// This should succeed and delete the issue
 _, err = merge3WayAndPruneDeletions(ctx, store, jsonlPath)
-if err == nil {
-t.Error("Expected merge conflict error, but got nil")
+if err != nil {
+t.Errorf("Expected merge to succeed (deletion wins), but got error: %v", err)
 }
-// The issue should still exist in the database (conflict not auto-resolved)
+// The issue should be deleted (deletion wins over modification)
 conflictIssue, err := store.GetIssue(ctx, "bd-conflict")
-if err != nil || conflictIssue == nil {
-t.Error("Issue should still exist after conflict")
+if err == nil && conflictIssue != nil {
+t.Error("Issue should be deleted after merge (deletion wins)")
 }
 }
@@ -295,12 +296,13 @@ func TestComputeAcceptedDeletions(t *testing.T) {
 }
 }
-// TestComputeAcceptedDeletions_LocallyModified tests that locally modified issues are not deleted
+// TestComputeAcceptedDeletions_LocallyModified tests that deletion wins even for locally modified issues (bd-pq5k)
 func TestComputeAcceptedDeletions_LocallyModified(t *testing.T) {
 dir := t.TempDir()
-basePath := filepath.Join(dir, "base.jsonl")
-leftPath := filepath.Join(dir, "left.jsonl")
+jsonlPath := filepath.Join(dir, "issues.jsonl")
+sm := NewSnapshotManager(jsonlPath)
+basePath, leftPath := sm.GetSnapshotPaths()
 mergedPath := filepath.Join(dir, "merged.jsonl")
@@ -313,7 +315,7 @@ func TestComputeAcceptedDeletions_LocallyModified(t *testing.T) {
 {"id":"bd-2","title":"Modified locally"}
 `
-// Merged has only bd-1 (bd-2 deleted remotely, but we modified it locally)
+// Merged has only bd-1 (bd-2 deleted remotely, we modified it locally, but deletion wins per bd-pq5k)
 mergedContent := `{"id":"bd-1","title":"Original 1"}
 `
@@ -327,16 +329,17 @@ func TestComputeAcceptedDeletions_LocallyModified(t *testing.T) {
 t.Fatalf("Failed to write merged: %v", err)
 }
-jsonlPath := filepath.Join(dir, "issues.jsonl")
-sm := NewSnapshotManager(jsonlPath)
 deletions, err := sm.ComputeAcceptedDeletions(mergedPath)
 if err != nil {
 t.Fatalf("Failed to compute deletions: %v", err)
 }
-// bd-2 should NOT be in accepted deletions because it was modified locally
-if len(deletions) != 0 {
-t.Errorf("Expected 0 deletions (locally modified), got %d: %v", len(deletions), deletions)
+// bd-pq5k: bd-2 SHOULD be in accepted deletions even though modified locally (deletion wins)
+if len(deletions) != 1 {
+t.Errorf("Expected 1 deletion (deletion wins over local modification), got %d: %v", len(deletions), deletions)
 }
+if len(deletions) == 1 && deletions[0] != "bd-2" {
+t.Errorf("Expected deletion of bd-2, got %v", deletions)
+}
 }


@@ -232,21 +232,19 @@ func (sm *SnapshotManager) Initialize() error {
 // An issue is an "accepted deletion" if:
 // - It exists in base (last import)
 // - It does NOT exist in merged (after 3-way merge)
-// - It is unchanged in left (pre-pull export) compared to base
 //
+// Note (bd-pq5k): Deletion always wins over modification in the merge,
+// so if an issue is deleted in the merged result, we accept it regardless
+// of local changes.
 func (sm *SnapshotManager) ComputeAcceptedDeletions(mergedPath string) ([]string, error) {
-basePath, leftPath := sm.getSnapshotPaths()
+basePath, _ := sm.getSnapshotPaths()
-// Build map of ID -> raw line for base and left
+// Build map of ID -> raw line for base
 baseIndex, err := sm.buildIDToLineMap(basePath)
 if err != nil {
 return nil, fmt.Errorf("failed to read base snapshot: %w", err)
 }
-leftIndex, err := sm.buildIDToLineMap(leftPath)
-if err != nil {
-return nil, fmt.Errorf("failed to read left snapshot: %w", err)
-}
 // Build set of IDs in merged result
 mergedIDs, err := sm.buildIDSet(mergedPath)
 if err != nil {
@@ -257,15 +255,15 @@ func (sm *SnapshotManager) ComputeAcceptedDeletions(mergedPath string) ([]string
 // Find accepted deletions
 var deletions []string
-for id, baseLine := range baseIndex {
+for id := range baseIndex {
 // Issue in base but not in merged
 if !mergedIDs[id] {
-// Check if unchanged locally - try raw equality first, then semantic JSON comparison
-if leftLine, existsInLeft := leftIndex[id]; existsInLeft && (leftLine == baseLine || sm.jsonEquals(leftLine, baseLine)) {
+// bd-pq5k: Deletion always wins over modification in 3-way merge
+// If the merge resulted in deletion, accept it regardless of local changes
+// The 3-way merge already determined that deletion should win
 deletions = append(deletions, id)
-}
 }
 }
 sm.stats.DeletionsFound = len(deletions)


@@ -297,22 +297,14 @@ func merge3Way(base, left, right []Issue) ([]Issue, []string) {
 }
 } else if inBase && inLeft && !inRight {
 // Deleted in right, maybe modified in left
-if issuesEqual(baseIssue, leftIssue) {
-// Deleted in right, unchanged in left - accept deletion
+// RULE 2: deletion always wins over modification
+// This is because deletion is an explicit action that should be preserved
 continue
-} else {
-// Modified in left, deleted in right - conflict
-conflicts = append(conflicts, makeConflictWithBase(baseIssue.RawLine, leftIssue.RawLine, ""))
-}
 } else if inBase && !inLeft && inRight {
 // Deleted in left, maybe modified in right
-if issuesEqual(baseIssue, rightIssue) {
-// Deleted in left, unchanged in right - accept deletion
+// RULE 2: deletion always wins over modification
+// This is because deletion is an explicit action that should be preserved
 continue
-} else {
-// Modified in right, deleted in left - conflict
-conflicts = append(conflicts, makeConflictWithBase(baseIssue.RawLine, "", rightIssue.RawLine))
-}
 } else if !inBase && inLeft && !inRight {
 // Added only in left
 result = append(result, leftIssue)
@@ -341,8 +333,8 @@ func mergeIssue(base, left, right Issue) (Issue, string) {
 // Merge notes
 result.Notes = mergeField(base.Notes, left.Notes, right.Notes)
-// Merge status
-result.Status = mergeField(base.Status, left.Status, right.Status)
+// Merge status - SPECIAL RULE: closed always wins over open
+result.Status = mergeStatus(base.Status, left.Status, right.Status)
 // Merge priority (as int)
 if base.Priority == left.Priority && base.Priority != right.Priority {
@@ -362,8 +354,13 @@ func mergeIssue(base, left, right Issue) (Issue, string) {
 // Merge updated_at - take the max
 result.UpdatedAt = maxTime(left.UpdatedAt, right.UpdatedAt)
-// Merge closed_at - take the max
-result.ClosedAt = maxTime(left.ClosedAt, right.ClosedAt)
+// Merge closed_at - only if status is closed
+// This prevents invalid state (status=open with closed_at set)
+if result.Status == "closed" {
+result.ClosedAt = maxTime(left.ClosedAt, right.ClosedAt)
+} else {
+result.ClosedAt = ""
+}
 // Merge dependencies - combine and deduplicate
 result.Dependencies = mergeDependencies(left.Dependencies, right.Dependencies)
@@ -376,6 +373,17 @@ func mergeIssue(base, left, right Issue) (Issue, string) {
 return result, ""
 }
+func mergeStatus(base, left, right string) string {
+// RULE 1: closed always wins over open
+// This prevents the insane situation where issues never die
+if left == "closed" || right == "closed" {
+return "closed"
+}
+// Otherwise use standard 3-way merge
+return mergeField(base, left, right)
+}
 func mergeField(base, left, right string) string {
 if base == left && base != right {
 return right


@@ -7,6 +7,70 @@ import (
 "testing"
 )
+// TestMergeStatus tests the status merging logic with special rules
+func TestMergeStatus(t *testing.T) {
+tests := []struct {
+name string
+base string
+left string
+right string
+expected string
+}{
+{
+name: "no changes",
+base: "open",
+left: "open",
+right: "open",
+expected: "open",
+},
+{
+name: "left closed, right open - closed wins",
+base: "open",
+left: "closed",
+right: "open",
+expected: "closed",
+},
+{
+name: "left open, right closed - closed wins",
+base: "open",
+left: "open",
+right: "closed",
+expected: "closed",
+},
+{
+name: "both closed",
+base: "open",
+left: "closed",
+right: "closed",
+expected: "closed",
+},
+{
+name: "base closed, left open, right open - open (standard merge)",
+base: "closed",
+left: "open",
+right: "open",
+expected: "open",
+},
+{
+name: "base closed, left open, right closed - closed wins",
+base: "closed",
+left: "open",
+right: "closed",
+expected: "closed",
+},
+}
+for _, tt := range tests {
+t.Run(tt.name, func(t *testing.T) {
+result := mergeStatus(tt.base, tt.left, tt.right)
+if result != tt.expected {
+t.Errorf("mergeStatus(%q, %q, %q) = %q, want %q",
+tt.base, tt.left, tt.right, result, tt.expected)
+}
+})
+}
+}
 // TestMergeField tests the basic field merging logic
 func TestMergeField(t *testing.T) {
 tests := []struct {
@@ -475,7 +539,7 @@ func TestMerge3Way_Deletions(t *testing.T) {
}
})
t.Run("deleted in left, modified in right - conflict", func(t *testing.T) {
t.Run("deleted in left, modified in right - deletion wins", func(t *testing.T) {
base := []Issue{
{
ID: "bd-abc123",
@@ -499,15 +563,15 @@ func TestMerge3Way_Deletions(t *testing.T) {
}
result, conflicts := merge3Way(base, left, right)
if len(conflicts) == 0 {
t.Error("expected conflict for delete vs modify")
if len(conflicts) != 0 {
t.Errorf("expected no conflicts, got %d", len(conflicts))
}
if len(result) != 0 {
t.Errorf("expected no merged issues with conflict, got %d", len(result))
t.Errorf("expected deletion to win (0 results), got %d", len(result))
}
})
t.Run("deleted in right, modified in left - conflict", func(t *testing.T) {
t.Run("deleted in right, modified in left - deletion wins", func(t *testing.T) {
base := []Issue{
{
ID: "bd-abc123",
@@ -531,11 +595,11 @@ func TestMerge3Way_Deletions(t *testing.T) {
right := []Issue{} // Deleted in right
result, conflicts := merge3Way(base, left, right)
if len(conflicts) == 0 {
t.Error("expected conflict for modify vs delete")
if len(conflicts) != 0 {
t.Errorf("expected no conflicts, got %d", len(conflicts))
}
if len(result) != 0 {
t.Errorf("expected no merged issues with conflict, got %d", len(result))
t.Errorf("expected deletion to win (0 results), got %d", len(result))
}
})
}
@@ -648,6 +712,61 @@ func TestMerge3Way_Additions(t *testing.T) {
// TestMerge3Way_ResurrectionPrevention tests bd-hv01 regression
func TestMerge3Way_ResurrectionPrevention(t *testing.T) {
t.Run("bd-pq5k: no invalid state (status=open with closed_at)", func(t *testing.T) {
// Simulate the broken merge case that was creating invalid data
// Base: issue is closed
base := []Issue{
{
ID: "bd-test",
Title: "Test issue",
Status: "closed",
ClosedAt: "2024-01-02T00:00:00Z",
CreatedAt: "2024-01-01T00:00:00Z",
UpdatedAt: "2024-01-02T00:00:00Z",
CreatedBy: "user1",
RawLine: `{"id":"bd-test","title":"Test issue","status":"closed","closed_at":"2024-01-02T00:00:00Z","created_at":"2024-01-01T00:00:00Z","updated_at":"2024-01-02T00:00:00Z","created_by":"user1"}`,
},
}
// Left: still closed with closed_at
left := base
// Right: reopened with closed_at correctly cleared; the old merge took
// max(left.ClosedAt, right.ClosedAt) unconditionally, so the result could
// end up as status=open with left's closed_at (the bug scenario)
right := []Issue{
{
ID: "bd-test",
Title: "Test issue",
Status: "open", // reopened
ClosedAt: "", // correctly removed
CreatedAt: "2024-01-01T00:00:00Z",
UpdatedAt: "2024-01-03T00:00:00Z",
CreatedBy: "user1",
RawLine: `{"id":"bd-test","title":"Test issue","status":"open","created_at":"2024-01-01T00:00:00Z","updated_at":"2024-01-03T00:00:00Z","created_by":"user1"}`,
},
}
result, conflicts := merge3Way(base, left, right)
if len(conflicts) != 0 {
t.Errorf("unexpected conflicts: %v", conflicts)
}
if len(result) != 1 {
t.Fatalf("expected 1 issue, got %d", len(result))
}
// CRITICAL: Status should be closed (closed wins over open)
if result[0].Status != "closed" {
t.Errorf("expected status 'closed', got %q", result[0].Status)
}
// CRITICAL: If status is closed, closed_at MUST be set
if result[0].Status == "closed" && result[0].ClosedAt == "" {
t.Error("INVALID STATE: status='closed' but closed_at is empty")
}
// CRITICAL: If status is open, closed_at MUST be empty
if result[0].Status == "open" && result[0].ClosedAt != "" {
t.Errorf("INVALID STATE: status='open' but closed_at='%s'", result[0].ClosedAt)
}
})
t.Run("bd-hv01 regression: closed issue not resurrected", func(t *testing.T) {
// Base: issue is open
base := []Issue{


@@ -51,4 +51,15 @@ func IsUniqueConstraintError(err error) bool {
return strings.Contains(err.Error(), "UNIQUE constraint failed")
}
// IsForeignKeyConstraintError checks if an error is a FOREIGN KEY constraint violation
// This can occur when importing issues that reference deleted issues (e.g., after merge)
func IsForeignKeyConstraintError(err error) bool {
if err == nil {
return false
}
errStr := err.Error()
return strings.Contains(errStr, "FOREIGN KEY constraint failed") ||
strings.Contains(errStr, "foreign key constraint failed")
}


@@ -50,6 +50,54 @@ func TestIsUniqueConstraintError(t *testing.T) {
}
}
func TestIsForeignKeyConstraintError(t *testing.T) {
tests := []struct {
name string
err error
expected bool
}{
{
name: "nil error",
err: nil,
expected: false,
},
{
name: "FOREIGN KEY constraint error (uppercase)",
err: errors.New("FOREIGN KEY constraint failed"),
expected: true,
},
{
name: "foreign key constraint error (lowercase)",
err: errors.New("foreign key constraint failed"),
expected: true,
},
{
name: "FOREIGN KEY with details",
err: errors.New("FOREIGN KEY constraint failed: dependencies.depends_on_id"),
expected: true,
},
{
name: "UNIQUE constraint error",
err: errors.New("UNIQUE constraint failed: issues.id"),
expected: false,
},
{
name: "other error",
err: errors.New("some other database error"),
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsForeignKeyConstraintError(tt.err)
if result != tt.expected {
t.Errorf("IsForeignKeyConstraintError(%v) = %v, want %v", tt.err, result, tt.expected)
}
})
}
}
func TestExecInTransaction(t *testing.T) {
ctx := context.Background()
store := newTestStore(t, t.TempDir()+"/test.db")