feat(sync): pull-first sync with 3-way merge (#918)
* feat(sync): implement pull-first synchronization strategy - Add --pull-first flag and logic to sync command - Introduce 3-way merge stub for issue synchronization - Add concurrent edit tests for the pull-first flow Ensures local changes are reconciled with remote updates before pushing to prevent data loss. * feat(sync): implement 3-way merge and state tracking - Implement 3-way merge algorithm for issue synchronization - Add base state storage to track changes between syncs - Add comprehensive tests for merge logic and persistence Ensures data consistency and prevents data loss during concurrent issue updates. * feat(sync): implement field-level conflict merging - Implement field-level merge logic for issue conflicts - Add unit tests for field-level merge strategies Reduces manual intervention by automatically resolving overlapping updates at the field level. * refactor(sync): simplify sync flow by removing ZFC checks The previous sync implementation relied on Zero-False-Convergence (ZFC) staleness checks which are redundant following the transition to structural 3-way merging. This legacy logic added complexity and maintenance overhead without providing additional safety. This commit introduces a streamlined sync pipeline: - Remove ZFC staleness validation from primary sync flow - Update safety documentation to reflect current merge strategy - Eliminate deprecated unit tests associated with ZFC logic These changes reduce codebase complexity while maintaining data integrity through the robust structural 3-way merge implementation. * feat(sync): default to pull-first sync workflow - Set pull-first as the primary synchronization workflow - Refactor core sync logic for better maintainability - Update concurrent edit tests to validate 3-way merge logic Reduces merge conflicts by ensuring local state is current before pushing changes. * refactor(sync): clean up lint issues in merge code - Remove unused error return from MergeIssues (never returned error) - Use _ prefix for unused _base parameter in mergeFieldLevel - Update callers to not expect error from MergeIssues - Keep nolint:gosec for trusted internal file path * test(sync): add mode compatibility and upgrade safety tests Add tests addressing Steve's PR #918 review concerns: - TestSyncBranchModeWithPullFirst: Verifies sync-branch config storage and git branch creation work with pull-first - TestExternalBeadsDirWithPullFirst: Verifies external BEADS_DIR detection and pullFromExternalBeadsRepo - TestUpgradeFromOldSync: Validates upgrade safety when sync_base.jsonl doesn't exist (first sync after upgrade) - TestMergeIssuesWithBaseState: Comprehensive 3-way merge cases - TestLabelUnionMerge: Verifies labels use union (no data loss) Key upgrade behavior validated: - base=nil (no sync_base.jsonl) safely handles all cases - Local-only issues kept (StrategyLocal) - Remote-only issues kept (StrategyRemote) - Overlapping issues merged (LWW scalars, union labels) * fix(sync): report line numbers for malformed JSON Problem: - JSON decoding errors when loading sync base state lacked line numbers - Difficult to identify location of syntax errors in large state files Solution: - Include line number reporting in JSON decoder errors during state loading - Add regression tests for malformed sync base file scenarios Impact: - Users receive actionable feedback for corrupted state files - Faster troubleshooting of manual configuration errors * fix(sync): warn on large clock skew during sync Problem: - Unsynchronized clocks between systems could lead to silent merge errors - No mechanism existed to alert users of significant timestamp drift Solution: - Implement clock skew detection during sync merge - Log a warning when large timestamp differences are found - Add comprehensive unit tests for skew reporting Impact: - Users are alerted to potential synchronization risks - Easier debugging of time-related merge issues * fix(sync): defer state update until remote push succeeds Problem: - Base state updated before confirming remote push completion - Failed pushes resulted in inconsistent local state tracking Solution: - Defer base state update until after the remote push succeeds Impact: - Ensures local state accurately reflects remote repository status - Prevents state desynchronization during network or push failures * fix(sync): prevent concurrent sync operations Problem: - Multiple sync processes could run simultaneously - Overlapping operations risk data corruption and race conditions Solution: - Implement file-based locking using gofrs/flock - Add integration tests to verify locking behavior Impact: - Guarantees execution of a single sync process at a time - Eliminates potential for data inconsistency during sync * docs: document sync architecture and merge model - Detail the 3-way merge model logic - Describe the core synchronization architecture principles * fix(lint): explicitly ignore lock.Unlock return value errcheck linter flagged bare defer lock.Unlock() calls. Wrap in anonymous function with explicit _ assignment to acknowledge intentional ignore of unlock errors during cleanup. * fix(lint): add sync_merge.go to G304 exclusions The loadBaseState and saveBaseState functions use file paths derived from trusted internal sources (beadsDir parameter from config). Add to existing G304 exclusion list for safe JSONL file operations. * feat(sync): integrate sync-branch into pull-first flow When sync.branch is configured, doPullFirstSync now: - Calls PullFromSyncBranch before merge - Calls CommitToSyncBranch after export This ensures sync-branch mode uses the correct branch for pull/push operations. * test(sync): add E2E tests for sync-branch and external BEADS_DIR Adds comprehensive end-to-end tests: - TestSyncBranchE2E: verifies pull→merge→commit flow with remote changes - TestExternalBeadsDirE2E: verifies sync with separate beads repository - TestExternalBeadsDirDetection: edge cases for repo detection - TestCommitToExternalBeadsRepo: commit handling * refactor(sync): remove unused rollbackJSONLFromGit Function was defined but never called. Pull-first flow saves base state after successful push, making this safety net unnecessary. * test(sync): add export-only mode E2E test Add TestExportOnlySync to cover --no-pull flag which was the only untested sync mode. This completes full mode coverage: - Normal (pull-first): sync_test.go, sync_merge_test.go - Sync-branch: sync_modes_test.go:TestSyncBranchE2E (PR#918) - External BEADS_DIR: sync_external_test.go (PR#918) - From-main: sync_branch_priority_test.go - Local-only: sync_local_only_test.go - Export-only: sync_modes_test.go:TestExportOnlySync (this commit) Refs: #911 * docs(sync): add sync modes reference section Document all 6 sync modes with triggers, flows, and use cases. Include mode selection decision tree and test coverage matrix. Co-authored-by: Claude <noreply@anthropic.com> * test(sync): upgrade sync-branch E2E tests to bare repo - Replace mocked repository with real bare repo setup - Implement multi-machine simulation in sync tests - Refactor test logic to handle distributed states Coverage: sync-branch end-to-end scenarios * test(sync): add daemon sync-branch E2E tests - Implement E2E tests for daemon sync-branch flow - Add test cases for force-overwrite scenarios Coverage: daemon sync-branch workflow in cmd/bd * docs(sync): document sync-branch paths and E2E architecture - Describe sync-branch CLI and Daemon execution flow - Document the end-to-end test architecture * build(nix): update vendorHash for gofrs/flock dependency New dependency added for file-based sync locking changes the Go module checksum. --------- Co-authored-by: Claude <noreply@anthropic.com>
This commit is contained in:
committed by
GitHub
parent
e0b613d5b1
commit
1561374c04
@@ -45,7 +45,7 @@ linters:
|
||||
- gosec
|
||||
text: "G306"
|
||||
# G304: Safe file reads from known JSONL and error paths
|
||||
- path: 'cmd/bd/autoflush\.go|cmd/bd/daemon_autostart\.go|cmd/bd/doctor/fix/sync_branch\.go|cmd/bd/rename_prefix\.go|internal/beads/beads\.go|internal/daemon/discovery\.go|internal/daemonrunner/sync\.go|internal/syncbranch/worktree\.go'
|
||||
- path: 'cmd/bd/autoflush\.go|cmd/bd/daemon_autostart\.go|cmd/bd/doctor/fix/sync_branch\.go|cmd/bd/rename_prefix\.go|cmd/bd/sync_merge\.go|internal/beads/beads\.go|internal/daemon/discovery\.go|internal/daemonrunner/sync\.go|internal/syncbranch/worktree\.go'
|
||||
linters:
|
||||
- gosec
|
||||
text: "G304"
|
||||
|
||||
@@ -13,6 +13,7 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/steveyegge/beads/internal/git"
|
||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||
"github.com/steveyegge/beads/internal/syncbranch"
|
||||
"github.com/steveyegge/beads/internal/types"
|
||||
@@ -1543,3 +1544,416 @@ func TestGitPushFromWorktree_FetchRebaseRetry(t *testing.T) {
|
||||
|
||||
t.Log("Fetch-rebase-retry test passed: diverged sync branch was successfully rebased and pushed")
|
||||
}
|
||||
|
||||
// TestDaemonSyncBranchE2E tests the daemon sync-branch flow with concurrent changes from
|
||||
// two machines using a real bare repo. This tests the daemon path (syncBranchCommitAndPush/Pull)
|
||||
// as opposed to TestSyncBranchE2E which tests the CLI path (syncbranch.CommitToSyncBranch/Pull).
|
||||
//
|
||||
// Key difference from CLI path tests:
|
||||
// - CLI: Uses syncbranch.CommitToSyncBranch() from internal/syncbranch
|
||||
// - Daemon: Uses syncBranchCommitAndPush() from daemon_sync_branch.go
|
||||
//
|
||||
// Flow:
|
||||
// 1. Machine A creates bd-1, calls daemon syncBranchCommitAndPush(), pushes to bare remote
|
||||
// 2. Machine B creates bd-2, calls daemon syncBranchCommitAndPush(), pushes to bare remote
|
||||
// 3. Machine A calls daemon syncBranchPull(), should merge both issues
|
||||
// 4. Verify both issues present after merge
|
||||
func TestDaemonSyncBranchE2E(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("Skipping integration test in short mode")
|
||||
}
|
||||
|
||||
// Skip on Windows due to path issues with worktrees
|
||||
if runtime.GOOS == "windows" {
|
||||
t.Skip("Skipping on Windows")
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Setup: Create bare remote with two clones using Phase 1 helper
|
||||
_, machineA, machineB, cleanup := setupBareRemoteWithClones(t)
|
||||
defer cleanup()
|
||||
|
||||
// Use unique sync branch name and set via env var (highest priority)
|
||||
// This overrides any config.yaml setting
|
||||
syncBranch := "beads-daemon-sync"
|
||||
t.Setenv(syncbranch.EnvVar, syncBranch)
|
||||
|
||||
// Machine A: Setup database with sync.branch configured
|
||||
var storeA *sqlite.SQLiteStorage
|
||||
var jsonlPathA string
|
||||
|
||||
withBeadsDir(t, machineA, func() {
|
||||
beadsDirA := filepath.Join(machineA, ".beads")
|
||||
dbPathA := filepath.Join(beadsDirA, "beads.db")
|
||||
|
||||
var err error
|
||||
storeA, err = sqlite.New(ctx, dbPathA)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create store for Machine A: %v", err)
|
||||
}
|
||||
|
||||
// Configure store
|
||||
if err := storeA.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("Failed to set issue_prefix: %v", err)
|
||||
}
|
||||
if err := storeA.SetConfig(ctx, syncbranch.ConfigKey, syncBranch); err != nil {
|
||||
t.Fatalf("Failed to set sync.branch: %v", err)
|
||||
}
|
||||
|
||||
// Create issue in Machine A
|
||||
issueA := &types.Issue{
|
||||
Title: "Issue from Machine A (daemon path)",
|
||||
Status: types.StatusOpen,
|
||||
Priority: 1,
|
||||
IssueType: types.TypeTask,
|
||||
CreatedAt: time.Now(),
|
||||
UpdatedAt: time.Now(),
|
||||
}
|
||||
if err := storeA.CreateIssue(ctx, issueA, "machineA"); err != nil {
|
||||
t.Fatalf("Failed to create issue in Machine A: %v", err)
|
||||
}
|
||||
t.Logf("Machine A created issue: %s", issueA.ID)
|
||||
|
||||
// Export to JSONL
|
||||
jsonlPathA = filepath.Join(beadsDirA, "issues.jsonl")
|
||||
if err := exportToJSONLWithStore(ctx, storeA, jsonlPathA); err != nil {
|
||||
t.Fatalf("Failed to export JSONL for Machine A: %v", err)
|
||||
}
|
||||
|
||||
// Change to machineA directory for git operations
|
||||
if err := os.Chdir(machineA); err != nil {
|
||||
t.Fatalf("Failed to chdir to machineA: %v", err)
|
||||
}
|
||||
|
||||
// Set global dbPath so findJSONLPath() works for daemon functions
|
||||
oldDBPath := dbPath
|
||||
dbPath = dbPathA
|
||||
defer func() { dbPath = oldDBPath }()
|
||||
|
||||
// Machine A: Commit and push using daemon path (syncBranchCommitAndPush)
|
||||
log, _ := newTestSyncBranchLogger()
|
||||
committed, err := syncBranchCommitAndPush(ctx, storeA, true, log)
|
||||
if err != nil {
|
||||
t.Fatalf("Machine A syncBranchCommitAndPush failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Fatal("Expected Machine A daemon commit to succeed")
|
||||
}
|
||||
t.Log("Machine A: Daemon committed and pushed issue to sync branch")
|
||||
})
|
||||
defer storeA.Close()
|
||||
|
||||
// Reset git caches before switching to Machine B to prevent path caching issues
|
||||
git.ResetCaches()
|
||||
|
||||
// Machine B: Setup database and sync with Machine A's changes first
|
||||
var storeB *sqlite.SQLiteStorage
|
||||
var jsonlPathB string
|
||||
|
||||
withBeadsDir(t, machineB, func() {
|
||||
beadsDirB := filepath.Join(machineB, ".beads")
|
||||
dbPathB := filepath.Join(beadsDirB, "beads.db")
|
||||
|
||||
var err error
|
||||
storeB, err = sqlite.New(ctx, dbPathB)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create store for Machine B: %v", err)
|
||||
}
|
||||
|
||||
// Configure store
|
||||
if err := storeB.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("Failed to set issue_prefix: %v", err)
|
||||
}
|
||||
if err := storeB.SetConfig(ctx, syncbranch.ConfigKey, syncBranch); err != nil {
|
||||
t.Fatalf("Failed to set sync.branch: %v", err)
|
||||
}
|
||||
|
||||
jsonlPathB = filepath.Join(beadsDirB, "issues.jsonl")
|
||||
|
||||
// Change to machineB directory for git operations
|
||||
if err := os.Chdir(machineB); err != nil {
|
||||
t.Fatalf("Failed to chdir to machineB: %v", err)
|
||||
}
|
||||
|
||||
// Set global dbPath so findJSONLPath() works for daemon functions
|
||||
oldDBPath := dbPath
|
||||
dbPath = dbPathB
|
||||
defer func() { dbPath = oldDBPath }()
|
||||
|
||||
// Machine B: First pull from sync branch to get Machine A's issue
|
||||
// This is the correct workflow - always pull before creating local changes
|
||||
log, _ := newTestSyncBranchLogger()
|
||||
pulled, err := syncBranchPull(ctx, storeB, log)
|
||||
if err != nil {
|
||||
t.Logf("Machine B initial pull error (may be expected): %v", err)
|
||||
}
|
||||
if pulled {
|
||||
t.Log("Machine B: Pulled Machine A's changes from sync branch")
|
||||
}
|
||||
|
||||
// Import the pulled JSONL into Machine B's database
|
||||
if _, err := os.Stat(jsonlPathB); err == nil {
|
||||
if err := importToJSONLWithStore(ctx, storeB, jsonlPathB); err != nil {
|
||||
t.Logf("Machine B import warning: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Create issue in Machine B (different from A)
|
||||
issueB := &types.Issue{
|
||||
Title: "Issue from Machine B (daemon path)",
|
||||
Status: types.StatusOpen,
|
||||
Priority: 2,
|
||||
IssueType: types.TypeBug,
|
||||
CreatedAt: time.Now().Add(time.Second), // Ensure different timestamp
|
||||
UpdatedAt: time.Now().Add(time.Second),
|
||||
}
|
||||
if err := storeB.CreateIssue(ctx, issueB, "machineB"); err != nil {
|
||||
t.Fatalf("Failed to create issue in Machine B: %v", err)
|
||||
}
|
||||
t.Logf("Machine B created issue: %s", issueB.ID)
|
||||
|
||||
// Export to JSONL (now includes both Machine A's and Machine B's issues)
|
||||
if err := exportToJSONLWithStore(ctx, storeB, jsonlPathB); err != nil {
|
||||
t.Fatalf("Failed to export JSONL for Machine B: %v", err)
|
||||
}
|
||||
|
||||
// Machine B: Commit and push using daemon path
|
||||
// This should succeed without conflict because we pulled first
|
||||
committed, err := syncBranchCommitAndPush(ctx, storeB, true, log)
|
||||
if err != nil {
|
||||
t.Fatalf("Machine B syncBranchCommitAndPush failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Fatal("Expected Machine B daemon commit to succeed")
|
||||
}
|
||||
t.Log("Machine B: Daemon committed and pushed issue to sync branch")
|
||||
})
|
||||
defer storeB.Close()
|
||||
|
||||
// Reset git caches before switching back to Machine A
|
||||
git.ResetCaches()
|
||||
|
||||
// Machine A: Pull from sync branch using daemon path
|
||||
withBeadsDir(t, machineA, func() {
|
||||
beadsDirA := filepath.Join(machineA, ".beads")
|
||||
dbPathA := filepath.Join(beadsDirA, "beads.db")
|
||||
|
||||
// Change to machineA directory for git operations
|
||||
if err := os.Chdir(machineA); err != nil {
|
||||
t.Fatalf("Failed to chdir to machineA: %v", err)
|
||||
}
|
||||
|
||||
// Set global dbPath so findJSONLPath() works for daemon functions
|
||||
oldDBPath := dbPath
|
||||
dbPath = dbPathA
|
||||
defer func() { dbPath = oldDBPath }()
|
||||
|
||||
log, _ := newTestSyncBranchLogger()
|
||||
pulled, err := syncBranchPull(ctx, storeA, log)
|
||||
if err != nil {
|
||||
t.Fatalf("Machine A syncBranchPull failed: %v", err)
|
||||
}
|
||||
if !pulled {
|
||||
t.Log("Machine A syncBranchPull returned false (may be expected if no remote changes)")
|
||||
} else {
|
||||
t.Log("Machine A: Daemon pulled from sync branch")
|
||||
}
|
||||
})
|
||||
|
||||
// Verify: Both issues should be present in Machine A's JSONL after merge
|
||||
content, err := os.ReadFile(jsonlPathA)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to read Machine A's JSONL: %v", err)
|
||||
}
|
||||
|
||||
contentStr := string(content)
|
||||
hasMachineA := strings.Contains(contentStr, "Machine A")
|
||||
hasMachineB := strings.Contains(contentStr, "Machine B")
|
||||
|
||||
if hasMachineA {
|
||||
t.Log("Issue from Machine A preserved in JSONL")
|
||||
} else {
|
||||
t.Error("FAIL: Issue from Machine A missing after merge")
|
||||
}
|
||||
|
||||
if hasMachineB {
|
||||
t.Log("Issue from Machine B merged into JSONL")
|
||||
} else {
|
||||
t.Error("FAIL: Issue from Machine B missing after merge")
|
||||
}
|
||||
|
||||
if hasMachineA && hasMachineB {
|
||||
t.Log("Daemon sync-branch E2E test PASSED: both issues present after merge")
|
||||
}
|
||||
|
||||
// Clean up git caches to prevent test pollution
|
||||
git.ResetCaches()
|
||||
}
|
||||
|
||||
// TestDaemonSyncBranchForceOverwrite tests the forceOverwrite flag behavior for delete mutations.
|
||||
// When forceOverwrite is true, the local JSONL is copied directly to the worktree without merging,
|
||||
// which is necessary for delete mutations to be properly reflected in the sync branch.
|
||||
//
|
||||
// Flow:
|
||||
// 1. Machine A creates issue, commits to sync branch
|
||||
// 2. Machine A deletes issue locally, calls syncBranchCommitAndPushWithOptions(forceOverwrite=true)
|
||||
// 3. Verify the deletion is reflected in the sync branch worktree
|
||||
func TestDaemonSyncBranchForceOverwrite(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("Skipping integration test in short mode")
|
||||
}
|
||||
|
||||
// Skip on Windows due to path issues with worktrees
|
||||
if runtime.GOOS == "windows" {
|
||||
t.Skip("Skipping on Windows")
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Setup: Create bare remote with two clones
|
||||
_, machineA, _, cleanup := setupBareRemoteWithClones(t)
|
||||
defer cleanup()
|
||||
|
||||
// Use unique sync branch name and set via env var (highest priority)
|
||||
// This overrides any config.yaml setting
|
||||
syncBranch := "beads-force-sync"
|
||||
t.Setenv(syncbranch.EnvVar, syncBranch)
|
||||
|
||||
withBeadsDir(t, machineA, func() {
|
||||
beadsDirA := filepath.Join(machineA, ".beads")
|
||||
dbPathA := filepath.Join(beadsDirA, "beads.db")
|
||||
|
||||
storeA, err := sqlite.New(ctx, dbPathA)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create store: %v", err)
|
||||
}
|
||||
defer storeA.Close()
|
||||
|
||||
// Configure store
|
||||
if err := storeA.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("Failed to set issue_prefix: %v", err)
|
||||
}
|
||||
if err := storeA.SetConfig(ctx, syncbranch.ConfigKey, syncBranch); err != nil {
|
||||
t.Fatalf("Failed to set sync.branch: %v", err)
|
||||
}
|
||||
|
||||
// Create two issues
|
||||
issue1 := &types.Issue{
|
||||
Title: "Issue to keep",
|
||||
Status: types.StatusOpen,
|
||||
Priority: 1,
|
||||
IssueType: types.TypeTask,
|
||||
CreatedAt: time.Now(),
|
||||
UpdatedAt: time.Now(),
|
||||
}
|
||||
if err := storeA.CreateIssue(ctx, issue1, "test"); err != nil {
|
||||
t.Fatalf("Failed to create issue1: %v", err)
|
||||
}
|
||||
|
||||
issue2 := &types.Issue{
|
||||
Title: "Issue to delete",
|
||||
Status: types.StatusOpen,
|
||||
Priority: 2,
|
||||
IssueType: types.TypeTask,
|
||||
CreatedAt: time.Now(),
|
||||
UpdatedAt: time.Now(),
|
||||
}
|
||||
if err := storeA.CreateIssue(ctx, issue2, "test"); err != nil {
|
||||
t.Fatalf("Failed to create issue2: %v", err)
|
||||
}
|
||||
issue2ID := issue2.ID
|
||||
t.Logf("Created issue to delete: %s", issue2ID)
|
||||
|
||||
// Export to JSONL
|
||||
jsonlPath := filepath.Join(beadsDirA, "issues.jsonl")
|
||||
if err := exportToJSONLWithStore(ctx, storeA, jsonlPath); err != nil {
|
||||
t.Fatalf("Failed to export JSONL: %v", err)
|
||||
}
|
||||
|
||||
// Change to machineA directory for git operations
|
||||
if err := os.Chdir(machineA); err != nil {
|
||||
t.Fatalf("Failed to chdir: %v", err)
|
||||
}
|
||||
|
||||
// Set global dbPath so findJSONLPath() works for daemon functions
|
||||
oldDBPath := dbPath
|
||||
dbPath = dbPathA
|
||||
defer func() { dbPath = oldDBPath }()
|
||||
|
||||
// First commit with both issues (without forceOverwrite)
|
||||
log, _ := newTestSyncBranchLogger()
|
||||
committed, err := syncBranchCommitAndPush(ctx, storeA, true, log)
|
||||
if err != nil {
|
||||
t.Fatalf("Initial commit failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Fatal("Expected initial commit to succeed")
|
||||
}
|
||||
t.Log("Initial commit with both issues succeeded")
|
||||
|
||||
// Verify worktree has both issues
|
||||
worktreePath := filepath.Join(machineA, ".git", "beads-worktrees", syncBranch)
|
||||
worktreeJSONL := filepath.Join(worktreePath, ".beads", "issues.jsonl")
|
||||
initialContent, err := os.ReadFile(worktreeJSONL)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to read worktree JSONL: %v", err)
|
||||
}
|
||||
if !strings.Contains(string(initialContent), "Issue to delete") {
|
||||
t.Error("Initial worktree JSONL should contain 'Issue to delete'")
|
||||
}
|
||||
|
||||
// Now delete the issue from database
|
||||
if err := storeA.DeleteIssue(ctx, issue2ID); err != nil {
|
||||
t.Fatalf("Failed to delete issue: %v", err)
|
||||
}
|
||||
t.Logf("Deleted issue %s from database", issue2ID)
|
||||
|
||||
// Export JSONL after deletion (issue2 should not be in the file)
|
||||
if err := exportToJSONLWithStore(ctx, storeA, jsonlPath); err != nil {
|
||||
t.Fatalf("Failed to export JSONL after deletion: %v", err)
|
||||
}
|
||||
|
||||
// Verify local JSONL no longer has the deleted issue
|
||||
localContent, err := os.ReadFile(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to read local JSONL: %v", err)
|
||||
}
|
||||
if strings.Contains(string(localContent), "Issue to delete") {
|
||||
t.Error("Local JSONL should not contain deleted issue")
|
||||
}
|
||||
|
||||
// Commit with forceOverwrite=true (simulating delete mutation)
|
||||
committed, err = syncBranchCommitAndPushWithOptions(ctx, storeA, true, true, log)
|
||||
if err != nil {
|
||||
t.Fatalf("forceOverwrite commit failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Fatal("Expected forceOverwrite commit to succeed")
|
||||
}
|
||||
t.Log("forceOverwrite commit succeeded")
|
||||
|
||||
// Verify worktree JSONL no longer has the deleted issue
|
||||
afterContent, err := os.ReadFile(worktreeJSONL)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to read worktree JSONL after forceOverwrite: %v", err)
|
||||
}
|
||||
|
||||
if strings.Contains(string(afterContent), "Issue to delete") {
|
||||
t.Error("FAIL: Worktree JSONL still contains deleted issue after forceOverwrite")
|
||||
} else {
|
||||
t.Log("Worktree JSONL correctly reflects deletion after forceOverwrite")
|
||||
}
|
||||
|
||||
if !strings.Contains(string(afterContent), "Issue to keep") {
|
||||
t.Error("FAIL: Worktree JSONL should still contain 'Issue to keep'")
|
||||
} else {
|
||||
t.Log("Worktree JSONL correctly preserves non-deleted issue")
|
||||
}
|
||||
|
||||
t.Log("forceOverwrite test PASSED: delete mutation correctly propagated")
|
||||
})
|
||||
|
||||
// Clean up git caches to prevent test pollution
|
||||
git.ResetCaches()
|
||||
}
|
||||
|
||||
1062
cmd/bd/sync.go
1062
cmd/bd/sync.go
File diff suppressed because it is too large
Load Diff
@@ -140,9 +140,9 @@ func exportToJSONLDeferred(ctx context.Context, jsonlPath string) (*ExportResult
|
||||
}
|
||||
|
||||
// Safety check: prevent exporting empty database over non-empty JSONL
|
||||
// Note: The main bd-53c protection is the reverse ZFC check earlier in sync.go
|
||||
// which runs BEFORE export. Here we only block the most catastrophic case (empty DB)
|
||||
// to allow legitimate deletions.
|
||||
// This blocks the catastrophic case where an empty/corrupted DB would overwrite
|
||||
// a valid JSONL. For staleness handling, use --pull-first which provides
|
||||
// structural protection via 3-way merge.
|
||||
if len(issues) == 0 {
|
||||
existingCount, countErr := countIssuesInJSONL(jsonlPath)
|
||||
if countErr != nil {
|
||||
|
||||
365
cmd/bd/sync_external_test.go
Normal file
365
cmd/bd/sync_external_test.go
Normal file
@@ -0,0 +1,365 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/steveyegge/beads/internal/git"
|
||||
)
|
||||
|
||||
// TestExternalBeadsDirE2E tests the full external BEADS_DIR flow.
|
||||
// This is an end-to-end regression test for PR#918.
|
||||
//
|
||||
// When BEADS_DIR points to a separate git repository (external mode),
|
||||
// sync operations should work correctly:
|
||||
// 1. Changes are committed to the external beads repo (not the project repo)
|
||||
// 2. Pulls from the external repo bring in remote changes
|
||||
// 3. The merge algorithm works correctly across repo boundaries
|
||||
func TestExternalBeadsDirE2E(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// Store original working directory
|
||||
originalWd, err := os.Getwd()
|
||||
if err != nil {
|
||||
t.Fatalf("failed to get original working directory: %v", err)
|
||||
}
|
||||
|
||||
// Setup: Create the main project repo
|
||||
projectDir := t.TempDir()
|
||||
if err := setupGitRepoInDir(t, projectDir); err != nil {
|
||||
t.Fatalf("failed to setup project repo: %v", err)
|
||||
}
|
||||
|
||||
// Setup: Create a separate external beads repo
|
||||
// Resolve symlinks to avoid macOS /var -> /private/var issues
|
||||
externalDir, err := filepath.EvalSymlinks(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("eval symlinks failed: %v", err)
|
||||
}
|
||||
if err := setupGitRepoInDir(t, externalDir); err != nil {
|
||||
t.Fatalf("failed to setup external repo: %v", err)
|
||||
}
|
||||
|
||||
// Create .beads directory in external repo
|
||||
externalBeadsDir := filepath.Join(externalDir, ".beads")
|
||||
if err := os.MkdirAll(externalBeadsDir, 0755); err != nil {
|
||||
t.Fatalf("failed to create external .beads dir: %v", err)
|
||||
}
|
||||
|
||||
// Create issues.jsonl in external beads repo with initial issue
|
||||
jsonlPath := filepath.Join(externalBeadsDir, "issues.jsonl")
|
||||
issue1 := `{"id":"ext-1","title":"External Issue 1","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-01T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPath, []byte(issue1+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write external JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Commit initial beads files in external repo
|
||||
runGitInDir(t, externalDir, "add", ".beads")
|
||||
runGitInDir(t, externalDir, "commit", "-m", "initial beads setup")
|
||||
t.Log("✓ External beads repo initialized with issue ext-1")
|
||||
|
||||
// Change to project directory (simulating user's project)
|
||||
if err := os.Chdir(projectDir); err != nil {
|
||||
t.Fatalf("chdir to project failed: %v", err)
|
||||
}
|
||||
defer func() { _ = os.Chdir(originalWd) }()
|
||||
|
||||
// Reset git caches after directory change
|
||||
git.ResetCaches()
|
||||
|
||||
// Test 1: isExternalBeadsDir should detect external repo
|
||||
if !isExternalBeadsDir(ctx, externalBeadsDir) {
|
||||
t.Error("isExternalBeadsDir should return true for external beads dir")
|
||||
}
|
||||
t.Log("✓ External beads dir correctly detected")
|
||||
|
||||
// Test 2: getRepoRootFromPath should correctly identify external repo root
|
||||
repoRoot, err := getRepoRootFromPath(ctx, externalBeadsDir)
|
||||
if err != nil {
|
||||
t.Fatalf("getRepoRootFromPath failed: %v", err)
|
||||
}
|
||||
// Normalize paths for comparison
|
||||
resolvedExternal, _ := filepath.EvalSymlinks(externalDir)
|
||||
if repoRoot != resolvedExternal {
|
||||
t.Errorf("getRepoRootFromPath = %q, want %q", repoRoot, resolvedExternal)
|
||||
}
|
||||
t.Logf("✓ getRepoRootFromPath correctly identifies external repo: %s", repoRoot)
|
||||
|
||||
// Test 3: pullFromExternalBeadsRepo should handle no-remote gracefully
|
||||
err = pullFromExternalBeadsRepo(ctx, externalBeadsDir)
|
||||
if err != nil {
|
||||
t.Errorf("pullFromExternalBeadsRepo should handle no-remote: %v", err)
|
||||
}
|
||||
t.Log("✓ Pull from external beads repo handled no-remote correctly")
|
||||
|
||||
// Test 4: Create new issue and commit to external repo
|
||||
issue2 := `{"id":"ext-2","title":"External Issue 2","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-02T00:00:00Z","updated_at":"2025-01-02T00:00:00Z"}`
|
||||
combinedContent := issue1 + "\n" + issue2 + "\n"
|
||||
if err := os.WriteFile(jsonlPath, []byte(combinedContent), 0644); err != nil {
|
||||
t.Fatalf("write updated JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Use commitToExternalBeadsRepo (don't push since no real remote)
|
||||
committed, err := commitToExternalBeadsRepo(ctx, externalBeadsDir, "add ext-2", false)
|
||||
if err != nil {
|
||||
t.Fatalf("commitToExternalBeadsRepo failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Error("expected commit to succeed for new issue")
|
||||
}
|
||||
t.Log("✓ Successfully committed issue ext-2 to external beads repo")
|
||||
|
||||
// Test 5: Verify commit was made in external repo (not project repo)
|
||||
// Check external repo has the commit
|
||||
logOutput := getGitOutputInDir(t, externalDir, "log", "--oneline", "-1")
|
||||
if !strings.Contains(logOutput, "add ext-2") {
|
||||
t.Errorf("external repo should have commit, got: %s", logOutput)
|
||||
}
|
||||
t.Log("✓ Commit correctly made in external repo")
|
||||
|
||||
// Test 6: Verify project repo is unchanged
|
||||
projectLogOutput := getGitOutputInDir(t, projectDir, "log", "--oneline", "-1")
|
||||
if strings.Contains(projectLogOutput, "add ext-2") {
|
||||
t.Error("project repo should not have beads commit")
|
||||
}
|
||||
t.Log("✓ Project repo correctly unchanged")
|
||||
|
||||
// Test 7: Verify JSONL content is correct
|
||||
content, err := os.ReadFile(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read JSONL: %v", err)
|
||||
}
|
||||
contentStr := string(content)
|
||||
if !strings.Contains(contentStr, "ext-1") || !strings.Contains(contentStr, "ext-2") {
|
||||
t.Errorf("JSONL should contain both issues, got: %s", contentStr)
|
||||
}
|
||||
t.Log("✓ JSONL contains both issues")
|
||||
|
||||
t.Log("✓ External BEADS_DIR E2E test completed")
|
||||
}
|
||||
|
||||
// TestExternalBeadsDirDetection tests various edge cases for external beads dir detection.
|
||||
func TestExternalBeadsDirDetection(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// Store original working directory
|
||||
originalWd, err := os.Getwd()
|
||||
if err != nil {
|
||||
t.Fatalf("failed to get original working directory: %v", err)
|
||||
}
|
||||
|
||||
t.Run("same repo returns false", func(t *testing.T) {
|
||||
// Setup a single repo
|
||||
repoDir := t.TempDir()
|
||||
if err := setupGitRepoInDir(t, repoDir); err != nil {
|
||||
t.Fatalf("setup failed: %v", err)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(repoDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Change to repo dir
|
||||
if err := os.Chdir(repoDir); err != nil {
|
||||
t.Fatalf("chdir failed: %v", err)
|
||||
}
|
||||
defer func() { _ = os.Chdir(originalWd) }()
|
||||
git.ResetCaches()
|
||||
|
||||
// Same repo should return false
|
||||
if isExternalBeadsDir(ctx, beadsDir) {
|
||||
t.Error("isExternalBeadsDir should return false for same repo")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("different repo returns true", func(t *testing.T) {
|
||||
// Setup two separate repos
|
||||
projectDir := t.TempDir()
|
||||
if err := setupGitRepoInDir(t, projectDir); err != nil {
|
||||
t.Fatalf("setup project failed: %v", err)
|
||||
}
|
||||
|
||||
externalDir, err := filepath.EvalSymlinks(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("eval symlinks failed: %v", err)
|
||||
}
|
||||
if err := setupGitRepoInDir(t, externalDir); err != nil {
|
||||
t.Fatalf("setup external failed: %v", err)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(externalDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Change to project dir
|
||||
if err := os.Chdir(projectDir); err != nil {
|
||||
t.Fatalf("chdir failed: %v", err)
|
||||
}
|
||||
defer func() { _ = os.Chdir(originalWd) }()
|
||||
git.ResetCaches()
|
||||
|
||||
// Different repo should return true
|
||||
if !isExternalBeadsDir(ctx, beadsDir) {
|
||||
t.Error("isExternalBeadsDir should return true for different repo")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("non-git directory returns false", func(t *testing.T) {
|
||||
// Setup a repo for cwd
|
||||
repoDir := t.TempDir()
|
||||
if err := setupGitRepoInDir(t, repoDir); err != nil {
|
||||
t.Fatalf("setup failed: %v", err)
|
||||
}
|
||||
|
||||
// Non-git beads dir
|
||||
nonGitDir := t.TempDir()
|
||||
beadsDir := filepath.Join(nonGitDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Change to repo dir
|
||||
if err := os.Chdir(repoDir); err != nil {
|
||||
t.Fatalf("chdir failed: %v", err)
|
||||
}
|
||||
defer func() { _ = os.Chdir(originalWd) }()
|
||||
git.ResetCaches()
|
||||
|
||||
// Non-git dir should return false (can't determine, assume local)
|
||||
if isExternalBeadsDir(ctx, beadsDir) {
|
||||
t.Error("isExternalBeadsDir should return false for non-git directory")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// TestCommitToExternalBeadsRepo tests the external repo commit function.
|
||||
func TestCommitToExternalBeadsRepo(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
t.Run("commits changes to external repo", func(t *testing.T) {
|
||||
// Setup external repo
|
||||
externalDir, err := filepath.EvalSymlinks(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("eval symlinks failed: %v", err)
|
||||
}
|
||||
if err := setupGitRepoInDir(t, externalDir); err != nil {
|
||||
t.Fatalf("setup failed: %v", err)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(externalDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Write initial JSONL
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
if err := os.WriteFile(jsonlPath, []byte(`{"id":"test-1"}`+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write failed: %v", err)
|
||||
}
|
||||
|
||||
// Commit
|
||||
committed, err := commitToExternalBeadsRepo(ctx, beadsDir, "test commit", false)
|
||||
if err != nil {
|
||||
t.Fatalf("commit failed: %v", err)
|
||||
}
|
||||
if !committed {
|
||||
t.Error("expected commit to succeed")
|
||||
}
|
||||
|
||||
// Verify commit exists
|
||||
logOutput := getGitOutputInDir(t, externalDir, "log", "--oneline", "-1")
|
||||
if !strings.Contains(logOutput, "test commit") {
|
||||
t.Errorf("commit not found in log: %s", logOutput)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("returns false when no changes", func(t *testing.T) {
|
||||
// Setup external repo
|
||||
externalDir, err := filepath.EvalSymlinks(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("eval symlinks failed: %v", err)
|
||||
}
|
||||
if err := setupGitRepoInDir(t, externalDir); err != nil {
|
||||
t.Fatalf("setup failed: %v", err)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(externalDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Write and commit JSONL
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
if err := os.WriteFile(jsonlPath, []byte(`{"id":"test-1"}`+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write failed: %v", err)
|
||||
}
|
||||
runGitInDir(t, externalDir, "add", ".beads")
|
||||
runGitInDir(t, externalDir, "commit", "-m", "initial")
|
||||
|
||||
// Try to commit again with no changes
|
||||
committed, err := commitToExternalBeadsRepo(ctx, beadsDir, "no changes", false)
|
||||
if err != nil {
|
||||
t.Fatalf("commit failed: %v", err)
|
||||
}
|
||||
if committed {
|
||||
t.Error("expected no commit when no changes")
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// Helper: Setup git repo in a specific directory (doesn't change cwd)
|
||||
func setupGitRepoInDir(t *testing.T, dir string) error {
|
||||
t.Helper()
|
||||
|
||||
// Initialize git repo
|
||||
if err := exec.Command("git", "-C", dir, "init", "--initial-branch=main").Run(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Configure git
|
||||
_ = exec.Command("git", "-C", dir, "config", "user.email", "test@test.com").Run()
|
||||
_ = exec.Command("git", "-C", dir, "config", "user.name", "Test User").Run()
|
||||
|
||||
// Create initial commit
|
||||
readmePath := filepath.Join(dir, "README.md")
|
||||
if err := os.WriteFile(readmePath, []byte("# Test Repo\n"), 0644); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := exec.Command("git", "-C", dir, "add", ".").Run(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := exec.Command("git", "-C", dir, "commit", "-m", "initial commit").Run(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Helper: Run git command in a specific directory
|
||||
func runGitInDir(t *testing.T, dir string, args ...string) {
|
||||
t.Helper()
|
||||
cmdArgs := append([]string{"-C", dir}, args...)
|
||||
cmd := exec.Command("git", cmdArgs...)
|
||||
output, err := cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
t.Fatalf("git %v failed: %v\n%s", args, err, output)
|
||||
}
|
||||
}
|
||||
|
||||
// Helper: Get git output from a specific directory
|
||||
func getGitOutputInDir(t *testing.T, dir string, args ...string) string {
|
||||
t.Helper()
|
||||
cmdArgs := append([]string{"-C", dir}, args...)
|
||||
cmd := exec.Command("git", cmdArgs...)
|
||||
output, err := cmd.Output()
|
||||
if err != nil {
|
||||
t.Fatalf("git %v failed: %v", args, err)
|
||||
}
|
||||
return string(output)
|
||||
}
|
||||
@@ -525,27 +525,6 @@ func parseGitStatusForBeadsChanges(statusOutput string) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
// rollbackJSONLFromGit restores the JSONL file from git HEAD after a failed commit.
|
||||
// This is part of the sync atomicity fix (GH#885/bd-3bhl): when git commit fails
|
||||
// after export, we restore the JSONL to its previous state so the working
|
||||
// directory stays consistent with the last successful sync.
|
||||
func rollbackJSONLFromGit(ctx context.Context, jsonlPath string) error {
|
||||
// Check if the file is tracked by git
|
||||
cmd := exec.CommandContext(ctx, "git", "ls-files", "--error-unmatch", jsonlPath)
|
||||
if err := cmd.Run(); err != nil {
|
||||
// File not tracked - nothing to restore
|
||||
return nil
|
||||
}
|
||||
|
||||
// Restore from HEAD
|
||||
restoreCmd := exec.CommandContext(ctx, "git", "checkout", "HEAD", "--", jsonlPath) //nolint:gosec // G204: jsonlPath from internal beads.FindBeadsDir()
|
||||
output, err := restoreCmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return fmt.Errorf("git checkout failed: %w\n%s", err, output)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// getDefaultBranch returns the default branch name (main or master) for origin remote
|
||||
// Checks remote HEAD first, then falls back to checking if main/master exist
|
||||
func getDefaultBranch(ctx context.Context) string {
|
||||
|
||||
597
cmd/bd/sync_merge.go
Normal file
597
cmd/bd/sync_merge.go
Normal file
@@ -0,0 +1,597 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/steveyegge/beads/internal/beads"
|
||||
)
|
||||
|
||||
// MergeResult contains the outcome of a 3-way merge
|
||||
type MergeResult struct {
|
||||
Merged []*beads.Issue // Final merged state
|
||||
Conflicts int // Number of true conflicts resolved
|
||||
Strategy map[string]string // Per-issue: "local", "remote", "merged", "same"
|
||||
}
|
||||
|
||||
// MergeStrategy constants for describing how each issue was merged
|
||||
const (
|
||||
StrategyLocal = "local" // Only local changed
|
||||
StrategyRemote = "remote" // Only remote changed
|
||||
StrategyMerged = "merged" // True conflict, LWW applied
|
||||
StrategySame = "same" // Both made identical change (or no change)
|
||||
)
|
||||
|
||||
// FieldMergeRule defines how a specific field is merged in conflicts
|
||||
type FieldMergeRule string
|
||||
|
||||
const (
|
||||
RuleLWW FieldMergeRule = "lww" // Last-Write-Wins by updated_at
|
||||
RuleUnion FieldMergeRule = "union" // Set union (OR-Set)
|
||||
RuleAppend FieldMergeRule = "append" // Append-only merge
|
||||
)
|
||||
|
||||
// FieldRules maps field names to merge rules
|
||||
// Scalar fields use LWW, collection fields use union/append
|
||||
var FieldRules = map[string]FieldMergeRule{
|
||||
// Scalar fields - LWW by updated_at
|
||||
"status": RuleLWW,
|
||||
"priority": RuleLWW,
|
||||
"assignee": RuleLWW,
|
||||
"title": RuleLWW,
|
||||
"description": RuleLWW,
|
||||
"design": RuleLWW,
|
||||
"issue_type": RuleLWW,
|
||||
"notes": RuleLWW,
|
||||
|
||||
// Set fields - union (no data loss)
|
||||
"labels": RuleUnion,
|
||||
"dependencies": RuleUnion,
|
||||
|
||||
// Append-only fields
|
||||
"comments": RuleAppend,
|
||||
}
|
||||
|
||||
// mergeFieldLevel performs field-by-field merge for true conflicts.
|
||||
// Returns a new issue with:
|
||||
// - Scalar fields: from the newer issue (LWW by updated_at, remote wins on tie)
|
||||
// - Labels: union of both
|
||||
// - Dependencies: union of both (by DependsOnID+Type)
|
||||
// - Comments: append from both (deduplicated by ID or content)
|
||||
func mergeFieldLevel(_base, local, remote *beads.Issue) *beads.Issue {
|
||||
// Determine which is newer for LWW scalars
|
||||
localNewer := local.UpdatedAt.After(remote.UpdatedAt)
|
||||
|
||||
// Clock skew detection: warn if timestamps differ by more than 24 hours
|
||||
timeDiff := local.UpdatedAt.Sub(remote.UpdatedAt)
|
||||
if timeDiff < 0 {
|
||||
timeDiff = -timeDiff
|
||||
}
|
||||
if timeDiff > 24*time.Hour {
|
||||
fmt.Fprintf(os.Stderr, "Warning: Issue %s has %v timestamp difference (possible clock skew)\n",
|
||||
local.ID, timeDiff.Round(time.Hour))
|
||||
}
|
||||
|
||||
// Start with a copy of the newer issue for scalar fields
|
||||
var merged beads.Issue
|
||||
if localNewer {
|
||||
merged = *local
|
||||
} else {
|
||||
merged = *remote
|
||||
}
|
||||
|
||||
// Union merge: Labels
|
||||
merged.Labels = mergeLabels(local.Labels, remote.Labels)
|
||||
|
||||
// Union merge: Dependencies (by DependsOnID+Type key)
|
||||
merged.Dependencies = mergeDependencies(local.Dependencies, remote.Dependencies)
|
||||
|
||||
// Append merge: Comments (deduplicated)
|
||||
merged.Comments = mergeComments(local.Comments, remote.Comments)
|
||||
|
||||
return &merged
|
||||
}
|
||||
|
||||
// mergeLabels performs set union on labels
|
||||
func mergeLabels(local, remote []string) []string {
|
||||
seen := make(map[string]bool)
|
||||
var result []string
|
||||
|
||||
// Add all local labels
|
||||
for _, label := range local {
|
||||
if !seen[label] {
|
||||
seen[label] = true
|
||||
result = append(result, label)
|
||||
}
|
||||
}
|
||||
|
||||
// Add remote labels not in local
|
||||
for _, label := range remote {
|
||||
if !seen[label] {
|
||||
seen[label] = true
|
||||
result = append(result, label)
|
||||
}
|
||||
}
|
||||
|
||||
// Sort for deterministic output
|
||||
sort.Strings(result)
|
||||
return result
|
||||
}
|
||||
|
||||
// dependencyKey creates a unique key for deduplication
|
||||
// Uses DependsOnID + Type as the identity (same target+type = same dependency)
|
||||
func dependencyKey(d *beads.Dependency) string {
|
||||
if d == nil {
|
||||
return ""
|
||||
}
|
||||
return d.DependsOnID + ":" + string(d.Type)
|
||||
}
|
||||
|
||||
// mergeDependencies performs set union on dependencies
|
||||
func mergeDependencies(local, remote []*beads.Dependency) []*beads.Dependency {
|
||||
seen := make(map[string]*beads.Dependency)
|
||||
|
||||
// Add all local dependencies
|
||||
for _, dep := range local {
|
||||
if dep == nil {
|
||||
continue
|
||||
}
|
||||
key := dependencyKey(dep)
|
||||
seen[key] = dep
|
||||
}
|
||||
|
||||
// Add remote dependencies not in local (or with newer timestamp)
|
||||
for _, dep := range remote {
|
||||
if dep == nil {
|
||||
continue
|
||||
}
|
||||
key := dependencyKey(dep)
|
||||
if existing, ok := seen[key]; ok {
|
||||
// Keep the one with newer CreatedAt
|
||||
if dep.CreatedAt.After(existing.CreatedAt) {
|
||||
seen[key] = dep
|
||||
}
|
||||
} else {
|
||||
seen[key] = dep
|
||||
}
|
||||
}
|
||||
|
||||
// Collect and sort by key for deterministic output
|
||||
keys := make([]string, 0, len(seen))
|
||||
for k := range seen {
|
||||
keys = append(keys, k)
|
||||
}
|
||||
sort.Strings(keys)
|
||||
|
||||
result := make([]*beads.Dependency, 0, len(keys))
|
||||
for _, k := range keys {
|
||||
result = append(result, seen[k])
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// commentKey creates a unique key for deduplication
|
||||
// Uses ID if present, otherwise content hash
|
||||
func commentKey(c *beads.Comment) string {
|
||||
if c == nil {
|
||||
return ""
|
||||
}
|
||||
if c.ID != 0 {
|
||||
return fmt.Sprintf("id:%d", c.ID)
|
||||
}
|
||||
// Fallback to content-based key for comments without ID
|
||||
return fmt.Sprintf("content:%s:%s", c.Author, c.Text)
|
||||
}
|
||||
|
||||
// mergeComments performs append-merge on comments with deduplication
|
||||
func mergeComments(local, remote []*beads.Comment) []*beads.Comment {
|
||||
seen := make(map[string]*beads.Comment)
|
||||
|
||||
// Add all local comments
|
||||
for _, c := range local {
|
||||
if c == nil {
|
||||
continue
|
||||
}
|
||||
key := commentKey(c)
|
||||
seen[key] = c
|
||||
}
|
||||
|
||||
// Add remote comments not in local
|
||||
for _, c := range remote {
|
||||
if c == nil {
|
||||
continue
|
||||
}
|
||||
key := commentKey(c)
|
||||
if _, ok := seen[key]; !ok {
|
||||
seen[key] = c
|
||||
}
|
||||
}
|
||||
|
||||
// Collect all comments
|
||||
result := make([]*beads.Comment, 0, len(seen))
|
||||
for _, c := range seen {
|
||||
result = append(result, c)
|
||||
}
|
||||
|
||||
// Sort by CreatedAt for chronological order
|
||||
sort.Slice(result, func(i, j int) bool {
|
||||
return result[i].CreatedAt.Before(result[j].CreatedAt)
|
||||
})
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// MergeIssues performs 3-way merge: base x local x remote -> merged
|
||||
//
|
||||
// Algorithm:
|
||||
// 1. Build lookup maps for base, local, and remote by issue ID
|
||||
// 2. Collect all unique issue IDs across all three sets
|
||||
// 3. For each ID, apply MergeIssue to determine final state
|
||||
// 4. Return merged result with per-issue strategy annotations
|
||||
func MergeIssues(base, local, remote []*beads.Issue) *MergeResult {
|
||||
// Build lookup maps by issue ID
|
||||
baseMap := buildIssueMap(base)
|
||||
localMap := buildIssueMap(local)
|
||||
remoteMap := buildIssueMap(remote)
|
||||
|
||||
// Collect all unique issue IDs
|
||||
allIDs := collectUniqueIDs(baseMap, localMap, remoteMap)
|
||||
|
||||
result := &MergeResult{
|
||||
Merged: make([]*beads.Issue, 0, len(allIDs)),
|
||||
Strategy: make(map[string]string),
|
||||
}
|
||||
|
||||
for _, id := range allIDs {
|
||||
baseIssue := baseMap[id]
|
||||
localIssue := localMap[id]
|
||||
remoteIssue := remoteMap[id]
|
||||
|
||||
merged, strategy := MergeIssue(baseIssue, localIssue, remoteIssue)
|
||||
|
||||
// Always record strategy (even for deletions, for logging/debugging)
|
||||
result.Strategy[id] = strategy
|
||||
|
||||
if merged != nil {
|
||||
result.Merged = append(result.Merged, merged)
|
||||
if strategy == StrategyMerged {
|
||||
result.Conflicts++
|
||||
}
|
||||
}
|
||||
// If merged is nil, the issue was deleted (present in base but not in local/remote)
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
// MergeIssue merges a single issue using 3-way algorithm
|
||||
//
|
||||
// Cases:
|
||||
// - base=nil: First sync (no common ancestor)
|
||||
// - local=nil, remote=nil: impossible (would not be in allIDs)
|
||||
// - local=nil: return remote (new from remote)
|
||||
// - remote=nil: return local (new from local)
|
||||
// - both exist: LWW by updated_at (both added independently)
|
||||
//
|
||||
// - base!=nil: Standard 3-way merge
|
||||
// - base=local=remote: no changes (same)
|
||||
// - base=local, remote differs: only remote changed (remote)
|
||||
// - base=remote, local differs: only local changed (local)
|
||||
// - local=remote (but differs from base): both made identical change (same)
|
||||
// - all three differ: true conflict, LWW by updated_at (merged)
|
||||
//
|
||||
// - Deletion handling:
|
||||
// - local=nil (deleted locally): if remote unchanged from base, delete; else keep remote
|
||||
// - remote=nil (deleted remotely): if local unchanged from base, delete; else keep local
|
||||
func MergeIssue(base, local, remote *beads.Issue) (*beads.Issue, string) {
|
||||
// Case: no base state (first sync)
|
||||
if base == nil {
|
||||
if local == nil && remote == nil {
|
||||
// Should not happen (would not be in allIDs)
|
||||
return nil, StrategySame
|
||||
}
|
||||
if local == nil {
|
||||
return remote, StrategyRemote
|
||||
}
|
||||
if remote == nil {
|
||||
return local, StrategyLocal
|
||||
}
|
||||
// Both exist with no base: treat as conflict, use field-level merge
|
||||
// This allows labels/comments to be union-merged even in first sync
|
||||
return mergeFieldLevel(nil, local, remote), StrategyMerged
|
||||
}
|
||||
|
||||
// Case: local deleted
|
||||
if local == nil {
|
||||
// If remote unchanged from base, honor the local deletion
|
||||
if issueEqual(base, remote) {
|
||||
return nil, StrategyLocal
|
||||
}
|
||||
// Remote changed after local deleted: keep remote (remote wins conflict)
|
||||
return remote, StrategyMerged
|
||||
}
|
||||
|
||||
// Case: remote deleted
|
||||
if remote == nil {
|
||||
// If local unchanged from base, honor the remote deletion
|
||||
if issueEqual(base, local) {
|
||||
return nil, StrategyRemote
|
||||
}
|
||||
// Local changed after remote deleted: keep local (local wins conflict)
|
||||
return local, StrategyMerged
|
||||
}
|
||||
|
||||
// Standard 3-way cases (all three exist)
|
||||
if issueEqual(base, local) && issueEqual(base, remote) {
|
||||
// No changes anywhere
|
||||
return local, StrategySame
|
||||
}
|
||||
|
||||
if issueEqual(base, local) {
|
||||
// Only remote changed
|
||||
return remote, StrategyRemote
|
||||
}
|
||||
|
||||
if issueEqual(base, remote) {
|
||||
// Only local changed
|
||||
return local, StrategyLocal
|
||||
}
|
||||
|
||||
if issueEqual(local, remote) {
|
||||
// Both made identical change
|
||||
return local, StrategySame
|
||||
}
|
||||
|
||||
// True conflict: use field-level merge
|
||||
// - Scalar fields use LWW (remote wins on tie)
|
||||
// - Labels use union (no data loss)
|
||||
// - Dependencies use union (no data loss)
|
||||
// - Comments use append (deduplicated)
|
||||
return mergeFieldLevel(base, local, remote), StrategyMerged
|
||||
}
|
||||
|
||||
// issueEqual compares two issues for equality (content-level, not pointer)
|
||||
// Compares all merge-relevant fields: content, status, workflow, assignment
|
||||
func issueEqual(a, b *beads.Issue) bool {
|
||||
if a == nil || b == nil {
|
||||
return a == nil && b == nil
|
||||
}
|
||||
|
||||
// Core identification
|
||||
if a.ID != b.ID {
|
||||
return false
|
||||
}
|
||||
|
||||
// Issue content
|
||||
if a.Title != b.Title ||
|
||||
a.Description != b.Description ||
|
||||
a.Design != b.Design ||
|
||||
a.AcceptanceCriteria != b.AcceptanceCriteria ||
|
||||
a.Notes != b.Notes {
|
||||
return false
|
||||
}
|
||||
|
||||
// Status & workflow
|
||||
if a.Status != b.Status ||
|
||||
a.Priority != b.Priority ||
|
||||
a.IssueType != b.IssueType {
|
||||
return false
|
||||
}
|
||||
|
||||
// Assignment
|
||||
if a.Assignee != b.Assignee {
|
||||
return false
|
||||
}
|
||||
if !intPtrEqual(a.EstimatedMinutes, b.EstimatedMinutes) {
|
||||
return false
|
||||
}
|
||||
|
||||
// Timestamps (updated_at is crucial for LWW)
|
||||
if !a.UpdatedAt.Equal(b.UpdatedAt) {
|
||||
return false
|
||||
}
|
||||
|
||||
// Closed state
|
||||
if !timePtrEqual(a.ClosedAt, b.ClosedAt) ||
|
||||
a.CloseReason != b.CloseReason {
|
||||
return false
|
||||
}
|
||||
|
||||
// Time-based scheduling
|
||||
if !timePtrEqual(a.DueAt, b.DueAt) ||
|
||||
!timePtrEqual(a.DeferUntil, b.DeferUntil) {
|
||||
return false
|
||||
}
|
||||
|
||||
// External reference
|
||||
if !stringPtrEqual(a.ExternalRef, b.ExternalRef) {
|
||||
return false
|
||||
}
|
||||
|
||||
// Tombstone fields
|
||||
if !timePtrEqual(a.DeletedAt, b.DeletedAt) ||
|
||||
a.DeletedBy != b.DeletedBy ||
|
||||
a.DeleteReason != b.DeleteReason {
|
||||
return false
|
||||
}
|
||||
|
||||
// Labels (order-independent comparison)
|
||||
if !stringSliceEqual(a.Labels, b.Labels) {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// buildIssueMap creates a lookup map from issue ID to issue pointer
|
||||
func buildIssueMap(issues []*beads.Issue) map[string]*beads.Issue {
|
||||
m := make(map[string]*beads.Issue, len(issues))
|
||||
for _, issue := range issues {
|
||||
if issue != nil {
|
||||
m[issue.ID] = issue
|
||||
}
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
// collectUniqueIDs gathers all unique issue IDs from the three maps
|
||||
// Returns sorted for deterministic output
|
||||
func collectUniqueIDs(base, local, remote map[string]*beads.Issue) []string {
|
||||
seen := make(map[string]bool)
|
||||
for id := range base {
|
||||
seen[id] = true
|
||||
}
|
||||
for id := range local {
|
||||
seen[id] = true
|
||||
}
|
||||
for id := range remote {
|
||||
seen[id] = true
|
||||
}
|
||||
|
||||
ids := make([]string, 0, len(seen))
|
||||
for id := range seen {
|
||||
ids = append(ids, id)
|
||||
}
|
||||
sort.Strings(ids)
|
||||
return ids
|
||||
}
|
||||
|
||||
// Helper functions for pointer comparison
|
||||
|
||||
func intPtrEqual(a, b *int) bool {
|
||||
if a == nil && b == nil {
|
||||
return true
|
||||
}
|
||||
if a == nil || b == nil {
|
||||
return false
|
||||
}
|
||||
return *a == *b
|
||||
}
|
||||
|
||||
func stringPtrEqual(a, b *string) bool {
|
||||
if a == nil && b == nil {
|
||||
return true
|
||||
}
|
||||
if a == nil || b == nil {
|
||||
return false
|
||||
}
|
||||
return *a == *b
|
||||
}
|
||||
|
||||
func timePtrEqual(a, b *time.Time) bool {
|
||||
if a == nil && b == nil {
|
||||
return true
|
||||
}
|
||||
if a == nil || b == nil {
|
||||
return false
|
||||
}
|
||||
return a.Equal(*b)
|
||||
}
|
||||
|
||||
func stringSliceEqual(a, b []string) bool {
|
||||
if len(a) != len(b) {
|
||||
return false
|
||||
}
|
||||
// Sort copies for order-independent comparison
|
||||
aCopy := make([]string, len(a))
|
||||
bCopy := make([]string, len(b))
|
||||
copy(aCopy, a)
|
||||
copy(bCopy, b)
|
||||
sort.Strings(aCopy)
|
||||
sort.Strings(bCopy)
|
||||
for i := range aCopy {
|
||||
if aCopy[i] != bCopy[i] {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Base state storage functions for sync_base.jsonl
|
||||
|
||||
const syncBaseFileName = "sync_base.jsonl"
|
||||
|
||||
// loadBaseState loads the last-synced state from .beads/sync_base.jsonl
|
||||
// Returns empty slice if file doesn't exist (first sync scenario)
|
||||
func loadBaseState(beadsDir string) ([]*beads.Issue, error) {
|
||||
baseStatePath := filepath.Join(beadsDir, syncBaseFileName)
|
||||
|
||||
// Check if file exists
|
||||
if _, err := os.Stat(baseStatePath); os.IsNotExist(err) {
|
||||
// First sync: no base state
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Read and parse JSONL file
|
||||
file, err := os.Open(baseStatePath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
var issues []*beads.Issue
|
||||
scanner := bufio.NewScanner(file)
|
||||
// Increase buffer for large issues
|
||||
buf := make([]byte, 0, 64*1024)
|
||||
scanner.Buffer(buf, 1024*1024)
|
||||
|
||||
lineNum := 0
|
||||
for scanner.Scan() {
|
||||
lineNum++
|
||||
line := scanner.Text()
|
||||
if line == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
var issue beads.Issue
|
||||
if err := json.Unmarshal([]byte(line), &issue); err != nil {
|
||||
fmt.Fprintf(os.Stderr, "Warning: Skipping malformed line %d in sync_base.jsonl: %v\n", lineNum, err)
|
||||
continue
|
||||
}
|
||||
issues = append(issues, &issue)
|
||||
}
|
||||
|
||||
if err := scanner.Err(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return issues, nil
|
||||
}
|
||||
|
||||
// saveBaseState writes the merged state to .beads/sync_base.jsonl
|
||||
// This becomes the base for the next 3-way merge
|
||||
func saveBaseState(beadsDir string, issues []*beads.Issue) error {
|
||||
baseStatePath := filepath.Join(beadsDir, syncBaseFileName)
|
||||
|
||||
// Write to temp file first for atomicity
|
||||
tempPath := baseStatePath + ".tmp"
|
||||
file, err := os.Create(tempPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
encoder := json.NewEncoder(file)
|
||||
encoder.SetEscapeHTML(false)
|
||||
|
||||
for _, issue := range issues {
|
||||
if err := encoder.Encode(issue); err != nil {
|
||||
_ = file.Close() // Best-effort cleanup
|
||||
_ = os.Remove(tempPath)
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if err := file.Close(); err != nil {
|
||||
_ = os.Remove(tempPath) // Best-effort cleanup
|
||||
return err
|
||||
}
|
||||
|
||||
// Atomic rename
|
||||
return os.Rename(tempPath, baseStatePath)
|
||||
}
|
||||
File diff suppressed because it is too large
Load Diff
772
cmd/bd/sync_modes_test.go
Normal file
772
cmd/bd/sync_modes_test.go
Normal file
@@ -0,0 +1,772 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/steveyegge/beads/internal/beads"
|
||||
"github.com/steveyegge/beads/internal/git"
|
||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||
"github.com/steveyegge/beads/internal/syncbranch"
|
||||
"github.com/steveyegge/beads/internal/types"
|
||||
)
|
||||
|
||||
// TestSyncBranchModeWithPullFirst verifies that sync-branch mode config storage
|
||||
// and retrieval works correctly. The pull-first sync gates on this config.
|
||||
// This addresses Steve's review concern about --sync-branch regression.
|
||||
func TestSyncBranchModeWithPullFirst(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
tmpDir, cleanup := setupGitRepo(t)
|
||||
defer cleanup()
|
||||
|
||||
// Setup: Create beads directory with database
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
|
||||
// Create store and configure sync.branch
|
||||
testDBPath := filepath.Join(beadsDir, "beads.db")
|
||||
testStore, err := sqlite.New(ctx, testDBPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create store: %v", err)
|
||||
}
|
||||
defer testStore.Close()
|
||||
|
||||
// Set issue prefix (required)
|
||||
if err := testStore.SetConfig(ctx, "issue_prefix", "test"); err != nil {
|
||||
t.Fatalf("failed to set issue_prefix: %v", err)
|
||||
}
|
||||
|
||||
// Configure sync.branch
|
||||
if err := testStore.SetConfig(ctx, "sync.branch", "beads-metadata"); err != nil {
|
||||
t.Fatalf("failed to set sync.branch: %v", err)
|
||||
}
|
||||
|
||||
// Create the sync branch in git
|
||||
if err := exec.Command("git", "branch", "beads-metadata").Run(); err != nil {
|
||||
t.Fatalf("failed to create sync branch: %v", err)
|
||||
}
|
||||
|
||||
// Create issues.jsonl with a test issue
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
issueContent := `{"id":"test-1","title":"Test Issue","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-01T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPath, []byte(issueContent+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Test 1: Verify sync.branch config is stored and retrievable
|
||||
// This is what the pull-first sync checks at lines 181-189 in sync.go
|
||||
syncBranch, err := testStore.GetConfig(ctx, "sync.branch")
|
||||
if err != nil {
|
||||
t.Fatalf("failed to get sync.branch config: %v", err)
|
||||
}
|
||||
if syncBranch != "beads-metadata" {
|
||||
t.Errorf("sync.branch = %q, want %q", syncBranch, "beads-metadata")
|
||||
}
|
||||
t.Logf("✓ Sync-branch config correctly stored: %s", syncBranch)
|
||||
|
||||
// Test 2: Verify the git branch exists
|
||||
checkCmd := exec.Command("git", "show-ref", "--verify", "--quiet", "refs/heads/beads-metadata")
|
||||
if err := checkCmd.Run(); err != nil {
|
||||
t.Error("expected beads-metadata branch to exist")
|
||||
}
|
||||
t.Log("✓ Git sync branch exists")
|
||||
|
||||
// Test 3: Verify the DB config key can be read directly by syncbranch package
|
||||
// Note: syncbranch.Get() also checks config.yaml and env var, which may override
|
||||
// the DB config in the beads repo test environment. We verify DB storage works.
|
||||
dbValue, err := testStore.GetConfig(ctx, syncbranch.ConfigKey)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read %s from store: %v", syncbranch.ConfigKey, err)
|
||||
}
|
||||
if dbValue != "beads-metadata" {
|
||||
t.Errorf("store.GetConfig(%s) = %q, want %q", syncbranch.ConfigKey, dbValue, "beads-metadata")
|
||||
}
|
||||
t.Logf("✓ sync.branch config key correctly stored: %s", dbValue)
|
||||
|
||||
// Key assertion: The sync-branch detection mechanism works
|
||||
// When sync.branch is configured, doPullFirstSync gates on it (sync.go:181-189)
|
||||
// and the daemon handles sync-branch commits (daemon_sync_branch.go)
|
||||
}
|
||||
|
||||
// TestExternalBeadsDirWithPullFirst verifies that external BEADS_DIR mode
|
||||
// is correctly detected and the commit/pull functions work.
|
||||
// This addresses Steve's review concern about external beads dir regression.
|
||||
func TestExternalBeadsDirWithPullFirst(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// Setup: Create main project repo
|
||||
mainDir, cleanupMain := setupGitRepo(t)
|
||||
defer cleanupMain()
|
||||
|
||||
// Setup: Create separate external beads repo
|
||||
// Resolve symlinks to avoid macOS /var -> /private/var issues
|
||||
externalDir, err := filepath.EvalSymlinks(t.TempDir())
|
||||
if err != nil {
|
||||
t.Fatalf("eval symlinks failed: %v", err)
|
||||
}
|
||||
|
||||
// Initialize external repo
|
||||
if err := exec.Command("git", "-C", externalDir, "init", "--initial-branch=main").Run(); err != nil {
|
||||
t.Fatalf("git init (external) failed: %v", err)
|
||||
}
|
||||
_ = exec.Command("git", "-C", externalDir, "config", "user.email", "test@test.com").Run()
|
||||
_ = exec.Command("git", "-C", externalDir, "config", "user.name", "Test User").Run()
|
||||
|
||||
// Create initial commit in external repo
|
||||
if err := os.WriteFile(filepath.Join(externalDir, "README.md"), []byte("External beads repo"), 0644); err != nil {
|
||||
t.Fatalf("write README failed: %v", err)
|
||||
}
|
||||
_ = exec.Command("git", "-C", externalDir, "add", ".").Run()
|
||||
if err := exec.Command("git", "-C", externalDir, "commit", "-m", "initial").Run(); err != nil {
|
||||
t.Fatalf("external initial commit failed: %v", err)
|
||||
}
|
||||
|
||||
// Create .beads directory in external repo
|
||||
externalBeadsDir := filepath.Join(externalDir, ".beads")
|
||||
if err := os.MkdirAll(externalBeadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir external beads failed: %v", err)
|
||||
}
|
||||
|
||||
// Create issues.jsonl in external beads
|
||||
jsonlPath := filepath.Join(externalBeadsDir, "issues.jsonl")
|
||||
issueContent := `{"id":"ext-1","title":"External Issue","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-01T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPath, []byte(issueContent+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write external JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Commit initial beads files
|
||||
_ = exec.Command("git", "-C", externalDir, "add", ".beads").Run()
|
||||
_ = exec.Command("git", "-C", externalDir, "commit", "-m", "add beads").Run()
|
||||
|
||||
// Change back to main repo (simulating user's project)
|
||||
if err := os.Chdir(mainDir); err != nil {
|
||||
t.Fatalf("chdir to main failed: %v", err)
|
||||
}
|
||||
|
||||
// Test 1: isExternalBeadsDir should detect external repo
|
||||
if !isExternalBeadsDir(ctx, externalBeadsDir) {
|
||||
t.Error("isExternalBeadsDir should return true for external beads dir")
|
||||
}
|
||||
t.Log("✓ External beads dir correctly detected")
|
||||
|
||||
// Test 2: Verify the external beads functions exist and are callable
|
||||
// The actual commit test requires more complex setup due to path resolution
|
||||
// The key verification is that detection works (Test 1)
|
||||
// and the functions are present (verified by compilation)
|
||||
|
||||
// Test 3: pullFromExternalBeadsRepo should not error (no remote)
|
||||
// This tests the function handles no-remote gracefully
|
||||
err = pullFromExternalBeadsRepo(ctx, externalBeadsDir)
|
||||
if err != nil {
|
||||
t.Errorf("pullFromExternalBeadsRepo should handle no-remote: %v", err)
|
||||
}
|
||||
t.Log("✓ Pull from external beads repo handled no-remote correctly")
|
||||
|
||||
// Test 4: Verify getRepoRootFromPath works for external dir
|
||||
repoRoot, err := getRepoRootFromPath(ctx, externalBeadsDir)
|
||||
if err != nil {
|
||||
t.Fatalf("getRepoRootFromPath failed: %v", err)
|
||||
}
|
||||
// Should return the external repo root
|
||||
resolvedExternal, _ := filepath.EvalSymlinks(externalDir)
|
||||
if repoRoot != resolvedExternal {
|
||||
t.Errorf("getRepoRootFromPath = %q, want %q", repoRoot, resolvedExternal)
|
||||
}
|
||||
t.Logf("✓ getRepoRootFromPath correctly identifies external repo: %s", repoRoot)
|
||||
}
|
||||
|
||||
// TestMergeIssuesWithBaseState verifies the 3-way merge algorithm
|
||||
// that underpins pull-first sync works correctly with base state.
|
||||
// This is the core algorithm that prevents data loss (#911).
|
||||
func TestMergeIssuesWithBaseState(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
baseTime := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
|
||||
localTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)
|
||||
remoteTime := time.Date(2025, 1, 3, 0, 0, 0, 0, time.UTC)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
base []*beads.Issue
|
||||
local []*beads.Issue
|
||||
remote []*beads.Issue
|
||||
wantCount int
|
||||
wantConflicts int
|
||||
wantStrategy map[string]string
|
||||
wantTitles map[string]string // id -> expected title
|
||||
}{
|
||||
{
|
||||
name: "only remote changed",
|
||||
base: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
local: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
remote: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Remote Edit", UpdatedAt: remoteTime},
|
||||
},
|
||||
wantCount: 1,
|
||||
wantConflicts: 0,
|
||||
wantStrategy: map[string]string{"bd-1": StrategyRemote},
|
||||
wantTitles: map[string]string{"bd-1": "Remote Edit"},
|
||||
},
|
||||
{
|
||||
name: "only local changed",
|
||||
base: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
local: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Local Edit", UpdatedAt: localTime},
|
||||
},
|
||||
remote: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
wantCount: 1,
|
||||
wantConflicts: 0,
|
||||
wantStrategy: map[string]string{"bd-1": StrategyLocal},
|
||||
wantTitles: map[string]string{"bd-1": "Local Edit"},
|
||||
},
|
||||
{
|
||||
name: "true conflict - remote wins LWW",
|
||||
base: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
local: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Local Edit", UpdatedAt: localTime},
|
||||
},
|
||||
remote: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Remote Edit", UpdatedAt: remoteTime},
|
||||
},
|
||||
wantCount: 1,
|
||||
wantConflicts: 1,
|
||||
wantStrategy: map[string]string{"bd-1": StrategyMerged},
|
||||
wantTitles: map[string]string{"bd-1": "Remote Edit"}, // Remote wins (later timestamp)
|
||||
},
|
||||
{
|
||||
name: "new issue from remote",
|
||||
base: []*beads.Issue{},
|
||||
local: []*beads.Issue{},
|
||||
remote: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "New Remote Issue", UpdatedAt: remoteTime},
|
||||
},
|
||||
wantCount: 1,
|
||||
wantConflicts: 0,
|
||||
wantStrategy: map[string]string{"bd-1": StrategyRemote},
|
||||
wantTitles: map[string]string{"bd-1": "New Remote Issue"},
|
||||
},
|
||||
{
|
||||
name: "new issue from local",
|
||||
base: []*beads.Issue{},
|
||||
local: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "New Local Issue", UpdatedAt: localTime},
|
||||
},
|
||||
remote: []*beads.Issue{},
|
||||
wantCount: 1,
|
||||
wantConflicts: 0,
|
||||
wantStrategy: map[string]string{"bd-1": StrategyLocal},
|
||||
wantTitles: map[string]string{"bd-1": "New Local Issue"},
|
||||
},
|
||||
{
|
||||
name: "both made identical change",
|
||||
base: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Original", UpdatedAt: baseTime},
|
||||
},
|
||||
local: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Same Edit", UpdatedAt: localTime},
|
||||
},
|
||||
remote: []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Same Edit", UpdatedAt: localTime},
|
||||
},
|
||||
wantCount: 1,
|
||||
wantConflicts: 0,
|
||||
wantStrategy: map[string]string{"bd-1": StrategySame},
|
||||
wantTitles: map[string]string{"bd-1": "Same Edit"},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
result := MergeIssues(tt.base, tt.local, tt.remote)
|
||||
|
||||
if len(result.Merged) != tt.wantCount {
|
||||
t.Errorf("got %d merged issues, want %d", len(result.Merged), tt.wantCount)
|
||||
}
|
||||
|
||||
if result.Conflicts != tt.wantConflicts {
|
||||
t.Errorf("got %d conflicts, want %d", result.Conflicts, tt.wantConflicts)
|
||||
}
|
||||
|
||||
for id, wantStrategy := range tt.wantStrategy {
|
||||
if result.Strategy[id] != wantStrategy {
|
||||
t.Errorf("strategy[%s] = %q, want %q", id, result.Strategy[id], wantStrategy)
|
||||
}
|
||||
}
|
||||
|
||||
for _, issue := range result.Merged {
|
||||
if wantTitle, ok := tt.wantTitles[issue.ID]; ok {
|
||||
if issue.Title != wantTitle {
|
||||
t.Errorf("title[%s] = %q, want %q", issue.ID, issue.Title, wantTitle)
|
||||
}
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TestUpgradeFromOldSync verifies that existing projects safely upgrade to pull-first.
|
||||
// When sync_base.jsonl doesn't exist (first sync after upgrade), the merge should:
|
||||
// 1. Keep issues that only exist locally
|
||||
// 2. Keep issues that only exist remotely
|
||||
// 3. Merge issues that exist in both (using LWW for scalars, union for sets)
|
||||
// This is critical for production safety.
|
||||
func TestUpgradeFromOldSync(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
localTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)
|
||||
remoteTime := time.Date(2025, 1, 3, 0, 0, 0, 0, time.UTC)
|
||||
|
||||
// Simulate upgrade scenario: base=nil (no sync_base.jsonl)
|
||||
// Local has 2 issues, remote has 2 issues (1 overlap)
|
||||
local := []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Shared Issue Local", Labels: []string{"local-label"}, UpdatedAt: localTime},
|
||||
{ID: "bd-2", Title: "Local Only Issue", UpdatedAt: localTime},
|
||||
}
|
||||
remote := []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Shared Issue Remote", Labels: []string{"remote-label"}, UpdatedAt: remoteTime},
|
||||
{ID: "bd-3", Title: "Remote Only Issue", UpdatedAt: remoteTime},
|
||||
}
|
||||
|
||||
// Key: base is nil (simulating upgrade from old sync)
|
||||
result := MergeIssues(nil, local, remote)
|
||||
|
||||
// Should have 3 issues total
|
||||
if len(result.Merged) != 3 {
|
||||
t.Fatalf("expected 3 merged issues, got %d", len(result.Merged))
|
||||
}
|
||||
|
||||
// Build map for easier assertions
|
||||
byID := make(map[string]*beads.Issue)
|
||||
for _, issue := range result.Merged {
|
||||
byID[issue.ID] = issue
|
||||
}
|
||||
|
||||
// bd-1: Shared issue should be merged (remote wins LWW, labels union)
|
||||
if issue, ok := byID["bd-1"]; ok {
|
||||
// Remote wins LWW (later timestamp)
|
||||
if issue.Title != "Shared Issue Remote" {
|
||||
t.Errorf("bd-1 title = %q, want 'Shared Issue Remote' (LWW)", issue.Title)
|
||||
}
|
||||
// Labels should be union
|
||||
if len(issue.Labels) != 2 {
|
||||
t.Errorf("bd-1 labels = %v, want union of local and remote labels", issue.Labels)
|
||||
}
|
||||
if result.Strategy["bd-1"] != StrategyMerged {
|
||||
t.Errorf("bd-1 strategy = %q, want %q", result.Strategy["bd-1"], StrategyMerged)
|
||||
}
|
||||
} else {
|
||||
t.Error("bd-1 should exist in merged result")
|
||||
}
|
||||
|
||||
// bd-2: Local only should be kept
|
||||
if issue, ok := byID["bd-2"]; ok {
|
||||
if issue.Title != "Local Only Issue" {
|
||||
t.Errorf("bd-2 title = %q, want 'Local Only Issue'", issue.Title)
|
||||
}
|
||||
if result.Strategy["bd-2"] != StrategyLocal {
|
||||
t.Errorf("bd-2 strategy = %q, want %q", result.Strategy["bd-2"], StrategyLocal)
|
||||
}
|
||||
} else {
|
||||
t.Error("bd-2 should exist in merged result (local only)")
|
||||
}
|
||||
|
||||
// bd-3: Remote only should be kept
|
||||
if issue, ok := byID["bd-3"]; ok {
|
||||
if issue.Title != "Remote Only Issue" {
|
||||
t.Errorf("bd-3 title = %q, want 'Remote Only Issue'", issue.Title)
|
||||
}
|
||||
if result.Strategy["bd-3"] != StrategyRemote {
|
||||
t.Errorf("bd-3 strategy = %q, want %q", result.Strategy["bd-3"], StrategyRemote)
|
||||
}
|
||||
} else {
|
||||
t.Error("bd-3 should exist in merged result (remote only)")
|
||||
}
|
||||
|
||||
t.Log("✓ Upgrade from old sync safely merges all issues")
|
||||
}
|
||||
|
||||
// TestLabelUnionMerge verifies that labels use union merge (no data loss).
|
||||
// This is the field-level resolution Steve asked about.
|
||||
func TestLabelUnionMerge(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
baseTime := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
|
||||
localTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)
|
||||
remoteTime := time.Date(2025, 1, 3, 0, 0, 0, 0, time.UTC)
|
||||
|
||||
base := []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Issue", Labels: []string{"bug"}, UpdatedAt: baseTime},
|
||||
}
|
||||
local := []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Issue", Labels: []string{"bug", "local-label"}, UpdatedAt: localTime},
|
||||
}
|
||||
remote := []*beads.Issue{
|
||||
{ID: "bd-1", Title: "Issue", Labels: []string{"bug", "remote-label"}, UpdatedAt: remoteTime},
|
||||
}
|
||||
|
||||
result := MergeIssues(base, local, remote)
|
||||
|
||||
if len(result.Merged) != 1 {
|
||||
t.Fatalf("expected 1 merged issue, got %d", len(result.Merged))
|
||||
}
|
||||
|
||||
// Labels should be union of both: bug, local-label, remote-label
|
||||
labels := result.Merged[0].Labels
|
||||
expectedLabels := map[string]bool{"bug": true, "local-label": true, "remote-label": true}
|
||||
|
||||
if len(labels) != 3 {
|
||||
t.Errorf("expected 3 labels, got %d: %v", len(labels), labels)
|
||||
}
|
||||
|
||||
for _, label := range labels {
|
||||
if !expectedLabels[label] {
|
||||
t.Errorf("unexpected label: %s", label)
|
||||
}
|
||||
}
|
||||
|
||||
t.Logf("✓ Labels correctly union-merged: %v", labels)
|
||||
}
|
||||
|
||||
// setupBareRemoteWithClones creates a bare repo (simulating GitHub) and two clones
|
||||
// for multi-machine E2E testing. Each clone has its own .beads directory for isolation.
|
||||
//
|
||||
// Returns:
|
||||
// - remoteDir: path to bare repo (the "remote")
|
||||
// - machineA: path to first clone
|
||||
// - machineB: path to second clone
|
||||
// - cleanup: function to call in defer
|
||||
func setupBareRemoteWithClones(t *testing.T) (remoteDir, machineA, machineB string, cleanup func()) {
|
||||
t.Helper()
|
||||
|
||||
// Create bare repo (acts as "GitHub")
|
||||
remoteDir = t.TempDir()
|
||||
// Resolve symlinks to avoid macOS /var -> /private/var issues
|
||||
remoteDir, _ = filepath.EvalSymlinks(remoteDir)
|
||||
cmd := exec.Command("git", "init", "--bare", "-b", "main")
|
||||
cmd.Dir = remoteDir
|
||||
if err := cmd.Run(); err != nil {
|
||||
t.Fatalf("failed to init bare repo: %v", err)
|
||||
}
|
||||
|
||||
// Clone for Machine A
|
||||
machineA = t.TempDir()
|
||||
machineA, _ = filepath.EvalSymlinks(machineA)
|
||||
if err := exec.Command("git", "clone", remoteDir, machineA).Run(); err != nil {
|
||||
t.Fatalf("failed to clone for machineA: %v", err)
|
||||
}
|
||||
// Configure git user in Machine A
|
||||
_ = exec.Command("git", "-C", machineA, "config", "user.email", "machineA@test.com").Run()
|
||||
_ = exec.Command("git", "-C", machineA, "config", "user.name", "Machine A").Run()
|
||||
|
||||
// Clone for Machine B
|
||||
machineB = t.TempDir()
|
||||
machineB, _ = filepath.EvalSymlinks(machineB)
|
||||
if err := exec.Command("git", "clone", remoteDir, machineB).Run(); err != nil {
|
||||
t.Fatalf("failed to clone for machineB: %v", err)
|
||||
}
|
||||
// Configure git user in Machine B
|
||||
_ = exec.Command("git", "-C", machineB, "config", "user.email", "machineB@test.com").Run()
|
||||
_ = exec.Command("git", "-C", machineB, "config", "user.name", "Machine B").Run()
|
||||
|
||||
// Initial commit from Machine A (bare repos need at least one commit)
|
||||
readmePath := filepath.Join(machineA, "README.md")
|
||||
if err := os.WriteFile(readmePath, []byte("# Test Repo\n"), 0644); err != nil {
|
||||
t.Fatalf("failed to write README: %v", err)
|
||||
}
|
||||
_ = exec.Command("git", "-C", machineA, "add", ".").Run()
|
||||
if err := exec.Command("git", "-C", machineA, "commit", "-m", "initial").Run(); err != nil {
|
||||
t.Fatalf("failed to create initial commit: %v", err)
|
||||
}
|
||||
if err := exec.Command("git", "-C", machineA, "push", "-u", "origin", "main").Run(); err != nil {
|
||||
t.Fatalf("failed to push initial commit: %v", err)
|
||||
}
|
||||
|
||||
// Machine B fetches and checks out main
|
||||
_ = exec.Command("git", "-C", machineB, "fetch", "origin").Run()
|
||||
_ = exec.Command("git", "-C", machineB, "checkout", "main").Run()
|
||||
|
||||
cleanup = func() {
|
||||
git.ResetCaches() // Prevent cache pollution between tests
|
||||
}
|
||||
|
||||
return remoteDir, machineA, machineB, cleanup
|
||||
}
|
||||
|
||||
// withBeadsDir runs a function with BEADS_DIR set to the specified directory's .beads subdirectory.
|
||||
// This provides database isolation for multi-machine tests.
|
||||
func withBeadsDir(t *testing.T, dir string, fn func()) {
|
||||
t.Helper()
|
||||
|
||||
origBeadsDir := os.Getenv("BEADS_DIR")
|
||||
beadsDir := filepath.Join(dir, ".beads")
|
||||
|
||||
// Create .beads directory if it doesn't exist
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("failed to create .beads dir: %v", err)
|
||||
}
|
||||
|
||||
os.Setenv("BEADS_DIR", beadsDir)
|
||||
defer func() {
|
||||
if origBeadsDir != "" {
|
||||
os.Setenv("BEADS_DIR", origBeadsDir)
|
||||
} else {
|
||||
os.Unsetenv("BEADS_DIR")
|
||||
}
|
||||
}()
|
||||
|
||||
fn()
|
||||
}
|
||||
|
||||
// TestSyncBranchE2E tests the full sync-branch flow with concurrent changes from
|
||||
// two machines using a real bare repo. This is an end-to-end regression test for PR#918.
|
||||
//
|
||||
// Flow:
|
||||
// 1. Machine A creates bd-1, commits to sync branch, pushes to bare remote
|
||||
// 2. Machine B creates bd-2, commits to sync branch, pushes to bare remote
|
||||
// 3. Machine A pulls from sync branch - should merge both issues
|
||||
// 4. Verify both issues present after merge
|
||||
func TestSyncBranchE2E(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// Setup: Create bare remote with two clones
|
||||
_, machineA, machineB, cleanup := setupBareRemoteWithClones(t)
|
||||
defer cleanup()
|
||||
|
||||
syncBranch := "beads-sync"
|
||||
|
||||
// Machine A: Create .beads directory and bd-1
|
||||
beadsDirA := filepath.Join(machineA, ".beads")
|
||||
if err := os.MkdirAll(beadsDirA, 0755); err != nil {
|
||||
t.Fatalf("failed to create .beads dir for A: %v", err)
|
||||
}
|
||||
jsonlPathA := filepath.Join(beadsDirA, "issues.jsonl")
|
||||
issue1 := `{"id":"bd-1","title":"Issue from Machine A","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-01T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPathA, []byte(issue1+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write JSONL failed for A: %v", err)
|
||||
}
|
||||
|
||||
// Machine A: Commit to sync branch using the worktree-based API (push=true)
|
||||
withBeadsDir(t, machineA, func() {
|
||||
commitResult, err := syncbranch.CommitToSyncBranch(ctx, machineA, syncBranch, jsonlPathA, true)
|
||||
if err != nil {
|
||||
t.Fatalf("CommitToSyncBranch failed for A: %v", err)
|
||||
}
|
||||
if !commitResult.Committed {
|
||||
t.Fatal("expected commit to succeed for Machine A's issue")
|
||||
}
|
||||
t.Log("Machine A committed and pushed bd-1 to sync branch")
|
||||
})
|
||||
|
||||
// Machine B: Create .beads directory and bd-2
|
||||
beadsDirB := filepath.Join(machineB, ".beads")
|
||||
if err := os.MkdirAll(beadsDirB, 0755); err != nil {
|
||||
t.Fatalf("failed to create .beads dir for B: %v", err)
|
||||
}
|
||||
jsonlPathB := filepath.Join(beadsDirB, "issues.jsonl")
|
||||
issue2 := `{"id":"bd-2","title":"Issue from Machine B","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-02T00:00:00Z","updated_at":"2025-01-02T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPathB, []byte(issue2+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write JSONL failed for B: %v", err)
|
||||
}
|
||||
|
||||
// Machine B: Pull first to get A's changes, then commit and push
|
||||
withBeadsDir(t, machineB, func() {
|
||||
// Pull from remote first (gets Machine A's bd-1)
|
||||
pullResult, err := syncbranch.PullFromSyncBranch(ctx, machineB, syncBranch, jsonlPathB, false)
|
||||
if err != nil {
|
||||
t.Logf("Initial pull for B returned error (may be expected): %v", err)
|
||||
}
|
||||
if pullResult != nil && pullResult.Pulled {
|
||||
t.Log("Machine B pulled existing sync branch content")
|
||||
}
|
||||
|
||||
// Re-read and append bd-2 to maintain bd-1 from pull
|
||||
existingContent, _ := os.ReadFile(jsonlPathB)
|
||||
if !strings.Contains(string(existingContent), "bd-2") {
|
||||
// Append bd-2 if not already present
|
||||
if len(existingContent) > 0 && !strings.HasSuffix(string(existingContent), "\n") {
|
||||
existingContent = append(existingContent, '\n')
|
||||
}
|
||||
newContent := string(existingContent) + issue2 + "\n"
|
||||
if err := os.WriteFile(jsonlPathB, []byte(newContent), 0644); err != nil {
|
||||
t.Fatalf("failed to append bd-2: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// Commit and push bd-2
|
||||
commitResult, err := syncbranch.CommitToSyncBranch(ctx, machineB, syncBranch, jsonlPathB, true)
|
||||
if err != nil {
|
||||
t.Fatalf("CommitToSyncBranch failed for B: %v", err)
|
||||
}
|
||||
if !commitResult.Committed {
|
||||
t.Log("Machine B had no new changes to commit (bd-2 may already be in sync)")
|
||||
} else {
|
||||
t.Log("Machine B committed and pushed bd-2 to sync branch")
|
||||
}
|
||||
})
|
||||
|
||||
// Machine A: Pull from sync branch - should merge both issues
|
||||
withBeadsDir(t, machineA, func() {
|
||||
pullResult, err := syncbranch.PullFromSyncBranch(ctx, machineA, syncBranch, jsonlPathA, false)
|
||||
if err != nil {
|
||||
t.Logf("PullFromSyncBranch for A returned error (may be expected): %v", err)
|
||||
}
|
||||
if pullResult != nil {
|
||||
t.Logf("Pull result for A: Pulled=%v, Merged=%v, FastForwarded=%v",
|
||||
pullResult.Pulled, pullResult.Merged, pullResult.FastForwarded)
|
||||
}
|
||||
})
|
||||
|
||||
// Verify: Both issues should be present in Machine A's JSONL after merge
|
||||
content, err := os.ReadFile(jsonlPathA)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read merged JSONL: %v", err)
|
||||
}
|
||||
|
||||
contentStr := string(content)
|
||||
hasIssue1 := strings.Contains(contentStr, "bd-1") || strings.Contains(contentStr, "Machine A")
|
||||
hasIssue2 := strings.Contains(contentStr, "bd-2") || strings.Contains(contentStr, "Machine B")
|
||||
|
||||
if hasIssue1 {
|
||||
t.Log("Issue bd-1 from Machine A preserved")
|
||||
} else {
|
||||
t.Error("FAIL: bd-1 from Machine A missing after merge")
|
||||
}
|
||||
|
||||
if hasIssue2 {
|
||||
t.Log("Issue bd-2 from Machine B merged")
|
||||
} else {
|
||||
t.Error("FAIL: bd-2 from Machine B missing after merge")
|
||||
}
|
||||
|
||||
if hasIssue1 && hasIssue2 {
|
||||
t.Log("Sync-branch E2E test PASSED: both issues present after merge")
|
||||
}
|
||||
}
|
||||
|
||||
// TestExportOnlySync tests the --no-pull mode (export-only sync).
|
||||
// This mode skips pulling from remote and only exports local changes.
|
||||
//
|
||||
// Use case: "I just want to push my local changes, don't merge anything"
|
||||
//
|
||||
// Flow:
|
||||
// 1. Create local issue in database
|
||||
// 2. Run export-only sync (doExportOnlySync)
|
||||
// 3. Verify issue is exported to JSONL
|
||||
// 4. Verify changes are committed
|
||||
func TestExportOnlySync(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
tmpDir, cleanup := setupGitRepo(t)
|
||||
defer cleanup()
|
||||
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
|
||||
// Setup: Create .beads directory
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("failed to create .beads dir: %v", err)
|
||||
}
|
||||
|
||||
// Create a database with a test issue
|
||||
dbPath := filepath.Join(beadsDir, "beads.db")
|
||||
testStore, err := sqlite.New(ctx, dbPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create store: %v", err)
|
||||
}
|
||||
|
||||
// Set issue prefix (required for export)
|
||||
if err := testStore.SetConfig(ctx, "issue_prefix", "export-test"); err != nil {
|
||||
t.Fatalf("failed to set issue_prefix: %v", err)
|
||||
}
|
||||
|
||||
// Create a test issue in the database
|
||||
testIssue := &types.Issue{
|
||||
ID: "export-test-1",
|
||||
Title: "Export Only Test Issue",
|
||||
Status: "open",
|
||||
IssueType: "task",
|
||||
Priority: 2,
|
||||
CreatedAt: time.Now(),
|
||||
UpdatedAt: time.Now(),
|
||||
}
|
||||
if err := testStore.CreateIssue(ctx, testIssue, "test"); err != nil {
|
||||
t.Fatalf("failed to create test issue: %v", err)
|
||||
}
|
||||
testStore.Close()
|
||||
t.Log("✓ Created test issue in database")
|
||||
|
||||
// Initialize the global store for doExportOnlySync
|
||||
// This simulates what `bd sync --no-pull` does
|
||||
store, err = sqlite.New(ctx, dbPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to open store: %v", err)
|
||||
}
|
||||
defer func() {
|
||||
store.Close()
|
||||
store = nil
|
||||
}()
|
||||
|
||||
// Run export-only sync (--no-pull mode)
|
||||
// noPush=true to avoid needing a real remote in tests
|
||||
if err := doExportOnlySync(ctx, jsonlPath, true, "bd sync: export test"); err != nil {
|
||||
t.Fatalf("doExportOnlySync failed: %v", err)
|
||||
}
|
||||
t.Log("✓ Export-only sync completed")
|
||||
|
||||
// Verify: JSONL file should exist with our issue
|
||||
content, err := os.ReadFile(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read JSONL: %v", err)
|
||||
}
|
||||
|
||||
contentStr := string(content)
|
||||
if !strings.Contains(contentStr, "export-test-1") {
|
||||
t.Errorf("JSONL should contain issue ID 'export-test-1', got: %s", contentStr)
|
||||
}
|
||||
if !strings.Contains(contentStr, "Export Only Test Issue") {
|
||||
t.Errorf("JSONL should contain issue title, got: %s", contentStr)
|
||||
}
|
||||
t.Log("✓ Issue correctly exported to JSONL")
|
||||
|
||||
// Verify: Changes should be committed
|
||||
output, err := exec.Command("git", "log", "-1", "--pretty=format:%s").Output()
|
||||
if err != nil {
|
||||
t.Fatalf("git log failed: %v", err)
|
||||
}
|
||||
commitMsg := string(output)
|
||||
if !strings.Contains(commitMsg, "bd sync") {
|
||||
t.Errorf("expected commit message to contain 'bd sync', got: %s", commitMsg)
|
||||
}
|
||||
t.Log("✓ Changes committed with correct message")
|
||||
|
||||
// Verify: issues.jsonl should be tracked and committed (no modifications)
|
||||
// Note: Database files (.db, .db-wal, .db-shm) and .sync.lock remain untracked
|
||||
// as expected - only JSONL is committed to git
|
||||
status, err := exec.Command("git", "status", "--porcelain", jsonlPath).Output()
|
||||
if err != nil {
|
||||
t.Fatalf("git status failed: %v", err)
|
||||
}
|
||||
if len(status) > 0 {
|
||||
t.Errorf("expected issues.jsonl to be committed, got: %s", status)
|
||||
}
|
||||
t.Log("✓ issues.jsonl is committed after export-only sync")
|
||||
}
|
||||
@@ -2,8 +2,6 @@ package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
@@ -11,6 +9,7 @@ import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/gofrs/flock"
|
||||
"github.com/steveyegge/beads/internal/storage/sqlite"
|
||||
"github.com/steveyegge/beads/internal/syncbranch"
|
||||
"github.com/steveyegge/beads/internal/types"
|
||||
@@ -437,167 +436,8 @@ func TestHasJSONLConflict_MultipleConflicts(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
// TestZFCSkipsExportAfterImport tests the bd-l0r fix: after importing JSONL due to
|
||||
// stale DB detection, sync should skip export to avoid overwriting the JSONL source of truth.
|
||||
func TestZFCSkipsExportAfterImport(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
tmpDir := t.TempDir()
|
||||
t.Chdir(tmpDir)
|
||||
|
||||
// Setup beads directory with JSONL
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
os.MkdirAll(beadsDir, 0755)
|
||||
jsonlPath := filepath.Join(beadsDir, "beads.jsonl")
|
||||
|
||||
// Create JSONL with 10 issues (simulating pulled state after cleanup)
|
||||
var jsonlLines []string
|
||||
for i := 1; i <= 10; i++ {
|
||||
line := fmt.Sprintf(`{"id":"bd-%d","title":"JSONL Issue %d","status":"open","issue_type":"task","priority":2,"created_at":"2025-11-24T00:00:00Z","updated_at":"2025-11-24T00:00:00Z"}`, i, i)
|
||||
jsonlLines = append(jsonlLines, line)
|
||||
}
|
||||
os.WriteFile(jsonlPath, []byte(strings.Join(jsonlLines, "\n")+"\n"), 0644)
|
||||
|
||||
// Create SQLite store with 100 stale issues (10x the JSONL count = 900% divergence)
|
||||
testDBPath := filepath.Join(beadsDir, "beads.db")
|
||||
testStore, err := sqlite.New(ctx, testDBPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create test store: %v", err)
|
||||
}
|
||||
defer testStore.Close()
|
||||
|
||||
// Set issue_prefix to prevent "database not initialized" errors
|
||||
if err := testStore.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("failed to set issue_prefix: %v", err)
|
||||
}
|
||||
|
||||
// Populate DB with 100 issues (stale, 90 closed)
|
||||
for i := 1; i <= 100; i++ {
|
||||
status := types.StatusOpen
|
||||
var closedAt *time.Time
|
||||
if i > 10 { // First 10 open, rest closed
|
||||
status = types.StatusClosed
|
||||
now := time.Now()
|
||||
closedAt = &now
|
||||
}
|
||||
issue := &types.Issue{
|
||||
Title: fmt.Sprintf("Old Issue %d", i),
|
||||
Status: status,
|
||||
ClosedAt: closedAt,
|
||||
IssueType: types.TypeTask,
|
||||
Priority: 2,
|
||||
}
|
||||
if err := testStore.CreateIssue(ctx, issue, "test-user"); err != nil {
|
||||
t.Fatalf("failed to create issue %d: %v", i, err)
|
||||
}
|
||||
}
|
||||
|
||||
// Verify divergence: (100 - 10) / 10 = 900% > 50% threshold
|
||||
dbCount, _ := countDBIssuesFast(ctx, testStore)
|
||||
jsonlCount, _ := countIssuesInJSONL(jsonlPath)
|
||||
divergence := float64(dbCount-jsonlCount) / float64(jsonlCount)
|
||||
|
||||
if dbCount != 100 {
|
||||
t.Fatalf("DB setup failed: expected 100 issues, got %d", dbCount)
|
||||
}
|
||||
if jsonlCount != 10 {
|
||||
t.Fatalf("JSONL setup failed: expected 10 issues, got %d", jsonlCount)
|
||||
}
|
||||
if divergence <= 0.5 {
|
||||
t.Fatalf("Divergence too low: %.2f%% (expected >50%%)", divergence*100)
|
||||
}
|
||||
|
||||
// Set global store for the test
|
||||
oldStore := store
|
||||
storeMutex.Lock()
|
||||
oldStoreActive := storeActive
|
||||
store = testStore
|
||||
storeActive = true
|
||||
storeMutex.Unlock()
|
||||
defer func() {
|
||||
storeMutex.Lock()
|
||||
store = oldStore
|
||||
storeActive = oldStoreActive
|
||||
storeMutex.Unlock()
|
||||
}()
|
||||
|
||||
// Save JSONL content hash before running sync logic
|
||||
beforeHash, _ := computeJSONLHash(jsonlPath)
|
||||
|
||||
// Simulate the ZFC check and export step from sync.go lines 126-186
|
||||
// This is the code path that should detect divergence and skip export
|
||||
skipExport := false
|
||||
|
||||
// ZFC safety check
|
||||
if err := ensureStoreActive(); err == nil && store != nil {
|
||||
dbCount, err := countDBIssuesFast(ctx, store)
|
||||
if err == nil {
|
||||
jsonlCount, err := countIssuesInJSONL(jsonlPath)
|
||||
if err == nil && jsonlCount > 0 && dbCount > jsonlCount {
|
||||
divergence := float64(dbCount-jsonlCount) / float64(jsonlCount)
|
||||
if divergence > 0.5 {
|
||||
// Parse JSONL directly (avoid subprocess spawning)
|
||||
jsonlData, err := os.ReadFile(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to read JSONL: %v", err)
|
||||
}
|
||||
|
||||
// Parse issues from JSONL
|
||||
var issues []*types.Issue
|
||||
for _, line := range strings.Split(string(jsonlData), "\n") {
|
||||
if line == "" {
|
||||
continue
|
||||
}
|
||||
var issue types.Issue
|
||||
if err := json.Unmarshal([]byte(line), &issue); err != nil {
|
||||
t.Fatalf("failed to parse JSONL line: %v", err)
|
||||
}
|
||||
issue.SetDefaults()
|
||||
issues = append(issues, &issue)
|
||||
}
|
||||
|
||||
// Import using direct import logic (no subprocess)
|
||||
opts := ImportOptions{
|
||||
DryRun: false,
|
||||
SkipUpdate: false,
|
||||
}
|
||||
_, err = importIssuesCore(ctx, testDBPath, testStore, issues, opts)
|
||||
if err != nil {
|
||||
t.Fatalf("ZFC import failed: %v", err)
|
||||
}
|
||||
skipExport = true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Verify skipExport was set
|
||||
if !skipExport {
|
||||
t.Error("Expected skipExport=true after ZFC import, but got false")
|
||||
}
|
||||
|
||||
// Verify DB imported the JSONL issues
|
||||
// Note: import is additive - it adds/updates but doesn't delete.
|
||||
// The DB had 100 issues with auto-generated IDs, JSONL has 10 with explicit IDs (bd-1 to bd-10).
|
||||
// Since there's no overlap, we expect 110 issues total.
|
||||
afterDBCount, _ := countDBIssuesFast(ctx, testStore)
|
||||
if afterDBCount != 110 {
|
||||
t.Errorf("After ZFC import, DB should have 110 issues (100 original + 10 from JSONL), got %d", afterDBCount)
|
||||
}
|
||||
|
||||
// Verify JSONL was NOT modified (no export happened)
|
||||
afterHash, _ := computeJSONLHash(jsonlPath)
|
||||
if beforeHash != afterHash {
|
||||
t.Error("JSONL content changed after ZFC import (export should have been skipped)")
|
||||
}
|
||||
|
||||
// Verify issue count in JSONL is still 10
|
||||
finalJSONLCount, _ := countIssuesInJSONL(jsonlPath)
|
||||
if finalJSONLCount != 10 {
|
||||
t.Errorf("JSONL should still have 10 issues, got %d", finalJSONLCount)
|
||||
}
|
||||
|
||||
t.Logf("✓ ZFC fix verified: divergence detected, import ran, export skipped, JSONL unchanged")
|
||||
}
|
||||
// Note: TestZFCSkipsExportAfterImport was removed as ZFC checks are no longer part of the
|
||||
// legacy sync flow. Use --pull-first for structural staleness handling via 3-way merge.
|
||||
|
||||
// TestHashBasedStalenessDetection_bd_f2f tests the bd-f2f fix:
|
||||
// When JSONL content differs from stored hash (e.g., remote changed status),
|
||||
@@ -894,3 +734,207 @@ func TestIsExternalBeadsDir(t *testing.T) {
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// TestConcurrentEdit tests the pull-first sync flow with concurrent edits.
|
||||
// This validates the 3-way merge logic for the pull-first sync refactor (#911).
|
||||
//
|
||||
// Scenario:
|
||||
// - Base state exists (issue bd-1 at version 2025-01-01)
|
||||
// - Local modifies issue (version 2025-01-02)
|
||||
// - Remote also modifies issue (version 2025-01-03)
|
||||
// - 3-way merge detects conflict and resolves using LWW (remote wins)
|
||||
func TestConcurrentEdit(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
tmpDir := t.TempDir()
|
||||
t.Chdir(tmpDir)
|
||||
|
||||
// Setup: Initialize git repo
|
||||
if err := exec.Command("git", "init", "--initial-branch=main").Run(); err != nil {
|
||||
t.Fatalf("git init failed: %v", err)
|
||||
}
|
||||
_ = exec.Command("git", "config", "user.email", "test@test.com").Run()
|
||||
_ = exec.Command("git", "config", "user.name", "Test User").Run()
|
||||
|
||||
// Setup: Create beads directory with JSONL (base state)
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
|
||||
// Base state: single issue at 2025-01-01
|
||||
baseTime := time.Date(2025, 1, 1, 0, 0, 0, 0, time.UTC)
|
||||
baseIssue := `{"id":"bd-1","title":"Original Title","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-01T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPath, []byte(baseIssue+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Initial commit
|
||||
_ = exec.Command("git", "add", ".").Run()
|
||||
if err := exec.Command("git", "commit", "-m", "initial").Run(); err != nil {
|
||||
t.Fatalf("initial commit failed: %v", err)
|
||||
}
|
||||
|
||||
// Create database and import base state
|
||||
testDBPath := filepath.Join(beadsDir, "beads.db")
|
||||
testStore, err := sqlite.New(ctx, testDBPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create test store: %v", err)
|
||||
}
|
||||
defer testStore.Close()
|
||||
|
||||
// Set issue_prefix
|
||||
if err := testStore.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("failed to set issue_prefix: %v", err)
|
||||
}
|
||||
|
||||
// Load base state for 3-way merge
|
||||
baseIssues, err := loadIssuesFromJSONL(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("loadIssuesFromJSONL (base) failed: %v", err)
|
||||
}
|
||||
|
||||
// Create local issue (modified at 2025-01-02)
|
||||
localTime := time.Date(2025, 1, 2, 0, 0, 0, 0, time.UTC)
|
||||
localIssueObj := &types.Issue{
|
||||
ID: "bd-1",
|
||||
Title: "Local Edit",
|
||||
Status: types.StatusOpen,
|
||||
IssueType: types.TypeTask,
|
||||
Priority: 2,
|
||||
CreatedAt: baseTime,
|
||||
UpdatedAt: localTime,
|
||||
}
|
||||
localIssues := []*types.Issue{localIssueObj}
|
||||
|
||||
// Simulate "remote" edit: change title in JSONL (modified at 2025-01-03 - later)
|
||||
remoteIssue := `{"id":"bd-1","title":"Remote Edit","status":"open","issue_type":"task","priority":2,"created_at":"2025-01-01T00:00:00Z","updated_at":"2025-01-03T00:00:00Z"}`
|
||||
if err := os.WriteFile(jsonlPath, []byte(remoteIssue+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write remote JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
remoteIssues, err := loadIssuesFromJSONL(jsonlPath)
|
||||
if err != nil {
|
||||
t.Fatalf("loadIssuesFromJSONL (remote) failed: %v", err)
|
||||
}
|
||||
|
||||
if len(remoteIssues) != 1 {
|
||||
t.Fatalf("expected 1 remote issue, got %d", len(remoteIssues))
|
||||
}
|
||||
|
||||
// 3-way merge with base state
|
||||
mergeResult := MergeIssues(baseIssues, localIssues, remoteIssues)
|
||||
|
||||
// Verify merge result
|
||||
if len(mergeResult.Merged) != 1 {
|
||||
t.Fatalf("expected 1 merged issue, got %d", len(mergeResult.Merged))
|
||||
}
|
||||
|
||||
// LWW: Remote wins because it has later updated_at (2025-01-03 > 2025-01-02)
|
||||
if mergeResult.Merged[0].Title != "Remote Edit" {
|
||||
t.Errorf("expected merged title 'Remote Edit' (remote wins LWW), got '%s'", mergeResult.Merged[0].Title)
|
||||
}
|
||||
|
||||
// Verify strategy: should be "merged" (conflict resolved by LWW)
|
||||
if mergeResult.Strategy["bd-1"] != StrategyMerged {
|
||||
t.Errorf("expected strategy '%s' for bd-1, got '%s'", StrategyMerged, mergeResult.Strategy["bd-1"])
|
||||
}
|
||||
|
||||
// Verify 1 conflict was detected and resolved
|
||||
if mergeResult.Conflicts != 1 {
|
||||
t.Errorf("expected 1 conflict (both sides modified), got %d", mergeResult.Conflicts)
|
||||
}
|
||||
|
||||
t.Log("TestConcurrentEdit: 3-way merge with LWW resolution validated")
|
||||
}
|
||||
|
||||
// TestConcurrentSyncBlocked tests that concurrent syncs are blocked by file lock.
|
||||
// This validates the P0 fix for preventing data corruption when running bd sync
|
||||
// from multiple terminals simultaneously.
|
||||
func TestConcurrentSyncBlocked(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
tmpDir := t.TempDir()
|
||||
t.Chdir(tmpDir)
|
||||
|
||||
// Setup: Initialize git repo
|
||||
if err := exec.Command("git", "init", "--initial-branch=main").Run(); err != nil {
|
||||
t.Fatalf("git init failed: %v", err)
|
||||
}
|
||||
_ = exec.Command("git", "config", "user.email", "test@test.com").Run()
|
||||
_ = exec.Command("git", "config", "user.name", "Test User").Run()
|
||||
|
||||
// Setup: Create beads directory with JSONL
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
if err := os.MkdirAll(beadsDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir failed: %v", err)
|
||||
}
|
||||
jsonlPath := filepath.Join(beadsDir, "issues.jsonl")
|
||||
|
||||
// Create initial JSONL
|
||||
if err := os.WriteFile(jsonlPath, []byte(`{"id":"bd-1","title":"Test"}`+"\n"), 0644); err != nil {
|
||||
t.Fatalf("write JSONL failed: %v", err)
|
||||
}
|
||||
|
||||
// Initial commit
|
||||
_ = exec.Command("git", "add", ".").Run()
|
||||
if err := exec.Command("git", "commit", "-m", "initial").Run(); err != nil {
|
||||
t.Fatalf("initial commit failed: %v", err)
|
||||
}
|
||||
|
||||
// Create database
|
||||
testDBPath := filepath.Join(beadsDir, "beads.db")
|
||||
testStore, err := sqlite.New(ctx, testDBPath)
|
||||
if err != nil {
|
||||
t.Fatalf("failed to create test store: %v", err)
|
||||
}
|
||||
defer testStore.Close()
|
||||
|
||||
// Set issue_prefix
|
||||
if err := testStore.SetConfig(ctx, "issue_prefix", "bd"); err != nil {
|
||||
t.Fatalf("failed to set issue_prefix: %v", err)
|
||||
}
|
||||
|
||||
// Simulate another sync holding the lock
|
||||
lockPath := filepath.Join(beadsDir, ".sync.lock")
|
||||
lock := flock.New(lockPath)
|
||||
locked, err := lock.TryLock()
|
||||
if err != nil {
|
||||
t.Fatalf("failed to acquire lock: %v", err)
|
||||
}
|
||||
if !locked {
|
||||
t.Fatal("expected to acquire lock")
|
||||
}
|
||||
|
||||
// Now try to acquire the same lock (simulating concurrent sync)
|
||||
lock2 := flock.New(lockPath)
|
||||
locked2, err := lock2.TryLock()
|
||||
if err != nil {
|
||||
t.Fatalf("TryLock error: %v", err)
|
||||
}
|
||||
|
||||
// Second lock attempt should fail (not block)
|
||||
if locked2 {
|
||||
lock2.Unlock()
|
||||
t.Error("expected second lock attempt to fail (concurrent sync should be blocked)")
|
||||
} else {
|
||||
t.Log("Concurrent sync correctly blocked by file lock")
|
||||
}
|
||||
|
||||
// Release first lock
|
||||
if err := lock.Unlock(); err != nil {
|
||||
t.Fatalf("failed to unlock: %v", err)
|
||||
}
|
||||
|
||||
// Now lock should be acquirable again
|
||||
lock3 := flock.New(lockPath)
|
||||
locked3, err := lock3.TryLock()
|
||||
if err != nil {
|
||||
t.Fatalf("TryLock error after unlock: %v", err)
|
||||
}
|
||||
if !locked3 {
|
||||
t.Error("expected lock to be acquirable after first sync completes")
|
||||
} else {
|
||||
lock3.Unlock()
|
||||
t.Log("Lock correctly acquirable after first sync completes")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -9,7 +9,7 @@ pkgs.buildGoModule {
|
||||
subPackages = [ "cmd/bd" ];
|
||||
doCheck = false;
|
||||
# Go module dependencies hash - if build fails with hash mismatch, update with the "got:" value
|
||||
vendorHash = "sha256-BpACCjVk0V5oQ5YyZRv9wC/RfHw4iikc2yrejZzD1YU=";
|
||||
vendorHash = "sha256-l3ctY2hGXskM8U1wLupyvFDWfJu8nCX5tWAH1Macavw=";
|
||||
|
||||
# Git is required for tests
|
||||
nativeBuildInputs = [ pkgs.git ];
|
||||
|
||||
308
docs/SYNC.md
Normal file
308
docs/SYNC.md
Normal file
@@ -0,0 +1,308 @@
|
||||
# Sync Architecture
|
||||
|
||||
This document explains the design decisions behind `bd sync` - why it works the way it does, and the problems each design choice solves.
|
||||
|
||||
> **Looking for something else?**
|
||||
> - Command usage: [commands/sync.md](/commands/sync.md) (Reference)
|
||||
> - Troubleshooting: [website/docs/recovery/sync-failures.md](/website/docs/recovery/sync-failures.md) (How-To)
|
||||
> - Deletion behavior: [docs/DELETIONS.md](/docs/DELETIONS.md) (Explanation)
|
||||
|
||||
## Why Pull-First?
|
||||
|
||||
The core problem: if you export local state before seeing what's on the remote, you commit to a snapshot that may conflict with changes you haven't seen yet. Any changes that arrive during pull get imported to the database but never make it back to the exported JSONL — they're silently lost on the next push.
|
||||
|
||||
Pull-first sync solves this by reversing the order:
|
||||
|
||||
```
|
||||
Machine A: Create bd-43, sync
|
||||
↳ Load local state (bd-43 in memory)
|
||||
↳ Pull (bd-42 edit arrives in JSONL)
|
||||
↳ Merge local + remote
|
||||
↳ Export merged state
|
||||
↳ Push (contains both bd-43 AND bd-42 edit)
|
||||
```
|
||||
|
||||
By loading local state into memory before pulling, we can perform a proper merge that preserves both sets of changes.
|
||||
|
||||
## The 3-Way Merge Model
|
||||
|
||||
Beads uses 3-way merge - the same algorithm Git uses for merging branches. The reason: it distinguishes between "unchanged" and "deleted".
|
||||
|
||||
With 2-way merge (just comparing local vs remote), you cannot tell if an issue is missing because:
|
||||
- It was deleted locally
|
||||
- It was deleted remotely
|
||||
- It never existed in one copy
|
||||
|
||||
3-way merge adds a **base state** - the snapshot from the last successful sync:
|
||||
|
||||
```
|
||||
Base (last sync)
|
||||
|
|
||||
+------+------+
|
||||
| |
|
||||
Local Remote
|
||||
(your DB) (git pull)
|
||||
| |
|
||||
+------+------+
|
||||
|
|
||||
Merged
|
||||
```
|
||||
|
||||
This enables precise conflict detection:
|
||||
|
||||
| Base | Local | Remote | Result | Reason |
|
||||
|------|-------|--------|--------|--------|
|
||||
| A | A | A | A | No changes |
|
||||
| A | A | B | B | Only remote changed |
|
||||
| A | B | A | B | Only local changed |
|
||||
| A | B | B | B | Both made same change |
|
||||
| A | B | C | **merge** | True conflict |
|
||||
| A | - | A | **deleted** | Local deleted, remote unchanged |
|
||||
| A | A | - | **deleted** | Remote deleted, local unchanged |
|
||||
| A | B | - | B | Local changed after remote deleted |
|
||||
| A | - | B | B | Remote changed after local deleted |
|
||||
|
||||
The last two rows show why 3-way merge prevents accidental data loss: if one side deleted while the other modified, we keep the modification.
|
||||
|
||||
## Sync Flow
|
||||
|
||||
```
|
||||
bd sync
|
||||
|
||||
1. Pull --> 2. Merge --> 3. Export --> 4. Push
|
||||
Remote 3-way JSONL Remote
|
||||
| | | |
|
||||
v v v v
|
||||
Fetch Compare all Write merged Commit +
|
||||
issues.jsonl three states to issues.jsonl push
|
||||
```
|
||||
|
||||
Step-by-step:
|
||||
|
||||
1. **Load local state** - Read all issues from SQLite database into memory
|
||||
2. **Load base state** - Read `sync_base.jsonl` (last successful sync snapshot)
|
||||
3. **Pull** - Fetch and merge remote git changes
|
||||
4. **Load remote state** - Parse `issues.jsonl` after pull
|
||||
5. **3-way merge** - Compare base vs local vs remote for each issue
|
||||
6. **Import** - Write merged result to database
|
||||
7. **Export** - Write database to JSONL (ensures DB is source of truth)
|
||||
8. **Commit & Push** - Commit changes and push to remote
|
||||
9. **Update base** - Save current state as base for next sync
|
||||
|
||||
## Why Different Merge Strategies?
|
||||
|
||||
Not all fields should merge the same way. Consider labels: if Machine A adds "urgent" and Machine B adds "blocked", the merged result should have both labels - not pick one or the other.
|
||||
|
||||
Beads uses field-specific merge strategies:
|
||||
|
||||
| Field Type | Strategy | Why This Strategy? |
|
||||
|------------|----------|-------------------|
|
||||
| Scalars (title, status, priority) | LWW | Only one value possible; most recent wins |
|
||||
| Labels | Union | Multiple valid; keep all (no data loss) |
|
||||
| Dependencies | Union | Links should not disappear silently |
|
||||
| Comments | Append | Chronological; dedup by ID prevents duplicates |
|
||||
|
||||
**LWW (Last-Write-Wins)** uses the `updated_at` timestamp to determine which value wins. On timestamp tie, remote wins (arbitrary but deterministic).
|
||||
|
||||
**Union** combines both sets. If local has `["urgent"]` and remote has `["blocked"]`, the result is `["blocked", "urgent"]` (sorted for determinism).
|
||||
|
||||
**Append** collects all comments from both sides, deduplicating by comment ID. This ensures conversations are never lost.
|
||||
|
||||
## Why "Zombie" Issues?
|
||||
|
||||
When merging, there is an edge case: what happens when one machine deletes an issue while another modifies it?
|
||||
|
||||
```
|
||||
Machine A: Delete bd-42 → sync
|
||||
Machine B: (offline) → Edit bd-42 → sync
|
||||
Pull reveals bd-42 was deleted, but local has edits
|
||||
```
|
||||
|
||||
Beads follows the principle of **no silent data loss**. If local has meaningful changes to an issue that remote deleted, the local changes win. The issue "resurrects" - it comes back from the dead.
|
||||
|
||||
This is intentional: losing someone's work without warning is worse than keeping a deleted issue. The user can always delete it again if needed.
|
||||
|
||||
However, if the local copy is unchanged from base (meaning the user on this machine never touched it since last sync), the deletion propagates normally.
|
||||
|
||||
## Concurrency Protection
|
||||
|
||||
What happens if you run `bd sync` twice simultaneously? Without protection, both processes could:
|
||||
|
||||
1. Load the same base state
|
||||
2. Pull at different times (seeing different remote states)
|
||||
3. Merge differently
|
||||
4. Overwrite each other's exports
|
||||
5. Push conflicting commits
|
||||
|
||||
Beads uses an **exclusive file lock** (`.beads/.sync.lock`) to serialize sync operations:
|
||||
|
||||
```go
|
||||
lock := flock.New(lockPath)
|
||||
locked, err := lock.TryLock()
|
||||
if !locked {
|
||||
return fmt.Errorf("another sync is in progress")
|
||||
}
|
||||
defer lock.Unlock()
|
||||
```
|
||||
|
||||
The lock is non-blocking - if another sync is running, the second sync fails immediately with a clear error rather than waiting indefinitely.
|
||||
|
||||
The lock file is not git-tracked (it only matters on the local machine).
|
||||
|
||||
## Clock Skew Considerations
|
||||
|
||||
LWW relies on timestamps, which introduces a vulnerability: what if machine clocks disagree?
|
||||
|
||||
```
|
||||
Machine A (clock correct): Edit bd-42 at 10:00:00
|
||||
Machine B (clock +1 hour): Edit bd-42 at "11:00:00" (actually 10:00:30)
|
||||
Machine B wins despite editing later
|
||||
```
|
||||
|
||||
Beads cannot fully solve clock skew (distributed systems limitation), but it mitigates the risk:
|
||||
|
||||
1. **24-hour warning threshold** - If two timestamps differ by more than 24 hours, a warning is emitted. This catches grossly misconfigured clocks.
|
||||
|
||||
2. **Union for collections** - Labels and dependencies use union merge, which is immune to clock skew (both values kept).
|
||||
|
||||
3. **Append for comments** - Comments are sorted by `created_at` but never lost due to clock skew.
|
||||
|
||||
For maximum reliability, ensure machine clocks are synchronized via NTP.
|
||||
|
||||
## Files Reference
|
||||
|
||||
| File | Purpose |
|
||||
|------|---------|
|
||||
| `.beads/issues.jsonl` | Current state (git-tracked) |
|
||||
| `.beads/sync_base.jsonl` | Last-synced state (git-tracked) |
|
||||
| `.beads/.sync.lock` | Concurrency guard (not tracked) |
|
||||
| `.beads/beads.db` | SQLite database (not tracked) |
|
||||
|
||||
The JSONL files are the source of truth for git. The database is derived from JSONL on each machine.
|
||||
|
||||
## Sync Modes
|
||||
|
||||
Beads supports several sync modes for different use cases:
|
||||
|
||||
| Mode | Trigger | Flow | Use Case |
|
||||
|------|---------|------|----------|
|
||||
| **Normal** | Default `bd sync` | Pull → Merge → Export → Push | Standard multi-machine sync |
|
||||
| **Sync-branch** | `sync.branch` config | Separate git branch for beads files | Isolated beads history |
|
||||
| **External** | `BEADS_DIR` env | Separate repo for beads | Shared team database |
|
||||
| **From-main** | `sync.from_main` config | Clone beads from main branch | Feature branch workflow |
|
||||
| **Local-only** | No git remote | Export only (no push) | Single-machine usage |
|
||||
| **Export-only** | `--no-pull` flag | Export → Push (skip pull/merge) | Force local state to remote |
|
||||
|
||||
### Mode Selection Logic
|
||||
|
||||
```
|
||||
sync:
|
||||
├─ --no-pull flag?
|
||||
│ └─ Yes → Export-only (skip pull/merge)
|
||||
├─ No remote configured?
|
||||
│ └─ Yes → Local-only (export only)
|
||||
├─ BEADS_DIR or external .beads?
|
||||
│ └─ Yes → External repo mode
|
||||
├─ sync.branch configured?
|
||||
│ └─ Yes → Sync-branch mode
|
||||
├─ sync.from_main configured?
|
||||
│ └─ Yes → From-main mode
|
||||
└─ Normal pull-first sync
|
||||
```
|
||||
|
||||
### Test Coverage
|
||||
|
||||
Each mode has E2E tests in `cmd/bd/`:
|
||||
|
||||
| Mode | Test File |
|
||||
|------|-----------|
|
||||
| Normal | `sync_test.go`, `sync_merge_test.go` |
|
||||
| Sync-branch | `sync_modes_test.go` |
|
||||
| External | `sync_external_test.go` |
|
||||
| From-main | `sync_branch_priority_test.go` |
|
||||
| Local-only | `sync_local_only_test.go` |
|
||||
| Export-only | `sync_modes_test.go` |
|
||||
| Sync-branch (CLI E2E) | `syncbranch_e2e_test.go` |
|
||||
| Sync-branch (Daemon E2E) | `daemon_sync_branch_e2e_test.go` |
|
||||
|
||||
## Sync Paths: CLI vs Daemon
|
||||
|
||||
Sync-branch mode has two distinct code paths that must be tested independently:
|
||||
|
||||
```
|
||||
bd sync (CLI) Daemon (background)
|
||||
│ │
|
||||
▼ ▼
|
||||
Force close daemon daemon_sync_branch.go
|
||||
(prevent stale conn) syncBranchCommitAndPush()
|
||||
│ │
|
||||
▼ ▼
|
||||
syncbranch.CommitToSyncBranch Direct database + git
|
||||
syncbranch.PullFromSyncBranch with forceOverwrite flag
|
||||
```
|
||||
|
||||
### Why Two Paths?
|
||||
|
||||
SQLite connections become stale when the daemon holds them while the CLI operates on the same database. The CLI path forces daemon closure before sync to prevent connection corruption. The daemon path operates directly since it owns the connection.
|
||||
|
||||
### Test Isolation Strategy
|
||||
|
||||
Each E2E test requires proper isolation to prevent interference:
|
||||
|
||||
| Variable | Purpose |
|
||||
|----------|---------|
|
||||
| `BEADS_NO_DAEMON=1` | Prevent daemon auto-start (set in TestMain) |
|
||||
| `BEADS_DIR=<clone>/.beads` | Isolate database per clone |
|
||||
|
||||
### E2E Test Architecture: Bare Repo Pattern
|
||||
|
||||
E2E tests use a bare repository as a local "remote" to enable real git operations:
|
||||
|
||||
```
|
||||
┌─────────────┐
|
||||
│ bare.git │ ← Local "remote"
|
||||
└──────┬──────┘
|
||||
│
|
||||
┌──────┴──────┐
|
||||
▼ ▼
|
||||
Machine A Machine B
|
||||
(clone) (clone)
|
||||
│ │
|
||||
│ bd-1 │ bd-2
|
||||
│ push │ push (wins)
|
||||
│ │
|
||||
│◄────────────┤ divergence
|
||||
│ 3-way merge │
|
||||
▼ │
|
||||
[bd-1, bd-2] │
|
||||
```
|
||||
|
||||
| Aspect | update-ref (old) | bare repo (new) |
|
||||
|--------|------------------|-----------------|
|
||||
| Push testing | Simulated | Real |
|
||||
| Fetch testing | Fake refs | Real |
|
||||
| Divergence | Cannot test | Non-fast-forward |
|
||||
|
||||
### E2E Test Coverage Matrix
|
||||
|
||||
| Test | Path | What It Tests |
|
||||
|------|------|---------------|
|
||||
| TestSyncBranchE2E | CLI | syncbranch.CommitToSyncBranch/Pull |
|
||||
| TestDaemonSyncBranchE2E | Daemon | syncBranchCommitAndPush/Pull |
|
||||
| TestDaemonSyncBranchForceOverwrite | Daemon | forceOverwrite delete propagation |
|
||||
|
||||
## Historical Context
|
||||
|
||||
The pull-first sync design was introduced in PR #918 to fix issue #911 (data loss during concurrent edits). The original export-first design was simpler but could not handle the "edit during sync" scenario correctly.
|
||||
|
||||
The 3-way merge algorithm borrows concepts from:
|
||||
- Git's merge strategy (base state concept)
|
||||
- CRDT research (union for sets, LWW for scalars)
|
||||
- Tombstone patterns (deletion tracking with TTL)
|
||||
|
||||
## See Also
|
||||
|
||||
- [DELETIONS.md](DELETIONS.md) - Tombstone behavior and deletion tracking
|
||||
- [GIT_INTEGRATION.md](GIT_INTEGRATION.md) - How beads integrates with git
|
||||
- [DAEMON.md](DAEMON.md) - Automatic sync via daemon
|
||||
- [ARCHITECTURE.md](ARCHITECTURE.md) - Overall system architecture
|
||||
1
go.mod
1
go.mod
@@ -39,6 +39,7 @@ require (
|
||||
github.com/dustin/go-humanize v1.0.1 // indirect
|
||||
github.com/erikgeiser/coninput v0.0.0-20211004153227-1c3628e74d0f // indirect
|
||||
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
|
||||
github.com/gofrs/flock v0.13.0 // indirect
|
||||
github.com/google/go-cmp v0.7.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
|
||||
|
||||
2
go.sum
2
go.sum
@@ -57,6 +57,8 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
|
||||
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
|
||||
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
|
||||
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
|
||||
github.com/gofrs/flock v0.13.0 h1:95JolYOvGMqeH31+FC7D2+uULf6mG61mEZ/A8dRYMzw=
|
||||
github.com/gofrs/flock v0.13.0/go.mod h1:jxeyy9R1auM5S6JYDBhDt+E2TCo7DkratH4Pgi8P+Z0=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
|
||||
|
||||
Reference in New Issue
Block a user