Make merge command idempotent for safe retry after partial failures (bd-26)
- Added mergeResult struct to track operations (added vs skipped)
- Check if source issues already closed before attempting to close
- Track dependencies migrated vs already existed
- Count text references updated
- Display detailed breakdown of operations in output
- Updated help text to clarify idempotent behavior
- Added comprehensive tests for idempotent retry scenarios
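The skip-instead-of-fail pattern this commit applies to closing source issues can be sketched in isolation as follows. This is a minimal standalone illustration, not the actual bd storage code: `issue`, `closeSources`, and the counter struct are hypothetical stand-ins for the real store and `mergeResult`. The key idea is to check state before acting and to report skipped vs performed operations, so a retry after partial failure completes the remaining work instead of erroring.

```go
package main

import "fmt"

// issue is a hypothetical stand-in for a stored issue record.
type issue struct {
	id     string
	closed bool
}

// mergeResult mirrors the reporting idea: count what was done vs skipped.
type mergeResult struct {
	issuesClosed  int
	issuesSkipped int
}

// closeSources closes each source issue unless it is already closed,
// in which case it is skipped rather than treated as a failure.
func closeSources(issues []*issue, target string) mergeResult {
	var r mergeResult
	for _, is := range issues {
		if is.closed {
			// Already closed by a previous (partial) run - skip, don't fail.
			r.issuesSkipped++
			continue
		}
		is.closed = true // stand-in for store.CloseIssue(...)
		r.issuesClosed++
	}
	return r
}

func main() {
	sources := []*issue{{id: "bd-101"}, {id: "bd-102"}}
	first := closeSources(sources, "bd-100")
	second := closeSources(sources, "bd-100") // retry: everything skipped
	fmt.Println(first.issuesClosed, first.issuesSkipped)   // 2 0
	fmt.Println(second.issuesClosed, second.issuesSkipped) // 0 2
}
```

Because the second call changes nothing and merely reports skips, running the operation twice yields the same final state, which is exactly the idempotency property the merge command needs.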
@@ -16,7 +16,7 @@
 {"id":"bd-23","title":"Implement bd quickstart command","description":"Add bd quickstart command to show context-aware repo information: recent issues, database location, configured prefix, example queries. Helps AI agents understand current project state. Companion to bd onboard.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.513862-07:00"}
 {"id":"bd-24","title":"Add customizable time threshold for compact command","description":"Currently compact uses fixed 30-day and 90-day tiers. Add support for custom time thresholds like '--older-than 60h' or '--older-than 2.5d' to allow more flexible compaction policies.\n\nExamples:\n  bd compact --all --older-than 60h\n  bd compact --all --older-than 2.5d\n  bd compact --all --tier 1 --age 48h\n\nThis would allow users to set their own compaction schedules based on their workflow needs.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.514114-07:00"}
 {"id":"bd-25","title":"Add --id flag to bd list for filtering by specific issue IDs","description":"","design":"Add --id flag accepting comma-separated IDs. Usage: bd list --id wy-11,wy-12. Combines with other filters. From filter-flag-design.md.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.514386-07:00"}
-{"id":"bd-26","title":"Make merge command idempotent for safe retry after partial failures","description":"The merge command currently performs 3 operations without an outer transaction:\n1. Migrate dependencies from source → target\n2. Update text references across all issues\n3. Close source issues\n\nIf merge fails mid-operation (network issue, daemon crash, etc.), a retry will fail or produce incorrect results because some operations already succeeded.\n\n**Goal:** Make merge idempotent so retrying after partial failure is safe and completes the remaining work.\n\n**Idempotency checks needed:**\n- Skip dependency migration if target already has the dependency\n- Skip text reference updates if already updated\n- Skip closing source issues if already closed\n- Report which operations were skipped vs performed\n\n**Example output:**\n```\n✓ Merged 2 issue(s) into bd-78\n  - Dependencies: 3 migrated, 2 already existed\n  - Text references: 5 updated, 0 already correct\n  - Source issues: 1 closed, 1 already closed\n```\n\n**Related:** bd-23 originally requested transaction support, but idempotency is a better solution for this use case since individual operations are already atomic.","design":"Current merge code already has some idempotency:\n- Dependency migration checks `alreadyExists` before adding (line ~145-151 in merge.go)\n- Text reference updates are naturally idempotent (replacing bd-X with bd-Y twice has same result)\n\nMissing idempotency:\n- CloseIssue fails if source already closed\n- Error messages don't distinguish \"already done\" from \"real failure\"\n\nImplementation:\n1. Check source issue status before closing - skip if already closed\n2. Track which operations succeeded/skipped\n3. Return detailed results for user visibility\n4. Consider adding --dry-run output showing what would be done vs skipped","status":"open","priority":2,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.514681-07:00"}
+{"id":"bd-26","title":"Make merge command idempotent for safe retry after partial failures","description":"The merge command currently performs 3 operations without an outer transaction:\n1. Migrate dependencies from source → target\n2. Update text references across all issues\n3. Close source issues\n\nIf merge fails mid-operation (network issue, daemon crash, etc.), a retry will fail or produce incorrect results because some operations already succeeded.\n\n**Goal:** Make merge idempotent so retrying after partial failure is safe and completes the remaining work.\n\n**Idempotency checks needed:**\n- Skip dependency migration if target already has the dependency\n- Skip text reference updates if already updated\n- Skip closing source issues if already closed\n- Report which operations were skipped vs performed\n\n**Example output:**\n```\n✓ Merged 2 issue(s) into bd-78\n  - Dependencies: 3 migrated, 2 already existed\n  - Text references: 5 updated, 0 already correct\n  - Source issues: 1 closed, 1 already closed\n```\n\n**Related:** bd-23 originally requested transaction support, but idempotency is a better solution for this use case since individual operations are already atomic.","design":"Current merge code already has some idempotency:\n- Dependency migration checks `alreadyExists` before adding (line ~145-151 in merge.go)\n- Text reference updates are naturally idempotent (replacing bd-X with bd-Y twice has same result)\n\nMissing idempotency:\n- CloseIssue fails if source already closed\n- Error messages don't distinguish \"already done\" from \"real failure\"\n\nImplementation:\n1. Check source issue status before closing - skip if already closed\n2. Track which operations succeeded/skipped\n3. Return detailed results for user visibility\n4. Consider adding --dry-run output showing what would be done vs skipped","status":"closed","priority":2,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T12:01:51.907044-07:00","closed_at":"2025-10-22T12:01:51.907044-07:00"}
 {"id":"bd-27","title":"bd sync crashes with nil pointer when daemon is running","description":"The 'bd sync' command crashes with a nil pointer dereference when the daemon is running.\n\n**Reproduction:**\n```bash\n# With daemon running\n./bd sync\n```\n\n**Error:**\n```\npanic: runtime error: invalid memory address or nil pointer dereference\n[signal SIGSEGV: segmentation violation code=0x2 addr=0x120 pc=0x1012314ac]\n\ngoroutine 1 [running]:\nmain.exportToJSONL({0x1014ec2e0, 0x101a49900}, {0x14000028db0, 0x30})\n    /Users/stevey/src/fred/beads/cmd/bd/sync.go:245 +0x4c\n```\n\n**Root cause:**\nThe sync command's `exportToJSONL` function directly accesses `store.SearchIssues()` at line 245, but when daemon mode is active, the global `store` variable is nil. The sync command should either:\n1. Use daemon RPC when daemon is running, or\n2. Force direct mode for sync operations\n\n**Workaround:**\nUse `--no-daemon` flag: `bd sync --no-daemon`","status":"closed","priority":0,"issue_type":"bug","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.514963-07:00","closed_at":"2025-10-22T00:09:12.615536-07:00"}
 {"id":"bd-28","title":"Add cross-repo issue references (future enhancement)","description":"Support referencing issues across different beads repositories. Useful for tracking dependencies between separate projects.\n\nProposed syntax:\n- Local reference: bd-78 (current behavior)\n- Cross-repo by path: ~/src/other-project#bd-456\n- Cross-repo by workspace name: @project2:bd-789\n\nUse cases:\n1. Frontend project depends on backend API issue\n2. Shared library changes blocking multiple projects\n3. System administrator tracking work across machines\n4. Monorepo with separate beads databases per component\n\nImplementation challenges:\n- Storage layer needs to query external databases\n- Dependency resolution across repos\n- What if external repo not available?\n- How to handle in JSONL export/import?\n- Security: should repos be able to read others?\n\nDesign questions to resolve first:\n1. Read-only references vs full cross-repo dependencies?\n2. How to handle repo renames/moves?\n3. Absolute paths vs workspace names vs git remotes?\n4. Should bd-77 auto-discover related repos?\n\nRecommendation:\n- Gather user feedback first\n- Start with read-only references\n- Implement as plugin/extension?\n\nContext: This is mentioned in bd-77 as approach #2. Much more complex than daemon multi-repo approach. Only implement if there's strong user demand.\n\nPriority: Backlog (4) - wait for user feedback before designing","status":"closed","priority":4,"issue_type":"feature","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.515219-07:00","closed_at":"2025-10-20T22:00:31.966891-07:00"}
 {"id":"bd-29","title":"Document merge command and AI integration","description":"Update README, AGENTS.md with merge command examples. Document AI agent duplicate detection workflow.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-10-21T23:53:44.31362-07:00","updated_at":"2025-10-22T11:56:36.515478-07:00","closed_at":"2025-10-22T11:37:41.104918-07:00"}
cmd/bd/merge.go (104 changed lines)
@@ -18,14 +18,14 @@ var mergeCmd = &cobra.Command{
 	Short: "Merge duplicate issues into a single issue",
 	Long: `Merge one or more source issues into a target issue.
 
-This command:
+This command is idempotent and safe to retry after partial failures:
 1. Validates all issues exist and no self-merge
-2. Closes source issues with reason 'Merged into bd-X'
-3. Migrates all dependencies from sources to target
-4. Updates text references in all issue descriptions/notes
+2. Migrates all dependencies from sources to target (skips if already exist)
+3. Updates text references in all issue descriptions/notes
+4. Closes source issues with reason 'Merged into bd-X' (skips if already closed)
 
 Example:
-  bd merge bd-42 bd-43 --into bd-42
+  bd merge bd-42 bd-43 --into bd-41
   bd merge bd-10 bd-11 bd-12 --into bd-10 --dry-run`,
 	Args: cobra.MinimumNArgs(1),
 	Run: func(cmd *cobra.Command, args []string) {
@@ -62,7 +62,8 @@ Example:
 		}
 
 		// Perform merge
-		if err := performMerge(ctx, targetID, sourceIDs); err != nil {
+		result, err := performMerge(ctx, targetID, sourceIDs)
+		if err != nil {
 			fmt.Fprintf(os.Stderr, "Error performing merge: %v\n", err)
 			os.Exit(1)
 		}
@@ -71,15 +72,23 @@ Example:
 		markDirtyAndScheduleFlush()
 
 		if jsonOutput {
-			result := map[string]interface{}{
+			output := map[string]interface{}{
 				"target_id":  targetID,
 				"source_ids": sourceIDs,
 				"merged":     len(sourceIDs),
+				"dependencies_added":   result.depsAdded,
+				"dependencies_skipped": result.depsSkipped,
+				"text_references":      result.textRefCount,
+				"issues_closed":        result.issuesClosed,
+				"issues_skipped":       result.issuesSkipped,
 			}
-			outputJSON(result)
+			outputJSON(output)
 		} else {
 			green := color.New(color.FgGreen).SprintFunc()
 			fmt.Printf("%s Merged %d issue(s) into %s\n", green("✓"), len(sourceIDs), targetID)
+			fmt.Printf("  - Dependencies: %d migrated, %d already existed\n", result.depsAdded, result.depsSkipped)
+			fmt.Printf("  - Text references: %d updated\n", result.textRefCount)
+			fmt.Printf("  - Source issues: %d closed, %d already closed\n", result.issuesClosed, result.issuesSkipped)
 		}
 	},
 }
@@ -121,15 +130,26 @@ func validateMerge(targetID string, sourceIDs []string) error {
 	return nil
 }
 
+// mergeResult tracks the results of a merge operation for reporting
+type mergeResult struct {
+	depsAdded     int
+	depsSkipped   int
+	textRefCount  int
+	issuesClosed  int
+	issuesSkipped int
+}
+
 // performMerge executes the merge operation
 // TODO(bd-202): Add transaction support for atomicity
-func performMerge(ctx context.Context, targetID string, sourceIDs []string) error {
+func performMerge(ctx context.Context, targetID string, sourceIDs []string) (*mergeResult, error) {
+	result := &mergeResult{}
+
 	// Step 1: Migrate dependencies from source issues to target
 	for _, sourceID := range sourceIDs {
 		// Get all dependencies where source is the dependent (source depends on X)
 		deps, err := store.GetDependencyRecords(ctx, sourceID)
 		if err != nil {
-			return fmt.Errorf("failed to get dependencies for %s: %w", sourceID, err)
+			return nil, fmt.Errorf("failed to get dependencies for %s: %w", sourceID, err)
 		}
 
 		// Migrate each dependency to target
@@ -137,7 +157,7 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 			// Skip if target already has this dependency
 			existingDeps, err := store.GetDependencyRecords(ctx, targetID)
 			if err != nil {
-				return fmt.Errorf("failed to check target dependencies: %w", err)
+				return nil, fmt.Errorf("failed to check target dependencies: %w", err)
 			}
 
 			alreadyExists := false
@@ -148,7 +168,9 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 				}
 			}
 
-			if !alreadyExists && dep.DependsOnID != targetID {
+			if alreadyExists || dep.DependsOnID == targetID {
+				result.depsSkipped++
+			} else {
 				// Add dependency to target
 				newDep := &types.Dependency{
 					IssueID:     targetID,
@@ -158,15 +180,16 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 					CreatedBy:   actor,
 				}
 				if err := store.AddDependency(ctx, newDep, actor); err != nil {
-					return fmt.Errorf("failed to migrate dependency %s -> %s: %w", targetID, dep.DependsOnID, err)
+					return nil, fmt.Errorf("failed to migrate dependency %s -> %s: %w", targetID, dep.DependsOnID, err)
 				}
+				result.depsAdded++
 			}
 		}
 
 		// Get all dependencies where source is the dependency (X depends on source)
 		allDeps, err := store.GetAllDependencyRecords(ctx)
 		if err != nil {
-			return fmt.Errorf("failed to get all dependencies: %w", err)
+			return nil, fmt.Errorf("failed to get all dependencies: %w", err)
 		}
 
 		for issueID, depList := range allDeps {
@@ -176,7 +199,7 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 				if err := store.RemoveDependency(ctx, issueID, sourceID, actor); err != nil {
 					// Ignore "not found" errors as they may have been cleaned up
 					if !strings.Contains(err.Error(), "not found") {
-						return fmt.Errorf("failed to remove dependency %s -> %s: %w", issueID, sourceID, err)
+						return nil, fmt.Errorf("failed to remove dependency %s -> %s: %w", issueID, sourceID, err)
 					}
 				}
 
@@ -192,8 +215,12 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 				if err := store.AddDependency(ctx, newDep, actor); err != nil {
 					// Ignore if dependency already exists
 					if !strings.Contains(err.Error(), "UNIQUE constraint failed") {
-						return fmt.Errorf("failed to add dependency %s -> %s: %w", issueID, targetID, err)
+						return nil, fmt.Errorf("failed to add dependency %s -> %s: %w", issueID, targetID, err)
+					} else {
+						result.depsSkipped++
 					}
+				} else {
+					result.depsAdded++
 				}
 			}
 		}
@@ -202,27 +229,44 @@ func performMerge(ctx context.Context, targetID string, sourceIDs []string) erro
 	}
 
 	// Step 2: Update text references in all issues
-	if err := updateMergeTextReferences(ctx, sourceIDs, targetID); err != nil {
-		return fmt.Errorf("failed to update text references: %w", err)
+	refCount, err := updateMergeTextReferences(ctx, sourceIDs, targetID)
+	if err != nil {
+		return nil, fmt.Errorf("failed to update text references: %w", err)
 	}
+	result.textRefCount = refCount
 
-	// Step 3: Close source issues
+	// Step 3: Close source issues (idempotent - skip if already closed)
 	for _, sourceID := range sourceIDs {
-		reason := fmt.Sprintf("Merged into %s", targetID)
-		if err := store.CloseIssue(ctx, sourceID, reason, actor); err != nil {
-			return fmt.Errorf("failed to close source issue %s: %w", sourceID, err)
+		issue, err := store.GetIssue(ctx, sourceID)
+		if err != nil {
+			return nil, fmt.Errorf("failed to get source issue %s: %w", sourceID, err)
+		}
+		if issue == nil {
+			return nil, fmt.Errorf("source issue not found: %s", sourceID)
+		}
+
+		if issue.Status == types.StatusClosed {
+			// Already closed - skip
+			result.issuesSkipped++
+		} else {
+			reason := fmt.Sprintf("Merged into %s", targetID)
+			if err := store.CloseIssue(ctx, sourceID, reason, actor); err != nil {
+				return nil, fmt.Errorf("failed to close source issue %s: %w", sourceID, err)
+			}
+			result.issuesClosed++
 		}
 	}
 
-	return nil
+	return result, nil
 }
 
 // updateMergeTextReferences updates text references from source IDs to target ID
-func updateMergeTextReferences(ctx context.Context, sourceIDs []string, targetID string) error {
+// Returns the count of text references updated
+func updateMergeTextReferences(ctx context.Context, sourceIDs []string, targetID string) (int, error) {
 	// Get all issues to scan for references
 	allIssues, err := store.SearchIssues(ctx, "", types.IssueFilter{})
 	if err != nil {
-		return fmt.Errorf("failed to get all issues: %w", err)
+		return 0, fmt.Errorf("failed to get all issues: %w", err)
 	}
 
 	updatedCount := 0
@@ -284,11 +328,11 @@ func updateMergeTextReferences(ctx context.Context, sourceIDs []string, targetID
 		// Apply updates if any
 		if len(updates) > 0 {
 			if err := store.UpdateIssue(ctx, issue.ID, updates, actor); err != nil {
-				return fmt.Errorf("failed to update issue %s: %w", issue.ID, err)
+				return updatedCount, fmt.Errorf("failed to update issue %s: %w", issue.ID, err)
 			}
 			updatedCount++
 		}
 	}
 
-	return nil
+	return updatedCount, nil
 }
@@ -180,3 +180,202 @@ func containsSubstring(s, substr string) bool {
 	}
 	return false
 }
+
+// TestPerformMergeIdempotent verifies that merge operations are idempotent
+func TestPerformMergeIdempotent(t *testing.T) {
+	tmpDir := t.TempDir()
+	dbFile := filepath.Join(tmpDir, ".beads", "issues.db")
+	if err := os.MkdirAll(filepath.Dir(dbFile), 0755); err != nil {
+		t.Fatalf("Failed to create test directory: %v", err)
+	}
+
+	testStore, err := sqlite.New(dbFile)
+	if err != nil {
+		t.Fatalf("Failed to create test storage: %v", err)
+	}
+	defer testStore.Close()
+
+	store = testStore
+	ctx := context.Background()
+
+	// Create test issues
+	issue1 := &types.Issue{
+		ID:          "bd-100",
+		Title:       "Target issue",
+		Description: "This is the target",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+	issue2 := &types.Issue{
+		ID:          "bd-101",
+		Title:       "Source issue 1",
+		Description: "This mentions bd-100",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+	issue3 := &types.Issue{
+		ID:          "bd-102",
+		Title:       "Source issue 2",
+		Description: "Another source",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+
+	for _, issue := range []*types.Issue{issue1, issue2, issue3} {
+		if err := testStore.CreateIssue(ctx, issue, "test"); err != nil {
+			t.Fatalf("Failed to create issue %s: %v", issue.ID, err)
+		}
+	}
+
+	// Add a dependency from bd-101 to another issue
+	issue4 := &types.Issue{
+		ID:          "bd-103",
+		Title:       "Dependency target",
+		Description: "Dependency target",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+	if err := testStore.CreateIssue(ctx, issue4, "test"); err != nil {
+		t.Fatalf("Failed to create issue4: %v", err)
+	}
+
+	dep := &types.Dependency{
+		IssueID:     "bd-101",
+		DependsOnID: "bd-103",
+		Type:        types.DepBlocks,
+	}
+	if err := testStore.AddDependency(ctx, dep, "test"); err != nil {
+		t.Fatalf("Failed to add dependency: %v", err)
+	}
+
+	// First merge - should complete successfully
+	result1, err := performMerge(ctx, "bd-100", []string{"bd-101", "bd-102"})
+	if err != nil {
+		t.Fatalf("First merge failed: %v", err)
+	}
+
+	if result1.issuesClosed != 2 {
+		t.Errorf("First merge: expected 2 issues closed, got %d", result1.issuesClosed)
+	}
+	if result1.issuesSkipped != 0 {
+		t.Errorf("First merge: expected 0 issues skipped, got %d", result1.issuesSkipped)
+	}
+	if result1.depsAdded == 0 {
+		t.Errorf("First merge: expected some dependencies added, got 0")
+	}
+
+	// Verify issues are closed
+	closed1, _ := testStore.GetIssue(ctx, "bd-101")
+	if closed1.Status != types.StatusClosed {
+		t.Errorf("bd-101 should be closed after first merge")
+	}
+	closed2, _ := testStore.GetIssue(ctx, "bd-102")
+	if closed2.Status != types.StatusClosed {
+		t.Errorf("bd-102 should be closed after first merge")
+	}
+
+	// Second merge (retry) - should be idempotent
+	result2, err := performMerge(ctx, "bd-100", []string{"bd-101", "bd-102"})
+	if err != nil {
+		t.Fatalf("Second merge (retry) failed: %v", err)
+	}
+
+	// All operations should be skipped
+	if result2.issuesClosed != 0 {
+		t.Errorf("Second merge: expected 0 issues closed, got %d", result2.issuesClosed)
+	}
+	if result2.issuesSkipped != 2 {
+		t.Errorf("Second merge: expected 2 issues skipped, got %d", result2.issuesSkipped)
+	}
+
+	// Dependencies should be skipped (already exist)
+	if result2.depsAdded != 0 {
+		t.Errorf("Second merge: expected 0 dependencies added, got %d", result2.depsAdded)
+	}
+
+	// Text references are naturally idempotent - count may vary
+	// (it will update again but result is the same)
+}
+
+// TestPerformMergePartialRetry tests retrying after partial failure
+func TestPerformMergePartialRetry(t *testing.T) {
+	tmpDir := t.TempDir()
+	dbFile := filepath.Join(tmpDir, ".beads", "issues.db")
+	if err := os.MkdirAll(filepath.Dir(dbFile), 0755); err != nil {
+		t.Fatalf("Failed to create test directory: %v", err)
+	}
+
+	testStore, err := sqlite.New(dbFile)
+	if err != nil {
+		t.Fatalf("Failed to create test storage: %v", err)
+	}
+	defer testStore.Close()
+
+	store = testStore
+	ctx := context.Background()
+
+	// Create test issues
+	issue1 := &types.Issue{
+		ID:          "bd-200",
+		Title:       "Target",
+		Description: "Target issue",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+	issue2 := &types.Issue{
+		ID:          "bd-201",
+		Title:       "Source 1",
+		Description: "Source 1",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+	issue3 := &types.Issue{
+		ID:          "bd-202",
+		Title:       "Source 2",
+		Description: "Source 2",
+		Priority:    1,
+		IssueType:   types.TypeTask,
+		Status:      types.StatusOpen,
+	}
+
+	for _, issue := range []*types.Issue{issue1, issue2, issue3} {
+		if err := testStore.CreateIssue(ctx, issue, "test"); err != nil {
+			t.Fatalf("Failed to create issue %s: %v", issue.ID, err)
+		}
+	}
+
+	// Simulate partial failure: manually close one source issue
+	if err := testStore.CloseIssue(ctx, "bd-201", "Manually closed", "test"); err != nil {
+		t.Fatalf("Failed to manually close bd-201: %v", err)
+	}
+
+	// Run merge - should handle one already-closed issue gracefully
+	result, err := performMerge(ctx, "bd-200", []string{"bd-201", "bd-202"})
+	if err != nil {
+		t.Fatalf("Merge with partial state failed: %v", err)
+	}
+
+	// Should skip the already-closed issue and close the other
+	if result.issuesClosed != 1 {
+		t.Errorf("Expected 1 issue closed, got %d", result.issuesClosed)
+	}
+	if result.issuesSkipped != 1 {
+		t.Errorf("Expected 1 issue skipped, got %d", result.issuesSkipped)
+	}
+
+	// Verify both are now closed
+	closed1, _ := testStore.GetIssue(ctx, "bd-201")
+	if closed1.Status != types.StatusClosed {
+		t.Errorf("bd-201 should remain closed")
+	}
+	closed2, _ := testStore.GetIssue(ctx, "bd-202")
+	if closed2.Status != types.StatusClosed {
+		t.Errorf("bd-202 should be closed")
+	}
+}