Add multi-repo migration guide and bd migrate-issues command
- Created comprehensive migration guide at docs/MULTI_REPO_MIGRATION.md
  - Covers OSS contributor, team, multi-phase, and persona workflows
  - Step-by-step setup instructions with examples
  - Configuration reference and troubleshooting
- Implemented bd migrate-issues command
  - Move issues between repos with filtering (status/priority/labels/type)
  - Dependency preservation with upstream/downstream/closure options
  - Dry-run mode and strict validation
  - Interactive confirmation with --yes override
- Updated README.md and AGENTS.md with migration guide links

Completes: bd-c3ei, bd-mlcz
Part of epic: bd-8rd (Migration and onboarding for multi-repo)

Amp-Thread-ID: https://ampcode.com/threads/T-c5a7a780-b05e-4cc3-a7c1-5de107821b7e
Co-authored-by: Amp <amp@ampcode.com>
@@ -61,6 +61,8 @@ See `integrations/beads-mcp/README.md` for complete documentation.
 **RECOMMENDED: Use a single MCP server for all beads projects** - it automatically routes to per-project local daemons.
 
+**For complete multi-repo workflow guide**, see [docs/MULTI_REPO_MIGRATION.md](docs/MULTI_REPO_MIGRATION.md) (OSS contributors, teams, multi-phase development).
+
 **Setup (one-time):**
 
 ```bash
 # MCP config in ~/.config/amp/settings.json or Claude Desktop config:
@@ -660,6 +660,7 @@ For advanced usage, see:
 - **[README.md](README.md)** - You are here! Core features and quick start
 - **[INSTALLING.md](INSTALLING.md)** - Complete installation guide for all platforms
 - **[QUICKSTART.md](QUICKSTART.md)** - Interactive tutorial (`bd quickstart`)
+- **[docs/MULTI_REPO_MIGRATION.md](docs/MULTI_REPO_MIGRATION.md)** - Multi-repo workflow guide (OSS, teams, multi-phase)
 - **[FAQ.md](FAQ.md)** - Frequently asked questions
 - **[TROUBLESHOOTING.md](TROUBLESHOOTING.md)** - Common issues and solutions
 - **[ADVANCED.md](ADVANCED.md)** - Advanced features and use cases
703	cmd/bd/migrate_issues.go	Normal file
@@ -0,0 +1,703 @@
package main

import (
	"context"
	"database/sql"
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/spf13/cobra"

	"github.com/steveyegge/beads/internal/storage/sqlite"
)

var migrateIssuesCmd = &cobra.Command{
	Use:   "migrate-issues",
	Short: "Move issues between repositories",
	Long: `Move issues from one source repository to another with filtering and dependency preservation.

This command updates the source_repo field for selected issues, allowing you to:
- Move contributor planning issues to upstream repository
- Reorganize issues across multi-phase repositories
- Consolidate issues from multiple repos

Examples:
  # Preview migration from planning repo to current repo
  bd migrate-issues --from ~/.beads-planning --to . --dry-run

  # Move all open P1 bugs
  bd migrate-issues --from ~/repo1 --to ~/repo2 --priority 1 --type bug --status open

  # Move specific issues with their dependencies
  bd migrate-issues --from . --to ~/archive --id bd-abc --id bd-xyz --include closure

  # Move issues with label filter
  bd migrate-issues --from . --to ~/feature-work --label frontend --label urgent`,
	Run: func(cmd *cobra.Command, args []string) {
		ctx := context.Background()

		// Parse flags
		from, _ := cmd.Flags().GetString("from")
		to, _ := cmd.Flags().GetString("to")
		statusStr, _ := cmd.Flags().GetString("status")
		priorityInt, _ := cmd.Flags().GetInt("priority")
		typeStr, _ := cmd.Flags().GetString("type")
		labels, _ := cmd.Flags().GetStringSlice("label")
		ids, _ := cmd.Flags().GetStringSlice("id")
		idsFile, _ := cmd.Flags().GetString("ids-file")
		include, _ := cmd.Flags().GetString("include")
		withinFromOnly, _ := cmd.Flags().GetBool("within-from-only")
		dryRun, _ := cmd.Flags().GetBool("dry-run")
		strict, _ := cmd.Flags().GetBool("strict")
		yes, _ := cmd.Flags().GetBool("yes")

		// Validate required flags
		if from == "" || to == "" {
			if jsonOutput {
				outputJSON(map[string]interface{}{
					"error":   "missing_required_flags",
					"message": "Both --from and --to are required",
				})
			} else {
				fmt.Fprintln(os.Stderr, "Error: both --from and --to flags are required")
			}
			os.Exit(1)
		}

		if from == to {
			if jsonOutput {
				outputJSON(map[string]interface{}{
					"error":   "same_source_and_dest",
					"message": "Source and destination repositories must be different",
				})
			} else {
				fmt.Fprintln(os.Stderr, "Error: --from and --to must be different repositories")
			}
			os.Exit(1)
		}

		// Load IDs from file if specified
		if idsFile != "" {
			fileIDs, err := loadIDsFromFile(idsFile)
			if err != nil {
				if jsonOutput {
					outputJSON(map[string]interface{}{
						"error":   "ids_file_read_failed",
						"message": err.Error(),
					})
				} else {
					fmt.Fprintf(os.Stderr, "Error reading IDs file: %v\n", err)
				}
				os.Exit(1)
			}
			ids = append(ids, fileIDs...)
		}

		// Execute migration
		if err := executeMigrateIssues(ctx, migrateIssuesParams{
			from:           from,
			to:             to,
			status:         statusStr,
			priority:       priorityInt,
			issueType:      typeStr,
			labels:         labels,
			ids:            ids,
			include:        include,
			withinFromOnly: withinFromOnly,
			dryRun:         dryRun,
			strict:         strict,
			yes:            yes,
		}); err != nil {
			if jsonOutput {
				outputJSON(map[string]interface{}{
					"error":   "migration_failed",
					"message": err.Error(),
				})
			} else {
				fmt.Fprintf(os.Stderr, "Error: %v\n", err)
			}
			os.Exit(1)
		}
	},
}
type migrateIssuesParams struct {
	from           string
	to             string
	status         string
	priority       int
	issueType      string
	labels         []string
	ids            []string
	include        string
	withinFromOnly bool
	dryRun         bool
	strict         bool
	yes            bool
}

type migrationPlan struct {
	TotalSelected     int      `json:"total_selected"`
	AddedByDependency int      `json:"added_by_dependency"`
	IncomingEdges     int      `json:"incoming_edges"`
	OutgoingEdges     int      `json:"outgoing_edges"`
	Orphans           int      `json:"orphans"`
	OrphanSamples     []string `json:"orphan_samples,omitempty"`
	IssueIDs          []string `json:"issue_ids"`
	From              string   `json:"from"`
	To                string   `json:"to"`
}

func executeMigrateIssues(ctx context.Context, p migrateIssuesParams) error {
	// Get database connection (use global store)
	sqlStore, ok := store.(*sqlite.SQLiteStorage)
	if !ok {
		return fmt.Errorf("migrate-issues requires SQLite storage")
	}
	db := sqlStore.UnderlyingDB()

	// Step 1: Validate repositories exist
	if err := validateRepos(ctx, db, p.from, p.to, p.strict); err != nil {
		return err
	}

	// Step 2: Build initial candidate set C using filters
	candidates, err := findCandidateIssues(ctx, db, p)
	if err != nil {
		return fmt.Errorf("failed to find candidate issues: %w", err)
	}

	if len(candidates) == 0 {
		if jsonOutput {
			outputJSON(map[string]interface{}{
				"message": "No issues match the specified filters",
			})
		} else {
			fmt.Println("Nothing to do: no issues match the specified filters")
		}
		return nil
	}

	// Step 3: Expand set to M (migration set) based on --include
	migrationSet, dependencyStats, err := expandMigrationSet(ctx, db, candidates, p)
	if err != nil {
		return fmt.Errorf("failed to compute migration set: %w", err)
	}

	// Step 4: Check for orphaned dependencies
	orphans, err := checkOrphanedDependencies(ctx, db, migrationSet)
	if err != nil {
		return fmt.Errorf("failed to check dependencies: %w", err)
	}

	if len(orphans) > 0 && p.strict {
		return fmt.Errorf("strict mode: found %d orphaned dependencies", len(orphans))
	}

	// Step 5: Build migration plan
	plan := buildMigrationPlan(candidates, migrationSet, dependencyStats, orphans, p.from, p.to)

	// Step 6: Display plan
	if err := displayMigrationPlan(plan, p.dryRun); err != nil {
		return err
	}

	// Step 7: Execute migration if not dry-run
	if !p.dryRun {
		if !p.yes && !jsonOutput {
			if !confirmMigration(plan) {
				fmt.Println("Migration cancelled")
				return nil
			}
		}

		if err := executeMigration(ctx, db, migrationSet, p.to); err != nil {
			return fmt.Errorf("migration failed: %w", err)
		}

		if jsonOutput {
			outputJSON(map[string]interface{}{
				"success": true,
				"message": fmt.Sprintf("Migrated %d issues from %s to %s", len(migrationSet), p.from, p.to),
				"plan":    plan,
			})
		} else {
			fmt.Printf("\n✓ Successfully migrated %d issues from %s to %s\n", len(migrationSet), p.from, p.to)
		}
	}

	return nil
}
func validateRepos(ctx context.Context, db *sql.DB, from, to string, strict bool) error {
	// Check if source repo exists
	var fromCount int
	err := db.QueryRowContext(ctx, "SELECT COUNT(*) FROM issues WHERE source_repo = ?", from).Scan(&fromCount)
	if err != nil {
		return fmt.Errorf("failed to check source repository: %w", err)
	}

	if fromCount == 0 {
		msg := fmt.Sprintf("source repository '%s' has no issues", from)
		if strict {
			return fmt.Errorf("%s", msg)
		}
		if !jsonOutput {
			fmt.Fprintf(os.Stderr, "Warning: %s\n", msg)
		}
	}

	// Check if destination repo exists (just a warning)
	var toCount int
	err = db.QueryRowContext(ctx, "SELECT COUNT(*) FROM issues WHERE source_repo = ?", to).Scan(&toCount)
	if err != nil {
		return fmt.Errorf("failed to check destination repository: %w", err)
	}

	if toCount == 0 && !jsonOutput {
		fmt.Fprintf(os.Stderr, "Info: destination repository '%s' will be created\n", to)
	}

	return nil
}

func findCandidateIssues(ctx context.Context, db *sql.DB, p migrateIssuesParams) ([]string, error) {
	// Build WHERE clause
	var conditions []string
	var args []interface{}

	// Always filter by source_repo
	conditions = append(conditions, "source_repo = ?")
	args = append(args, p.from)

	// Filter by status
	if p.status != "" && p.status != "all" {
		conditions = append(conditions, "status = ?")
		args = append(args, p.status)
	}

	// Filter by priority
	if p.priority >= 0 {
		conditions = append(conditions, "priority = ?")
		args = append(args, p.priority)
	}

	// Filter by type
	if p.issueType != "" && p.issueType != "all" {
		conditions = append(conditions, "issue_type = ?")
		args = append(args, p.issueType)
	}

	// Filter by labels
	if len(p.labels) > 0 {
		// Issues must have ALL specified labels (AND logic)
		for _, label := range p.labels {
			conditions = append(conditions, `id IN (SELECT issue_id FROM issue_labels WHERE label = ?)`)
			args = append(args, label)
		}
	}

	// Build query
	query := "SELECT id FROM issues WHERE " + strings.Join(conditions, " AND ")

	rows, err := db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var candidates []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		candidates = append(candidates, id)
	}

	// Filter by explicit ID list if provided
	if len(p.ids) > 0 {
		idSet := make(map[string]bool)
		for _, id := range p.ids {
			idSet[id] = true
		}

		var filtered []string
		for _, id := range candidates {
			if idSet[id] {
				filtered = append(filtered, id)
			}
		}
		candidates = filtered
	}

	return candidates, nil
}
type dependencyStats struct {
	incomingEdges int
	outgoingEdges int
}

func expandMigrationSet(ctx context.Context, db *sql.DB, candidates []string, p migrateIssuesParams) ([]string, dependencyStats, error) {
	if p.include == "none" || p.include == "" {
		return candidates, dependencyStats{}, nil
	}

	// Build initial set
	migrationSet := make(map[string]bool)
	for _, id := range candidates {
		migrationSet[id] = true
	}

	// BFS traversal for dependency closure
	visited := make(map[string]bool)
	queue := make([]string, len(candidates))
	copy(queue, candidates)

	for len(queue) > 0 {
		current := queue[0]
		queue = queue[1:]

		if visited[current] {
			continue
		}
		visited[current] = true

		// Traverse based on include mode
		var deps []string
		var err error

		switch p.include {
		case "upstream":
			deps, err = getUpstreamDependencies(ctx, db, current, p.from, p.withinFromOnly)
		case "downstream":
			deps, err = getDownstreamDependencies(ctx, db, current, p.from, p.withinFromOnly)
		case "closure":
			upDeps, err1 := getUpstreamDependencies(ctx, db, current, p.from, p.withinFromOnly)
			downDeps, err2 := getDownstreamDependencies(ctx, db, current, p.from, p.withinFromOnly)
			if err1 != nil {
				err = err1
			} else if err2 != nil {
				err = err2
			} else {
				deps = append(upDeps, downDeps...)
			}
		}

		if err != nil {
			return nil, dependencyStats{}, err
		}

		for _, dep := range deps {
			if !visited[dep] {
				migrationSet[dep] = true
				queue = append(queue, dep)
			}
		}
	}

	// Convert map to slice
	result := make([]string, 0, len(migrationSet))
	for id := range migrationSet {
		result = append(result, id)
	}

	// Count cross-repo edges
	stats, err := countCrossRepoEdges(ctx, db, result)
	if err != nil {
		return nil, dependencyStats{}, err
	}

	return result, stats, nil
}

func getUpstreamDependencies(ctx context.Context, db *sql.DB, issueID, fromRepo string, withinFromOnly bool) ([]string, error) {
	query := `SELECT depends_on_id FROM dependencies WHERE issue_id = ?`
	if withinFromOnly {
		query = `SELECT d.depends_on_id FROM dependencies d
			JOIN issues i ON d.depends_on_id = i.id
			WHERE d.issue_id = ? AND i.source_repo = ?`
	}

	var rows *sql.Rows
	var err error

	if withinFromOnly {
		rows, err = db.QueryContext(ctx, query, issueID, fromRepo)
	} else {
		rows, err = db.QueryContext(ctx, query, issueID)
	}

	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var deps []string
	for rows.Next() {
		var dep string
		if err := rows.Scan(&dep); err != nil {
			return nil, err
		}
		deps = append(deps, dep)
	}

	return deps, nil
}

func getDownstreamDependencies(ctx context.Context, db *sql.DB, issueID, fromRepo string, withinFromOnly bool) ([]string, error) {
	query := `SELECT issue_id FROM dependencies WHERE depends_on_id = ?`
	if withinFromOnly {
		query = `SELECT d.issue_id FROM dependencies d
			JOIN issues i ON d.issue_id = i.id
			WHERE d.depends_on_id = ? AND i.source_repo = ?`
	}

	var rows *sql.Rows
	var err error

	if withinFromOnly {
		rows, err = db.QueryContext(ctx, query, issueID, fromRepo)
	} else {
		rows, err = db.QueryContext(ctx, query, issueID)
	}

	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var deps []string
	for rows.Next() {
		var dep string
		if err := rows.Scan(&dep); err != nil {
			return nil, err
		}
		deps = append(deps, dep)
	}

	return deps, nil
}
func countCrossRepoEdges(ctx context.Context, db *sql.DB, migrationSet []string) (dependencyStats, error) {
	if len(migrationSet) == 0 {
		return dependencyStats{}, nil
	}

	// Build placeholders for IN clause
	placeholders := make([]string, len(migrationSet))
	args := make([]interface{}, len(migrationSet))
	for i, id := range migrationSet {
		placeholders[i] = "?"
		args[i] = id
	}
	inClause := strings.Join(placeholders, ",")

	// Count incoming edges (external issues depend on migrated issues)
	incomingQuery := fmt.Sprintf(`
		SELECT COUNT(*) FROM dependencies
		WHERE depends_on_id IN (%s)
		AND issue_id NOT IN (%s)`, inClause, inClause)

	var incoming int
	if err := db.QueryRowContext(ctx, incomingQuery, append(args, args...)...).Scan(&incoming); err != nil {
		return dependencyStats{}, err
	}

	// Count outgoing edges (migrated issues depend on external issues)
	outgoingQuery := fmt.Sprintf(`
		SELECT COUNT(*) FROM dependencies
		WHERE issue_id IN (%s)
		AND depends_on_id NOT IN (%s)`, inClause, inClause)

	var outgoing int
	if err := db.QueryRowContext(ctx, outgoingQuery, append(args, args...)...).Scan(&outgoing); err != nil {
		return dependencyStats{}, err
	}

	return dependencyStats{
		incomingEdges: incoming,
		outgoingEdges: outgoing,
	}, nil
}

func checkOrphanedDependencies(ctx context.Context, db *sql.DB, migrationSet []string) ([]string, error) {
	// Check for dependencies referencing non-existent issues
	query := `
		SELECT DISTINCT d.depends_on_id
		FROM dependencies d
		LEFT JOIN issues i ON d.depends_on_id = i.id
		WHERE i.id IS NULL
		UNION
		SELECT DISTINCT d.issue_id
		FROM dependencies d
		LEFT JOIN issues i ON d.issue_id = i.id
		WHERE i.id IS NULL
	`

	rows, err := db.QueryContext(ctx, query)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var orphans []string
	for rows.Next() {
		var orphan string
		if err := rows.Scan(&orphan); err != nil {
			return nil, err
		}
		orphans = append(orphans, orphan)
	}

	return orphans, nil
}

func buildMigrationPlan(candidates, migrationSet []string, stats dependencyStats, orphans []string, from, to string) migrationPlan {
	orphanSamples := orphans
	if len(orphanSamples) > 10 {
		orphanSamples = orphanSamples[:10]
	}

	return migrationPlan{
		TotalSelected:     len(candidates),
		AddedByDependency: len(migrationSet) - len(candidates),
		IncomingEdges:     stats.incomingEdges,
		OutgoingEdges:     stats.outgoingEdges,
		Orphans:           len(orphans),
		OrphanSamples:     orphanSamples,
		IssueIDs:          migrationSet,
		From:              from,
		To:                to,
	}
}
func displayMigrationPlan(plan migrationPlan, dryRun bool) error {
	if jsonOutput {
		output := map[string]interface{}{
			"plan":    plan,
			"dry_run": dryRun,
		}
		outputJSON(output)
		return nil
	}

	// Human-readable output
	fmt.Println("\n=== Migration Plan ===")
	fmt.Printf("From: %s\n", plan.From)
	fmt.Printf("To:   %s\n", plan.To)
	fmt.Println()
	fmt.Printf("Total selected: %d issues\n", plan.TotalSelected)
	if plan.AddedByDependency > 0 {
		fmt.Printf("Added by dependencies: %d issues\n", plan.AddedByDependency)
	}
	fmt.Printf("Total to migrate: %d issues\n", len(plan.IssueIDs))
	fmt.Println()
	fmt.Printf("Cross-repo edges preserved:\n")
	fmt.Printf("  Incoming: %d\n", plan.IncomingEdges)
	fmt.Printf("  Outgoing: %d\n", plan.OutgoingEdges)

	if plan.Orphans > 0 {
		fmt.Println()
		fmt.Printf("⚠️  Warning: Found %d orphaned dependencies\n", plan.Orphans)
		if len(plan.OrphanSamples) > 0 {
			fmt.Println("Sample orphaned IDs:")
			for _, id := range plan.OrphanSamples {
				fmt.Printf("  - %s\n", id)
			}
		}
	}

	if dryRun {
		fmt.Println("\n[DRY RUN] No changes made")
		if len(plan.IssueIDs) <= 20 {
			fmt.Println("\nIssues to migrate:")
			for _, id := range plan.IssueIDs {
				fmt.Printf("  - %s\n", id)
			}
		} else {
			fmt.Printf("\n(%d issues would be migrated, showing first 20)\n", len(plan.IssueIDs))
			for i := 0; i < 20 && i < len(plan.IssueIDs); i++ {
				fmt.Printf("  - %s\n", plan.IssueIDs[i])
			}
		}
	}

	return nil
}

func confirmMigration(plan migrationPlan) bool {
	fmt.Printf("\nMigrate %d issues from %s to %s? [y/N] ", len(plan.IssueIDs), plan.From, plan.To)
	var response string
	fmt.Scanln(&response)
	return strings.ToLower(strings.TrimSpace(response)) == "y"
}
func executeMigration(ctx context.Context, db *sql.DB, migrationSet []string, to string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("failed to begin transaction: %w", err)
	}
	defer tx.Rollback()

	now := time.Now()

	// Update source_repo for all issues in migration set
	for _, id := range migrationSet {
		_, err := tx.ExecContext(ctx,
			"UPDATE issues SET source_repo = ?, updated_at = ? WHERE id = ?",
			to, now, id)
		if err != nil {
			return fmt.Errorf("failed to update issue %s: %w", id, err)
		}

		// Mark as dirty for export
		_, err = tx.ExecContext(ctx,
			"INSERT OR IGNORE INTO dirty_issues(issue_id) VALUES (?)", id)
		if err != nil {
			return fmt.Errorf("failed to mark issue %s as dirty: %w", id, err)
		}
	}

	return tx.Commit()
}

func loadIDsFromFile(path string) ([]string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}

	lines := strings.Split(string(data), "\n")
	var ids []string
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line != "" && !strings.HasPrefix(line, "#") {
			ids = append(ids, line)
		}
	}

	return ids, nil
}

func init() {
	rootCmd.AddCommand(migrateIssuesCmd)

	migrateIssuesCmd.Flags().String("from", "", "Source repository (required)")
	migrateIssuesCmd.Flags().String("to", "", "Destination repository (required)")
	migrateIssuesCmd.Flags().String("status", "", "Filter by status (open/closed/all)")
	migrateIssuesCmd.Flags().Int("priority", -1, "Filter by priority (0-4)")
	migrateIssuesCmd.Flags().String("type", "", "Filter by issue type (bug/feature/task/epic/chore)")
	migrateIssuesCmd.Flags().StringSlice("label", nil, "Filter by labels (can specify multiple)")
	migrateIssuesCmd.Flags().StringSlice("id", nil, "Specific issue IDs to migrate (can specify multiple)")
	migrateIssuesCmd.Flags().String("ids-file", "", "File containing issue IDs (one per line)")
	migrateIssuesCmd.Flags().String("include", "none", "Include dependencies: none/upstream/downstream/closure")
	migrateIssuesCmd.Flags().Bool("within-from-only", true, "Only include dependencies from source repo")
	migrateIssuesCmd.Flags().Bool("dry-run", false, "Show plan without making changes")
	migrateIssuesCmd.Flags().Bool("strict", false, "Fail on orphaned dependencies or missing repos")
	migrateIssuesCmd.Flags().Bool("yes", false, "Skip confirmation prompt")

	migrateIssuesCmd.MarkFlagRequired("from")
	migrateIssuesCmd.MarkFlagRequired("to")
}
487	docs/MULTI_REPO_MIGRATION.md	Normal file
@@ -0,0 +1,487 @@
# Multi-Repo Migration Guide

This guide helps you adopt beads' multi-repo workflow for OSS contributions, team collaboration, and multi-phase development.

## Quick Start

**Already have beads installed?** Jump to your scenario:
- [OSS Contributor](#oss-contributor-workflow) - Keep planning out of upstream PRs
- [Team Member](#team-workflow) - Shared planning on branches
- [Multi-Phase Development](#multi-phase-development) - Separate repos per phase
- [Multiple Personas](#multiple-personas) - Architect vs. implementer separation

**New to beads?** See [QUICKSTART.md](../QUICKSTART.md) first.

## What is Multi-Repo Mode?

By default, beads stores issues in `.beads/beads.jsonl` in your current repository. Multi-repo mode lets you:

- **Route issues to different repositories** based on your role (maintainer vs. contributor)
- **Aggregate issues from multiple repos** into a unified view
- **Keep contributor planning separate** from upstream projects
- **Maintain git ledger everywhere** - no gitignored files

## When Do You Need Multi-Repo?

### You DON'T need multi-repo if:
- ✅ Working solo on your own project
- ✅ Team with shared repository and trust model
- ✅ All issues belong in the project's git history

### You DO need multi-repo if:
- 🔴 Contributing to OSS - don't pollute upstream with planning
- 🔴 Fork workflow - planning shouldn't appear in PRs
- 🔴 Multiple work phases - design vs. implementation repos
- 🔴 Multiple personas - architect planning vs. implementer tasks

## Core Concepts

### 1. Source Repository (`source_repo`)

Every issue has a `source_repo` field indicating which repository owns it:

```jsonl
{"id":"bd-abc","source_repo":".","title":"Core issue"}
{"id":"bd-xyz","source_repo":"~/.beads-planning","title":"Planning issue"}
```
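Because the ledger is plain JSONL, you can inspect ownership with ordinary shell tools. A minimal sketch, using sample lines mirroring the fields above (a real ledger lives at `.beads/beads.jsonl`):

```shell
# Sample ledger lines; substitute .beads/beads.jsonl in a real checkout.
cat > /tmp/sample.jsonl <<'EOF'
{"id":"bd-abc","source_repo":".","title":"Core issue"}
{"id":"bd-xyz","source_repo":"~/.beads-planning","title":"Planning issue"}
EOF

# Which issues are owned by the planning repo?
grep -F '"source_repo":"~/.beads-planning"' /tmp/sample.jsonl
```

This is just text matching; for anything structured, a JSON-aware tool like `jq` is a better fit.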
- `.` = Current repository (default)
- `~/.beads-planning` = Contributor planning repo
- `/path/to/repo` = Absolute path to another repo

### 2. Auto-Routing

Beads automatically routes new issues to the right repository based on your role:

```bash
# Maintainer (has SSH push access)
bd create "Fix bug" -p 1
# → Creates in current repo (source_repo = ".")

# Contributor (HTTPS or no push access)
bd create "Fix bug" -p 1
# → Creates in ~/.beads-planning (source_repo = "~/.beads-planning")
```
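As a rough illustration of that decision, here is a sketch that classifies a remote URL the way the comments above describe; it is illustrative only, since beads' actual detection also considers push access, not just the URL scheme:

```shell
# Hypothetical sketch: SSH-style remote URL suggests the maintainer
# route, anything else suggests the contributor route.
route_for() {
  case "$1" in
    git@*|ssh://*) echo "maintainer" ;;
    *)             echo "contributor" ;;
  esac
}

route_for "git@github.com:you/project.git"      # prints "maintainer"
route_for "https://github.com/you/project.git"  # prints "contributor"
```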
### 3. Multi-Repo Hydration

Beads can aggregate issues from multiple repositories into a unified database:

```bash
bd list --json
# Shows issues from:
# - Current repo (.)
# - Planning repo (~/.beads-planning)
# - Any configured additional repos
```

## OSS Contributor Workflow

**Problem:** You're contributing to an OSS project but don't want your experimental planning to appear in PRs.

**Solution:** Use a separate planning repository that's never committed to upstream.

### Setup (One-Time)

```bash
# 1. Fork and clone the upstream project
git clone https://github.com/you/project.git
cd project

# 2. Initialize beads (if not already done)
bd init

# 3. Run the contributor setup wizard
bd init --contributor

# The wizard will:
# - Detect that you're in a fork (checks for 'upstream' remote)
# - Prompt you to create a planning repo (~/.beads-planning by default)
# - Configure auto-routing (contributor → planning repo)
# - Set up multi-repo hydration
```

### Manual Configuration

If you prefer manual setup:

```bash
# 1. Create planning repository
mkdir -p ~/.beads-planning
cd ~/.beads-planning
git init
bd init --prefix plan

# 2. Configure routing in your fork
cd ~/projects/project
bd config set routing.mode auto
bd config set routing.contributor "~/.beads-planning"

# 3. Add planning repo to hydration sources
bd config set repos.additional "~/.beads-planning"
```

### Daily Workflow

```bash
# Work in your fork
cd ~/projects/project

# Create planning issues (auto-routed to ~/.beads-planning)
bd create "Investigate auth implementation" -p 1
bd create "Draft RFC for new feature" -p 2

# View all issues (current repo + planning repo)
bd ready
bd list --json

# Work on an issue
bd update plan-42 --status in_progress

# Complete work
bd close plan-42 --reason "Completed"

# Create PR - your planning issues never appear!
git add .
git commit -m "Fix authentication bug"
git push origin my-feature-branch
# ✅ PR only contains code changes, no .beads/ pollution
```

### Proposing Issues Upstream

If you want to share a planning issue with upstream:

```bash
# Option 1: Manually copy issue to upstream repo
bd show plan-42 --json > /tmp/issue.json
# (Send to maintainers or create GitHub issue)

# Option 2: Migrate issue (future feature, see bd-mlcz)
bd migrate plan-42 --to . --dry-run
bd migrate plan-42 --to .
```
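For Option 1, the exported JSON can be turned into an upstream GitHub issue with the `gh` CLI. A hedged sketch: it assumes the export carries `title` and `description` fields (check against your bd version) and that `gh` is installed and authenticated, so the final call is left commented out.

```shell
# Sample export standing in for the output of `bd show plan-42 --json`
cat > /tmp/issue.json <<'EOF'
{"id":"plan-42","title":"Draft RFC for new feature","description":"Details here"}
EOF

# Naive field extraction; fine for titles without escaped quotes,
# prefer `jq -r .title` for anything richer
title=$(sed -n 's/.*"title":"\([^"]*\)".*/\1/p' /tmp/issue.json)
echo "Would file upstream: $title"

# Uncomment once you've verified the upstream repo slug:
# gh issue create --repo upstream-org/project --title "$title" --body-file /tmp/issue.json
```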

## Team Workflow

**Problem:** Team members working on a shared repository with branches, but different levels of planning granularity.

**Solution:** Use a branch-based workflow with optional personal planning repos.

### Setup (Team Lead)

```bash
# 1. Initialize beads in main repo
cd ~/projects/team-project
bd init --prefix team

# 2. Run team setup wizard
bd init --team

# The wizard will:
# - Detect shared repository (SSH push access)
# - Configure auto-routing (maintainer → current repo)
# - Set up protected branch workflow (if using GitHub/GitLab)
# - Create example workflows
```

### Setup (Team Member)

```bash
# 1. Clone team repo
git clone git@github.com:team/project.git
cd project

# 2. Beads auto-detects you're a maintainer (SSH access)
bd create "Implement feature X" -p 1
# → Creates in current repo (team-123)

# 3. Optional: Create personal planning repo for experiments
mkdir -p ~/.beads-planning-personal
cd ~/.beads-planning-personal
git init
bd init --prefix exp

# 4. Configure multi-repo in team project
cd ~/projects/project
bd config set repos.additional "~/.beads-planning-personal"
```

### Daily Workflow

```bash
# Shared team planning (committed to repo)
bd create "Implement auth" -p 1 --repo .
# → team-42 (visible to entire team)

# Personal experiments (not committed to team repo)
bd create "Try alternative approach" -p 2 --repo ~/.beads-planning-personal
# → exp-99 (private planning)

# View all work
bd ready
bd list --json

# Complete team work
git add .beads/beads.jsonl
git commit -m "Updated issue tracker"
git push origin main
```

## Multi-Phase Development

**Problem:** Project has distinct phases (planning, implementation, maintenance) that need separate issue spaces.

**Solution:** Use separate repositories for each phase.

### Setup

```bash
# 1. Create phase repositories
mkdir -p ~/projects/myapp-planning
mkdir -p ~/projects/myapp-implementation
mkdir -p ~/projects/myapp-maintenance

# 2. Initialize each phase
cd ~/projects/myapp-planning
git init
bd init --prefix plan

cd ~/projects/myapp-implementation
git init
bd init --prefix impl

cd ~/projects/myapp-maintenance
git init
bd init --prefix maint

# 3. Configure aggregation in main workspace
cd ~/projects/myapp-implementation
bd config set repos.additional "~/projects/myapp-planning,~/projects/myapp-maintenance"
```

### Workflow

```bash
# Phase 1: Planning
cd ~/projects/myapp-planning
bd create "Design auth system" -p 1 -t epic
bd create "Research OAuth providers" -p 1

# Phase 2: Implementation (view planning + implementation issues)
cd ~/projects/myapp-implementation
bd ready  # Shows issues from both repos
bd create "Implement auth backend" -p 1
bd dep add impl-42 plan-10 --type blocks  # Link across repos

# Phase 3: Maintenance
cd ~/projects/myapp-maintenance
bd create "Security patch for auth" -p 0 -t bug
```

## Multiple Personas

**Problem:** You work as both architect (high-level planning) and implementer (detailed tasks).

**Solution:** Separate repositories for each persona's work.

### Setup

```bash
# 1. Create persona repos
mkdir -p ~/architect-planning
mkdir -p ~/implementer-tasks

cd ~/architect-planning
git init
bd init --prefix arch

cd ~/implementer-tasks
git init
bd init --prefix impl

# 2. Configure aggregation
cd ~/implementer-tasks
bd config set repos.additional "~/architect-planning"
```

### Workflow

```bash
# Architect mode
cd ~/architect-planning
bd create "System architecture for feature X" -p 1 -t epic
bd create "Database schema design" -p 1

# Implementer mode (sees both architect + implementation tasks)
cd ~/implementer-tasks
bd ready
bd create "Implement user table" -p 1
bd dep add impl-10 arch-42 --type blocks

# Complete implementation
bd close impl-10 --reason "Completed"
```

## Configuration Reference

### Routing Settings

```bash
# Auto-detect role and route accordingly
bd config set routing.mode auto

# Always use default repo (ignore role detection)
bd config set routing.mode explicit
bd config set routing.default "."

# Configure repos for each role
bd config set routing.maintainer "."
bd config set routing.contributor "~/.beads-planning"
```

### Multi-Repo Hydration

```bash
# Add additional repos to aggregate
bd config set repos.additional "~/repo1,~/repo2,~/repo3"

# Set primary repo (optional)
bd config set repos.primary "."
```

### Override Auto-Routing

```bash
# Force issue to specific repo (ignores auto-routing)
bd create "Issue" -p 1 --repo /path/to/repo
```

## Troubleshooting

### Issues appearing in the wrong repository

**Problem:** `bd create` routes issues to an unexpected repository.

**Solution:**
```bash
# Check current routing configuration
bd config get routing.mode
bd config get routing.maintainer
bd config get routing.contributor

# Check detected role
bd info --json | jq '.role'

# Override with explicit flag
bd create "Issue" -p 1 --repo .
```

### Can't see issues from other repos

**Problem:** `bd list` only shows issues from the current repo.

**Solution:**
```bash
# Check multi-repo configuration
bd config get repos.additional

# Add missing repos
bd config set repos.additional "~/repo1,~/repo2"

# Verify hydration
bd sync
bd list --json
```

### Git merge conflicts in .beads/beads.jsonl

**Problem:** Multiple repos modify the same JSONL file.

**Solution:** See [TROUBLESHOOTING.md](../TROUBLESHOOTING.md#git-merge-conflicts) and consider the [beads-merge](https://github.com/neongreen/mono/tree/main/beads-merge) tool.
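One low-tech mitigation worth knowing (a generic git technique, not an official beads feature): because the ledger is append-oriented JSONL, a git union merge keeps lines from both sides instead of conflicting. Review merges afterward, since a union merge can duplicate a line when both sides edited the same issue. Demonstrated here in a scratch repo:

```shell
# Scratch repo so the sketch doesn't touch a real project
cd "$(mktemp -d)" && git init -q

# Declare a union merge driver for the beads ledger
echo '.beads/beads.jsonl merge=union' >> .gitattributes
git add .gitattributes
cat .gitattributes
```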

### Discovered issues in the wrong repository

**Problem:** Issues created with a `discovered-from` dependency appear in the wrong repo.

**Solution:** Discovered issues automatically inherit the parent's `source_repo`. This is intentional. To override:
```bash
bd create "Issue" -p 1 --deps discovered-from:bd-42 --repo /different/repo
```

### Planning repo polluting PRs

**Problem:** Your `~/.beads-planning` changes appear in PRs to upstream.

**Solution:** This shouldn't happen if configured correctly. Verify:
```bash
# Check that planning repo is separate from fork
ls -la ~/.beads-planning/.git   # Should exist
ls -la ~/projects/fork/.beads/  # Should NOT contain planning issues

# Verify routing
bd config get routing.contributor  # Should be ~/.beads-planning
```
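As a belt-and-suspenders check, a hypothetical guard (not a built-in beads hook) can warn whenever staged changes touch `.beads/`, so planning data can't sneak onto a PR branch. Sketched below in a scratch repo; adapt it as a `.git/hooks/pre-push` script if it suits your setup.

```shell
# Scratch repo with a staged .beads/ change to demonstrate detection
cd "$(mktemp -d)" && git init -q
mkdir -p .beads && echo '{}' > .beads/beads.jsonl
git add .beads/beads.jsonl

# The guard: flag any staged path under .beads/
if git diff --cached --name-only | grep -q '^\.beads/'; then
  echo "WARNING: staged .beads/ changes would appear in your PR"
fi
```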

## Backward Compatibility

### Migrating from Single-Repo

No migration needed! Multi-repo mode is opt-in:

```bash
# Before (single repo)
bd create "Issue" -p 1
# → Creates in .beads/beads.jsonl

# After (multi-repo configured)
bd create "Issue" -p 1
# → Auto-routed based on role
# → Old issues in .beads/beads.jsonl still work
```

### Disabling Multi-Repo

```bash
# Remove routing configuration
bd config unset routing.mode
bd config unset repos.additional

# All issues go to current repo again
bd create "Issue" -p 1
# → Back to single-repo mode
```

## Best Practices

### OSS Contributors

- ✅ Always use `~/.beads-planning` or similar for personal planning
- ✅ Never commit `.beads/` changes to upstream PRs
- ✅ Use descriptive prefixes (`plan-`, `exp-`) for clarity
- ❌ Don't mix planning and implementation in the same repo

### Teams

- ✅ Commit `.beads/beads.jsonl` to the shared repository
- ✅ Use a protected branch workflow for main/master
- ✅ Review issue changes in PRs like code changes
- ❌ Don't gitignore `.beads/` - you lose the git ledger

### Multi-Phase Projects

- ✅ Use clear phase naming (`planning`, `impl`, `maint`)
- ✅ Link issues across phases with dependencies
- ✅ Archive completed phases periodically
- ❌ Don't duplicate issues across phases

## Next Steps

- **CLI Reference:** See [README.md](../README.md) for command details
- **Configuration Guide:** See [CONFIG.md](../CONFIG.md) for all config options
- **Troubleshooting:** See [TROUBLESHOOTING.md](../TROUBLESHOOTING.md)
- **Multi-Repo Internals:** See [MULTI_REPO_HYDRATION.md](MULTI_REPO_HYDRATION.md) and [ROUTING.md](ROUTING.md)

## Related Issues

- [bd-8rd](/.beads/beads.jsonl#bd-8rd) - Migration and onboarding epic
- [bd-mlcz](/.beads/beads.jsonl#bd-mlcz) - `bd migrate` command (planned)
- [bd-kla1](/.beads/beads.jsonl#bd-kla1) - `bd init --contributor` wizard (planned)
- [bd-twlr](/.beads/beads.jsonl#bd-twlr) - `bd init --team` wizard (planned)