fix(routing): auto-enable hydration and flush JSONL after routed create (#1251)

* fix(routing): auto-enable hydration and flush JSONL after routed create

Fixes a split-brain bug where issues routed to a different repo (via routing.mode=auto)
weren't visible in 'bd list' because the JSONL wasn't updated and hydration wasn't configured.

**Problem**: When routing.mode=auto routes issues to a separate repo (e.g., ~/.beads-planning),
those issues don't appear in 'bd list' because:
1. The target repo's JSONL isn't flushed after create
2. Multi-repo hydration (repos.additional) isn't configured automatically
3. There are no doctor warnings about the misconfiguration

**Changes**:

1. **Auto-flush JSONL after routed create** (cmd/bd/create.go)
   - After routing issue to target repo, immediately flush to JSONL
   - Tries target daemon's export RPC first (if daemon running)
   - Falls back to direct JSONL export if no daemon
   - Ensures hydration can read the new issue immediately

2. **Enable hydration in bd init --contributor** (cmd/bd/init_contributor.go)
   - Wizard now automatically adds planning repo to repos.additional
   - Users no longer need to manually run 'bd repo add'
   - Routed issues appear in bd list immediately after setup

3. **Add doctor check for hydrated repo daemons** (cmd/bd/doctor/daemon.go)
   - New CheckHydratedRepoDaemons() warns if daemons not running
   - Without daemons, JSONL becomes stale and hydration breaks
   - Suggests: cd <repo> && bd daemon start --local

4. **Add doctor check for routing+hydration mismatch** (cmd/bd/doctor/config_values.go)
   - Validates routing targets are in repos.additional
   - Catches split-brain configuration before users encounter it
   - Suggests: bd repo add <routing-target>

**Testing**: Builds successfully. Unit/integration tests pending.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* test(routing): add comprehensive tests for routing fixes

Add unit tests for all 4 routing/hydration fixes:

1. **create_routing_flush_test.go** - Test JSONL flush after routing
   - TestFlushRoutedRepo_DirectExport: Verify direct JSONL export
   - TestPerformAtomicExport: Test atomic file operations
   - TestFlushRoutedRepo_PathExpansion: Test path handling
   - TestRoutingWithHydrationIntegration: E2E routing+hydration test

2. **daemon_test.go** - Test hydrated repo daemon check
   - TestCheckHydratedRepoDaemons: Test with/without daemons running
   - Covers no repos, daemons running, daemons missing scenarios

3. **config_values_test.go** - Test routing+hydration validation
   - Test routing without hydration (should warn)
   - Test routing with correct hydration (should pass)
   - Test routing target not in hydration list (should warn)
   - Test maintainer="." edge case (should pass)

All tests follow existing patterns and use t.TempDir() for isolation.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(tests): fix test failures and refine routing validation logic

Fixes test failures and improves validation accuracy:

1. **Fix routing+hydration validation** (config_values.go)
   - Exclude "." from hasRoutingTargets check (current repo doesn't need hydration)
   - Prevents false warnings when maintainer="." or contributor="."

2. **Fix test ID generation** (create_routing_flush_test.go)
   - Use auto-generated IDs instead of hard-coded "beads-test1"
   - Respects test store prefix configuration (test-)
   - Fixed json.NewDecoder usage (file handle, not os.Open result)

3. **Fix config validation tests** (config_values_test.go)
   - Create actual directories for routing paths to pass path validation
   - Tests now verify both routing+hydration AND path existence checks

4. **Fix daemon test expectations** (daemon_test.go)
   - When database unavailable, check returns "No additional repos" not error
   - This is correct behavior (graceful degradation)
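
The refined validation in fix 1 reduces to set containment: every routing target other than "." must appear in repos.additional. A hedged sketch of that check, with illustrative names rather than the actual doctor code:

```go
package main

// findUnhydratedTargets reports routing targets missing from
// repos.additional. "." (the current repo) is excluded, since the
// current repo never needs a hydration entry.
func findUnhydratedTargets(routingTargets, additionalRepos []string) []string {
	hydrated := make(map[string]bool, len(additionalRepos))
	for _, r := range additionalRepos {
		hydrated[r] = true
	}
	var missing []string
	for _, t := range routingTargets {
		if t != "." && !hydrated[t] {
			missing = append(missing, t)
		}
	}
	return missing
}
```

Excluding "." is what prevents the false warning when maintainer="." or contributor="." is the only routing target.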

All tests now pass:
- TestFlushRoutedRepo* (3 tests)
- TestPerformAtomicExport
- TestCheckHydratedRepoDaemons (3 subtests)
- TestCheckConfigValues routing tests (5 subtests)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* docs: clarify when git config beads.role maintainer is needed

Clarify that the maintainer role config is only needed in one edge case:
- Using a GitHub HTTPS URL without credentials
- While still having write access (being a maintainer)

In most cases, beads auto-detects correctly via:
- SSH URLs (git@github.com:owner/repo.git)
- HTTPS with credentials

This prevents confusion: users with SSH or credential-based HTTPS
don't need to manually configure their role.
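
The URL-shape part of that auto-detection can be illustrated with a sketch. This is an assumption-laden simplification: the real beads detection also consults stored credentials and actual write access, which a URL alone cannot prove.

```go
package main

import "strings"

// remoteSuggestsWriteAccess reports whether a git remote URL's shape alone
// implies authenticated (likely-maintainer) access. Illustrative only.
func remoteSuggestsWriteAccess(remote string) bool {
	// SSH remotes (git@github.com:owner/repo.git or ssh://...) imply a key.
	if strings.HasPrefix(remote, "git@") || strings.HasPrefix(remote, "ssh://") {
		return true
	}
	// HTTPS with embedded credentials (https://user@github.com/...).
	if strings.HasPrefix(remote, "https://") && strings.Contains(remote, "@") {
		return true
	}
	// Plain HTTPS is ambiguous: this is the edge case where
	// `git config beads.role maintainer` may be needed.
	return false
}
```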

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(lint): address linter warnings in routing flush code

- Add missing sqlite import in daemon.go
- Fix unchecked client.Close() error return
- Fix unchecked tempFile.Close() error returns
- Mark unused parameters with _ prefix
- Add nolint:gosec for safe tempPath construction

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Roland Tritsch <roland@ailtir.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
This commit is contained in:
Steve Yegge, 2026-01-21 21:22:04 -08:00 (committed by GitHub)
parent abd3feb761
commit be306b6c66
11 changed files with 930 additions and 0 deletions


@@ -3,6 +3,7 @@ package doctor
import (
	"context"
	"database/sql"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
@@ -11,6 +12,7 @@ import (
	"github.com/steveyegge/beads/internal/git"
	"github.com/steveyegge/beads/internal/rpc"
	"github.com/steveyegge/beads/internal/storage/factory"
	"github.com/steveyegge/beads/internal/storage/sqlite"
	"github.com/steveyegge/beads/internal/syncbranch"
)
@@ -287,3 +289,98 @@ func CheckLegacyDaemonConfig(path string) DoctorCheck {
		Message: "Using current config format",
	}
}

// CheckHydratedRepoDaemons checks if daemons are running for all repos
// configured in repos.additional. Without running daemons, JSONL files won't
// be kept updated, causing multi-repo hydration to become stale (bd-fix-routing).
func CheckHydratedRepoDaemons(path string) DoctorCheck {
	beadsDir := filepath.Join(path, ".beads")
	dbPath := filepath.Join(beadsDir, "beads.db")

	ctx := context.Background()
	store, err := sqlite.New(ctx, dbPath)
	if err != nil {
		return DoctorCheck{
			Name:    "Hydrated Repo Daemons",
			Status:  StatusOK,
			Message: "Could not check config (database unavailable)",
		}
	}
	defer func() { _ = store.Close() }()

	// Get repos.additional from config
	additionalReposStr, _ := store.GetConfig(ctx, "repos.additional")
	if additionalReposStr == "" {
		return DoctorCheck{
			Name:    "Hydrated Repo Daemons",
			Status:  StatusOK,
			Message: "No additional repos configured (N/A)",
		}
	}

	// Parse additional repos (stored as a JSON array string)
	var additionalRepos []string
	if err := unmarshalConfigValue(additionalReposStr, &additionalRepos); err != nil {
		return DoctorCheck{
			Name:    "Hydrated Repo Daemons",
			Status:  StatusWarning,
			Message: "Could not parse repos.additional config",
			Detail:  err.Error(),
		}
	}
	if len(additionalRepos) == 0 {
		return DoctorCheck{
			Name:    "Hydrated Repo Daemons",
			Status:  StatusOK,
			Message: "No additional repos configured (N/A)",
		}
	}

	// Check each additional repo for a running daemon
	var missingDaemons []string
	for _, repoPath := range additionalRepos {
		// Expand ~ to the home directory and construct the socket path
		expandedPath := expandPath(repoPath)
		socketPath := filepath.Join(expandedPath, ".beads", "bd.sock")

		// Try to connect to the daemon
		client, err := rpc.TryConnect(socketPath)
		if err == nil && client != nil {
			// Daemon is running, all good
			_ = client.Close()
		} else {
			// No daemon running
			missingDaemons = append(missingDaemons, repoPath)
		}
	}

	if len(missingDaemons) > 0 {
		return DoctorCheck{
			Name:    "Hydrated Repo Daemons",
			Status:  StatusWarning,
			Message: fmt.Sprintf("Daemons not running in %d hydrated repo(s)", len(missingDaemons)),
			Detail:  fmt.Sprintf("Missing daemons in: %v", missingDaemons),
			Fix:     "For each repo, run: cd <repo> && bd daemon start --local",
		}
	}
	return DoctorCheck{
		Name:    "Hydrated Repo Daemons",
		Status:  StatusOK,
		Message: fmt.Sprintf("All %d hydrated repo(s) have running daemons", len(additionalRepos)),
	}
}

// unmarshalConfigValue unmarshals a JSON config value.
func unmarshalConfigValue(value string, target interface{}) error {
	// Config values are stored as JSON; an empty value is a no-op.
	if value == "" {
		return nil
	}
	// Unmarshal JSON into target
	return json.Unmarshal([]byte(value), target)
}