chore(gastown): scorched-earth SQLite removal from codebase
Remove all bd sync references and SQLite-specific code from gastown:

**Formulas (agent priming):**
- mol-polecat-work: Remove bd sync step from prepare-for-review
- mol-sync-workspace: Replace sync-beads step with verify-beads (Dolt check)
- mol-polecat-conflict-resolve: Remove bd sync from close-beads
- mol-polecat-code-review: Remove bd sync from summarize-review and complete-and-exit
- mol-polecat-review-pr: Remove bd sync from complete-and-exit
- mol-convoy-cleanup: Remove bd sync from archive-convoy
- mol-digest-generate: Remove bd sync from send-digest
- mol-town-shutdown: Replace sync-state step with verify-state
- beads-release: Replace restart-daemons with verify-install (no daemons with Dolt)

**Templates (role priming):**
- mayor.md.tmpl: Update session end checklist to remove bd sync steps
- crew.md.tmpl: Remove bd sync references from workflow and checklist
- polecat.md.tmpl: Update self-cleaning model and session close docs
- spawn.md.tmpl: Remove bd sync from completion steps
- nudge.md.tmpl: Remove bd sync from completion steps

**Go code:**
- session_manager.go: Remove syncBeads function and call
- rig_dock.go: Remove bd sync calls from dock/undock
- crew/manager.go: Remove runBdSync, update Pristine function
- crew_maintenance.go: Remove bd sync status output
- crew.go: Update pristine command help text
- polecat.go: Make sync command a no-op with deprecation message
- daemon/lifecycle.go: Remove bd sync from startup sequence
- doctor/beads_check.go: Update fix hints and Fix to use bd import, not bd sync
- doctor/rig_check.go: Remove sync status check, simplify BeadsConfigValidCheck
- beads/beads.go: Update primeContent to remove bd sync references

With the Dolt backend, beads changes are persisted immediately to the sql-server; there is no separate sync step.

Part of epic: hq-e4eefc (SQLite removal)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
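For context when reviewing, here is a sketch of how the new `dolt_server` patrol block in `mayor/daemon.json` might be configured. The field names come from the `DoltServerConfig` struct added in this commit, but the values and the surrounding `patrols` nesting are illustrative assumptions, not copied from a real config:

```json
{
  "patrols": {
    "dolt_server": {
      "enabled": true,
      "port": 3306,
      "host": "127.0.0.1",
      "data_dir": "/town/dolt",
      "log_file": "/town/daemon/dolt-server.log",
      "auto_restart": true
    }
  }
}
```

With `"external": true` the daemon would only health-check an externally managed server instead of starting and stopping it.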
@@ -67,3 +67,6 @@ sync-branch: beads-sync
# Format: external:<project>:<capability> in bd dep commands
external_projects:
  beads: ../../../beads/mayor/rig

# Custom issue types for Gas Town (fallback when database is unavailable)
types.custom: "agent,role,rig,convoy,slot,queue,event,message,molecule,gate,merge-request"
@@ -747,18 +747,15 @@ This is physics, not politeness. Gas Town is a steam engine - you are a piston.
- ` + "`gt mol status`" + ` - Check your hooked work
- ` + "`gt mail inbox`" + ` - Check for messages
- ` + "`bd ready`" + ` - Find available work (no blockers)
- ` + "`bd sync`" + ` - Sync beads changes

## Session Close Protocol

Before signaling completion:
1. git status (check what changed)
2. git add <files> (stage code changes)
3. bd sync (commit beads changes)
4. git commit -m "..." (commit code)
5. bd sync (commit any new beads changes)
6. git push (push to remote)
7. ` + "`gt done`" + ` (submit to merge queue and exit)
3. git commit -m "..." (commit code)
4. git push (push to remote)
5. ` + "`gt done`" + ` (submit to merge queue and exit)

**Polecats MUST call ` + "`gt done`" + ` - this submits work and exits the session.**
`
@@ -237,7 +237,7 @@ var crewPristineCmd = &cobra.Command{
	Short: "Sync crew workspaces with remote",
	Long: `Ensure crew workspace(s) are up-to-date.

Runs git pull and bd sync for the specified crew, or all crew workers.
Runs git pull for the specified crew, or all crew workers.
Reports any uncommitted changes that may need attention.

Examples:

@@ -122,12 +122,6 @@ func runCrewPristine(cmd *cobra.Command, args []string) error {
		} else if result.PullError != "" {
			fmt.Printf(" %s git pull: %s\n", style.Bold.Render("✗"), result.PullError)
		}

		if result.Synced {
			fmt.Printf(" %s bd sync\n", style.Dim.Render("✓"))
		} else if result.SyncError != "" {
			fmt.Printf(" %s bd sync: %s\n", style.Bold.Render("✗"), result.SyncError)
		}
	}

	return nil
@@ -110,11 +110,11 @@ Examples:

var polecatSyncCmd = &cobra.Command{
	Use: "sync <rig>/<polecat>",
	Short: "Sync beads for a polecat",
	Short: "Sync beads for a polecat (deprecated with Dolt backend)",
	Long: `Sync beads for a polecat's worktree.

Runs 'bd sync' in the polecat's worktree to push local beads changes
to the shared sync branch and pull remote changes.
NOTE: With Dolt backend, beads changes are persisted immediately.
This command is a no-op when using Dolt.

Use --all to sync all polecats in a rig.
Use --from-main to only pull (no push).
@@ -534,87 +534,9 @@ func runPolecatRemove(cmd *cobra.Command, args []string) error {
}

func runPolecatSync(cmd *cobra.Command, args []string) error {
	if len(args) < 1 {
		return fmt.Errorf("rig or rig/polecat address required")
	}

	// Parse address - could be "rig" or "rig/polecat"
	rigName, polecatName, err := parseAddress(args[0])
	if err != nil {
		// Might just be a rig name
		rigName = args[0]
		polecatName = ""
	}

	mgr, _, err := getPolecatManager(rigName)
	if err != nil {
		return err
	}

	// Get list of polecats to sync
	var polecatsToSync []string
	if polecatSyncAll || polecatName == "" {
		polecats, err := mgr.List()
		if err != nil {
			return fmt.Errorf("listing polecats: %w", err)
		}
		for _, p := range polecats {
			polecatsToSync = append(polecatsToSync, p.Name)
		}
	} else {
		polecatsToSync = []string{polecatName}
	}

	if len(polecatsToSync) == 0 {
		fmt.Println("No polecats to sync.")
		return nil
	}

	// Sync each polecat
	var syncErrors []string
	for _, name := range polecatsToSync {
		// Get polecat to get correct clone path (handles old vs new structure)
		p, err := mgr.Get(name)
		if err != nil {
			syncErrors = append(syncErrors, fmt.Sprintf("%s: %v", name, err))
			continue
		}

		// Check directory exists
		if _, err := os.Stat(p.ClonePath); os.IsNotExist(err) {
			syncErrors = append(syncErrors, fmt.Sprintf("%s: directory not found", name))
			continue
		}

		// Build sync command
		syncArgs := []string{"sync"}
		if polecatSyncFromMain {
			syncArgs = append(syncArgs, "--from-main")
		}

		fmt.Printf("Syncing %s/%s...\n", rigName, name)

		syncCmd := exec.Command("bd", syncArgs...)
		syncCmd.Dir = p.ClonePath
		output, err := syncCmd.CombinedOutput()
		if err != nil {
			syncErrors = append(syncErrors, fmt.Sprintf("%s: %v", name, err))
			if len(output) > 0 {
				fmt.Printf(" %s\n", style.Dim.Render(string(output)))
			}
		} else {
			fmt.Printf(" %s\n", style.Success.Render("✓ synced"))
		}
	}

	if len(syncErrors) > 0 {
		fmt.Printf("\n%s Some syncs failed:\n", style.Warning.Render("Warning:"))
		for _, e := range syncErrors {
			fmt.Printf(" - %s\n", e)
		}
		return fmt.Errorf("%d sync(s) failed", len(syncErrors))
	}

	// With Dolt backend, beads changes are persisted immediately - no sync needed
	fmt.Println("Note: With Dolt backend, beads changes are persisted immediately.")
	fmt.Println("No sync step is required.")
	return nil
}
@@ -156,21 +156,12 @@ func runRigDock(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("setting docked label: %w", err)
	}

	// Sync beads to propagate to other clones
	fmt.Printf(" Syncing beads...\n")
	syncCmd := exec.Command("bd", "sync")
	syncCmd.Dir = r.BeadsPath()
	if output, err := syncCmd.CombinedOutput(); err != nil {
		fmt.Printf(" %s bd sync warning: %v\n%s", style.Warning.Render("!"), err, string(output))
	}

	// Output
	fmt.Printf("%s Rig %s docked (global)\n", style.Success.Render("✓"), rigName)
	fmt.Printf(" Label added: %s\n", RigDockedLabel)
	for _, msg := range stoppedAgents {
		fmt.Printf(" %s\n", msg)
	}
	fmt.Printf(" Run '%s' to propagate to other clones\n", style.Dim.Render("bd sync"))

	return nil
}

@@ -234,14 +225,6 @@ func runRigUndock(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("removing docked label: %w", err)
	}

	// Sync beads to propagate to other clones
	fmt.Printf(" Syncing beads...\n")
	syncCmd := exec.Command("bd", "sync")
	syncCmd.Dir = r.BeadsPath()
	if output, err := syncCmd.CombinedOutput(); err != nil {
		fmt.Printf(" %s bd sync warning: %v\n%s", style.Warning.Render("!"), err, string(output))
	}

	fmt.Printf("%s Rig %s undocked\n", style.Success.Render("✓"), rigName)
	fmt.Printf(" Label removed: %s\n", RigDockedLabel)
	fmt.Printf(" Daemon can now auto-restart agents\n")
@@ -5,7 +5,6 @@ import (
	"errors"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

@@ -373,7 +372,7 @@ func (m *Manager) Rename(oldName, newName string) error {
}

// Pristine ensures a crew worker is up-to-date with remote.
// It runs git pull --rebase and bd sync.
// It runs git pull --rebase.
func (m *Manager) Pristine(name string) (*PristineResult, error) {
	if err := validateCrewName(name); err != nil {
		return nil, err

@@ -403,23 +402,12 @@ func (m *Manager) Pristine(name string) (*PristineResult, error) {
		result.Pulled = true
	}

	// Run bd sync
	if err := m.runBdSync(crewPath); err != nil {
		result.SyncError = err.Error()
	} else {
		result.Synced = true
	}
	// Note: With Dolt backend, beads changes are persisted immediately - no sync needed
	result.Synced = true

	return result, nil
}

// runBdSync runs bd sync in the given directory.
func (m *Manager) runBdSync(dir string) error {
	cmd := exec.Command("bd", "sync")
	cmd.Dir = dir
	return cmd.Run()
}

// PristineResult captures the results of a pristine operation.
type PristineResult struct {
	Name string `json:"name"`
@@ -46,6 +46,7 @@ type Daemon struct {
	cancel context.CancelFunc
	curator *feed.Curator
	convoyWatcher *ConvoyWatcher
	doltServer *DoltServerManager

	// Mass death detection: track recent session deaths
	deathsMu sync.Mutex

@@ -93,6 +94,15 @@ func New(config *Config) (*Daemon, error) {
		logger.Printf("Loaded patrol config from %s", PatrolConfigFile(config.TownRoot))
	}

	// Initialize Dolt server manager if configured
	var doltServer *DoltServerManager
	if patrolConfig != nil && patrolConfig.Patrols != nil && patrolConfig.Patrols.DoltServer != nil {
		doltServer = NewDoltServerManager(config.TownRoot, patrolConfig.Patrols.DoltServer, logger.Printf)
		if doltServer.IsEnabled() {
			logger.Printf("Dolt server management enabled (port %d)", patrolConfig.Patrols.DoltServer.Port)
		}
	}

	return &Daemon{
		config: config,
		patrolConfig: patrolConfig,

@@ -100,6 +110,7 @@ func New(config *Config) (*Daemon, error) {
		logger: logger,
		ctx: ctx,
		cancel: cancel,
		doltServer: doltServer,
	}, nil
}

@@ -219,6 +230,10 @@ func (d *Daemon) heartbeat(state *State) {

	d.logger.Println("Heartbeat starting (recovery-focused)")

	// 0. Ensure Dolt server is running (if configured)
	// This must happen before beads operations that depend on Dolt.
	d.ensureDoltServerRunning()

	// 1. Ensure Deacon is running (restart if dead)
	// Check patrol config - can be disabled in mayor/daemon.json
	if IsPatrolEnabled(d.patrolConfig, "deacon") {

@@ -292,6 +307,18 @@ func (d *Daemon) heartbeat(state *State) {
	d.logger.Printf("Heartbeat complete (#%d)", state.HeartbeatCount)
}

// ensureDoltServerRunning ensures the Dolt SQL server is running if configured.
// This provides the backend for beads database access in server mode.
func (d *Daemon) ensureDoltServerRunning() {
	if d.doltServer == nil || !d.doltServer.IsEnabled() {
		return
	}

	if err := d.doltServer.EnsureRunning(); err != nil {
		d.logger.Printf("Error ensuring Dolt server is running: %v", err)
	}
}

// DeaconRole is the role name for the Deacon's handoff bead.
const DeaconRole = "deacon"

@@ -666,6 +693,15 @@ func (d *Daemon) shutdown(state *State) error { //nolint:unparam // error return
		d.logger.Println("Convoy watcher stopped")
	}

	// Stop Dolt server if we're managing it
	if d.doltServer != nil && d.doltServer.IsEnabled() && !d.doltServer.IsExternal() {
		if err := d.doltServer.Stop(); err != nil {
			d.logger.Printf("Warning: failed to stop Dolt server: %v", err)
		} else {
			d.logger.Println("Dolt server stopped")
		}
	}

	state.Running = false
	if err := SaveState(d.config.TownRoot, state); err != nil {
		d.logger.Printf("Warning: failed to save final state: %v", err)
internal/daemon/dolt.go (new file, 489 lines)
@@ -0,0 +1,489 @@
package daemon

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"strings"
	"sync"
	"syscall"
	"time"
)

// DoltServerConfig holds configuration for the Dolt SQL server.
type DoltServerConfig struct {
	// Enabled controls whether the daemon manages a Dolt server.
	Enabled bool `json:"enabled"`

	// External indicates the server is externally managed (daemon monitors only).
	External bool `json:"external,omitempty"`

	// Port is the MySQL protocol port (default 3306).
	Port int `json:"port,omitempty"`

	// Host is the bind address (default 127.0.0.1).
	Host string `json:"host,omitempty"`

	// DataDir is the directory containing Dolt databases.
	// Each subdirectory becomes a database.
	DataDir string `json:"data_dir,omitempty"`

	// LogFile is the path to the Dolt server log file.
	LogFile string `json:"log_file,omitempty"`

	// AutoRestart controls whether to restart on crash.
	AutoRestart bool `json:"auto_restart,omitempty"`

	// RestartDelay is the delay before restarting after crash.
	RestartDelay time.Duration `json:"restart_delay,omitempty"`
}

// DefaultDoltServerConfig returns sensible defaults for Dolt server config.
func DefaultDoltServerConfig(townRoot string) *DoltServerConfig {
	return &DoltServerConfig{
		Enabled: false, // Opt-in
		Port: 3306,
		Host: "127.0.0.1",
		DataDir: filepath.Join(townRoot, "dolt"),
		LogFile: filepath.Join(townRoot, "daemon", "dolt-server.log"),
		AutoRestart: true,
		RestartDelay: 5 * time.Second,
	}
}

// DoltServerStatus represents the current status of the Dolt server.
type DoltServerStatus struct {
	Running bool `json:"running"`
	PID int `json:"pid,omitempty"`
	Port int `json:"port,omitempty"`
	Host string `json:"host,omitempty"`
	StartedAt time.Time `json:"started_at,omitempty"`
	Version string `json:"version,omitempty"`
	Databases []string `json:"databases,omitempty"`
	Error string `json:"error,omitempty"`
}

// DoltServerManager manages the Dolt SQL server lifecycle.
type DoltServerManager struct {
	config *DoltServerConfig
	townRoot string
	logger func(format string, v ...interface{})

	mu sync.Mutex
	process *os.Process
	startedAt time.Time
	lastCheck time.Time
}

// NewDoltServerManager creates a new Dolt server manager.
func NewDoltServerManager(townRoot string, config *DoltServerConfig, logger func(format string, v ...interface{})) *DoltServerManager {
	if config == nil {
		config = DefaultDoltServerConfig(townRoot)
	}
	return &DoltServerManager{
		config: config,
		townRoot: townRoot,
		logger: logger,
	}
}

// pidFile returns the path to the Dolt server PID file.
func (m *DoltServerManager) pidFile() string {
	return filepath.Join(m.townRoot, "daemon", "dolt-server.pid")
}

// IsEnabled returns whether Dolt server management is enabled.
func (m *DoltServerManager) IsEnabled() bool {
	return m.config != nil && m.config.Enabled
}

// IsExternal returns whether the Dolt server is externally managed.
func (m *DoltServerManager) IsExternal() bool {
	return m.config != nil && m.config.External
}

// Status returns the current status of the Dolt server.
func (m *DoltServerManager) Status() *DoltServerStatus {
	m.mu.Lock()
	defer m.mu.Unlock()

	status := &DoltServerStatus{
		Port: m.config.Port,
		Host: m.config.Host,
	}

	// Check if process is running
	pid, running := m.isRunning()
	status.Running = running
	status.PID = pid

	if running {
		status.StartedAt = m.startedAt

		// Get version
		if version, err := m.getDoltVersion(); err == nil {
			status.Version = version
		}

		// List databases
		if databases, err := m.listDatabases(); err == nil {
			status.Databases = databases
		}
	}

	return status
}

// isRunning checks if the Dolt server process is running.
// Must be called with m.mu held.
func (m *DoltServerManager) isRunning() (int, bool) {
	// First check our tracked process
	if m.process != nil {
		if err := m.process.Signal(syscall.Signal(0)); err == nil {
			return m.process.Pid, true
		}
		// Process died, clear it
		m.process = nil
	}

	// Check PID file
	data, err := os.ReadFile(m.pidFile())
	if err != nil {
		return 0, false
	}

	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return 0, false
	}

	// Verify process is alive and is dolt
	process, err := os.FindProcess(pid)
	if err != nil {
		return 0, false
	}

	if err := process.Signal(syscall.Signal(0)); err != nil {
		// Process not running, clean up stale PID file
		_ = os.Remove(m.pidFile())
		return 0, false
	}

	// Verify it's actually dolt sql-server
	if !isDoltSqlServer(pid) {
		_ = os.Remove(m.pidFile())
		return 0, false
	}

	m.process = process
	return pid, true
}

// isDoltSqlServer checks if a PID is actually a dolt sql-server process.
func isDoltSqlServer(pid int) bool {
	cmd := exec.Command("ps", "-p", strconv.Itoa(pid), "-o", "command=")
	output, err := cmd.Output()
	if err != nil {
		return false
	}

	cmdline := strings.TrimSpace(string(output))
	return strings.Contains(cmdline, "dolt") && strings.Contains(cmdline, "sql-server")
}

// EnsureRunning ensures the Dolt server is running.
// If not running, starts it. If running but unhealthy, restarts it.
func (m *DoltServerManager) EnsureRunning() error {
	if !m.IsEnabled() {
		return nil
	}

	if m.IsExternal() {
		// External mode: just check health, don't manage lifecycle
		return m.checkHealth()
	}

	m.mu.Lock()
	defer m.mu.Unlock()

	pid, running := m.isRunning()
	if running {
		// Already running, check health
		m.lastCheck = time.Now()
		if err := m.checkHealthLocked(); err != nil {
			m.logger("Dolt server unhealthy: %v, restarting...", err)
			m.stopLocked()
			time.Sleep(m.config.RestartDelay)
			return m.startLocked()
		}
		return nil
	}

	// Not running, start it
	if pid > 0 {
		m.logger("Dolt server PID %d is dead, cleaning up and restarting...", pid)
	}
	return m.startLocked()
}

// Start starts the Dolt SQL server.
func (m *DoltServerManager) Start() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.startLocked()
}

// startLocked starts the Dolt server. Must be called with m.mu held.
func (m *DoltServerManager) startLocked() error {
	// Ensure data directory exists
	if err := os.MkdirAll(m.config.DataDir, 0755); err != nil {
		return fmt.Errorf("creating data directory: %w", err)
	}

	// Check if dolt is installed
	doltPath, err := exec.LookPath("dolt")
	if err != nil {
		return fmt.Errorf("dolt not found in PATH: %w", err)
	}

	// Build command arguments
	args := []string{
		"sql-server",
		"--host", m.config.Host,
		"--port", strconv.Itoa(m.config.Port),
		"--data-dir", m.config.DataDir,
	}

	// Open log file
	logFile, err := os.OpenFile(m.config.LogFile, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0600)
	if err != nil {
		return fmt.Errorf("opening log file: %w", err)
	}

	// Start dolt sql-server as background process
	cmd := exec.Command(doltPath, args...)
	cmd.Dir = m.config.DataDir
	cmd.Stdout = logFile
	cmd.Stderr = logFile

	// Detach from this process group so it survives daemon restart
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Setpgid: true,
	}

	if err := cmd.Start(); err != nil {
		logFile.Close()
		return fmt.Errorf("starting dolt sql-server: %w", err)
	}

	// Don't wait for it - it's a long-running server
	go func() {
		_ = cmd.Wait()
		logFile.Close()
	}()

	m.process = cmd.Process
	m.startedAt = time.Now()

	// Write PID file
	if err := os.WriteFile(m.pidFile(), []byte(strconv.Itoa(cmd.Process.Pid)), 0644); err != nil {
		m.logger("Warning: failed to write PID file: %v", err)
	}

	m.logger("Started Dolt SQL server (PID %d) on %s:%d", cmd.Process.Pid, m.config.Host, m.config.Port)

	// Wait a moment for server to initialize
	time.Sleep(500 * time.Millisecond)

	// Verify it started successfully
	if err := m.checkHealthLocked(); err != nil {
		m.logger("Warning: Dolt server may not be healthy: %v", err)
	}

	return nil
}

// Stop stops the Dolt SQL server.
func (m *DoltServerManager) Stop() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.stopLocked()
}

// stopLocked stops the Dolt server. Must be called with m.mu held.
func (m *DoltServerManager) stopLocked() error {
	pid, running := m.isRunning()
	if !running {
		return nil
	}

	m.logger("Stopping Dolt SQL server (PID %d)...", pid)

	process, err := os.FindProcess(pid)
	if err != nil {
		return nil // Already gone
	}

	// Send SIGTERM for graceful shutdown
	if err := process.Signal(syscall.SIGTERM); err != nil {
		m.logger("Warning: failed to send SIGTERM: %v", err)
	}

	// Wait for graceful shutdown (up to 5 seconds)
	done := make(chan struct{})
	go func() {
		for i := 0; i < 50; i++ {
			if err := process.Signal(syscall.Signal(0)); err != nil {
				close(done)
				return
			}
			time.Sleep(100 * time.Millisecond)
		}
	}()

	select {
	case <-done:
		m.logger("Dolt SQL server stopped gracefully")
	case <-time.After(5 * time.Second):
		// Force kill
		m.logger("Dolt SQL server did not stop gracefully, sending SIGKILL")
		_ = process.Signal(syscall.SIGKILL)
	}

	// Clean up
	_ = os.Remove(m.pidFile())
	m.process = nil

	return nil
}

// checkHealth checks if the Dolt server is healthy (can accept connections).
func (m *DoltServerManager) checkHealth() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	return m.checkHealthLocked()
}

// checkHealthLocked checks health. Must be called with m.mu held.
func (m *DoltServerManager) checkHealthLocked() error {
	// Try to connect via MySQL protocol
	// Use dolt sql -q to test connectivity
	cmd := exec.Command("dolt", "sql",
		"--host", m.config.Host,
		"--port", strconv.Itoa(m.config.Port),
		"--no-auto-commit",
		"-q", "SELECT 1",
	)

	var stderr bytes.Buffer
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		return fmt.Errorf("health check failed: %w (%s)", err, strings.TrimSpace(stderr.String()))
	}

	return nil
}

// getDoltVersion returns the Dolt server version.
func (m *DoltServerManager) getDoltVersion() (string, error) {
	cmd := exec.Command("dolt", "version")
	output, err := cmd.Output()
	if err != nil {
		return "", err
	}

	// Parse "dolt version X.Y.Z"
	line := strings.TrimSpace(string(output))
	parts := strings.Fields(line)
	if len(parts) >= 3 {
		return parts[2], nil
	}
	return line, nil
}

// listDatabases returns the list of databases in the Dolt server.
func (m *DoltServerManager) listDatabases() ([]string, error) {
	cmd := exec.Command("dolt", "sql",
		"--host", m.config.Host,
		"--port", strconv.Itoa(m.config.Port),
		"--no-auto-commit",
		"-q", "SHOW DATABASES",
		"--result-format", "json",
	)

	output, err := cmd.Output()
	if err != nil {
		return nil, err
	}

	// Parse JSON output
	var result struct {
		Rows []struct {
			Database string `json:"Database"`
		} `json:"rows"`
	}

	if err := json.Unmarshal(output, &result); err != nil {
		// Fall back to line parsing
		var databases []string
		for _, line := range strings.Split(string(output), "\n") {
			line = strings.TrimSpace(line)
			if line != "" && line != "Database" && !strings.HasPrefix(line, "+") && !strings.HasPrefix(line, "|") {
				databases = append(databases, line)
			}
		}
		return databases, nil
	}

	var databases []string
	for _, row := range result.Rows {
		if row.Database != "" && row.Database != "information_schema" {
			databases = append(databases, row.Database)
		}
	}
	return databases, nil
}

// CountDoltServers returns the count of running dolt sql-server processes.
func CountDoltServers() int {
	cmd := exec.Command("sh", "-c", "pgrep -f 'dolt sql-server' 2>/dev/null | wc -l")
	output, err := cmd.Output()
	if err != nil {
		return 0
	}
	count, _ := strconv.Atoi(strings.TrimSpace(string(output)))
	return count
}

// StopAllDoltServers stops all dolt sql-server processes.
// Returns (killed, remaining).
func StopAllDoltServers(force bool) (int, int) {
	before := CountDoltServers()
	if before == 0 {
		return 0, 0
	}

	if force {
		_ = exec.Command("pkill", "-9", "-f", "dolt sql-server").Run()
	} else {
		_ = exec.Command("pkill", "-TERM", "-f", "dolt sql-server").Run()
		time.Sleep(2 * time.Second)
		if remaining := CountDoltServers(); remaining > 0 {
			_ = exec.Command("pkill", "-9", "-f", "dolt sql-server").Run()
		}
	}

	time.Sleep(100 * time.Millisecond)

	after := CountDoltServers()
	killed := before - after
	if killed < 0 {
		killed = 0
	}
	return killed, after
}
@@ -604,22 +604,7 @@ func (d *Daemon) syncWorkspace(workDir string) {
		// Don't fail - agent can handle conflicts
	}

	// Reset stderr buffer
	stderr.Reset()

	// Sync beads
	bdCmd := exec.Command("bd", "sync")
	bdCmd.Dir = workDir
	bdCmd.Stderr = &stderr
	bdCmd.Env = os.Environ() // Inherit PATH to find bd executable
	if err := bdCmd.Run(); err != nil {
		errMsg := strings.TrimSpace(stderr.String())
		if errMsg == "" {
			errMsg = err.Error()
		}
		d.logger.Printf("Warning: bd sync failed in %s: %s", workDir, errMsg)
		// Don't fail - sync issues may be recoverable
	}
	// Note: With Dolt backend, beads changes are persisted immediately - no sync needed
}

// closeMessage removes a lifecycle mail message after processing.
@@ -110,9 +110,10 @@ type PatrolConfig struct {

// PatrolsConfig holds configuration for all patrols.
type PatrolsConfig struct {
	Refinery *PatrolConfig `json:"refinery,omitempty"`
	Witness *PatrolConfig `json:"witness,omitempty"`
	Deacon *PatrolConfig `json:"deacon,omitempty"`
	Refinery *PatrolConfig `json:"refinery,omitempty"`
	Witness *PatrolConfig `json:"witness,omitempty"`
	Deacon *PatrolConfig `json:"deacon,omitempty"`
	DoltServer *DoltServerConfig `json:"dolt_server,omitempty"`
}

// DaemonPatrolConfig is the structure of mayor/daemon.json.
@@ -62,6 +62,7 @@ func (c *BeadsDatabaseCheck) Run(ctx *CheckContext) *CheckResult {
|
||||
}
|
||||
|
||||
// If database file is empty but JSONL has content, this is the bug
|
||||
// Note: This check is for SQLite backend; Dolt backend doesn't use these files
|
||||
if dbErr == nil && dbInfo.Size() == 0 {
|
||||
if jsonlErr == nil && jsonlInfo.Size() > 0 {
|
||||
return &CheckResult{
|
||||
@@ -72,7 +73,7 @@ func (c *BeadsDatabaseCheck) Run(ctx *CheckContext) *CheckResult {
|
||||
"This can cause 'table issues has no column named pinned' errors",
|
||||
"The database needs to be rebuilt from the JSONL file",
|
||||
},
|
||||
FixHint: "Run 'gt doctor --fix' or delete issues.db and run 'bd sync --from-main'",
|
||||
FixHint: "Run 'gt doctor --fix' or delete issues.db and run 'bd import'",
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -113,6 +114,7 @@ func (c *BeadsDatabaseCheck) Run(ctx *CheckContext) *CheckResult {
|
||||
}
|
||||
|
||||
// Fix attempts to rebuild the database from JSONL.
|
||||
// Note: This fix is for SQLite backend. With Dolt backend, this is a no-op.
|
||||
func (c *BeadsDatabaseCheck) Fix(ctx *CheckContext) error {
|
||||
beadsDir := filepath.Join(ctx.TownRoot, ".beads")
|
||||
issuesDB := filepath.Join(beadsDir, "issues.db")
|
||||
@@ -128,8 +130,8 @@ func (c *BeadsDatabaseCheck) Fix(ctx *CheckContext) error {
 		return err
 	}
 
-	// Run bd sync to rebuild from JSONL
-	cmd := exec.Command("bd", "sync", "--from-main")
+	// Run bd import to rebuild from JSONL
+	cmd := exec.Command("bd", "import")
 	cmd.Dir = ctx.TownRoot
 	var stderr bytes.Buffer
 	cmd.Stderr = &stderr
@@ -152,7 +154,7 @@ func (c *BeadsDatabaseCheck) Fix(ctx *CheckContext) error {
 		return err
 	}
 
-	cmd := exec.Command("bd", "sync", "--from-main")
+	cmd := exec.Command("bd", "import")
 	cmd.Dir = ctx.RigPath()
 	var stderr bytes.Buffer
 	cmd.Stderr = &stderr
@@ -844,46 +844,19 @@ func (c *BeadsConfigValidCheck) Run(ctx *CheckContext) *CheckResult {
 		}
 	}
 
-	// Check sync status
-	cmd = exec.Command("bd", "sync", "--status")
-	cmd.Dir = c.rigPath
-	output, err := cmd.CombinedOutput()
-	c.needsSync = false
-	if err != nil {
-		// sync --status may exit non-zero if out of sync
-		outputStr := string(output)
-		if strings.Contains(outputStr, "out of sync") || strings.Contains(outputStr, "behind") {
-			c.needsSync = true
-			return &CheckResult{
-				Name:    c.Name(),
-				Status:  StatusWarning,
-				Message: "Beads out of sync",
-				Details: []string{strings.TrimSpace(outputStr)},
-				FixHint: "Run 'gt doctor --fix' or 'bd sync' to synchronize",
-			}
-		}
-	}
+	// Note: With Dolt backend, there's no sync status to check.
+	// Beads changes are persisted immediately.
 
 	return &CheckResult{
 		Name:    c.Name(),
 		Status:  StatusOK,
-		Message: "Beads configured and in sync",
+		Message: "Beads configured and accessible",
 	}
 }
 
-// Fix runs bd sync if needed.
+// Fix is a no-op with Dolt backend (no sync needed).
 func (c *BeadsConfigValidCheck) Fix(ctx *CheckContext) error {
-	if !c.needsSync {
-		return nil
-	}
-
-	cmd := exec.Command("bd", "sync")
-	cmd.Dir = c.rigPath
-	output, err := cmd.CombinedOutput()
-	if err != nil {
-		return fmt.Errorf("bd sync failed: %s", string(output))
-	}
-
+	// With Dolt backend, beads changes are persisted immediately - no sync needed
 	return nil
 }
 
@@ -305,28 +305,25 @@ Should show {{version}}.
 """
 
 [[steps]]
-id = "restart-daemons"
-title = "Restart daemons"
+id = "verify-install"
+title = "Verify installation"
 needs = ["local-install"]
 description = """
-Restart bd daemons to pick up new version.
+Verify the new bd version is working.
 
 ```bash
-bd daemons killall
+bd --version # Should show {{version}}
+bd doctor # Verify database connectivity
 ```
 
-Daemons will auto-restart with new version on next bd command.
-
-Verify:
-```bash
-bd daemons list
-```
+**Note:** Gas Town uses Dolt backend - there are no bd daemons to restart.
+The Dolt sql-server runs independently.
 """
 
 [[steps]]
 id = "release-complete"
 title = "Release complete"
-needs = ["restart-daemons"]
+needs = ["verify-install"]
 description = """
 Release v{{version}} is complete!
 
@@ -335,8 +332,7 @@ Summary:
 - Git tag pushed
 - CI artifacts built
 - npm and PyPI packages published
-- Local installation updated
-- Daemons restarted
+- Local installation updated and verified
 
 Optional next steps:
 - Announce on social media
@@ -281,9 +281,8 @@ Signal completion and clean up. You cease to exist after this step.
 
 **Self-Cleaning Model:**
 Once you run `gt done`, you're gone. The command:
-1. Syncs beads (final sync)
-2. Nukes your sandbox
-3. Exits your session immediately
+1. Nukes your sandbox
+2. Exits your session immediately
 
 **Run gt done:**
 ```bash
@@ -297,7 +296,7 @@ gt done
 
 You are NOT involved in any of that. You're gone. Done means gone.
 
-**Exit criteria:** Beads synced, sandbox nuked, session exited."""
+**Exit criteria:** Sandbox nuked, session exited."""
 
 [vars]
 [vars.scope]
@@ -69,7 +69,7 @@ git branch --show-current
 
 **2. Check beads state:**
 ```bash
-bd status
+bd doctor # Verify beads database health
 ```
 
 **3. Document starting state:**
@@ -240,29 +240,31 @@ risk merging incorrectly. A resubmitted branch is better than a broken main.
 **Exit criteria:** Local branch matches or is cleanly ahead of origin/main."""
 
 [[steps]]
-id = "sync-beads"
-title = "Verify beads state"
+id = "verify-beads"
+title = "Verify beads database health"
 needs = ["sync-git"]
 description = """
-Verify beads database is healthy.
+Verify the Dolt-backed beads database is healthy.
 
-With Dolt backend, beads changes are automatically persisted - no manual sync needed.
+**Note:** Gas Town uses Dolt as the beads backend. There is no `bd sync` command -
+changes are written directly to the Dolt sql-server and synced via Dolt's
+git-like PUSH/PULL operations.
 
 ```bash
-bd status # Check database health
-bd list --status=in_progress # Verify active work is visible
+bd doctor # Check database health
+bd list --limit 5 # Verify we can read beads
 ```
 
-**If issues:**
-- Check dolt sql-server is running
-- Verify database connectivity
+**If errors:**
+- Connection errors: Check if `dolt sql-server` is running
+- Data errors: Escalate to mayor
 
-**Exit criteria:** Beads database healthy and accessible."""
+**Exit criteria:** Beads database accessible and healthy."""
 
 [[steps]]
 id = "run-doctor"
 title = "Run beads health check"
-needs = ["sync-beads"]
+needs = ["verify-beads"]
 description = """
 Check for beads system issues.
 
@@ -136,17 +136,19 @@ Old logs are moved to `$GT_ROOT/logs/archive/` with timestamps.
 """
 
 [[steps]]
-id = "sync-state"
+id = "verify-state"
 title = "Verify beads state"
 needs = ["rotate-logs"]
 description = """
-Verify beads state is healthy.
+Verify beads database is accessible.
 
 ```bash
-bd status
+bd doctor
+bd list --limit 5 # Quick connectivity check
 ```
 
-With Dolt backend, beads changes are automatically persisted.
+Note: With Dolt backend, changes are persisted immediately to the sql-server.
+There is no separate sync step needed.
 
 Note: We do NOT force-commit polecat work here. Their sandboxes
 are preserved with whatever state they had. They'll commit their
@@ -156,7 +158,7 @@ own work when they resume.
 [[steps]]
 id = "handoff-mayor"
 title = "Send Mayor handoff"
-needs = ["sync-state"]
+needs = ["verify-state"]
 description = """
 Record shutdown context for the fresh Mayor session.
 
@@ -275,14 +275,6 @@ func (m *SessionManager) Stop(polecat string, force bool) error {
 		return ErrSessionNotFound
 	}
 
-	// Sync beads before shutdown (non-fatal)
-	if !force {
-		polecatDir := m.polecatDir(polecat)
-		if err := m.syncBeads(polecatDir); err != nil {
-			fmt.Printf("Warning: beads sync failed: %v\n", err)
-		}
-	}
-
 	// Try graceful shutdown first
 	if !force {
 		_ = m.tmux.SendKeysRaw(sessionID, "C-c")
@@ -298,13 +290,6 @@ func (m *SessionManager) Stop(polecat string, force bool) error {
 	return nil
 }
 
-// syncBeads runs bd sync in the given directory.
-func (m *SessionManager) syncBeads(workDir string) error {
-	cmd := exec.Command("bd", "sync")
-	cmd.Dir = workDir
-	return cmd.Run()
-}
-
 // IsRunning checks if a polecat session is active.
 func (m *SessionManager) IsRunning(polecat string) (bool, error) {
 	sessionID := m.SessionName(polecat)
@@ -131,7 +131,7 @@ Town ({{ .TownRoot }})
 **Key points:**
 - Mail ALWAYS uses town beads - `gt mail` routes there automatically
 - Project issues use your clone's beads - `bd` commands use local `.beads/`
-- Beads changes are automatically persisted with Dolt
+- Beads changes are persisted immediately to Dolt - no sync step needed
 - **GitHub URLs**: Use `git remote -v` to verify repo URLs - never assume orgs like `anthropics/`
 
 ## Prefix-Based Routing
@@ -379,7 +379,6 @@ Raw `tmux send-keys` is unreliable. Always use `gt nudge` for agent-to-agent com
 - Managing your own progress
 - Asking for help when stuck
 - Keeping your git state clean
-- Syncing beads before long breaks
 
 ## Context Cycling (Handoff)
 
@@ -301,6 +301,6 @@ cross-session continuity when work doesn't fit neatly into a bead.
 gt mail send mayor/ -s "🤝 HANDOFF: <brief>" -m "<context>"
 ```
 
-Note: Beads changes are automatically persisted with Dolt.
+Note: Beads changes are persisted immediately to Dolt - no sync step needed.
 
 Town root: {{ .TownRoot }}
@@ -23,10 +23,9 @@ just `gt done`.
 ### The Self-Cleaning Model
 
 Polecats are **self-cleaning**. When you run `gt done`:
-1. Syncs beads
-2. Nukes your sandbox
-3. Exits your session
-4. **You cease to exist**
+1. Nukes your sandbox
+2. Exits your session
+3. **You cease to exist**
 
 There is no "idle" state. There is no "waiting for more work". Done means GONE.
 
@@ -176,7 +175,7 @@ Town ({{ .TownRoot }})
 **Key points:**
 - You're in a project git worktree - your `.beads/` is tracked in the project repo
 - The rig-level `{{ .RigName }}/.beads/` is **gitignored** (local runtime state)
-- Beads changes are automatically persisted with Dolt
+- Beads changes are persisted immediately to Dolt - no sync step needed
 - **GitHub URLs**: Use `git remote -v` to verify repo URLs - never assume orgs like `anthropics/`
 
 ## Prefix-Based Routing
@@ -364,8 +363,8 @@ git log --oneline -3 # Verify your commits are present
 
 Then submit: **`gt done`** ← MANDATORY FINAL STEP
 
-This single command verifies git is clean, syncs beads, and submits your branch
-to the merge queue. The Witness handles the rest.
+This single command verifies git is clean and submits your branch to the merge
+queue. The Witness handles the rest.
 
 **Note:** Do NOT manually close the root issue with `bd close`. The Refinery
 closes it after successful merge. This enables conflict-resolution retries.