1 Commits

Author SHA1 Message Date
mayor
cedb1bbd13 chore: release v0.3.1
Some checks failed
Release / goreleaser (push) Failing after 6m8s
Release / publish-npm (push) Has been skipped
Release / update-homebrew (push) Has been skipped
### Fixed
- Orphan cleanup on macOS - TTY comparison now handles macOS '??' format
- Session kill orphan prevention - gt done and gt crew stop use KillSessionWithProcesses

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 03:53:50 -08:00
27 changed files with 220 additions and 413 deletions
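The macOS fix in the commit message turns on how `ps` reports a missing controlling terminal: Linux prints `?` in the TTY column while macOS prints `??`. A minimal sketch of a comparison that accepts both (the helper name is hypothetical; the real implementation in this repo may differ) could look like:

```go
package main

import "fmt"

// isDetachedTTY reports whether a ps TTY column value indicates no
// controlling terminal. Linux ps prints "?", macOS prints "??", and an
// empty column is treated as detached as well.
// (Hypothetical helper illustrating the fix; not the repo's actual code.)
func isDetachedTTY(tty string) bool {
	switch tty {
	case "?", "??", "":
		return true
	}
	return false
}

func main() {
	fmt.Println(isDetachedTTY("??"))      // macOS detached process
	fmt.Println(isDetachedTTY("ttys001")) // real terminal
}
```

A strict `tty == "?"` comparison would silently classify every macOS process as attached, which is the false-negative the release note describes.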

.beads/.gitignore vendored

@@ -32,11 +32,6 @@ beads.left.meta.json
beads.right.jsonl
beads.right.meta.json
# Sync state (local-only, per-machine)
# These files are machine-specific and should not be shared across clones
.sync.lock
sync_base.jsonl
# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.


@@ -0,0 +1,51 @@
name: Block Internal PRs
on:
pull_request:
types: [opened, reopened]
jobs:
block-internal-prs:
name: Block Internal PRs
# Only run if PR is from the same repo (not a fork)
if: github.event.pull_request.head.repo.full_name == github.repository
runs-on: ubuntu-latest
steps:
- name: Close PR and comment
uses: actions/github-script@v7
with:
script: |
const prNumber = context.issue.number;
const branch = context.payload.pull_request.head.ref;
const body = [
'**Internal PRs are not allowed.**',
'',
'Gas Town agents push directly to main. PRs are for external contributors only.',
'',
'To land your changes:',
'```bash',
'git checkout main',
'git merge ' + branch,
'git push origin main',
'git push origin --delete ' + branch,
'```',
'',
'See CLAUDE.md: "Crew workers push directly to main. No feature branches. NEVER create PRs."'
].join('\n');
await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: prNumber,
body: body
});
await github.rest.pulls.update({
owner: context.repo.owner,
repo: context.repo.repo,
pull_number: prNumber,
state: 'closed'
});
core.setFailed('Internal PR blocked. Push directly to main instead.');


@@ -7,12 +7,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.4.0] - 2026-01-17
### Fixed
- **Orphan cleanup skips valid tmux sessions** - `gt orphans kill` and automatic orphan cleanup now check for Claude processes belonging to valid Gas Town tmux sessions (gt-*/hq-*) before killing. This prevents false kills of witnesses, refineries, and deacon during startup, when they may temporarily show TTY "?".
## [0.3.1] - 2026-01-17
### Fixed

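The changelog entry above hinges on recognizing Gas Town tmux sessions by their name prefix (`gt-` or `hq-`). A standalone sketch of that membership check (helper name hypothetical) might be:

```go
package main

import (
	"fmt"
	"strings"
)

// isGasTownSession reports whether a tmux session name belongs to a
// Gas Town agent (gt-* for rig agents, hq-* for town-level agents), so
// orphan cleanup can skip processes owned by those sessions.
// (Hypothetical helper illustrating the changelog's description.)
func isGasTownSession(name string) bool {
	return strings.HasPrefix(name, "gt-") || strings.HasPrefix(name, "hq-")
}

func main() {
	fmt.Println(isGasTownSession("gt-myrig-witness")) // valid rig session
	fmt.Println(isGasTownSession("scratch"))          // unrelated session
}
```

Checking session membership before TTY state means an agent that momentarily shows a detached TTY during startup is still protected from cleanup.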

@@ -200,8 +200,7 @@ gt done # Signal completion (syncs, submits to MQ, notifi
## Best Practices
1. **CRITICAL: Close steps in real-time** - Mark `in_progress` BEFORE starting, `closed` IMMEDIATELY after completing. Never batch-close steps at the end. Molecules ARE the ledger - each step closure is a timestamped CV entry. Batch-closing corrupts the timeline and violates HOP's core promise.
2. **Use `--continue` for propulsion** - Keep momentum by auto-advancing
3. **Check progress with `bd mol current`** - Know where you are before resuming
4. **Squash completed molecules** - Create digests for audit trail
5. **Burn routine wisps** - Don't accumulate ephemeral patrol data
1. **Use `--continue` for propulsion** - Keep momentum by auto-advancing
2. **Check progress with `bd mol current`** - Know where you are before resuming
3. **Squash completed molecules** - Create digests for audit trail
4. **Burn routine wisps** - Don't accumulate ephemeral patrol data


@@ -113,7 +113,6 @@ type SyncStatus struct {
type Beads struct {
workDir string
beadsDir string // Optional BEADS_DIR override for cross-database access
isolated bool // If true, suppress inherited beads env vars (for test isolation)
}
// New creates a new Beads wrapper for the given directory.
@@ -121,36 +120,12 @@ func New(workDir string) *Beads {
return &Beads{workDir: workDir}
}
// NewIsolated creates a Beads wrapper for test isolation.
// This suppresses inherited beads env vars (BD_ACTOR, BEADS_DB) to prevent
// tests from accidentally routing to production databases.
func NewIsolated(workDir string) *Beads {
return &Beads{workDir: workDir, isolated: true}
}
// NewWithBeadsDir creates a Beads wrapper with an explicit BEADS_DIR.
// This is needed when running from a polecat worktree but accessing town-level beads.
func NewWithBeadsDir(workDir, beadsDir string) *Beads {
return &Beads{workDir: workDir, beadsDir: beadsDir}
}
// getActor returns the BD_ACTOR value for this context.
// Returns empty string when in isolated mode (tests) to prevent
// inherited actors from routing to production databases.
func (b *Beads) getActor() string {
if b.isolated {
return ""
}
return os.Getenv("BD_ACTOR")
}
// Init initializes a new beads database in the working directory.
// This uses the same environment isolation as other commands.
func (b *Beads) Init(prefix string) error {
_, err := b.run("init", "--prefix", prefix, "--quiet")
return err
}
// run executes a bd command and returns stdout.
func (b *Beads) run(args ...string) ([]byte, error) {
// Use --no-daemon for faster read operations (avoids daemon IPC overhead)
@@ -158,6 +133,8 @@ func (b *Beads) run(args ...string) ([]byte, error) {
// Use --allow-stale to prevent failures when db is out of sync with JSONL
// (e.g., after daemon is killed during shutdown before syncing).
fullArgs := append([]string{"--no-daemon", "--allow-stale"}, args...)
cmd := exec.Command("bd", fullArgs...) //nolint:gosec // G204: bd is a trusted internal tool
cmd.Dir = b.workDir
// Always explicitly set BEADS_DIR to prevent inherited env vars from
// causing prefix mismatches. Use explicit beadsDir if set, otherwise
@@ -166,28 +143,7 @@ func (b *Beads) run(args ...string) ([]byte, error) {
if beadsDir == "" {
beadsDir = ResolveBeadsDir(b.workDir)
}
// In isolated mode, use --db flag to force specific database path
// This bypasses bd's routing logic that can redirect to .beads-planning
// Skip --db for init command since it creates the database
isInit := len(args) > 0 && args[0] == "init"
if b.isolated && !isInit {
beadsDB := filepath.Join(beadsDir, "beads.db")
fullArgs = append([]string{"--db", beadsDB}, fullArgs...)
}
cmd := exec.Command("bd", fullArgs...) //nolint:gosec // G204: bd is a trusted internal tool
cmd.Dir = b.workDir
// Build environment: filter beads env vars when in isolated mode (tests)
// to prevent routing to production databases.
var env []string
if b.isolated {
env = filterBeadsEnv(os.Environ())
} else {
env = os.Environ()
}
cmd.Env = append(env, "BEADS_DIR="+beadsDir)
cmd.Env = append(os.Environ(), "BEADS_DIR="+beadsDir)
var stdout, stderr bytes.Buffer
cmd.Stdout = &stdout
@@ -240,27 +196,6 @@ func (b *Beads) wrapError(err error, stderr string, args []string) error {
return fmt.Errorf("bd %s: %w", strings.Join(args, " "), err)
}
// filterBeadsEnv removes beads-related environment variables from the given
// environment slice. This ensures test isolation by preventing inherited
// BD_ACTOR, BEADS_DB, GT_ROOT, HOME etc. from routing commands to production databases.
func filterBeadsEnv(environ []string) []string {
filtered := make([]string, 0, len(environ))
for _, env := range environ {
// Skip beads-related env vars that could interfere with test isolation
// BD_ACTOR, BEADS_* - direct beads config
// GT_ROOT - causes bd to find global routes file
// HOME - causes bd to find ~/.beads-planning routing
if strings.HasPrefix(env, "BD_ACTOR=") ||
strings.HasPrefix(env, "BEADS_") ||
strings.HasPrefix(env, "GT_ROOT=") ||
strings.HasPrefix(env, "HOME=") {
continue
}
filtered = append(filtered, env)
}
return filtered
}
// List returns issues matching the given options.
func (b *Beads) List(opts ListOptions) ([]*Issue, error) {
args := []string{"list", "--json"}
@@ -463,10 +398,9 @@ func (b *Beads) Create(opts CreateOptions) (*Issue, error) {
args = append(args, "--ephemeral")
}
// Default Actor from BD_ACTOR env var if not specified
// Uses getActor() to respect isolated mode (tests)
actor := opts.Actor
if actor == "" {
actor = b.getActor()
actor = os.Getenv("BD_ACTOR")
}
if actor != "" {
args = append(args, "--actor="+actor)
@@ -511,10 +445,9 @@ func (b *Beads) CreateWithID(id string, opts CreateOptions) (*Issue, error) {
args = append(args, "--parent="+opts.Parent)
}
// Default Actor from BD_ACTOR env var if not specified
// Uses getActor() to respect isolated mode (tests)
actor := opts.Actor
if actor == "" {
actor = b.getActor()
actor = os.Getenv("BD_ACTOR")
}
if actor != "" {
args = append(args, "--actor="+actor)


@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"os"
"strings"
)
@@ -143,8 +144,7 @@ func (b *Beads) CreateAgentBead(id, title string, fields *AgentFields) (*Issue,
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -6,6 +6,7 @@ import (
"encoding/json"
"errors"
"fmt"
"os"
"strconv"
"strings"
"time"
@@ -161,8 +162,7 @@ func (b *Beads) CreateChannelBead(name string, subscribers []string, createdBy s
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}
@@ -382,7 +382,7 @@ func (b *Beads) LookupChannelByName(name string) (*Issue, *ChannelFields, error)
// EnforceChannelRetention prunes old messages from a channel to enforce retention.
// Called after posting a new message to the channel (on-write cleanup).
// Enforces both count-based (RetentionCount) and time-based (RetentionHours) limits.
// If channel has >= retainCount messages, deletes oldest until count < retainCount.
func (b *Beads) EnforceChannelRetention(name string) error {
// Get channel config
_, fields, err := b.GetChannelBead(name)
@@ -393,8 +393,8 @@ func (b *Beads) EnforceChannelRetention(name string) error {
return fmt.Errorf("channel not found: %s", name)
}
// Skip if no retention limits configured
if fields.RetentionCount <= 0 && fields.RetentionHours <= 0 {
// Skip if no retention limit
if fields.RetentionCount <= 0 {
return nil
}
@@ -411,42 +411,23 @@ func (b *Beads) EnforceChannelRetention(name string) error {
}
var messages []struct {
ID string `json:"id"`
CreatedAt string `json:"created_at"`
ID string `json:"id"`
}
if err := json.Unmarshal(out, &messages); err != nil {
return fmt.Errorf("parsing channel messages: %w", err)
}
// Track which messages to delete (use map to avoid duplicates)
toDeleteIDs := make(map[string]bool)
// Time-based retention: delete messages older than RetentionHours
if fields.RetentionHours > 0 {
cutoff := time.Now().Add(-time.Duration(fields.RetentionHours) * time.Hour)
for _, msg := range messages {
createdAt, err := time.Parse(time.RFC3339, msg.CreatedAt)
if err != nil {
continue // Skip messages with unparseable timestamps
}
if createdAt.Before(cutoff) {
toDeleteIDs[msg.ID] = true
}
}
// Calculate how many to delete
// We're being called after a new message is posted, so we want to end up with retainCount
toDelete := len(messages) - fields.RetentionCount
if toDelete <= 0 {
return nil // No pruning needed
}
// Count-based retention: delete oldest messages beyond RetentionCount
if fields.RetentionCount > 0 {
toDeleteByCount := len(messages) - fields.RetentionCount
for i := 0; i < toDeleteByCount && i < len(messages); i++ {
toDeleteIDs[messages[i].ID] = true
}
}
// Delete marked messages (best-effort)
for id := range toDeleteIDs {
// Delete oldest messages (best-effort)
for i := 0; i < toDelete && i < len(messages); i++ {
// Use close instead of delete for audit trail
_, _ = b.run("close", id, "--reason=channel retention pruning")
_, _ = b.run("close", messages[i].ID, "--reason=channel retention pruning")
}
return nil
@@ -454,8 +435,7 @@ func (b *Beads) EnforceChannelRetention(name string) error {
// PruneAllChannels enforces retention on all channels.
// Called by Deacon patrol as a backup cleanup mechanism.
// Enforces both count-based (RetentionCount) and time-based (RetentionHours) limits.
// Uses a 10% buffer for count-based pruning to avoid thrashing.
// Uses a 10% buffer to avoid thrashing (only prunes if count > retainCount * 1.1).
func (b *Beads) PruneAllChannels() (int, error) {
channels, err := b.ListChannelBeads()
if err != nil {
@@ -464,62 +444,38 @@ func (b *Beads) PruneAllChannels() (int, error) {
pruned := 0
for name, fields := range channels {
// Skip if no retention limits configured
if fields.RetentionCount <= 0 && fields.RetentionHours <= 0 {
if fields.RetentionCount <= 0 {
continue
}
// Get messages with timestamps
// Count messages
out, err := b.run("list",
"--type=message",
"--label=channel:"+name,
"--json",
"--limit=0",
"--sort=created",
)
if err != nil {
continue // Skip on error
}
var messages []struct {
ID string `json:"id"`
CreatedAt string `json:"created_at"`
ID string `json:"id"`
}
if err := json.Unmarshal(out, &messages); err != nil {
continue
}
// Track which messages to delete (use map to avoid duplicates)
toDeleteIDs := make(map[string]bool)
// Time-based retention: delete messages older than RetentionHours
if fields.RetentionHours > 0 {
cutoff := time.Now().Add(-time.Duration(fields.RetentionHours) * time.Hour)
for _, msg := range messages {
createdAt, err := time.Parse(time.RFC3339, msg.CreatedAt)
if err != nil {
continue // Skip messages with unparseable timestamps
}
if createdAt.Before(cutoff) {
toDeleteIDs[msg.ID] = true
}
}
// 10% buffer - only prune if significantly over limit
threshold := int(float64(fields.RetentionCount) * 1.1)
if len(messages) <= threshold {
continue
}
// Count-based retention with 10% buffer to avoid thrashing
if fields.RetentionCount > 0 {
threshold := int(float64(fields.RetentionCount) * 1.1)
if len(messages) > threshold {
toDeleteByCount := len(messages) - fields.RetentionCount
for i := 0; i < toDeleteByCount && i < len(messages); i++ {
toDeleteIDs[messages[i].ID] = true
}
}
}
// Delete marked messages
for id := range toDeleteIDs {
if _, err := b.run("close", id, "--reason=patrol retention pruning"); err == nil {
// Prune down to exactly retainCount
toDelete := len(messages) - fields.RetentionCount
for i := 0; i < toDelete && i < len(messages); i++ {
if _, err := b.run("close", messages[i].ID, "--reason=patrol retention pruning"); err == nil {
pruned++
}
}


@@ -4,6 +4,7 @@ package beads
import (
"encoding/json"
"fmt"
"os"
"strings"
)
@@ -27,8 +28,7 @@ func (b *Beads) CreateDogAgentBead(name, location string) (*Issue, error) {
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"os"
"strconv"
"strings"
"time"
@@ -182,8 +183,7 @@ func (b *Beads) CreateEscalationBead(title string, fields *EscalationFields) (*I
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -6,6 +6,7 @@ import (
"encoding/json"
"errors"
"fmt"
"os"
"strings"
"time"
)
@@ -129,8 +130,7 @@ func (b *Beads) CreateGroupBead(name string, members []string, createdBy string)
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"os"
"strconv"
"strings"
)
@@ -179,8 +180,7 @@ func (b *Beads) CreateQueueBead(id, title string, fields *QueueFields) (*Issue,
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -4,6 +4,7 @@ package beads
import (
"encoding/json"
"fmt"
"os"
"strings"
)
@@ -89,8 +90,7 @@ func (b *Beads) CreateRigBead(id, title string, fields *RigFields) (*Issue, erro
}
// Default actor from BD_ACTOR env var for provenance tracking
// Uses getActor() to respect isolated mode (tests)
if actor := b.getActor(); actor != "" {
if actor := os.Getenv("BD_ACTOR"); actor != "" {
args = append(args, "--actor="+actor)
}


@@ -1812,19 +1812,18 @@ func TestSetupRedirect(t *testing.T) {
// 4. BUG: bd create fails with UNIQUE constraint
// 5. BUG: bd reopen fails with "issue not found" (tombstones are invisible)
func TestAgentBeadTombstoneBug(t *testing.T) {
// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
// ("sql: database is closed" during auto-flush). This blocks all tests
// that need to create issues. See internal issue for tracking.
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
// Create isolated beads instance and initialize database
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
// Initialize beads database
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
agentID := "test-testrig-polecat-tombstone"
// Step 1: Create agent bead
@@ -1897,14 +1896,18 @@ func TestAgentBeadTombstoneBug(t *testing.T) {
// TestAgentBeadCloseReopenWorkaround demonstrates the workaround for the tombstone bug:
// use Close instead of Delete, then Reopen works.
func TestAgentBeadCloseReopenWorkaround(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
// Initialize beads database
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
agentID := "test-testrig-polecat-closereopen"
// Step 1: Create agent bead
@@ -1954,14 +1957,18 @@ func TestAgentBeadCloseReopenWorkaround(t *testing.T) {
// TestCreateOrReopenAgentBead_ClosedBead tests that CreateOrReopenAgentBead
// successfully reopens a closed agent bead and updates its fields.
func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
// Initialize beads database
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
agentID := "test-testrig-polecat-lifecycle"
// Simulate polecat lifecycle: spawn → nuke → respawn
@@ -2038,14 +2045,18 @@ func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
// fields to emulate delete --force --hard behavior. This ensures reopened agent
// beads don't have stale state from previous lifecycle.
func TestCloseAndClearAgentBead_FieldClearing(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
// Initialize beads database
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
// Test cases for field clearing permutations
tests := []struct {
name string
@@ -2193,14 +2204,17 @@ func TestCloseAndClearAgentBead_FieldClearing(t *testing.T) {
// TestCloseAndClearAgentBead_NonExistent tests behavior when closing a non-existent agent bead.
func TestCloseAndClearAgentBead_NonExistent(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
// Attempt to close non-existent bead
err := bd.CloseAndClearAgentBead("test-nonexistent-polecat-xyz", "should fail")
@@ -2212,14 +2226,17 @@ func TestCloseAndClearAgentBead_NonExistent(t *testing.T) {
// TestCloseAndClearAgentBead_AlreadyClosed tests behavior when closing an already-closed agent bead.
func TestCloseAndClearAgentBead_AlreadyClosed(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
agentID := "test-testrig-polecat-doubleclosed"
// Create agent bead
@@ -2263,14 +2280,17 @@ func TestCloseAndClearAgentBead_AlreadyClosed(t *testing.T) {
// TestCloseAndClearAgentBead_ReopenHasCleanState tests that reopening a closed agent bead
// starts with clean state (no stale hook_bead, active_mr, etc.).
func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
agentID := "test-testrig-polecat-cleanreopen"
// Step 1: Create agent with all fields populated
@@ -2328,14 +2348,17 @@ func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
// TestCloseAndClearAgentBead_ReasonVariations tests close with different reason values.
func TestCloseAndClearAgentBead_ReasonVariations(t *testing.T) {
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tmpDir := t.TempDir()
bd := NewIsolated(tmpDir)
if err := bd.Init("test"); err != nil {
t.Fatalf("bd init: %v", err)
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
cmd.Dir = tmpDir
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("bd init: %v\n%s", err, output)
}
beadsDir := filepath.Join(tmpDir, ".beads")
bd := New(beadsDir)
tests := []struct {
name string
reason string


@@ -253,11 +253,6 @@ func TestDoneCircularRedirectProtection(t *testing.T) {
// This is critical because branch names like "polecat/furiosa-mkb0vq9f" don't
// contain the actual issue ID (test-845.1), but the agent's hook does.
func TestGetIssueFromAgentHook(t *testing.T) {
// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
// ("sql: database is closed" during auto-flush). This blocks tests
// that need to create issues. See internal issue for tracking.
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
tests := []struct {
name string
agentBeadID string


@@ -74,13 +74,6 @@ type VersionChange struct {
// versionChanges contains agent-actionable changes for recent versions
var versionChanges = []VersionChange{
{
Version: "0.4.0",
Date: "2026-01-17",
Changes: []string{
"FIX: Orphan cleanup skips valid tmux sessions - Prevents false kills of witnesses/refineries/deacon during startup by checking gt-*/hq-* session membership",
},
},
{
Version: "0.3.1",
Date: "2026-01-17",


@@ -352,23 +352,6 @@ func runChannelSubscribe(cmd *cobra.Command, args []string) error {
b := beads.New(townRoot)
// Check channel exists and current subscription status
_, fields, err := b.GetChannelBead(name)
if err != nil {
return fmt.Errorf("getting channel: %w", err)
}
if fields == nil {
return fmt.Errorf("channel not found: %s", name)
}
// Check if already subscribed
for _, s := range fields.Subscribers {
if s == subscriber {
fmt.Printf("%s is already subscribed to channel %q\n", subscriber, name)
return nil
}
}
if err := b.SubscribeToChannel(name, subscriber); err != nil {
return fmt.Errorf("subscribing to channel: %w", err)
}
@@ -392,28 +375,6 @@ func runChannelUnsubscribe(cmd *cobra.Command, args []string) error {
b := beads.New(townRoot)
// Check channel exists and current subscription status
_, fields, err := b.GetChannelBead(name)
if err != nil {
return fmt.Errorf("getting channel: %w", err)
}
if fields == nil {
return fmt.Errorf("channel not found: %s", name)
}
// Check if actually subscribed
found := false
for _, s := range fields.Subscribers {
if s == subscriber {
found = true
break
}
}
if !found {
fmt.Printf("%s is not subscribed to channel %q\n", subscriber, name)
return nil
}
if err := b.UnsubscribeFromChannel(name, subscriber); err != nil {
return fmt.Errorf("unsubscribing from channel: %w", err)
}
@@ -441,13 +402,9 @@ func runChannelSubscribers(cmd *cobra.Command, args []string) error {
}
if channelJSON {
subs := fields.Subscribers
if subs == nil {
subs = []string{}
}
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(subs)
return enc.Encode(fields.Subscribers)
}
if len(fields.Subscribers) == 0 {


@@ -374,11 +374,6 @@ func TestDetectSessionState(t *testing.T) {
})
t.Run("autonomous_state_hooked_bead", func(t *testing.T) {
// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
// ("sql: database is closed" during auto-flush). This blocks tests
// that need to create issues. See internal issue for tracking.
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
// Skip if bd CLI is not available
if _, err := exec.LookPath("bd"); err != nil {
t.Skip("bd binary not found in PATH")


@@ -616,7 +616,6 @@ exit 0
t.Setenv(EnvGTRole, "crew")
t.Setenv("GT_CREW", "jv")
t.Setenv("GT_POLECAT", "")
t.Setenv("TMUX_PANE", "") // Prevent inheriting real tmux pane from test runner
cwd, err := os.Getwd()
if err != nil {
@@ -638,9 +637,6 @@ exit 0
slingDryRun = true
slingNoConvoy = true
// Prevent real tmux nudge from firing during tests (causes agent self-interruption)
t.Setenv("GT_TEST_NO_NUDGE", "1")
// EXPECTED: gt sling should use daemon mode and succeed
// ACTUAL: verifyBeadExists uses --no-daemon and fails with sync error
beadID := "jv-v599"
@@ -796,7 +792,6 @@ exit 0
t.Setenv(EnvGTRole, "mayor")
t.Setenv("GT_POLECAT", "")
t.Setenv("GT_CREW", "")
t.Setenv("TMUX_PANE", "") // Prevent inheriting real tmux pane from test runner
cwd, err := os.Getwd()
if err != nil {
@@ -824,9 +819,6 @@ exit 0
slingVars = nil
slingOnTarget = "gt-abc123" // The bug bead we're applying formula to
// Prevent real tmux nudge from firing during tests (causes agent self-interruption)
t.Setenv("GT_TEST_NO_NUDGE", "1")
if err := runSling(nil, []string{"mol-polecat-work"}); err != nil {
t.Fatalf("runSling: %v", err)
}


@@ -184,9 +184,10 @@ func runMayorStatusLine(t *tmux.Tmux) error {
// Track per-rig status for LED indicators and sorting
type rigStatus struct {
hasWitness bool
hasRefinery bool
opState string // "OPERATIONAL", "PARKED", or "DOCKED"
hasWitness bool
hasRefinery bool
polecatCount int
opState string // "OPERATIONAL", "PARKED", or "DOCKED"
}
rigStatuses := make(map[string]*rigStatus)
@@ -201,8 +202,10 @@ func runMayorStatusLine(t *tmux.Tmux) error {
working int
}
healthByType := map[AgentType]*agentHealth{
AgentPolecat: {},
AgentWitness: {},
AgentRefinery: {},
AgentDeacon: {},
}
// Single pass: track rig status AND agent health
@@ -212,8 +215,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
continue
}
// Track rig-level status (witness/refinery presence)
// Polecats are not tracked in tmux - they're a GC concern, not a display concern
// Track rig-level status (witness/refinery/polecat presence)
if agent.Rig != "" && registeredRigs[agent.Rig] {
if rigStatuses[agent.Rig] == nil {
rigStatuses[agent.Rig] = &rigStatus{}
@@ -223,6 +225,8 @@ func runMayorStatusLine(t *tmux.Tmux) error {
rigStatuses[agent.Rig].hasWitness = true
case AgentRefinery:
rigStatuses[agent.Rig].hasRefinery = true
case AgentPolecat:
rigStatuses[agent.Rig].polecatCount++
}
}
@@ -250,10 +254,9 @@ func runMayorStatusLine(t *tmux.Tmux) error {
var parts []string
// Add per-agent-type health in consistent order
// Format: "1/3 👁️" = 1 working out of 3 total
// Format: "1/10 😺" = 1 working out of 10 total
// Only show agent types that have sessions
// Note: Polecats and Deacon excluded - idle state display is misleading noise
agentOrder := []AgentType{AgentWitness, AgentRefinery}
agentOrder := []AgentType{AgentPolecat, AgentWitness, AgentRefinery, AgentDeacon}
var agentParts []string
for _, agentType := range agentOrder {
health := healthByType[agentType]
@@ -284,7 +287,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
rigs = append(rigs, rigInfo{name: rigName, status: status})
}
// Sort by: 1) running state, 2) operational state, 3) alphabetical
// Sort by: 1) running state, 2) polecat count (desc), 3) operational state, 4) alphabetical
sort.Slice(rigs, func(i, j int) bool {
isRunningI := rigs[i].status.hasWitness || rigs[i].status.hasRefinery
isRunningJ := rigs[j].status.hasWitness || rigs[j].status.hasRefinery
@@ -294,7 +297,12 @@ func runMayorStatusLine(t *tmux.Tmux) error {
return isRunningI
}
// Secondary sort: operational state (for non-running rigs: OPERATIONAL < PARKED < DOCKED)
// Secondary sort: polecat count (descending)
if rigs[i].status.polecatCount != rigs[j].status.polecatCount {
return rigs[i].status.polecatCount > rigs[j].status.polecatCount
}
// Tertiary sort: operational state (for non-running rigs: OPERATIONAL < PARKED < DOCKED)
stateOrder := map[string]int{"OPERATIONAL": 0, "PARKED": 1, "DOCKED": 2}
stateI := stateOrder[rigs[i].status.opState]
stateJ := stateOrder[rigs[j].status.opState]
@@ -302,7 +310,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
return stateI < stateJ
}
// Tertiary sort: alphabetical
// Quaternary sort: alphabetical
return rigs[i].name < rigs[j].name
})
@@ -344,12 +352,17 @@ func runMayorStatusLine(t *tmux.Tmux) error {
}
}
// Show polecat count if > 0
// All icons get 1 space, Park gets 2
space := " "
if led == "🅿️" {
space = " "
}
rigParts = append(rigParts, led+space+rig.name)
display := led + space + rig.name
if status.polecatCount > 0 {
display += fmt.Sprintf("(%d)", status.polecatCount)
}
rigParts = append(rigParts, display)
}
if len(rigParts) > 0 {
@@ -408,6 +421,7 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
}
rigs := make(map[string]bool)
polecatCount := 0
for _, s := range sessions {
agent := categorizeSession(s)
if agent == nil {
@@ -417,13 +431,16 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
if agent.Rig != "" && registeredRigs[agent.Rig] {
rigs[agent.Rig] = true
}
if agent.Type == AgentPolecat && registeredRigs[agent.Rig] {
polecatCount++
}
}
rigCount := len(rigs)
// Build status
// Note: Polecats excluded - they're ephemeral and idle detection is a GC concern
var parts []string
parts = append(parts, fmt.Sprintf("%d rigs", rigCount))
parts = append(parts, fmt.Sprintf("%d 😺", polecatCount))
// Priority 1: Check for hooked work (town beads for deacon)
hookedWork := ""
@@ -449,8 +466,7 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
}
// runWitnessStatusLine outputs status for a witness session.
// Shows: crew count, hook or mail preview
// Note: Polecats excluded - they're ephemeral and idle detection is a GC concern
// Shows: polecat count, crew count, hook or mail preview
func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {
if rigName == "" {
// Try to extract from session name: gt-<rig>-witness
@@ -467,20 +483,25 @@ func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {
townRoot, _ = workspace.Find(paneDir)
}
// Count crew in this rig (crew are persistent, worth tracking)
// Count polecats and crew in this rig
sessions, err := t.ListSessions()
if err != nil {
return nil // Silent fail
}
polecatCount := 0
crewCount := 0
for _, s := range sessions {
agent := categorizeSession(s)
if agent == nil {
continue
}
if agent.Rig == rigName && agent.Type == AgentCrew {
crewCount++
if agent.Rig == rigName {
if agent.Type == AgentPolecat {
polecatCount++
} else if agent.Type == AgentCrew {
crewCount++
}
}
}
@@ -488,6 +509,7 @@ func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {
// Build status
var parts []string
parts = append(parts, fmt.Sprintf("%d 😺", polecatCount))
if crewCount > 0 {
parts = append(parts, fmt.Sprintf("%d crew", crewCount))
}


@@ -12,7 +12,7 @@ import (
// Version information - set at build time via ldflags
var (
Version = "0.4.0"
Version = "0.3.1"
// Build can be set via ldflags at compile time
Build = "dev"
// Commit and Branch - the git revision the binary was built from (optional ldflag)


@@ -49,43 +49,36 @@ func AgentEnv(cfg AgentEnvConfig) map[string]string {
case "mayor":
env["BD_ACTOR"] = "mayor"
env["GIT_AUTHOR_NAME"] = "mayor"
env["GIT_AUTHOR_EMAIL"] = "mayor@gastown.local"
case "deacon":
env["BD_ACTOR"] = "deacon"
env["GIT_AUTHOR_NAME"] = "deacon"
env["GIT_AUTHOR_EMAIL"] = "deacon@gastown.local"
case "boot":
env["BD_ACTOR"] = "deacon-boot"
env["GIT_AUTHOR_NAME"] = "boot"
env["GIT_AUTHOR_EMAIL"] = "boot@gastown.local"
case "witness":
env["GT_RIG"] = cfg.Rig
env["BD_ACTOR"] = fmt.Sprintf("%s/witness", cfg.Rig)
env["GIT_AUTHOR_NAME"] = fmt.Sprintf("%s/witness", cfg.Rig)
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-witness@gastown.local", cfg.Rig)
case "refinery":
env["GT_RIG"] = cfg.Rig
env["BD_ACTOR"] = fmt.Sprintf("%s/refinery", cfg.Rig)
env["GIT_AUTHOR_NAME"] = fmt.Sprintf("%s/refinery", cfg.Rig)
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-refinery@gastown.local", cfg.Rig)
case "polecat":
env["GT_RIG"] = cfg.Rig
env["GT_POLECAT"] = cfg.AgentName
env["BD_ACTOR"] = fmt.Sprintf("%s/polecats/%s", cfg.Rig, cfg.AgentName)
env["GIT_AUTHOR_NAME"] = cfg.AgentName
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-polecat-%s@gastown.local", cfg.Rig, cfg.AgentName)
case "crew":
env["GT_RIG"] = cfg.Rig
env["GT_CREW"] = cfg.AgentName
env["BD_ACTOR"] = fmt.Sprintf("%s/crew/%s", cfg.Rig, cfg.AgentName)
env["GIT_AUTHOR_NAME"] = cfg.AgentName
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-crew-%s@gastown.local", cfg.Rig, cfg.AgentName)
}
// Only set GT_ROOT if provided
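This hunk drops `GIT_AUTHOR_EMAIL` from every role while keeping `BD_ACTOR` and `GIT_AUTHOR_NAME`. A reduced sketch of the remaining mapping — the function signature and subset of roles here are simplified for illustration and are not the real package API:

```go
package main

import "fmt"

// agentEnv sketches the role→env mapping after this change: each role
// still sets BD_ACTOR and GIT_AUTHOR_NAME, but no GIT_AUTHOR_EMAIL.
// Role names and values follow the diff; the signature is simplified.
func agentEnv(role, rig, name string) map[string]string {
	env := make(map[string]string)
	switch role {
	case "mayor", "deacon":
		env["BD_ACTOR"] = role
		env["GIT_AUTHOR_NAME"] = role
	case "witness", "refinery":
		actor := fmt.Sprintf("%s/%s", rig, role)
		env["GT_RIG"] = rig
		env["BD_ACTOR"] = actor
		env["GIT_AUTHOR_NAME"] = actor
	case "polecat":
		env["GT_RIG"] = rig
		env["GT_POLECAT"] = name
		env["BD_ACTOR"] = fmt.Sprintf("%s/polecats/%s", rig, name)
		env["GIT_AUTHOR_NAME"] = name
	case "crew":
		env["GT_RIG"] = rig
		env["GT_CREW"] = name
		env["BD_ACTOR"] = fmt.Sprintf("%s/crew/%s", rig, name)
		env["GIT_AUTHOR_NAME"] = name
	}
	return env
}

func main() {
	fmt.Println(agentEnv("crew", "myrig", "emma")["BD_ACTOR"]) // → myrig/crew/emma
}
```

With the email lines gone, git falls back to its normal author-email resolution (config or defaults) rather than a synthesized `@gastown.local` address — which is what the test deletions below this hunk verify.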


@@ -14,7 +14,6 @@ func TestAgentEnv_Mayor(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "mayor")
assertEnv(t, env, "BD_ACTOR", "mayor")
assertEnv(t, env, "GIT_AUTHOR_NAME", "mayor")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "mayor@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")
@@ -32,7 +31,6 @@ func TestAgentEnv_Witness(t *testing.T) {
assertEnv(t, env, "GT_RIG", "myrig")
assertEnv(t, env, "BD_ACTOR", "myrig/witness")
assertEnv(t, env, "GIT_AUTHOR_NAME", "myrig/witness")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-witness@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
}
@@ -51,7 +49,6 @@ func TestAgentEnv_Polecat(t *testing.T) {
assertEnv(t, env, "GT_POLECAT", "Toast")
assertEnv(t, env, "BD_ACTOR", "myrig/polecats/Toast")
assertEnv(t, env, "GIT_AUTHOR_NAME", "Toast")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-polecat-Toast@gastown.local")
assertEnv(t, env, "BEADS_AGENT_NAME", "myrig/Toast")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -71,7 +68,6 @@ func TestAgentEnv_Crew(t *testing.T) {
assertEnv(t, env, "GT_CREW", "emma")
assertEnv(t, env, "BD_ACTOR", "myrig/crew/emma")
assertEnv(t, env, "GIT_AUTHOR_NAME", "emma")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-crew-emma@gastown.local")
assertEnv(t, env, "BEADS_AGENT_NAME", "myrig/emma")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -89,7 +85,6 @@ func TestAgentEnv_Refinery(t *testing.T) {
assertEnv(t, env, "GT_RIG", "myrig")
assertEnv(t, env, "BD_ACTOR", "myrig/refinery")
assertEnv(t, env, "GIT_AUTHOR_NAME", "myrig/refinery")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-refinery@gastown.local")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -103,7 +98,6 @@ func TestAgentEnv_Deacon(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "deacon")
assertEnv(t, env, "BD_ACTOR", "deacon")
assertEnv(t, env, "GIT_AUTHOR_NAME", "deacon")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "deacon@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")
@@ -119,7 +113,6 @@ func TestAgentEnv_Boot(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "boot")
assertEnv(t, env, "BD_ACTOR", "deacon-boot")
assertEnv(t, env, "GIT_AUTHOR_NAME", "boot")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "boot@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")


@@ -242,7 +242,6 @@ func (m *Manager) List() ([]*Dog, error) {
}
// Get returns a specific dog by name.
// Returns ErrDogNotFound if the dog directory or .dog.json state file doesn't exist.
func (m *Manager) Get(name string) (*Dog, error) {
if !m.exists(name) {
return nil, ErrDogNotFound
@@ -250,9 +249,12 @@ func (m *Manager) Get(name string) (*Dog, error) {
state, err := m.loadState(name)
if err != nil {
// No .dog.json means this isn't a valid dog worker
// (e.g., "boot" is the boot watchdog using .boot-status.json, not a dog)
return nil, ErrDogNotFound
// Return minimal dog if state file is missing
return &Dog{
Name: name,
State: StateIdle,
Path: m.dogDir(name),
}, nil
}
return &Dog{


@@ -35,9 +35,7 @@ drive shaft - if you stall, the whole town stalls.
**Your startup behavior:**
1. Check hook (`gt hook`)
2. If work is hooked → EXECUTE (no announcement beyond one line, no waiting)
3. If hook empty → Check escalations (`gt escalate list`)
4. Handle any pending escalations (these are urgent items from other agents)
5. Check mail, then wait for user instructions
3. If hook empty → Check mail, then wait for user instructions
**Note:** "Hooked" means work assigned to you. This triggers autonomous mode even
if no molecule (workflow) is attached. Don't confuse with "pinned" which is for
@@ -243,21 +241,16 @@ Like crew, you're human-managed. But the hook protocol still applies:
gt hook # Shows hooked work (if any)
# Step 2: Work hooked? → RUN IT
# Step 3: Hook empty? → Check escalations (mayor-specific)
gt escalate list # Shows pending escalations from other agents
# Handle any pending escalations - these are urgent items requiring your attention
# Step 4: Check mail for attached work
# Hook empty? → Check mail for attached work
gt mail inbox
# If mail contains attached work, hook it:
gt mol attach-from-mail <mail-id>
# Step 5: Still nothing? Wait for user instructions
# Step 3: Still nothing? Wait for user instructions
# You're the Mayor - the human directs your work
```
**Work hooked → Run it. Hook empty → Check escalations → Check mail. Nothing anywhere → Wait for user.**
**Work hooked → Run it. Hook empty → Check mail. Nothing anywhere → Wait for user.**
Your hooked work persists across sessions. Handoff mail (🤝 HANDOFF subject) provides context notes.


@@ -19,60 +19,6 @@ import (
// processes and avoids killing legitimate short-lived subagents.
const minOrphanAge = 60
// getGasTownSessionPIDs returns a set of PIDs belonging to valid Gas Town tmux sessions.
// This prevents killing Claude processes that are part of witness/refinery/deacon sessions
// even if they temporarily show TTY "?" during startup or session transitions.
func getGasTownSessionPIDs() map[int]bool {
pids := make(map[int]bool)
// Get list of Gas Town tmux sessions (gt-* and hq-*)
out, err := exec.Command("tmux", "list-sessions", "-F", "#{session_name}").Output()
if err != nil {
return pids // tmux not available or no sessions
}
var gasTownSessions []string
for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
if strings.HasPrefix(line, "gt-") || strings.HasPrefix(line, "hq-") {
gasTownSessions = append(gasTownSessions, line)
}
}
// For each Gas Town session, get the PIDs of processes in its panes
for _, session := range gasTownSessions {
out, err := exec.Command("tmux", "list-panes", "-t", session, "-F", "#{pane_pid}").Output()
if err != nil {
continue
}
for _, pidStr := range strings.Split(strings.TrimSpace(string(out)), "\n") {
if pid, err := strconv.Atoi(pidStr); err == nil && pid > 0 {
pids[pid] = true
// Also add child processes of the pane shell
addChildPIDs(pid, pids)
}
}
}
return pids
}
// addChildPIDs adds all descendant PIDs of a process to the set.
// This catches Claude processes spawned by the shell in a tmux pane.
func addChildPIDs(parentPID int, pids map[int]bool) {
// Use pgrep to find children (more reliable than parsing ps output)
out, err := exec.Command("pgrep", "-P", strconv.Itoa(parentPID)).Output()
if err != nil {
return
}
for _, pidStr := range strings.Split(strings.TrimSpace(string(out)), "\n") {
if pid, err := strconv.Atoi(pidStr); err == nil && pid > 0 {
pids[pid] = true
// Recurse to get grandchildren
addChildPIDs(pid, pids)
}
}
}
// sigkillGracePeriod is how long (in seconds) we wait after sending SIGTERM
// before escalating to SIGKILL. If a process was sent SIGTERM and is still
// around after this period, we use SIGKILL on the next cleanup cycle.
@@ -220,10 +166,6 @@ type OrphanedProcess struct {
// Additionally, processes must be older than minOrphanAge seconds to be considered
// orphaned. This prevents race conditions with newly spawned processes.
func FindOrphanedClaudeProcesses() ([]OrphanedProcess, error) {
// Get PIDs belonging to valid Gas Town tmux sessions.
// These should not be killed even if they show TTY "?" during startup.
gasTownPIDs := getGasTownSessionPIDs()
// Use ps to get PID, TTY, command, and elapsed time for all processes
// TTY "?" indicates no controlling terminal
// etime is elapsed time in [[DD-]HH:]MM:SS format (portable across Linux/macOS)
@@ -260,13 +202,6 @@ func FindOrphanedClaudeProcesses() ([]OrphanedProcess, error) {
continue
}
// Skip processes that belong to valid Gas Town tmux sessions.
// This prevents killing witnesses/refineries/deacon during startup
// when they may temporarily show TTY "?".
if gasTownPIDs[pid] {
continue
}
// Skip processes younger than minOrphanAge seconds
// This prevents killing newly spawned subagents and reduces false positives
age, err := parseEtime(etimeStr)


@@ -1,6 +1,6 @@
{
"name": "@gastown/gt",
"version": "0.4.0",
"version": "0.3.0",
"description": "Gas Town CLI - multi-agent workspace manager with native binary support",
"main": "bin/gt.js",
"bin": {


@@ -235,23 +235,9 @@ merge queue. Without this step:
**You own your session cadence.** The Witness monitors but doesn't force recycles.
### 🚨 THE BATCH-CLOSURE HERESY 🚨
Molecules are the **LEDGER** - not a task checklist. Each step closure is a timestamped entry in your permanent work record (your CV).
**The discipline:**
1. Mark step `in_progress` BEFORE starting it: `bd update <step-id> --status=in_progress`
2. Mark step `closed` IMMEDIATELY after completing it: `bd close <step-id>`
3. **NEVER** batch-close steps at the end
**Why this matters:** Batch-closing corrupts the timeline. It creates a lie - showing all steps completed at the same moment instead of the actual work progression. The activity feed should show your REAL work timeline.
**Wrong:** Do all work, then close steps 1, 2, 3, 4, 5 in sequence at the end
**Right:**
- Mark step 1 in_progress → do work → close step 1
- Mark step 2 in_progress → do work → close step 2
- (repeat for each step)
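The cadence above can be sketched as a loop. The step IDs are hypothetical, and `run` here echoes and records the `bd` invocations instead of hitting a live beads database:

```shell
# Step IDs below are hypothetical examples.
# `run` echoes and records each command instead of invoking the real `bd`.
trace=""
run() {
  echo "+ $*"
  trace="$trace|$1 $2"
}

for step in gt-101.1 gt-101.2 gt-101.3; do
  run bd update "$step" --status=in_progress          # BEFORE starting the step
  # ... do the actual work for this step ...
  run bd close "$step" --reason "Implemented: <what>" # IMMEDIATELY after finishing
done
```

Run for real, each iteration produces an `update`/`close` pair with honest timestamps — exactly the interleaving the ledger is supposed to show.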
### Closing Steps (for Activity Feed)
As you complete each molecule step, close it:
```bash
bd close <step-id> --reason "Implemented: <what you did>"
```