Compare commits

1 commit — cedb1bbd13

.beads/.gitignore — 5 lines changed (vendored)
@@ -32,11 +32,6 @@ beads.left.meta.json
beads.right.jsonl
beads.right.meta.json

# Sync state (local-only, per-machine)
# These files are machine-specific and should not be shared across clones
.sync.lock
sync_base.jsonl

# NOTE: Do NOT add negation patterns (e.g., !issues.jsonl) here.
# They would override fork protection in .git/info/exclude, allowing
# contributors to accidentally commit upstream issue databases.

@@ -15,8 +15,6 @@ Each leg examines the code from a different perspective. Findings are
collected and synthesized into a prioritized, actionable review.

## Legs (parallel execution)

### Analysis Legs (read and analyze code)
- **correctness**: Logic errors, bugs, edge cases
- **performance**: Bottlenecks, efficiency issues
- **security**: Vulnerabilities, OWASP concerns
@@ -25,16 +23,6 @@ collected and synthesized into a prioritized, actionable review.
- **style**: Convention compliance, consistency
- **smells**: Anti-patterns, technical debt

### Verification Legs (check implementation quality)
- **wiring**: Installed-but-not-wired gaps (deps added but not used)
- **commit-discipline**: Commit quality and atomicity
- **test-quality**: Test meaningfulness, not just coverage

## Presets
- **gate**: Light review for automatic flow (wiring, security, smells, test-quality)
- **full**: Comprehensive review (all 10 legs)
- **custom**: Select specific legs via --legs flag

## Execution Model
1. Each leg spawns as a separate polecat
2. Polecats work in parallel
@@ -305,125 +293,6 @@ Review the code for code smells and anti-patterns.
- Is technical debt being added or paid down?
"""

# ============================================================================
# VERIFICATION LEGS - Check implementation quality (not just code analysis)
# ============================================================================

[[legs]]
id = "wiring"
title = "Wiring Review"
focus = "Installed-but-not-wired gaps"
description = """
Detect dependencies, configs, or libraries that were added but not actually used.

This catches subtle bugs where the implementer THINKS they integrated something,
but the old implementation is still being used.

**Look for:**
- New dependency in manifest but never imported
  - Go: module in go.mod but no import
  - Rust: crate in Cargo.toml but no `use`
  - Node: package in package.json but no import/require

- SDK added but old implementation remains
  - Added Sentry but still using console.error for errors
  - Added Zod but still using manual typeof validation

- Config/env var defined but never loaded
  - New .env var that isn't accessed in code

**Questions to answer:**
- Is every new dependency actually used?
- Are there old patterns that should have been replaced?
- Is there dead config that suggests incomplete migration?
"""

[[legs]]
id = "commit-discipline"
title = "Commit Discipline Review"
focus = "Commit quality and atomicity"
description = """
Review commit history for good practices.

Good commits make the codebase easier to understand, bisect, and revert.

**Look for:**
- Giant "WIP" or "fix" commits
  - Multiple unrelated changes in one commit
  - Commits that touch 20+ files across different features

- Poor commit messages
  - "stuff", "update", "asdf", "fix"
  - No context about WHY the change was made

- Unatomic commits
  - Feature + refactor + bugfix in same commit
  - Should be separable logical units

- Missing type prefixes (if project uses conventional commits)
  - feat:, fix:, refactor:, test:, docs:, chore:

**Questions to answer:**
- Could this history be bisected effectively?
- Would a reviewer understand the progression?
- Are commits atomic (one logical change each)?
"""

[[legs]]
id = "test-quality"
title = "Test Quality Review"
focus = "Test meaningfulness, not just coverage"
description = """
Verify tests are actually testing something meaningful.

Coverage numbers lie. A test that can't fail provides no value.

**Look for:**
- Weak assertions
  - Only checking != nil / !== null / is not None
  - Using .is_ok() without checking the value
  - assertTrue(true) or equivalent

- Missing negative test cases
  - Happy path only, no error cases
  - No boundary testing
  - No invalid input testing

- Tests that can't fail
  - Mocked so heavily the test is meaningless
  - Testing implementation details, not behavior

- Flaky test indicators
  - Sleep/delay in tests
  - Time-dependent assertions

**Questions to answer:**
- Do these tests actually verify behavior?
- Would a bug in the implementation cause a test failure?
- Are edge cases and error paths tested?
"""

# ============================================================================
# PRESETS - Configurable leg selection
# ============================================================================

[presets]
[presets.gate]
description = "Light review for automatic flow - fast, focused on blockers"
legs = ["wiring", "security", "smells", "test-quality"]

[presets.full]
description = "Comprehensive review - all legs, for major features"
legs = ["correctness", "performance", "security", "elegance", "resilience", "style", "smells", "wiring", "commit-discipline", "test-quality"]

[presets.security-focused]
description = "Security-heavy review for sensitive changes"
legs = ["security", "resilience", "correctness", "wiring"]

[presets.refactor]
description = "Review focused on code quality during refactoring"
legs = ["elegance", "smells", "style", "commit-discipline"]

# Synthesis step - combines all leg outputs
[synthesis]
title = "Review Synthesis"
@@ -441,13 +310,10 @@ A synthesized review at: {{.output.directory}}/{{.output.synthesis}}
2. **Critical Issues** - P0 items from all legs, deduplicated
3. **Major Issues** - P1 items, grouped by theme
4. **Minor Issues** - P2 items, briefly listed
5. **Wiring Gaps** - Dependencies added but not used (from wiring leg)
6. **Commit Quality** - Notes on commit discipline
7. **Test Quality** - Assessment of test meaningfulness
8. **Positive Observations** - What's done well
9. **Recommendations** - Actionable next steps
5. **Positive Observations** - What's done well
6. **Recommendations** - Actionable next steps

Deduplicate issues found by multiple legs (note which legs found them).
Prioritize by impact and effort. Be actionable.
"""
depends_on = ["correctness", "performance", "security", "elegance", "resilience", "style", "smells", "wiring", "commit-discipline", "test-quality"]
depends_on = ["correctness", "performance", "security", "elegance", "resilience", "style", "smells"]

@@ -47,7 +47,7 @@ Check all crew workspaces and the mayor rig:

```bash
# Check each workspace
for dir in $GT_ROOT/gastown/crew/* $GT_ROOT/gastown/mayor; do
for dir in ~/gt/gastown/crew/* ~/gt/gastown/mayor; do
  if [ -d "$dir/.git" ] || [ -d "$dir" ]; then
    echo "=== Checking $dir ==="
    cd "$dir" 2>/dev/null || continue

@@ -47,7 +47,7 @@ bd show hq-deacon 2>/dev/null
gt feed --since 10m --plain | head -20

# Recent wisps (operational state)
ls -lt $GT_ROOT/.beads-wisp/*.wisp.json 2>/dev/null | head -5
ls -lt ~/gt/.beads-wisp/*.wisp.json 2>/dev/null | head -5
```

**Step 4: Check Deacon mail**
@@ -221,7 +221,7 @@ Then exit. The next daemon tick will spawn a fresh Boot.
**Update status file**
```bash
# The gt boot command handles this automatically
# Status is written to $GT_ROOT/deacon/dogs/boot/.boot-status.json
# Status is written to ~/gt/deacon/dogs/boot/.boot-status.json
```

Boot is ephemeral by design. Each instance runs fresh.

@@ -480,7 +480,7 @@ needs = ["zombie-scan"]
description = """
Execute registered plugins.

Scan $GT_ROOT/plugins/ for plugin directories. Each plugin has a plugin.md with TOML frontmatter defining its gate (when to run) and instructions (what to do).
Scan ~/gt/plugins/ for plugin directories. Each plugin has a plugin.md with TOML frontmatter defining its gate (when to run) and instructions (what to do).

See docs/deacon-plugins.md for full documentation.

@@ -497,7 +497,7 @@ For each plugin:

Plugins marked parallel: true can run concurrently using Task tool subagents. Sequential plugins run one at a time in directory order.

Skip this step if $GT_ROOT/plugins/ does not exist or is empty."""
Skip this step if ~/gt/plugins/ does not exist or is empty."""

[[steps]]
id = "dog-pool-maintenance"
@@ -665,84 +665,59 @@ Skip dispatch - system is healthy.

[[steps]]
id = "costs-digest"
title = "Aggregate daily costs [DISABLED]"
title = "Aggregate daily costs"
needs = ["session-gc"]
description = """
**⚠️ DISABLED** - Skip this step entirely.
**DAILY DIGEST** - Aggregate yesterday's session cost wisps.

Cost tracking is temporarily disabled because Claude Code does not expose
session costs in a way that can be captured programmatically.

**Why disabled:**
- The `gt costs` command uses tmux capture-pane to find costs
- Claude Code displays costs in the TUI status bar, not in scrollback
- All sessions show $0.00 because capture-pane can't see TUI chrome
- The infrastructure is sound but has no data source

**What we need from Claude Code:**
- Stop hook env var (e.g., `$CLAUDE_SESSION_COST`)
- Or queryable file/API endpoint

**Re-enable when:** Claude Code exposes cost data via API or environment.

See: GH#24, gt-7awfj

**Exit criteria:** Skip this step - proceed to next."""

[[steps]]
id = "patrol-digest"
title = "Aggregate daily patrol digests"
needs = ["costs-digest"]
description = """
**DAILY DIGEST** - Aggregate yesterday's patrol cycle digests.

Patrol cycles (Deacon, Witness, Refinery) create ephemeral per-cycle digests
to avoid JSONL pollution. This step aggregates them into a single permanent
"Patrol Report YYYY-MM-DD" bead for audit purposes.
Session costs are recorded as ephemeral wisps (not exported to JSONL) to avoid
log-in-database pollution. This step aggregates them into a permanent daily
"Cost Report YYYY-MM-DD" bead for audit purposes.

**Step 1: Check if digest is needed**
```bash
# Preview yesterday's patrol digests (dry run)
gt patrol digest --yesterday --dry-run
# Preview yesterday's costs (dry run)
gt costs digest --yesterday --dry-run
```

If output shows "No patrol digests found", skip to Step 3.
If output shows "No session cost wisps found", skip to Step 3.

**Step 2: Create the digest**
```bash
gt patrol digest --yesterday
gt costs digest --yesterday
```

This:
- Queries all ephemeral patrol digests from yesterday
- Creates a single "Patrol Report YYYY-MM-DD" bead with aggregated data
- Deletes the source digests
- Queries all session.ended wisps from yesterday
- Creates a single "Cost Report YYYY-MM-DD" bead with aggregated data
- Deletes the source wisps

**Step 3: Verify**
Daily patrol digests preserve audit trail without per-cycle pollution.
The digest appears in `gt costs --week` queries.
Daily digests preserve audit trail without per-session pollution.

**Timing**: Run once per morning patrol cycle. The --yesterday flag ensures
we don't try to digest today's incomplete data.

**Exit criteria:** Yesterday's patrol digests aggregated (or none to aggregate)."""
**Exit criteria:** Yesterday's costs digested (or no wisps to digest)."""

[[steps]]
id = "log-maintenance"
title = "Rotate logs and prune state"
needs = ["patrol-digest"]
needs = ["costs-digest"]
description = """
Maintain daemon logs and state files.

**Step 1: Check daemon.log size**
```bash
# Get log file size
ls -la ~/.beads/daemon*.log 2>/dev/null || ls -la $GT_ROOT/.beads/daemon*.log 2>/dev/null
ls -la ~/.beads/daemon*.log 2>/dev/null || ls -la ~/gt/.beads/daemon*.log 2>/dev/null
```

If daemon.log exceeds 10MB:
```bash
# Rotate with date suffix and gzip
LOGFILE="$GT_ROOT/.beads/daemon.log"
LOGFILE="$HOME/gt/.beads/daemon.log"
if [ -f "$LOGFILE" ] && [ $(stat -f%z "$LOGFILE" 2>/dev/null || stat -c%s "$LOGFILE") -gt 10485760 ]; then
  DATE=$(date +%Y-%m-%dT%H-%M-%S)
  mv "$LOGFILE" "${LOGFILE%.log}-${DATE}.log"
@@ -754,7 +729,7 @@ fi

Clean up daemon logs older than 7 days:
```bash
find $GT_ROOT/.beads/ -name "daemon-*.log.gz" -mtime +7 -delete
find ~/gt/.beads/ -name "daemon-*.log.gz" -mtime +7 -delete
```
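The size check and rotation shown in the hunk above can be wrapped in a reusable helper. The function name, threshold default, and the idea of parameterizing the path are illustrative; the formula itself hardcodes the daemon.log path:

```shell
# Sketch of the rotation step: rotate-and-gzip a log once it exceeds a byte
# threshold. Tries BSD stat (-f%z) first, then GNU stat (-c%s), matching the
# portable pattern used in the formula.
rotate_log() {
  local logfile="$1" max_bytes="${2:-10485760}"
  [ -f "$logfile" ] || return 0
  local size
  size=$(stat -f%z "$logfile" 2>/dev/null || stat -c%s "$logfile")
  if [ "$size" -gt "$max_bytes" ]; then
    local stamp
    stamp=$(date +%Y-%m-%dT%H-%M-%S)
    mv "$logfile" "${logfile%.log}-${stamp}.log"
    gzip "${logfile%.log}-${stamp}.log"
  fi
}
```

Calling `rotate_log "$HOME/gt/.beads/daemon.log"` reproduces the inline logic; a second argument lowers the threshold for testing.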

**Step 3: Prune state.json of dead sessions**

@@ -8,7 +8,7 @@ goroutine (NOT a Claude session) that runs the interrogation state machine.

Dogs are lightweight workers in Boot's pool (see dog-pool-architecture.md):
- Fixed pool of 5 goroutines (configurable via GT_DOG_POOL_SIZE)
- State persisted to $GT_ROOT/deacon/dogs/active/<id>.json
- State persisted to ~/gt/deacon/dogs/active/<id>.json
- Recovery on Boot restart via orphan state files

## State Machine
@@ -151,7 +151,7 @@ If target doesn't exist:
- Skip to EPITAPH with outcome=already_dead

**3. Initialize state file:**
Write initial state to $GT_ROOT/deacon/dogs/active/{dog-id}.json
Write initial state to ~/gt/deacon/dogs/active/{dog-id}.json

**4. Set initial attempt counter:**
attempt = 1
@@ -477,11 +477,11 @@ bd close {warrant_id} --reason "{epitaph_summary}"

**3. Move state file to completed:**
```bash
mv $GT_ROOT/deacon/dogs/active/{dog-id}.json $GT_ROOT/deacon/dogs/completed/
mv ~/gt/deacon/dogs/active/{dog-id}.json ~/gt/deacon/dogs/completed/
```

**4. Report to Boot:**
Write completion file: $GT_ROOT/deacon/dogs/active/{dog-id}.done
Write completion file: ~/gt/deacon/dogs/active/{dog-id}.done
```json
{
  "dog_id": "{dog-id}",

@@ -132,7 +132,7 @@ gt daemon rotate-logs
gt doctor --fix
```

Old logs are moved to `$GT_ROOT/logs/archive/` with timestamps.
Old logs are moved to `~/gt/logs/archive/` with timestamps.
"""

[[steps]]

@@ -15,22 +15,17 @@ while read local_ref local_sha remote_ref remote_sha; do
    # Allowed branches
    ;;
  *)
    # Allow feature branches when contributing to upstream (fork workflow).
    # If an 'upstream' remote exists, this is a contribution setup where
    # feature branches are needed for PRs. See: #848
    if ! git remote get-url upstream &>/dev/null; then
      echo "ERROR: Invalid branch for Gas Town agents."
      echo ""
      echo "Blocked push to: $branch"
      echo ""
      echo "Allowed branches:"
      echo " main - Crew workers push here directly"
      echo " polecat/* - Polecat working branches"
      echo " beads-sync - Beads synchronization"
      echo ""
      echo "Do NOT create PRs. Push to main or let Refinery merge polecat work."
      exit 1
    fi
    echo "ERROR: Invalid branch for Gas Town agents."
    echo ""
    echo "Blocked push to: $branch"
    echo ""
    echo "Allowed branches:"
    echo " main - Crew workers push here directly"
    echo " polecat/* - Polecat working branches"
    echo " beads-sync - Beads synchronization"
    echo ""
    echo "Do NOT create PRs. Push to main or let Refinery merge polecat work."
    exit 1
    ;;
esac
done

.github/workflows/windows-ci.yml — 32 lines changed (vendored)
@@ -1,32 +0,0 @@
name: Windows CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    name: Windows Build and Unit Tests
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v6
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.24'

      - name: Configure Git
        run: |
          git config --global user.name "CI Bot"
          git config --global user.email "ci@gastown.test"

      - name: Build
        run: go build -v ./cmd/gt

      - name: Unit Tests
        run: go test -short ./...
.gitignore — 7 lines changed (vendored)
@@ -51,10 +51,3 @@ CLAUDE.md

# Embedded formulas are committed so `go install @latest` works
# Run `go generate ./...` after modifying .beads/formulas/

# Gas Town (added by gt)
.beads/
.logs/
logs/
settings/
.events.jsonl

CHANGELOG.md — 70 lines changed
@@ -7,76 +7,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

## [0.5.0] - 2026-01-22

### Added

#### Mail Improvements
- **Numeric index support for `gt mail read`** - Read messages by inbox position (e.g., `gt mail read 1`)
- **`gt mail hook` alias** - Shortcut for `gt hook attach` from mail context
- **`--body` alias for `--message`** - More intuitive flag in `gt mail send` and `gt mail reply`
- **Multiple message IDs in delete** - `gt mail delete msg1 msg2 msg3`
- **Positional message arg in reply** - `gt mail reply <id> "message"` without --message flag
- **`--all` flag for inbox** - Show all messages including read
- **Parallel inbox queries** - ~6x speedup for mail inbox

#### Command Aliases
- **`gt bd`** - Alias for `gt bead`
- **`gt work`** - Alias for `gt hook`
- **`--comment` alias for `--reason`** - In `gt close`
- **`read` alias for `show`** - In `gt bead`

#### Configuration & Agents
- **OpenCode as built-in agent preset** - Configure with `gt config set agent opencode`
- **Config-based role definition system** - Roles defined in config, not beads
- **Env field in RuntimeConfig** - Custom environment variables for agent presets
- **ShellQuote helper** - Safe env var escaping for shell commands

#### Infrastructure
- **Deacon status line display** - Shows deacon icon in mayor status line
- **Configurable polecat branch naming** - Template-based branch naming
- **Hook registry and install command** - Manage Claude Code hooks via `gt hooks`
- **Doctor auto-fix capability** - SessionHookCheck can auto-repair
- **`gt orphans kill` command** - Clean up orphaned Claude processes
- **Zombie-scan command for deacon** - tmux-verified process cleanup
- **Initial prompt for autonomous patrol startup** - Better agent priming

#### Refinery & Merging
- **Squash merge for cleaner history** - Eliminates redundant merge commits
- **Redundant observers** - Witness and Refinery both watch convoys
## [0.3.1] - 2026-01-17

### Fixed

#### Crew & Session Stability
- **Don't kill pane processes on new sessions** - Prevents destroying fresh shells
- **Auto-recover from stale tmux pane references** - Recreates sessions automatically
- **Preserve GT_AGENT across session restarts** - Handoff maintains identity

#### Process Management
- **KillPaneProcesses kills pane process itself** - Not just descendants
- **Kill pane processes before all RespawnPane calls** - Prevents orphan leaks
- **Shutdown reliability improvements** - Multiple fixes for clean shutdown
- **Deacon spawns immediately after killing stuck session**

#### Convoy & Routing
- **Pass convoy ID to convoy check command** - Correct ID propagation
- **Multi-repo routing for custom types** - Correct beads routing across repos
- **Normalize agent ID trailing slash** - Consistent ID handling

#### Miscellaneous
- **Sling auto-apply mol-polecat-work** - Auto-attach on open polecat beads
- **Wisp orphan lifecycle bug** - Proper cleanup of abandoned wisps
- **Misclassified wisp detection** - Defense-in-depth filtering
- **Cross-account session access in seance** - Talk to predecessors across accounts
- **Many more bug fixes** - See git log for full details

## [0.4.0] - 2026-01-19

_Changelog not documented at release time. See git log v0.3.1..v0.4.0 for changes._

## [0.3.1] - 2026-01-18

_Changelog not documented at release time. See git log v0.3.0..v0.3.1 for changes._
- **Orphan cleanup on macOS** - Fixed TTY comparison (`??` vs `?`) so orphan detection works on macOS
- **Session kill leaves orphans** - `gt done` and `gt crew stop` now use `KillSessionWithProcesses` to properly terminate all child processes before killing the tmux session

## [0.3.0] - 2026-01-17

Makefile — 7 lines changed
@@ -22,8 +22,11 @@ ifeq ($(shell uname),Darwin)
	@echo "Signed $(BINARY) for macOS"
endif

install: generate
	go install -ldflags "$(LDFLAGS)" ./cmd/gt
install: build
	cp $(BUILD_DIR)/$(BINARY) ~/.local/bin/$(BINARY)
ifeq ($(shell uname),Darwin)
	@codesign -s - -f ~/.local/bin/$(BINARY) 2>/dev/null || true
endif

clean:
	rm -f $(BUILD_DIR)/$(BINARY)

@@ -1,57 +0,0 @@
package main

import (
	"os"
	"os/exec"
	"runtime"
	"testing"
)

// TestCrossPlatformBuild verifies that the codebase compiles for all supported
// platforms. This catches cases where platform-specific code (using build tags
// like //go:build !windows) is called from platform-agnostic code without
// providing stubs for all platforms.
func TestCrossPlatformBuild(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping cross-platform build test in short mode")
	}

	// Skip if not running on a platform that can cross-compile
	// (need Go toolchain, not just running tests)
	if os.Getenv("CI") == "" && runtime.GOOS != "darwin" && runtime.GOOS != "linux" {
		t.Skip("skipping cross-platform build test on unsupported platform")
	}

	platforms := []struct {
		goos   string
		goarch string
		cgo    string
	}{
		{"linux", "amd64", "0"},
		{"linux", "arm64", "0"},
		{"darwin", "amd64", "0"},
		{"darwin", "arm64", "0"},
		{"windows", "amd64", "0"},
		{"freebsd", "amd64", "0"},
	}

	for _, p := range platforms {
		p := p // capture range variable
		t.Run(p.goos+"_"+p.goarch, func(t *testing.T) {
			t.Parallel()

			cmd := exec.Command("go", "build", "-o", os.DevNull, ".")
			cmd.Dir = "."
			cmd.Env = append(os.Environ(),
				"GOOS="+p.goos,
				"GOARCH="+p.goarch,
				"CGO_ENABLED="+p.cgo,
			)

			output, err := cmd.CombinedOutput()
			if err != nil {
				t.Errorf("build failed for %s/%s:\n%s", p.goos, p.goarch, string(output))
			}
		})
	}
}

@@ -44,8 +44,8 @@ sudo apt update
sudo apt install -y git

# Install Go (apt version may be outdated, use official installer)
wget https://go.dev/dl/go1.24.12.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.24.12.linux-amd64.tar.gz
wget https://go.dev/dl/go1.24.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.24.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
source ~/.bashrc

@@ -51,7 +51,6 @@ so you can see when it lands and what was included.
|---------|-------------|-----|-------------|
| **Convoy** | Yes | hq-cv-* | Tracking unit. What you create, track, get notified about. |
| **Swarm** | No | None | Ephemeral. "The workers currently on this convoy's issues." |
| **Stranded Convoy** | Yes | hq-cv-* | A convoy with ready work but no polecats assigned. Needs attention. |

When you "kick off a swarm", you're really:
1. Creating a convoy (the tracking unit)

@@ -25,7 +25,6 @@ Protomolecule (frozen template) ─── Solid
| **Molecule** | Active workflow instance with trackable steps |
| **Wisp** | Ephemeral molecule for patrol cycles (never synced) |
| **Digest** | Squashed summary of completed molecule |
| **Shiny Workflow** | Canonical polecat formula: design → implement → review → test → submit |

## Common Mistake: Reading Formulas Directly

@@ -201,8 +200,7 @@ gt done # Signal completion (syncs, submits to MQ, notifi

## Best Practices

1. **CRITICAL: Close steps in real-time** - Mark `in_progress` BEFORE starting, `closed` IMMEDIATELY after completing. Never batch-close steps at the end. Molecules ARE the ledger - each step closure is a timestamped CV entry. Batch-closing corrupts the timeline and violates HOP's core promise.
2. **Use `--continue` for propulsion** - Keep momentum by auto-advancing
3. **Check progress with `bd mol current`** - Know where you are before resuming
4. **Squash completed molecules** - Create digests for audit trail
5. **Burn routine wisps** - Don't accumulate ephemeral patrol data
1. **Use `--continue` for propulsion** - Keep momentum by auto-advancing
2. **Check progress with `bd mol current`** - Know where you are before resuming
3. **Squash completed molecules** - Create digests for audit trail
4. **Burn routine wisps** - Don't accumulate ephemeral patrol data

@@ -89,58 +89,6 @@ Debug routing: `BD_DEBUG_ROUTING=1 bd show <id>`

Process state, PIDs, ephemeral data.

### Rig-Level Configuration

Rigs support layered configuration through:
1. **Wisp layer** (`.beads-wisp/config/`) - transient, local overrides
2. **Rig identity bead labels** - persistent rig settings
3. **Town defaults** (`~/gt/settings/config.json`)
4. **System defaults** - compiled-in fallbacks

#### Polecat Branch Naming

Configure custom branch name templates for polecats:

```bash
# Set via wisp (transient - for testing)
echo '{"polecat_branch_template": "adam/{year}/{month}/{description}"}' > \
  ~/gt/.beads-wisp/config/myrig.json

# Or set via rig identity bead labels (persistent)
bd update gt-rig-myrig --labels="polecat_branch_template:adam/{year}/{month}/{description}"
```

**Template Variables:**

| Variable | Description | Example |
|----------|-------------|---------|
| `{user}` | From `git config user.name` | `adam` |
| `{year}` | Current year (YY format) | `26` |
| `{month}` | Current month (MM format) | `01` |
| `{name}` | Polecat name | `alpha` |
| `{issue}` | Issue ID without prefix | `123` (from `gt-123`) |
| `{description}` | Sanitized issue title | `fix-auth-bug` |
| `{timestamp}` | Unique timestamp | `1ks7f9a` |

**Default Behavior (backward compatible):**

When `polecat_branch_template` is empty or not set:
- With issue: `polecat/{name}/{issue}@{timestamp}`
- Without issue: `polecat/{name}-{timestamp}`

**Example Configurations:**

```bash
# GitHub enterprise format
"adam/{year}/{month}/{description}"

# Simple feature branches
"feature/{issue}"

# Include polecat name for clarity
"work/{name}/{issue}"
```
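The expansion these templates imply can be sketched in a few lines of shell. The `render_branch` function is hypothetical (the real substitution happens inside gt), and it covers only a subset of the variables from the table above:

```shell
# Illustrative expansion of a branch template. Substitutes {year}, {month},
# {name}, {issue}, and {description} using sed; not the gt implementation.
render_branch() {
  tmpl="$1"; name="$2"; issue="$3"; desc="$4"
  printf '%s\n' "$tmpl" | sed \
    -e "s/{year}/$(date +%y)/" \
    -e "s/{month}/$(date +%m)/" \
    -e "s/{name}/$name/" \
    -e "s/{issue}/$issue/" \
    -e "s/{description}/$desc/"
}
```

For example, `render_branch "work/{name}/{issue}" alpha 123 fix-auth-bug` prints `work/alpha/123`.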

## Formula Format

```toml
@@ -597,24 +545,6 @@ gt stop --all # Kill all sessions
gt stop --rig <name> # Kill rig sessions
```

### Health Check

```bash
gt deacon health-check <agent> # Send health check ping, track response
gt deacon health-state # Show health check state for all agents
```

### Merge Queue (MQ)

```bash
gt mq list [rig] # Show the merge queue
gt mq next [rig] # Show highest-priority merge request
gt mq submit # Submit current branch to merge queue
gt mq status <id> # Show detailed merge request status
gt mq retry <id> # Retry a failed merge request
gt mq reject <id> # Reject a merge request
```

## Beads Commands (bd)

```bash

@@ -44,8 +44,8 @@ type Issue struct {
|
||||
|
||||
// Agent bead slots (type=agent only)
|
||||
HookBead string `json:"hook_bead,omitempty"` // Current work attached to agent's hook
|
||||
RoleBead string `json:"role_bead,omitempty"` // Role definition bead (shared)
|
||||
AgentState string `json:"agent_state,omitempty"` // Agent lifecycle state (spawning, working, done, stuck)
|
||||
// Note: role_bead field removed - role definitions are now config-based
|
||||
|
||||
// Counts from list output
|
||||
DependencyCount int `json:"dependency_count,omitempty"`
@@ -113,12 +113,6 @@ type SyncStatus struct {
type Beads struct {
	workDir  string
	beadsDir string // Optional BEADS_DIR override for cross-database access
	isolated bool   // If true, suppress inherited beads env vars (for test isolation)

	// Lazy-cached town root for routing resolution.
	// Populated on first call to getTownRoot() to avoid filesystem walk on every operation.
	townRoot     string
	searchedRoot bool
}

// New creates a new Beads wrapper for the given directory.
@@ -126,56 +120,12 @@ func New(workDir string) *Beads {
	return &Beads{workDir: workDir}
}

// NewIsolated creates a Beads wrapper for test isolation.
// This suppresses inherited beads env vars (BD_ACTOR, BEADS_DB) to prevent
// tests from accidentally routing to production databases.
func NewIsolated(workDir string) *Beads {
	return &Beads{workDir: workDir, isolated: true}
}

// NewWithBeadsDir creates a Beads wrapper with an explicit BEADS_DIR.
// This is needed when running from a polecat worktree but accessing town-level beads.
func NewWithBeadsDir(workDir, beadsDir string) *Beads {
	return &Beads{workDir: workDir, beadsDir: beadsDir}
}

// getActor returns the BD_ACTOR value for this context.
// Returns empty string when in isolated mode (tests) to prevent
// inherited actors from routing to production databases.
func (b *Beads) getActor() string {
	if b.isolated {
		return ""
	}
	return os.Getenv("BD_ACTOR")
}

// getTownRoot returns the Gas Town root directory, using lazy caching.
// The town root is found by walking up from workDir looking for mayor/town.json.
// Returns empty string if not in a Gas Town project.
func (b *Beads) getTownRoot() string {
	if !b.searchedRoot {
		b.townRoot = FindTownRoot(b.workDir)
		b.searchedRoot = true
	}
	return b.townRoot
}

// getResolvedBeadsDir returns the beads directory this wrapper is operating on.
// This follows any redirects and returns the actual beads directory path.
func (b *Beads) getResolvedBeadsDir() string {
	if b.beadsDir != "" {
		return b.beadsDir
	}
	return ResolveBeadsDir(b.workDir)
}

// Init initializes a new beads database in the working directory.
// This uses the same environment isolation as other commands.
func (b *Beads) Init(prefix string) error {
	_, err := b.run("init", "--prefix", prefix, "--quiet")
	return err
}

// run executes a bd command and returns stdout.
func (b *Beads) run(args ...string) ([]byte, error) {
	// Use --no-daemon for faster read operations (avoids daemon IPC overhead)
@@ -183,6 +133,8 @@ func (b *Beads) run(args ...string) ([]byte, error) {
	// Use --allow-stale to prevent failures when db is out of sync with JSONL
	// (e.g., after daemon is killed during shutdown before syncing).
	fullArgs := append([]string{"--no-daemon", "--allow-stale"}, args...)
	cmd := exec.Command("bd", fullArgs...) //nolint:gosec // G204: bd is a trusted internal tool
	cmd.Dir = b.workDir

	// Always explicitly set BEADS_DIR to prevent inherited env vars from
	// causing prefix mismatches. Use explicit beadsDir if set, otherwise
@@ -191,28 +143,7 @@ func (b *Beads) run(args ...string) ([]byte, error) {
	if beadsDir == "" {
		beadsDir = ResolveBeadsDir(b.workDir)
	}

	// In isolated mode, use --db flag to force specific database path
	// This bypasses bd's routing logic that can redirect to .beads-planning
	// Skip --db for init command since it creates the database
	isInit := len(args) > 0 && args[0] == "init"
	if b.isolated && !isInit {
		beadsDB := filepath.Join(beadsDir, "beads.db")
		fullArgs = append([]string{"--db", beadsDB}, fullArgs...)
	}

	cmd := exec.Command("bd", fullArgs...) //nolint:gosec // G204: bd is a trusted internal tool
	cmd.Dir = b.workDir

	// Build environment: filter beads env vars when in isolated mode (tests)
	// to prevent routing to production databases.
	var env []string
	if b.isolated {
		env = filterBeadsEnv(os.Environ())
	} else {
		env = os.Environ()
	}
	cmd.Env = append(env, "BEADS_DIR="+beadsDir)
	cmd.Env = append(os.Environ(), "BEADS_DIR="+beadsDir)

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
@@ -265,27 +196,6 @@ func (b *Beads) wrapError(err error, stderr string, args []string) error {
	return fmt.Errorf("bd %s: %w", strings.Join(args, " "), err)
}

// filterBeadsEnv removes beads-related environment variables from the given
// environment slice. This ensures test isolation by preventing inherited
// BD_ACTOR, BEADS_DB, GT_ROOT, HOME etc. from routing commands to production databases.
func filterBeadsEnv(environ []string) []string {
	filtered := make([]string, 0, len(environ))
	for _, env := range environ {
		// Skip beads-related env vars that could interfere with test isolation
		// BD_ACTOR, BEADS_* - direct beads config
		// GT_ROOT - causes bd to find global routes file
		// HOME - causes bd to find ~/.beads-planning routing
		if strings.HasPrefix(env, "BD_ACTOR=") ||
			strings.HasPrefix(env, "BEADS_") ||
			strings.HasPrefix(env, "GT_ROOT=") ||
			strings.HasPrefix(env, "HOME=") {
			continue
		}
		filtered = append(filtered, env)
	}
	return filtered
}
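The isolation logic in this hunk reduces to prefix filtering over an `os.Environ()`-style slice. The same approach in standalone form (the generic `filterEnv` helper is an assumption for illustration, not the package's API):

```go
package main

import (
	"fmt"
	"strings"
)

// filterEnv removes entries whose key matches any of the given prefixes.
// This mirrors the test-isolation idea above: drop BD_ACTOR, BEADS_*,
// GT_ROOT and HOME so a child process cannot route to production state.
func filterEnv(environ []string, prefixes []string) []string {
	filtered := make([]string, 0, len(environ))
	for _, kv := range environ {
		skip := false
		for _, p := range prefixes {
			if strings.HasPrefix(kv, p) {
				skip = true
				break
			}
		}
		if !skip {
			filtered = append(filtered, kv)
		}
	}
	return filtered
}

func main() {
	env := []string{"PATH=/usr/bin", "BD_ACTOR=mayor", "BEADS_DB=/tmp/x", "HOME=/root"}
	out := filterEnv(env, []string{"BD_ACTOR=", "BEADS_", "GT_ROOT=", "HOME="})
	fmt.Println(out) // → [PATH=/usr/bin]
}
```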
// List returns issues matching the given options.
func (b *Beads) List(opts ListOptions) ([]*Issue, error) {
	args := []string{"list", "--json"}
@@ -488,10 +398,9 @@ func (b *Beads) Create(opts CreateOptions) (*Issue, error) {
		args = append(args, "--ephemeral")
	}
	// Default Actor from BD_ACTOR env var if not specified
	// Uses getActor() to respect isolated mode (tests)
	actor := opts.Actor
	if actor == "" {
		actor = b.getActor()
		actor = os.Getenv("BD_ACTOR")
	}
	if actor != "" {
		args = append(args, "--actor="+actor)
@@ -536,10 +445,9 @@ func (b *Beads) CreateWithID(id string, opts CreateOptions) (*Issue, error) {
		args = append(args, "--parent="+opts.Parent)
	}
	// Default Actor from BD_ACTOR env var if not specified
	// Uses getActor() to respect isolated mode (tests)
	actor := opts.Actor
	if actor == "" {
		actor = b.getActor()
		actor = os.Getenv("BD_ACTOR")
	}
	if actor != "" {
		args = append(args, "--actor="+actor)

@@ -5,32 +5,10 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
	"os"
	"strings"
)

// runSlotSet runs `bd slot set` from a specific directory.
// This is needed when the agent bead was created via routing to a different
// database than the Beads wrapper's default directory.
func runSlotSet(workDir, beadID, slotName, slotValue string) error {
	cmd := exec.Command("bd", "slot", "set", beadID, slotName, slotValue)
	cmd.Dir = workDir
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(output)), err)
	}
	return nil
}

// runSlotClear runs `bd slot clear` from a specific directory.
func runSlotClear(workDir, beadID, slotName string) error {
	cmd := exec.Command("bd", "slot", "clear", beadID, slotName)
	cmd.Dir = workDir
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("%s: %w", strings.TrimSpace(string(output)), err)
	}
	return nil
}
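Both helpers follow one pattern: run a command from an explicit working directory and fold its trimmed combined output into the returned error. A generic sketch of that pattern (`runFrom` is a hypothetical name, and `pwd` stands in for `bd`, which may not be installed):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runFrom executes a command from a specific working directory and, on
// failure, wraps the trimmed combined output into the error - the same
// shape runSlotSet and runSlotClear use for bd slot commands.
func runFrom(dir, name string, args ...string) (string, error) {
	cmd := exec.Command(name, args...)
	cmd.Dir = dir
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// pwd reports the directory we forced via cmd.Dir.
	out, err := runFrom("/", "pwd")
	fmt.Println(out, err)
}
```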

// AgentFields holds structured fields for agent beads.
// These are stored as "key: value" lines in the description.
type AgentFields struct {
@@ -38,11 +16,10 @@ type AgentFields struct {
	Rig               string // Rig name (empty for global agents like mayor/deacon)
	AgentState        string // spawning, working, done, stuck
	HookBead          string // Currently pinned work bead ID
	RoleBead          string // Role definition bead ID (canonical location; may not exist yet)
	CleanupStatus     string // ZFC: polecat self-reports git state (clean, has_uncommitted, has_stash, has_unpushed)
	ActiveMR          string // Currently active merge request bead ID (for traceability)
	NotificationLevel string // DND mode: verbose, normal, muted (default: normal)
	// Note: RoleBead field removed - role definitions are now config-based.
	// See internal/config/roles/*.toml and config-based-roles.md.
}

// Notification level constants
@@ -77,7 +54,11 @@ func FormatAgentDescription(title string, fields *AgentFields) string {
		lines = append(lines, "hook_bead: null")
	}

	// Note: role_bead field no longer written - role definitions are config-based
	if fields.RoleBead != "" {
		lines = append(lines, fmt.Sprintf("role_bead: %s", fields.RoleBead))
	} else {
		lines = append(lines, "role_bead: null")
	}

	if fields.CleanupStatus != "" {
		lines = append(lines, fmt.Sprintf("cleanup_status: %s", fields.CleanupStatus))
@@ -131,7 +112,7 @@ func ParseAgentFields(description string) *AgentFields {
	case "hook_bead":
		fields.HookBead = value
	case "role_bead":
		// Ignored - role definitions are now config-based (backward compat)
		fields.RoleBead = value
	case "cleanup_status":
		fields.CleanupStatus = value
	case "active_mr":
@@ -148,21 +129,7 @@ func ParseAgentFields(description string) *AgentFields {
// The ID format is: <prefix>-<rig>-<role>-<name> (e.g., gt-gastown-polecat-Toast)
// Use AgentBeadID() helper to generate correct IDs.
// The created_by field is populated from BD_ACTOR env var for provenance tracking.
//
// This function automatically ensures custom types are configured in the target
// database before creating the bead. This handles multi-repo routing scenarios
// where the bead may be routed to a different database than the one this wrapper
// is connected to.
func (b *Beads) CreateAgentBead(id, title string, fields *AgentFields) (*Issue, error) {
	// Resolve where this bead will actually be written (handles multi-repo routing)
	targetDir := ResolveRoutingTarget(b.getTownRoot(), id, b.getResolvedBeadsDir())

	// Ensure target database has custom types configured
	// This is cached (sentinel file + in-memory) so repeated calls are fast
	if err := EnsureCustomTypes(targetDir); err != nil {
		return nil, fmt.Errorf("prepare target for agent bead %s: %w", id, err)
	}

	description := FormatAgentDescription(title, fields)

	args := []string{"create", "--json",
@@ -177,8 +144,7 @@ func (b *Beads) CreateAgentBead(id, title string, fields *AgentFields) (*Issue,
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -192,14 +158,19 @@ func (b *Beads) CreateAgentBead(id, title string, fields *AgentFields) (*Issue,
		return nil, fmt.Errorf("parsing bd create output: %w", err)
	}

	// Note: role slot no longer set - role definitions are config-based
	// Set the role slot if specified (this is the authoritative storage)
	if fields != nil && fields.RoleBead != "" {
		if _, err := b.run("slot", "set", id, "role", fields.RoleBead); err != nil {
			// Non-fatal: warn but continue
			fmt.Printf("Warning: could not set role slot: %v\n", err)
		}
	}

	// Set the hook slot if specified (this is the authoritative storage)
	// This fixes the slot inconsistency bug where bead status is 'hooked' but
	// agent's hook slot is empty. See mi-619.
	// Must run from targetDir since that's where the agent bead was created
	if fields != nil && fields.HookBead != "" {
		if err := runSlotSet(targetDir, id, "hook", fields.HookBead); err != nil {
		if _, err := b.run("slot", "set", id, "hook", fields.HookBead); err != nil {
			// Non-fatal: warn but continue - description text has the backup
			fmt.Printf("Warning: could not set hook slot: %v\n", err)
		}
@@ -233,9 +204,6 @@ func (b *Beads) CreateOrReopenAgentBead(id, title string, fields *AgentFields) (
		return nil, err
	}

	// Resolve where this bead lives (for slot operations)
	targetDir := ResolveRoutingTarget(b.getTownRoot(), id, b.getResolvedBeadsDir())

	// The bead already exists (should be closed from previous polecat lifecycle)
	// Reopen it and update its fields
	if _, reopenErr := b.run("reopen", id, "--reason=re-spawning agent"); reopenErr != nil {
@@ -255,17 +223,21 @@ func (b *Beads) CreateOrReopenAgentBead(id, title string, fields *AgentFields) (
		return nil, fmt.Errorf("updating reopened agent bead: %w", err)
	}

	// Note: role slot no longer set - role definitions are config-based
	// Set the role slot if specified
	if fields != nil && fields.RoleBead != "" {
		if _, err := b.run("slot", "set", id, "role", fields.RoleBead); err != nil {
			// Non-fatal: warn but continue
			fmt.Printf("Warning: could not set role slot: %v\n", err)
		}
	}

	// Clear any existing hook slot (handles stale state from previous lifecycle)
	// Must run from targetDir since that's where the agent bead lives
	_ = runSlotClear(targetDir, id, "hook")
	_, _ = b.run("slot", "clear", id, "hook")

	// Set the hook slot if specified
	// Must run from targetDir since that's where the agent bead lives
	if fields != nil && fields.HookBead != "" {
		if err := runSlotSet(targetDir, id, "hook", fields.HookBead); err != nil {
			// Non-fatal: warn but continue - description text has the backup
		if _, err := b.run("slot", "set", id, "hook", fields.HookBead); err != nil {
			// Non-fatal: warn but continue
			fmt.Printf("Warning: could not set hook slot: %v\n", err)
		}
	}
@@ -6,6 +6,7 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
@@ -161,8 +162,7 @@ func (b *Beads) CreateChannelBead(name string, subscribers []string, createdBy s
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -382,7 +382,7 @@ func (b *Beads) LookupChannelByName(name string) (*Issue, *ChannelFields, error)

// EnforceChannelRetention prunes old messages from a channel to enforce retention.
// Called after posting a new message to the channel (on-write cleanup).
// Enforces both count-based (RetentionCount) and time-based (RetentionHours) limits.
// If channel has >= retainCount messages, deletes oldest until count < retainCount.
func (b *Beads) EnforceChannelRetention(name string) error {
	// Get channel config
	_, fields, err := b.GetChannelBead(name)
@@ -393,8 +393,8 @@ func (b *Beads) EnforceChannelRetention(name string) error {
		return fmt.Errorf("channel not found: %s", name)
	}

	// Skip if no retention limits configured
	if fields.RetentionCount <= 0 && fields.RetentionHours <= 0 {
	// Skip if no retention limit
	if fields.RetentionCount <= 0 {
		return nil
	}
@@ -411,42 +411,23 @@ func (b *Beads) EnforceChannelRetention(name string) error {
	}

	var messages []struct {
		ID        string `json:"id"`
		CreatedAt string `json:"created_at"`
		ID string `json:"id"`
	}
	if err := json.Unmarshal(out, &messages); err != nil {
		return fmt.Errorf("parsing channel messages: %w", err)
	}

	// Track which messages to delete (use map to avoid duplicates)
	toDeleteIDs := make(map[string]bool)

	// Time-based retention: delete messages older than RetentionHours
	if fields.RetentionHours > 0 {
		cutoff := time.Now().Add(-time.Duration(fields.RetentionHours) * time.Hour)
		for _, msg := range messages {
			createdAt, err := time.Parse(time.RFC3339, msg.CreatedAt)
			if err != nil {
				continue // Skip messages with unparseable timestamps
			}
			if createdAt.Before(cutoff) {
				toDeleteIDs[msg.ID] = true
			}
		}
	// Calculate how many to delete
	// We're being called after a new message is posted, so we want to end up with retainCount
	toDelete := len(messages) - fields.RetentionCount
	if toDelete <= 0 {
		return nil // No pruning needed
	}

	// Count-based retention: delete oldest messages beyond RetentionCount
	if fields.RetentionCount > 0 {
		toDeleteByCount := len(messages) - fields.RetentionCount
		for i := 0; i < toDeleteByCount && i < len(messages); i++ {
			toDeleteIDs[messages[i].ID] = true
		}
	}

	// Delete marked messages (best-effort)
	for id := range toDeleteIDs {
	// Delete oldest messages (best-effort)
	for i := 0; i < toDelete && i < len(messages); i++ {
		// Use close instead of delete for audit trail
		_, _ = b.run("close", id, "--reason=channel retention pruning")
		_, _ = b.run("close", messages[i].ID, "--reason=channel retention pruning")
	}

	return nil
@@ -454,8 +435,7 @@ func (b *Beads) EnforceChannelRetention(name string) error {

// PruneAllChannels enforces retention on all channels.
// Called by Deacon patrol as a backup cleanup mechanism.
// Enforces both count-based (RetentionCount) and time-based (RetentionHours) limits.
// Uses a 10% buffer for count-based pruning to avoid thrashing.
// Uses a 10% buffer to avoid thrashing (only prunes if count > retainCount * 1.1).
func (b *Beads) PruneAllChannels() (int, error) {
	channels, err := b.ListChannelBeads()
	if err != nil {
@@ -464,62 +444,38 @@ func (b *Beads) PruneAllChannels() (int, error) {

	pruned := 0
	for name, fields := range channels {
		// Skip if no retention limits configured
		if fields.RetentionCount <= 0 && fields.RetentionHours <= 0 {
		if fields.RetentionCount <= 0 {
			continue
		}

		// Get messages with timestamps
		// Count messages
		out, err := b.run("list",
			"--type=message",
			"--label=channel:"+name,
			"--json",
			"--limit=0",
			"--sort=created",
		)
		if err != nil {
			continue // Skip on error
		}

		var messages []struct {
			ID        string `json:"id"`
			CreatedAt string `json:"created_at"`
			ID string `json:"id"`
		}
		if err := json.Unmarshal(out, &messages); err != nil {
			continue
		}

		// Track which messages to delete (use map to avoid duplicates)
		toDeleteIDs := make(map[string]bool)

		// Time-based retention: delete messages older than RetentionHours
		if fields.RetentionHours > 0 {
			cutoff := time.Now().Add(-time.Duration(fields.RetentionHours) * time.Hour)
			for _, msg := range messages {
				createdAt, err := time.Parse(time.RFC3339, msg.CreatedAt)
				if err != nil {
					continue // Skip messages with unparseable timestamps
				}
				if createdAt.Before(cutoff) {
					toDeleteIDs[msg.ID] = true
				}
			}
		// 10% buffer - only prune if significantly over limit
		threshold := int(float64(fields.RetentionCount) * 1.1)
		if len(messages) <= threshold {
			continue
		}

		// Count-based retention with 10% buffer to avoid thrashing
		if fields.RetentionCount > 0 {
			threshold := int(float64(fields.RetentionCount) * 1.1)
			if len(messages) > threshold {
				toDeleteByCount := len(messages) - fields.RetentionCount
				for i := 0; i < toDeleteByCount && i < len(messages); i++ {
					toDeleteIDs[messages[i].ID] = true
				}
			}
		}

		// Delete marked messages
		for id := range toDeleteIDs {
			if _, err := b.run("close", id, "--reason=patrol retention pruning"); err == nil {
		// Prune down to exactly retainCount
		toDelete := len(messages) - fields.RetentionCount
		for i := 0; i < toDelete && i < len(messages); i++ {
			if _, err := b.run("close", messages[i].ID, "--reason=patrol retention pruning"); err == nil {
				pruned++
			}
		}
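The 10% hysteresis in `PruneAllChannels` is worth checking in isolation: prune only when a channel is more than 10% over its limit, then prune back down to exactly the limit. A sketch of that arithmetic (the `pruneCount` helper is hypothetical):

```go
package main

import "fmt"

// pruneCount returns how many messages the patrol pass should close:
// zero while the channel is within the 10% buffer, otherwise enough
// to bring it back down to exactly retain.
func pruneCount(messages, retain int) int {
	threshold := int(float64(retain) * 1.1)
	if messages <= threshold {
		return 0
	}
	return messages - retain
}

func main() {
	fmt.Println(pruneCount(105, 100)) // within the buffer → 0
	fmt.Println(pruneCount(120, 100)) // over the buffer → prunes 20
}
```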
@@ -4,6 +4,7 @@ package beads
import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

@@ -27,8 +28,7 @@ func (b *Beads) CreateDogAgentBead(name, location string) (*Issue, error) {
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -5,6 +5,7 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
@@ -182,8 +183,7 @@ func (b *Beads) CreateEscalationBead(title string, fields *EscalationFields) (*I
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -6,6 +6,7 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"strings"
	"time"
)
@@ -129,8 +130,7 @@ func (b *Beads) CreateGroupBead(name string, members []string, createdBy string)
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -5,6 +5,7 @@ import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"strconv"
	"strings"
)
@@ -179,8 +180,7 @@ func (b *Beads) CreateQueueBead(id, title string, fields *QueueFields) (*Issue,
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -4,6 +4,7 @@ package beads
import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

@@ -89,8 +90,7 @@ func (b *Beads) CreateRigBead(id, title string, fields *RigFields) (*Issue, erro
	}

	// Default actor from BD_ACTOR env var for provenance tracking
	// Uses getActor() to respect isolated mode (tests)
	if actor := b.getActor(); actor != "" {
	if actor := os.Getenv("BD_ACTOR"); actor != "" {
		args = append(args, "--actor="+actor)
	}

@@ -1,11 +1,4 @@
// Package beads provides role bead management.
//
// DEPRECATED: Role beads are deprecated. Role definitions are now config-based.
// See internal/config/roles/*.toml and config-based-roles.md for the new system.
//
// This file is kept for backward compatibility with existing role beads but
// new code should use config.LoadRoleDefinition() instead of reading role beads.
// The daemon no longer uses role beads as of Phase 2 (config-based roles).
package beads

import (
@@ -13,12 +6,10 @@ import (
	"fmt"
)

// DEPRECATED: Role bead ID naming convention is no longer used.
// Role definitions are now config-based (internal/config/roles/*.toml).
// Role bead ID naming convention:
// Role beads are stored in town beads (~/.beads/) with hq- prefix.
//
// Role beads were stored in town beads (~/.beads/) with hq- prefix.
//
// Canonical format was: hq-<role>-role
// Canonical format: hq-<role>-role
//
// Examples:
//   - hq-mayor-role
@@ -28,8 +19,8 @@ import (
//   - hq-crew-role
//   - hq-polecat-role
//
// Legacy functions RoleBeadID() and RoleBeadIDTown() still work for
// backward compatibility but should not be used in new code.
// Use RoleBeadIDTown() to get canonical role bead IDs.
// The legacy RoleBeadID() function returns gt-<role>-role for backward compatibility.

// RoleBeadID returns the role bead ID for a given role type.
// Role beads define lifecycle configuration for each agent type.
@@ -76,9 +67,6 @@ func PolecatRoleBeadID() string {

// GetRoleConfig looks up a role bead and returns its parsed RoleConfig.
// Returns nil, nil if the role bead doesn't exist or has no config.
//
// Deprecated: Use config.LoadRoleDefinition() instead. Role definitions
// are now config-based, not stored as beads.
func (b *Beads) GetRoleConfig(roleBeadID string) (*RoleConfig, error) {
	issue, err := b.Show(roleBeadID)
	if err != nil {
@@ -106,9 +94,7 @@ func HasLabel(issue *Issue, label string) bool {
}

// RoleBeadDef defines a role bead's metadata.
//
// Deprecated: Role beads are no longer created. Role definitions are
// now config-based (internal/config/roles/*.toml).
// Used by gt install and gt doctor to create missing role beads.
type RoleBeadDef struct {
	ID    string // e.g., "hq-witness-role"
	Title string // e.g., "Witness Role"
@@ -116,9 +102,8 @@ type RoleBeadDef struct {
}

// AllRoleBeadDefs returns all role bead definitions.
//
// Deprecated: Role beads are no longer created by gt install or gt doctor.
// This function is kept for backward compatibility only.
// This is the single source of truth for role beads used by both
// gt install (initial creation) and gt doctor --fix (repair).
func AllRoleBeadDefs() []RoleBeadDef {
	return []RoleBeadDef{
		{

@@ -1812,19 +1812,18 @@ func TestSetupRedirect(t *testing.T) {
// 4. BUG: bd create fails with UNIQUE constraint
// 5. BUG: bd reopen fails with "issue not found" (tombstones are invisible)
func TestAgentBeadTombstoneBug(t *testing.T) {
	// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
	// ("sql: database is closed" during auto-flush). This blocks all tests
	// that need to create issues. See internal issue for tracking.
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()

	// Create isolated beads instance and initialize database
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)
	// Initialize beads database
	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("bd init: %v\n%s", err, output)
	}

	beadsDir := filepath.Join(tmpDir, ".beads")
	bd := New(beadsDir)

	agentID := "test-testrig-polecat-tombstone"

	// Step 1: Create agent bead
@@ -1897,14 +1896,18 @@ func TestAgentBeadTombstoneBug(t *testing.T) {
// TestAgentBeadCloseReopenWorkaround demonstrates the workaround for the tombstone bug:
// use Close instead of Delete, then Reopen works.
func TestAgentBeadCloseReopenWorkaround(t *testing.T) {
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)

	// Initialize beads database
	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("bd init: %v\n%s", err, output)
	}

	beadsDir := filepath.Join(tmpDir, ".beads")
	bd := New(beadsDir)

	agentID := "test-testrig-polecat-closereopen"

	// Step 1: Create agent bead
@@ -1954,14 +1957,18 @@ func TestAgentBeadCloseReopenWorkaround(t *testing.T) {
// TestCreateOrReopenAgentBead_ClosedBead tests that CreateOrReopenAgentBead
// successfully reopens a closed agent bead and updates its fields.
func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)

	// Initialize beads database
	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("bd init: %v\n%s", err, output)
	}

	beadsDir := filepath.Join(tmpDir, ".beads")
	bd := New(beadsDir)

	agentID := "test-testrig-polecat-lifecycle"

	// Simulate polecat lifecycle: spawn → nuke → respawn
@@ -1972,6 +1979,7 @@ func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
		Rig:        "testrig",
		AgentState: "spawning",
		HookBead:   "test-task-1",
		RoleBead:   "test-polecat-role",
	})
	if err != nil {
		t.Fatalf("Spawn 1 - CreateOrReopenAgentBead: %v", err)
@@ -1992,6 +2000,7 @@ func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
		Rig:        "testrig",
		AgentState: "spawning",
		HookBead:   "test-task-2", // Different task
		RoleBead:   "test-polecat-role",
	})
	if err != nil {
		t.Fatalf("Spawn 2 - CreateOrReopenAgentBead: %v", err)
@@ -2018,6 +2027,7 @@ func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
		Rig:        "testrig",
		AgentState: "spawning",
		HookBead:   "test-task-3",
		RoleBead:   "test-polecat-role",
	})
	if err != nil {
		t.Fatalf("Spawn 3 - CreateOrReopenAgentBead: %v", err)
@@ -2035,14 +2045,18 @@ func TestCreateOrReopenAgentBead_ClosedBead(t *testing.T) {
// fields to emulate delete --force --hard behavior. This ensures reopened agent
// beads don't have stale state from previous lifecycle.
func TestCloseAndClearAgentBead_FieldClearing(t *testing.T) {
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)

	// Initialize beads database
	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("bd init: %v\n%s", err, output)
	}

	beadsDir := filepath.Join(tmpDir, ".beads")
	bd := New(beadsDir)

	// Test cases for field clearing permutations
	tests := []struct {
		name string
@@ -2056,6 +2070,7 @@ func TestCloseAndClearAgentBead_FieldClearing(t *testing.T) {
		Rig:               "testrig",
		AgentState:        "running",
		HookBead:          "test-issue-123",
		RoleBead:          "test-polecat-role",
		CleanupStatus:     "clean",
		ActiveMR:          "test-mr-456",
		NotificationLevel: "normal",
@@ -2189,14 +2204,17 @@ func TestCloseAndClearAgentBead_FieldClearing(t *testing.T) {

// TestCloseAndClearAgentBead_NonExistent tests behavior when closing a non-existent agent bead.
func TestCloseAndClearAgentBead_NonExistent(t *testing.T) {
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)

	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("bd init: %v\n%s", err, output)
	}

	beadsDir := filepath.Join(tmpDir, ".beads")
	bd := New(beadsDir)

	// Attempt to close non-existent bead
	err := bd.CloseAndClearAgentBead("test-nonexistent-polecat-xyz", "should fail")

@@ -2208,14 +2226,17 @@ func TestCloseAndClearAgentBead_NonExistent(t *testing.T) {

// TestCloseAndClearAgentBead_AlreadyClosed tests behavior when closing an already-closed agent bead.
func TestCloseAndClearAgentBead_AlreadyClosed(t *testing.T) {
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tmpDir := t.TempDir()
	bd := NewIsolated(tmpDir)
	if err := bd.Init("test"); err != nil {
		t.Fatalf("bd init: %v", err)

	cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
	cmd.Dir = tmpDir
	if output, err := cmd.CombinedOutput(); err != nil {
|
||||
t.Fatalf("bd init: %v\n%s", err, output)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
bd := New(beadsDir)
|
||||
|
||||
agentID := "test-testrig-polecat-doubleclosed"
|
||||
|
||||
// Create agent bead
|
||||
@@ -2259,14 +2280,17 @@ func TestCloseAndClearAgentBead_AlreadyClosed(t *testing.T) {
|
||||
// TestCloseAndClearAgentBead_ReopenHasCleanState tests that reopening a closed agent bead
|
||||
// starts with clean state (no stale hook_bead, active_mr, etc.).
|
||||
func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
|
||||
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
bd := NewIsolated(tmpDir)
|
||||
if err := bd.Init("test"); err != nil {
|
||||
t.Fatalf("bd init: %v", err)
|
||||
|
||||
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
|
||||
cmd.Dir = tmpDir
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
t.Fatalf("bd init: %v\n%s", err, output)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
bd := New(beadsDir)
|
||||
|
||||
agentID := "test-testrig-polecat-cleanreopen"
|
||||
|
||||
// Step 1: Create agent with all fields populated
|
||||
@@ -2275,6 +2299,7 @@ func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
|
||||
Rig: "testrig",
|
||||
AgentState: "running",
|
||||
HookBead: "test-old-issue",
|
||||
RoleBead: "test-polecat-role",
|
||||
CleanupStatus: "clean",
|
||||
ActiveMR: "test-old-mr",
|
||||
NotificationLevel: "normal",
|
||||
@@ -2295,6 +2320,7 @@ func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
|
||||
Rig: "testrig",
|
||||
AgentState: "spawning",
|
||||
HookBead: "test-new-issue",
|
||||
RoleBead: "test-polecat-role",
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("CreateOrReopenAgentBead: %v", err)
|
||||
@@ -2322,14 +2348,17 @@ func TestCloseAndClearAgentBead_ReopenHasCleanState(t *testing.T) {
|
||||
|
||||
// TestCloseAndClearAgentBead_ReasonVariations tests close with different reason values.
|
||||
func TestCloseAndClearAgentBead_ReasonVariations(t *testing.T) {
|
||||
t.Skip("bd CLI 0.47.2 bug: database writes don't commit")
|
||||
|
||||
tmpDir := t.TempDir()
|
||||
bd := NewIsolated(tmpDir)
|
||||
if err := bd.Init("test"); err != nil {
|
||||
t.Fatalf("bd init: %v", err)
|
||||
|
||||
cmd := exec.Command("bd", "--no-daemon", "init", "--prefix", "test", "--quiet")
|
||||
cmd.Dir = tmpDir
|
||||
if output, err := cmd.CombinedOutput(); err != nil {
|
||||
t.Fatalf("bd init: %v\n%s", err, output)
|
||||
}
|
||||
|
||||
beadsDir := filepath.Join(tmpDir, ".beads")
|
||||
bd := New(beadsDir)
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
reason string
|
||||
|
||||
@@ -1,130 +0,0 @@
// Package beads provides custom type management for agent beads.
package beads

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"sync"

	"github.com/steveyegge/gastown/internal/constants"
)

// typesSentinel is a marker file indicating custom types have been configured.
// This persists across CLI invocations to avoid redundant bd config calls.
const typesSentinel = ".gt-types-configured"

// ensuredDirs tracks which beads directories have been ensured this session.
// This provides fast in-memory caching for multiple creates in the same CLI run.
var (
	ensuredDirs = make(map[string]bool)
	ensuredMu   sync.Mutex
)

// FindTownRoot walks up from startDir to find the Gas Town root directory.
// The town root is identified by the presence of mayor/town.json.
// Returns empty string if not found (reached filesystem root).
func FindTownRoot(startDir string) string {
	dir := startDir
	for {
		townFile := filepath.Join(dir, "mayor", "town.json")
		if _, err := os.Stat(townFile); err == nil {
			return dir
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return "" // Reached filesystem root
		}
		dir = parent
	}
}

// ResolveRoutingTarget determines which beads directory a bead ID will route to.
// It extracts the prefix from the bead ID and looks up the corresponding route.
// Returns the resolved beads directory path, following any redirects.
//
// If townRoot is empty or prefix is not found, falls back to the provided fallbackDir.
func ResolveRoutingTarget(townRoot, beadID, fallbackDir string) string {
	if townRoot == "" {
		return fallbackDir
	}

	// Extract prefix from bead ID (e.g., "gt-gastown-polecat-Toast" -> "gt-")
	prefix := ExtractPrefix(beadID)
	if prefix == "" {
		return fallbackDir
	}

	// Look up rig path for this prefix
	rigPath := GetRigPathForPrefix(townRoot, prefix)
	if rigPath == "" {
		return fallbackDir
	}

	// Resolve redirects and get final beads directory
	beadsDir := ResolveBeadsDir(rigPath)
	if beadsDir == "" {
		return fallbackDir
	}

	return beadsDir
}

// EnsureCustomTypes ensures the target beads directory has custom types configured.
// Uses a two-level caching strategy:
//   - In-memory cache for multiple creates in the same CLI invocation
//   - Sentinel file on disk for persistence across CLI invocations
//
// This function is thread-safe and idempotent.
func EnsureCustomTypes(beadsDir string) error {
	if beadsDir == "" {
		return fmt.Errorf("empty beads directory")
	}

	ensuredMu.Lock()
	defer ensuredMu.Unlock()

	// Fast path: in-memory cache (same CLI invocation)
	if ensuredDirs[beadsDir] {
		return nil
	}

	// Fast path: sentinel file exists (previous CLI invocation)
	sentinelPath := filepath.Join(beadsDir, typesSentinel)
	if _, err := os.Stat(sentinelPath); err == nil {
		ensuredDirs[beadsDir] = true
		return nil
	}

	// Verify beads directory exists
	if _, err := os.Stat(beadsDir); os.IsNotExist(err) {
		return fmt.Errorf("beads directory does not exist: %s", beadsDir)
	}

	// Configure custom types via bd CLI
	typesList := strings.Join(constants.BeadsCustomTypesList(), ",")
	cmd := exec.Command("bd", "config", "set", "types.custom", typesList)
	cmd.Dir = beadsDir
	cmd.Env = append(os.Environ(), "BEADS_DIR="+beadsDir)
	if output, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("configure custom types in %s: %s: %w",
			beadsDir, strings.TrimSpace(string(output)), err)
	}

	// Write sentinel file (best effort - don't fail if this fails)
	// The sentinel contains a version marker for future compatibility
	_ = os.WriteFile(sentinelPath, []byte("v1\n"), 0644)

	ensuredDirs[beadsDir] = true
	return nil
}

// ResetEnsuredDirs clears the in-memory cache of ensured directories.
// This is primarily useful for testing.
func ResetEnsuredDirs() {
	ensuredMu.Lock()
	defer ensuredMu.Unlock()
	ensuredDirs = make(map[string]bool)
}
@@ -1,234 +0,0 @@
package beads

import (
	"os"
	"path/filepath"
	"testing"
)

func TestFindTownRoot(t *testing.T) {
	// Create a temporary town structure
	tmpDir := t.TempDir()
	mayorDir := filepath.Join(tmpDir, "mayor")
	if err := os.MkdirAll(mayorDir, 0755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(mayorDir, "town.json"), []byte("{}"), 0644); err != nil {
		t.Fatal(err)
	}

	// Create nested directories
	deepDir := filepath.Join(tmpDir, "rig1", "crew", "worker1")
	if err := os.MkdirAll(deepDir, 0755); err != nil {
		t.Fatal(err)
	}

	tests := []struct {
		name     string
		startDir string
		expected string
	}{
		{"from town root", tmpDir, tmpDir},
		{"from mayor dir", mayorDir, tmpDir},
		{"from deep nested dir", deepDir, tmpDir},
		{"from non-town dir", t.TempDir(), ""},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			result := FindTownRoot(tc.startDir)
			if result != tc.expected {
				t.Errorf("FindTownRoot(%q) = %q, want %q", tc.startDir, result, tc.expected)
			}
		})
	}
}

func TestResolveRoutingTarget(t *testing.T) {
	// Create a temporary town with routes
	tmpDir := t.TempDir()
	beadsDir := filepath.Join(tmpDir, ".beads")
	if err := os.MkdirAll(beadsDir, 0755); err != nil {
		t.Fatal(err)
	}

	// Create mayor/town.json for FindTownRoot
	mayorDir := filepath.Join(tmpDir, "mayor")
	if err := os.MkdirAll(mayorDir, 0755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(mayorDir, "town.json"), []byte("{}"), 0644); err != nil {
		t.Fatal(err)
	}

	// Create routes.jsonl
	routesContent := `{"prefix": "gt-", "path": "gastown/mayor/rig"}
{"prefix": "hq-", "path": "."}
`
	if err := os.WriteFile(filepath.Join(beadsDir, "routes.jsonl"), []byte(routesContent), 0644); err != nil {
		t.Fatal(err)
	}

	// Create the rig beads directory
	rigBeadsDir := filepath.Join(tmpDir, "gastown", "mayor", "rig", ".beads")
	if err := os.MkdirAll(rigBeadsDir, 0755); err != nil {
		t.Fatal(err)
	}

	fallback := "/fallback/.beads"

	tests := []struct {
		name     string
		townRoot string
		beadID   string
		expected string
	}{
		{
			name:     "rig-level bead routes to rig",
			townRoot: tmpDir,
			beadID:   "gt-gastown-polecat-Toast",
			expected: rigBeadsDir,
		},
		{
			name:     "town-level bead routes to town",
			townRoot: tmpDir,
			beadID:   "hq-mayor",
			expected: beadsDir,
		},
		{
			name:     "unknown prefix falls back",
			townRoot: tmpDir,
			beadID:   "xx-unknown",
			expected: fallback,
		},
		{
			name:     "empty townRoot falls back",
			townRoot: "",
			beadID:   "gt-gastown-polecat-Toast",
			expected: fallback,
		},
		{
			name:     "no prefix falls back",
			townRoot: tmpDir,
			beadID:   "noprefixid",
			expected: fallback,
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			result := ResolveRoutingTarget(tc.townRoot, tc.beadID, fallback)
			if result != tc.expected {
				t.Errorf("ResolveRoutingTarget(%q, %q, %q) = %q, want %q",
					tc.townRoot, tc.beadID, fallback, result, tc.expected)
			}
		})
	}
}

func TestEnsureCustomTypes(t *testing.T) {
	// Reset the in-memory cache before testing
	ResetEnsuredDirs()

	t.Run("empty beads dir returns error", func(t *testing.T) {
		err := EnsureCustomTypes("")
		if err == nil {
			t.Error("expected error for empty beads dir")
		}
	})

	t.Run("non-existent beads dir returns error", func(t *testing.T) {
		err := EnsureCustomTypes("/nonexistent/path/.beads")
		if err == nil {
			t.Error("expected error for non-existent beads dir")
		}
	})

	t.Run("sentinel file triggers cache hit", func(t *testing.T) {
		tmpDir := t.TempDir()
		beadsDir := filepath.Join(tmpDir, ".beads")
		if err := os.MkdirAll(beadsDir, 0755); err != nil {
			t.Fatal(err)
		}

		// Create sentinel file
		sentinelPath := filepath.Join(beadsDir, typesSentinel)
		if err := os.WriteFile(sentinelPath, []byte("v1\n"), 0644); err != nil {
			t.Fatal(err)
		}

		// Reset cache to ensure we're testing sentinel detection
		ResetEnsuredDirs()

		// This should succeed without running bd (sentinel exists)
		err := EnsureCustomTypes(beadsDir)
		if err != nil {
			t.Errorf("expected success with sentinel file, got: %v", err)
		}
	})

	t.Run("in-memory cache prevents repeated calls", func(t *testing.T) {
		tmpDir := t.TempDir()
		beadsDir := filepath.Join(tmpDir, ".beads")
		if err := os.MkdirAll(beadsDir, 0755); err != nil {
			t.Fatal(err)
		}

		// Create sentinel to avoid bd call
		sentinelPath := filepath.Join(beadsDir, typesSentinel)
		if err := os.WriteFile(sentinelPath, []byte("v1\n"), 0644); err != nil {
			t.Fatal(err)
		}

		ResetEnsuredDirs()

		// First call
		if err := EnsureCustomTypes(beadsDir); err != nil {
			t.Fatal(err)
		}

		// Remove sentinel - second call should still succeed due to in-memory cache
		os.Remove(sentinelPath)

		if err := EnsureCustomTypes(beadsDir); err != nil {
			t.Errorf("expected cache hit, got: %v", err)
		}
	})
}

func TestBeads_getTownRoot(t *testing.T) {
	// Create a temporary town
	tmpDir := t.TempDir()
	mayorDir := filepath.Join(tmpDir, "mayor")
	if err := os.MkdirAll(mayorDir, 0755); err != nil {
		t.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(mayorDir, "town.json"), []byte("{}"), 0644); err != nil {
		t.Fatal(err)
	}

	// Create nested directory
	rigDir := filepath.Join(tmpDir, "myrig", "mayor", "rig")
	if err := os.MkdirAll(rigDir, 0755); err != nil {
		t.Fatal(err)
	}

	b := New(rigDir)

	// First call should find town root
	root1 := b.getTownRoot()
	if root1 != tmpDir {
		t.Errorf("first getTownRoot() = %q, want %q", root1, tmpDir)
	}

	// Second call should return cached value
	root2 := b.getTownRoot()
	if root2 != root1 {
		t.Errorf("second getTownRoot() = %q, want cached %q", root2, root1)
	}

	// Verify searchedRoot flag is set
	if !b.searchedRoot {
		t.Error("expected searchedRoot to be true after getTownRoot()")
	}
}
@@ -158,7 +158,6 @@ func (b *Beads) AttachMolecule(pinnedBeadID, moleculeID string) (*Issue, error)
	return nil, fmt.Errorf("fetching pinned bead: %w", err)
}

// Only allow pinned beads (permanent records like role definitions)
if issue.Status != StatusPinned {
	return nil, fmt.Errorf("issue %s is not pinned (status: %s)", pinnedBeadID, issue.Status)
}

@@ -160,10 +160,9 @@ func (b *Boot) Spawn(agentOverride string) error {

// spawnTmux spawns Boot in a tmux session.
func (b *Boot) spawnTmux(agentOverride string) error {
	// Kill any stale session first.
	// Use KillSessionWithProcesses to ensure all descendant processes are killed.
	// Kill any stale session first
	if b.IsSessionAlive() {
		_ = b.tmux.KillSessionWithProcesses(SessionName)
		_ = b.tmux.KillSession(SessionName)
	}

	// Ensure boot directory exists (it should have CLAUDE.md with Boot context)
@@ -3,42 +3,13 @@
	"beads@beads-marketplace": false
},
"hooks": {
	"PreToolUse": [
		{
			"matcher": "Bash(gh pr create*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		},
		{
			"matcher": "Bash(git checkout -b*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		},
		{
			"matcher": "Bash(git switch -c*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		}
	],
	"SessionStart": [
		{
			"matcher": "",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime --hook && gt mail check --inject && gt nudge deacon session-started"
					"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime && gt mail check --inject && gt nudge deacon session-started"
				}
			]
		}
@@ -49,7 +20,7 @@
	"hooks": [
		{
			"type": "command",
			"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime --hook"
			"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime"
		}
	]
}
@@ -3,42 +3,13 @@
	"beads@beads-marketplace": false
},
"hooks": {
	"PreToolUse": [
		{
			"matcher": "Bash(gh pr create*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		},
		{
			"matcher": "Bash(git checkout -b*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		},
		{
			"matcher": "Bash(git switch -c*)",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/.local/bin:$PATH\" && gt tap guard pr-workflow"
				}
			]
		}
	],
	"SessionStart": [
		{
			"matcher": "",
			"hooks": [
				{
					"type": "command",
					"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime --hook && gt nudge deacon session-started"
					"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime && gt nudge deacon session-started"
				}
			]
		}
@@ -49,7 +20,7 @@
	"hooks": [
		{
			"type": "command",
			"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime --hook"
			"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt prime"
		}
	]
}
@@ -3,8 +3,6 @@ package cmd
import (
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"testing"
	"time"

@@ -56,33 +54,15 @@ func setupTestTownForAccount(t *testing.T) (townRoot string, accountsDir string)
	return townRoot, accountsDir
}

func setTestHome(t *testing.T, fakeHome string) {
	t.Helper()

	t.Setenv("HOME", fakeHome)

	if runtime.GOOS != "windows" {
		return
	}

	t.Setenv("USERPROFILE", fakeHome)

	drive := filepath.VolumeName(fakeHome)
	if drive == "" {
		return
	}

	t.Setenv("HOMEDRIVE", drive)
	t.Setenv("HOMEPATH", strings.TrimPrefix(fakeHome, drive))
}

func TestAccountSwitch(t *testing.T) {
	t.Run("switch between accounts", func(t *testing.T) {
		townRoot, accountsDir := setupTestTownForAccount(t)

		// Create fake home directory for ~/.claude
		fakeHome := t.TempDir()
		setTestHome(t, fakeHome)
		originalHome := os.Getenv("HOME")
		os.Setenv("HOME", fakeHome)
		defer os.Setenv("HOME", originalHome)

		// Create account config directories
		workConfigDir := filepath.Join(accountsDir, "work")

@@ -153,7 +133,9 @@ func TestAccountSwitch(t *testing.T) {
		townRoot, accountsDir := setupTestTownForAccount(t)

		fakeHome := t.TempDir()
		setTestHome(t, fakeHome)
		originalHome := os.Getenv("HOME")
		os.Setenv("HOME", fakeHome)
		defer os.Setenv("HOME", originalHome)

		workConfigDir := filepath.Join(accountsDir, "work")
		if err := os.MkdirAll(workConfigDir, 0755); err != nil {

@@ -204,7 +186,9 @@ func TestAccountSwitch(t *testing.T) {
		townRoot, accountsDir := setupTestTownForAccount(t)

		fakeHome := t.TempDir()
		setTestHome(t, fakeHome)
		originalHome := os.Getenv("HOME")
		os.Setenv("HOME", fakeHome)
		defer os.Setenv("HOME", originalHome)

		workConfigDir := filepath.Join(accountsDir, "work")
		if err := os.MkdirAll(workConfigDir, 0755); err != nil {

@@ -240,7 +224,9 @@ func TestAccountSwitch(t *testing.T) {
		townRoot, accountsDir := setupTestTownForAccount(t)

		fakeHome := t.TempDir()
		setTestHome(t, fakeHome)
		originalHome := os.Getenv("HOME")
		os.Setenv("HOME", fakeHome)
		defer os.Setenv("HOME", originalHome)

		workConfigDir := filepath.Join(accountsDir, "work")
		personalConfigDir := filepath.Join(accountsDir, "personal")
@@ -13,7 +13,6 @@ import (

var beadCmd = &cobra.Command{
	Use:     "bead",
	Aliases: []string{"bd"},
	GroupID: GroupWork,
	Short:   "Bead management utilities",
	Long:    `Utilities for managing beads across repositories.`,

@@ -58,29 +57,10 @@ Examples:
	},
}

var beadReadCmd = &cobra.Command{
	Use:   "read <bead-id> [flags]",
	Short: "Show details of a bead (alias for 'show')",
	Long: `Displays the full details of a bead by ID.

This is an alias for 'gt bead show'. All bd show flags are supported.

Examples:
  gt bead read gt-abc123        # Show a gastown issue
  gt bead read hq-xyz789        # Show a town-level bead
  gt bead read bd-def456        # Show a beads issue
  gt bead read gt-abc123 --json # Output as JSON`,
	DisableFlagParsing: true, // Pass all flags through to bd show
	RunE: func(cmd *cobra.Command, args []string) error {
		return runShow(cmd, args)
	},
}

func init() {
	beadMoveCmd.Flags().BoolVarP(&beadMoveDryRun, "dry-run", "n", false, "Show what would be done")
	beadCmd.AddCommand(beadMoveCmd)
	beadCmd.AddCommand(beadShowCmd)
	beadCmd.AddCommand(beadReadCmd)
	rootCmd.AddCommand(beadCmd)
}
@@ -301,10 +301,9 @@ func runDegradedTriage(b *boot.Boot) (action, target string, err error) {
	// Nudge the session to try to wake it up
	age := hb.Age()
	if age > 30*time.Minute {
		// Very stuck - restart the session.
		// Use KillSessionWithProcesses to ensure all descendant processes are killed.
		// Very stuck - restart the session
		fmt.Printf("Deacon heartbeat is %s old - restarting session\n", age.Round(time.Minute))
		if err := tm.KillSessionWithProcesses(deaconSession); err == nil {
		if err := tm.KillSession(deaconSession); err == nil {
			return "restart", "deacon-stuck", nil
		}
	} else {
@@ -3,7 +3,6 @@ package cmd
import (
	"os"
	"os/exec"
	"strings"

	"github.com/spf13/cobra"
)

@@ -21,7 +20,6 @@ Examples:
  gt close gt-abc           # Close bead gt-abc
  gt close gt-abc gt-def    # Close multiple beads
  gt close --reason "Done"  # Close with reason
  gt close --comment "Done" # Same as --reason (alias)
  gt close --force          # Force close pinned beads`,
	DisableFlagParsing: true, // Pass all flags through to bd close
	RunE:               runClose,

@@ -32,20 +30,8 @@ func init() {
}

func runClose(cmd *cobra.Command, args []string) error {
	// Convert --comment to --reason (alias support)
	convertedArgs := make([]string, len(args))
	for i, arg := range args {
		if arg == "--comment" {
			convertedArgs[i] = "--reason"
		} else if strings.HasPrefix(arg, "--comment=") {
			convertedArgs[i] = "--reason=" + strings.TrimPrefix(arg, "--comment=")
		} else {
			convertedArgs[i] = arg
		}
	}

	// Build bd close command with all args passed through
	bdArgs := append([]string{"close"}, convertedArgs...)
	bdArgs := append([]string{"close"}, args...)
	bdCmd := exec.Command("bd", bdArgs...)
	bdCmd.Stdin = os.Stdin
	bdCmd.Stdout = os.Stdout
@@ -73,7 +73,6 @@ var (
	convoyStrandedJSON bool
	convoyCloseReason  string
	convoyCloseNotify  string
	convoyCheckDryRun  bool
)

var convoyCmd = &cobra.Command{

@@ -178,22 +177,14 @@ Examples:
}

var convoyCheckCmd = &cobra.Command{
	Use:   "check [convoy-id]",
	Use:   "check",
	Short: "Check and auto-close completed convoys",
	Long: `Check convoys and auto-close any where all tracked issues are complete.

Without arguments, checks all open convoys. With a convoy ID, checks only that convoy.
	Long: `Check all open convoys and auto-close any where all tracked issues are complete.

This handles cross-rig convoy completion: convoys in town beads tracking issues
in rig beads won't auto-close via bd close alone. This command bridges that gap.

Can be run manually or by deacon patrol to ensure convoys close promptly.

Examples:
  gt convoy check            # Check all open convoys
  gt convoy check hq-cv-abc  # Check specific convoy
  gt convoy check --dry-run  # Preview what would close without acting`,
	Args: cobra.MaximumNArgs(1),
Can be run manually or by deacon patrol to ensure convoys close promptly.`,
	RunE: runConvoyCheck,
}

@@ -257,9 +248,6 @@ func init() {
	// Interactive TUI flag (on parent command)
	convoyCmd.Flags().BoolVarP(&convoyInteractive, "interactive", "i", false, "Interactive tree view")

	// Check flags
	convoyCheckCmd.Flags().BoolVar(&convoyCheckDryRun, "dry-run", false, "Preview what would close without acting")

	// Stranded flags
	convoyStrandedCmd.Flags().BoolVar(&convoyStrandedJSON, "json", false, "Output as JSON")

@@ -311,14 +299,8 @@ func runConvoyCreate(cmd *cobra.Command, args []string) error {

	// Create convoy issue in town beads
	description := fmt.Sprintf("Convoy tracking %d issues", len(trackedIssues))

	// Default owner to creator identity if not specified
	owner := convoyOwner
	if owner == "" {
		owner = detectSender()
	}
	if owner != "" {
		description += fmt.Sprintf("\nOwner: %s", owner)
	if convoyOwner != "" {
		description += fmt.Sprintf("\nOwner: %s", convoyOwner)
	}
	if convoyNotify != "" {
		description += fmt.Sprintf("\nNotify: %s", convoyNotify)

@@ -383,8 +365,8 @@ func runConvoyCreate(cmd *cobra.Command, args []string) error {
	if len(trackedIssues) > 0 {
		fmt.Printf("  Issues: %s\n", strings.Join(trackedIssues, ", "))
	}
	if owner != "" {
		fmt.Printf("  Owner: %s\n", owner)
	if convoyOwner != "" {
		fmt.Printf("  Owner: %s\n", convoyOwner)
	}
	if convoyNotify != "" {
		fmt.Printf("  Notify: %s\n", convoyNotify)

@@ -490,14 +472,7 @@ func runConvoyCheck(cmd *cobra.Command, args []string) error {
		return err
	}

	// If a specific convoy ID is provided, check only that convoy
	if len(args) == 1 {
		convoyID := args[0]
		return checkSingleConvoy(townBeads, convoyID, convoyCheckDryRun)
	}

	// Check all open convoys
	closed, err := checkAndCloseCompletedConvoys(townBeads, convoyCheckDryRun)
	closed, err := checkAndCloseCompletedConvoys(townBeads)
	if err != nil {
		return err
	}

@@ -505,11 +480,7 @@ func runConvoyCheck(cmd *cobra.Command, args []string) error {
	if len(closed) == 0 {
		fmt.Println("No convoys ready to close.")
	} else {
		if convoyCheckDryRun {
			fmt.Printf("%s Would auto-close %d convoy(s):\n", style.Warning.Render("⚠"), len(closed))
		} else {
			fmt.Printf("%s Auto-closed %d convoy(s):\n", style.Bold.Render("✓"), len(closed))
		}
		fmt.Printf("%s Auto-closed %d convoy(s):\n", style.Bold.Render("✓"), len(closed))
		for _, c := range closed {
			fmt.Printf("  🚚 %s: %s\n", c.ID, c.Title)
		}

@@ -518,92 +489,6 @@ func runConvoyCheck(cmd *cobra.Command, args []string) error {
	return nil
}

// checkSingleConvoy checks a specific convoy and closes it if all tracked issues are complete.
func checkSingleConvoy(townBeads, convoyID string, dryRun bool) error {
	// Get convoy details
	showArgs := []string{"show", convoyID, "--json"}
	showCmd := exec.Command("bd", showArgs...)
	showCmd.Dir = townBeads
	var stdout bytes.Buffer
	showCmd.Stdout = &stdout

	if err := showCmd.Run(); err != nil {
		return fmt.Errorf("convoy '%s' not found", convoyID)
	}

	var convoys []struct {
		ID          string `json:"id"`
		Title       string `json:"title"`
		Status      string `json:"status"`
		Type        string `json:"issue_type"`
		Description string `json:"description"`
	}
	if err := json.Unmarshal(stdout.Bytes(), &convoys); err != nil {
		return fmt.Errorf("parsing convoy data: %w", err)
	}

	if len(convoys) == 0 {
		return fmt.Errorf("convoy '%s' not found", convoyID)
	}

	convoy := convoys[0]

	// Verify it's actually a convoy type
	if convoy.Type != "convoy" {
		return fmt.Errorf("'%s' is not a convoy (type: %s)", convoyID, convoy.Type)
	}

	// Check if convoy is already closed
	if convoy.Status == "closed" {
		fmt.Printf("%s Convoy %s is already closed\n", style.Dim.Render("○"), convoyID)
		return nil
	}

	// Get tracked issues
	tracked := getTrackedIssues(townBeads, convoyID)
	if len(tracked) == 0 {
		fmt.Printf("%s Convoy %s has no tracked issues\n", style.Dim.Render("○"), convoyID)
		return nil
	}

	// Check if all tracked issues are closed
	allClosed := true
	openCount := 0
	for _, t := range tracked {
		if t.Status != "closed" && t.Status != "tombstone" {
			allClosed = false
			openCount++
		}
	}

	if !allClosed {
		fmt.Printf("%s Convoy %s has %d open issue(s) remaining\n", style.Dim.Render("○"), convoyID, openCount)
		return nil
	}

	// All tracked issues are complete - close the convoy
	if dryRun {
		fmt.Printf("%s Would auto-close convoy 🚚 %s: %s\n", style.Warning.Render("⚠"), convoyID, convoy.Title)
		return nil
	}

	// Actually close the convoy
	closeArgs := []string{"close", convoyID, "-r", "All tracked issues completed"}
	closeCmd := exec.Command("bd", closeArgs...)
	closeCmd.Dir = townBeads

	if err := closeCmd.Run(); err != nil {
		return fmt.Errorf("closing convoy: %w", err)
	}

	fmt.Printf("%s Auto-closed convoy 🚚 %s: %s\n", style.Bold.Render("✓"), convoyID, convoy.Title)

	// Send completion notification
	notifyConvoyCompletion(townBeads, convoyID, convoy.Title)

	return nil
}

func runConvoyClose(cmd *cobra.Command, args []string) error {
	convoyID := args[0]

@@ -870,9 +755,8 @@ func isReadyIssue(t trackedIssueInfo, blockedIssues map[string]bool) bool {
}

// checkAndCloseCompletedConvoys finds open convoys where all tracked issues are closed
|
||||
// and auto-closes them. Returns the list of convoys that were closed (or would be closed in dry-run mode).
|
||||
// If dryRun is true, no changes are made and the function returns what would have been closed.
|
||||
func checkAndCloseCompletedConvoys(townBeads string, dryRun bool) ([]struct{ ID, Title string }, error) {
|
||||
// and auto-closes them. Returns the list of convoys that were closed.
|
||||
func checkAndCloseCompletedConvoys(townBeads string) ([]struct{ ID, Title string }, error) {
|
||||
var closed []struct{ ID, Title string }
|
||||
|
||||
// List all open convoys
|
||||
@@ -911,12 +795,6 @@ func checkAndCloseCompletedConvoys(townBeads string, dryRun bool) ([]struct{ ID,
|
||||
}
|
||||
|
||||
if allClosed {
|
||||
if dryRun {
|
||||
// In dry-run mode, just record what would be closed
|
||||
closed = append(closed, struct{ ID, Title string }{convoy.ID, convoy.Title})
|
||||
continue
|
||||
}
|
||||
|
||||
// Close the convoy
|
||||
closeArgs := []string{"close", convoy.ID, "-r", "All tracked issues completed"}
|
||||
closeCmd := exec.Command("bd", closeArgs...)
|
||||
|
||||
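The convoy lookup above shells out to `bd show --json` and decodes the output into an anonymous struct slice. A minimal standalone sketch of that decoding step, using a hypothetical sample payload (the real CLI's output may carry more fields; only the keys named in the diff are shown):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sample is a hypothetical `bd show <id> --json` payload - an assumption
// for illustration; only the keys used in checkSingleConvoy appear here.
const sample = `[{"id":"hq-cv1","title":"Ship v2","status":"open","issue_type":"convoy"}]`

// convoyInfo mirrors the anonymous struct used in checkSingleConvoy.
type convoyInfo struct {
	ID     string `json:"id"`
	Title  string `json:"title"`
	Status string `json:"status"`
	Type   string `json:"issue_type"`
}

// decodeConvoys unmarshals the JSON array and wraps any parse error,
// matching the error style used in the diff.
func decodeConvoys(data []byte) ([]convoyInfo, error) {
	var convoys []convoyInfo
	if err := json.Unmarshal(data, &convoys); err != nil {
		return nil, fmt.Errorf("parsing convoy data: %w", err)
	}
	return convoys, nil
}

func main() {
	convoys, err := decodeConvoys([]byte(sample))
	if err != nil {
		panic(err)
	}
	fmt.Println(convoys[0].ID, convoys[0].Type) // hq-cv1 convoy
}
```

Decoding into a slice matters because `bd show --json` evidently returns a JSON array even for a single ID, which is why the code checks `len(convoys) == 0` before taking `convoys[0]`.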
@@ -37,11 +37,6 @@ func filterGTEnv(env []string) []string {
// 2. Creates session.ended events in both town and rig beads
// 3. Verifies querySessionEvents finds events from both locations
func TestQuerySessionEvents_FindsEventsFromAllLocations(t *testing.T) {
	// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
	// ("sql: database is closed" during auto-flush). This affects all tests
	// that create issues via bd create. See gt-lnn1xn for tracking.
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	// Skip if gt and bd are not installed
	if _, err := exec.LookPath("gt"); err != nil {
		t.Skip("gt not installed, skipping integration test")

@@ -106,6 +106,7 @@ func runCrewAdd(cmd *cobra.Command, args []string) error {
		RoleType:   "crew",
		Rig:        rigName,
		AgentState: "idle",
		RoleBead:   beads.RoleBeadIDTown("crew"),
	}
	desc := fmt.Sprintf("Crew worker %s in %s - human-managed persistent workspace.", name, rigName)
	if _, err := bd.CreateAgentBead(crewID, desc, fields); err != nil {

@@ -3,7 +3,6 @@ package cmd
import (
	"fmt"
	"os"
	"strings"

	"github.com/spf13/cobra"
	"github.com/steveyegge/gastown/internal/config"
@@ -16,9 +15,6 @@ import (
	"github.com/steveyegge/gastown/internal/workspace"
)

// crewAtRetried tracks if we've already retried after stale session cleanup
var crewAtRetried bool

func runCrewAt(cmd *cobra.Command, args []string) error {
	var name string

@@ -213,10 +209,6 @@ func runCrewAt(cmd *cobra.Command, args []string) error {
	if runtimeConfig.Session != nil && runtimeConfig.Session.ConfigDirEnv != "" && claudeConfigDir != "" {
		startupCmd = config.PrependEnv(startupCmd, map[string]string{runtimeConfig.Session.ConfigDirEnv: claudeConfigDir})
	}
	// Note: Don't call KillPaneProcesses here - this is a NEW session with just
	// a fresh shell. Killing it would destroy the pane before we can respawn.
	// KillPaneProcesses is only needed when restarting in an EXISTING session
	// where Claude/Node processes might be running and ignoring SIGHUP.
	if err := t.RespawnPane(paneID, startupCmd); err != nil {
		return fmt.Errorf("starting runtime: %w", err)
	}
@@ -260,26 +252,7 @@ func runCrewAt(cmd *cobra.Command, args []string) error {
	if runtimeConfig.Session != nil && runtimeConfig.Session.ConfigDirEnv != "" && claudeConfigDir != "" {
		startupCmd = config.PrependEnv(startupCmd, map[string]string{runtimeConfig.Session.ConfigDirEnv: claudeConfigDir})
	}
	// Kill all processes in the pane before respawning to prevent orphan leaks
	// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
	if err := t.KillPaneProcesses(paneID); err != nil {
		// Non-fatal but log the warning
		style.PrintWarning("could not kill pane processes: %v", err)
	}
	if err := t.RespawnPane(paneID, startupCmd); err != nil {
		// If pane is stale (session exists but pane doesn't), recreate the session
		if strings.Contains(err.Error(), "can't find pane") {
			if crewAtRetried {
				return fmt.Errorf("stale session persists after cleanup: %w", err)
			}
			fmt.Printf("Stale session detected, recreating...\n")
			if killErr := t.KillSession(sessionID); killErr != nil {
				return fmt.Errorf("failed to kill stale session: %w", killErr)
			}
			crewAtRetried = true
			defer func() { crewAtRetried = false }()
			return runCrewAt(cmd, args) // Retry with fresh session
		}
		return fmt.Errorf("restarting runtime: %w", err)
	}
}

@@ -28,12 +28,11 @@ func runCrewRename(cmd *cobra.Command, args []string) error {
		return err
	}

	// Kill any running session for the old name.
	// Use KillSessionWithProcesses to ensure all descendant processes are killed.
	// Kill any running session for the old name
	t := tmux.NewTmux()
	oldSessionID := crewSessionName(r.Name, oldName)
	if hasSession, _ := t.HasSession(oldSessionID); hasSession {
		if err := t.KillSessionWithProcesses(oldSessionID); err != nil {
		if err := t.KillSession(oldSessionID); err != nil {
			return fmt.Errorf("killing old session: %w", err)
		}
		fmt.Printf("Killed session %s\n", oldSessionID)

@@ -264,30 +264,6 @@ Example:
	RunE: runDeaconCleanupOrphans,
}

var deaconZombieScanCmd = &cobra.Command{
	Use:   "zombie-scan",
	Short: "Find and clean zombie Claude processes not in active tmux sessions",
	Long: `Find and clean zombie Claude processes not in active tmux sessions.

Unlike cleanup-orphans (which uses TTY detection), zombie-scan uses tmux
verification: it checks if each Claude process is in an active tmux session
by comparing against actual pane PIDs.

A process is a zombie if:
- It's a Claude/codex process
- It's NOT the pane PID of any active tmux session
- It's NOT a child of any pane PID
- It's older than 60 seconds

This catches "ghost" processes that have a TTY (from a dead tmux session)
but are no longer part of any active Gas Town session.

Examples:
  gt deacon zombie-scan            # Find and kill zombies
  gt deacon zombie-scan --dry-run  # Just list zombies, don't kill`,
	RunE: runDeaconZombieScan,
}

var (
	triggerTimeout time.Duration

@@ -306,9 +282,6 @@ var (

	// Pause flags
	pauseReason string

	// Zombie scan flags
	zombieScanDryRun bool
)

func init() {
@@ -326,7 +299,6 @@ func init() {
	deaconCmd.AddCommand(deaconPauseCmd)
	deaconCmd.AddCommand(deaconResumeCmd)
	deaconCmd.AddCommand(deaconCleanupOrphansCmd)
	deaconCmd.AddCommand(deaconZombieScanCmd)

	// Flags for trigger-pending
	deaconTriggerPendingCmd.Flags().DurationVar(&triggerTimeout, "timeout", 2*time.Second,
@@ -356,10 +328,6 @@ func init() {
	deaconPauseCmd.Flags().StringVar(&pauseReason, "reason", "",
		"Reason for pausing the Deacon")

	// Flags for zombie-scan
	deaconZombieScanCmd.Flags().BoolVar(&zombieScanDryRun, "dry-run", false,
		"List zombies without killing them")

	deaconStartCmd.Flags().StringVar(&deaconAgentOverride, "agent", "", "Agent alias to run the Deacon with (overrides town default)")
	deaconAttachCmd.Flags().StringVar(&deaconAgentOverride, "agent", "", "Agent alias to run the Deacon with (overrides town default)")
	deaconRestartCmd.Flags().StringVar(&deaconAgentOverride, "agent", "", "Agent alias to run the Deacon with (overrides town default)")
@@ -491,9 +459,8 @@ func runDeaconStop(cmd *cobra.Command, args []string) error {
	_ = t.SendKeysRaw(sessionName, "C-c")
	time.Sleep(100 * time.Millisecond)

	// Kill the session.
	// Use KillSessionWithProcesses to ensure all descendant processes are killed.
	if err := t.KillSessionWithProcesses(sessionName); err != nil {
	// Kill the session
	if err := t.KillSession(sessionName); err != nil {
		return fmt.Errorf("killing session: %w", err)
	}

@@ -593,9 +560,8 @@ func runDeaconRestart(cmd *cobra.Command, args []string) error {
	fmt.Println("Restarting Deacon...")

	if running {
		// Kill existing session.
		// Use KillSessionWithProcesses to ensure all descendant processes are killed.
		if err := t.KillSessionWithProcesses(sessionName); err != nil {
		// Kill existing session
		if err := t.KillSession(sessionName); err != nil {
			style.PrintWarning("failed to kill session: %v", err)
		}
	}
@@ -878,10 +844,9 @@ func runDeaconForceKill(cmd *cobra.Command, args []string) error {
	mailBody := fmt.Sprintf("Deacon detected %s as unresponsive.\nReason: %s\nAction: force-killing session", agent, reason)
	sendMail(townRoot, agent, "FORCE_KILL: unresponsive", mailBody)

	// Step 2: Kill the tmux session.
	// Use KillSessionWithProcesses to ensure all descendant processes are killed.
	// Step 2: Kill the tmux session
	fmt.Printf("%s Killing tmux session %s...\n", style.Dim.Render("2."), sessionName)
	if err := t.KillSessionWithProcesses(sessionName); err != nil {
	if err := t.KillSession(sessionName); err != nil {
		return fmt.Errorf("killing session: %w", err)
	}

@@ -1220,68 +1185,3 @@ func runDeaconCleanupOrphans(cmd *cobra.Command, args []string) error {

	return nil
}

// runDeaconZombieScan finds and cleans zombie Claude processes not in active tmux sessions.
func runDeaconZombieScan(cmd *cobra.Command, args []string) error {
	// Find zombies using tmux verification
	zombies, err := util.FindZombieClaudeProcesses()
	if err != nil {
		return fmt.Errorf("finding zombie processes: %w", err)
	}

	if len(zombies) == 0 {
		fmt.Printf("%s No zombie claude processes found\n", style.Dim.Render("○"))
		return nil
	}

	fmt.Printf("%s Found %d zombie claude process(es)\n", style.Bold.Render("●"), len(zombies))

	// In dry-run mode, just list them
	if zombieScanDryRun {
		for _, z := range zombies {
			ageStr := fmt.Sprintf("%dm", z.Age/60)
			fmt.Printf("  %s PID %d (%s) TTY=%s age=%s\n",
				style.Dim.Render("→"), z.PID, z.Cmd, z.TTY, ageStr)
		}
		fmt.Printf("%s Dry run - no processes killed\n", style.Dim.Render("○"))
		return nil
	}

	// Process them with signal escalation
	results, err := util.CleanupZombieClaudeProcesses()
	if err != nil {
		style.PrintWarning("cleanup had errors: %v", err)
	}

	// Report results
	var terminated, escalated, unkillable int
	for _, r := range results {
		switch r.Signal {
		case "SIGTERM":
			fmt.Printf("  %s Sent SIGTERM to PID %d (%s) TTY=%s\n",
				style.Bold.Render("→"), r.Process.PID, r.Process.Cmd, r.Process.TTY)
			terminated++
		case "SIGKILL":
			fmt.Printf("  %s Escalated to SIGKILL for PID %d (%s)\n",
				style.Bold.Render("!"), r.Process.PID, r.Process.Cmd)
			escalated++
		case "UNKILLABLE":
			fmt.Printf("  %s WARNING: PID %d (%s) survived SIGKILL\n",
				style.Bold.Render("⚠"), r.Process.PID, r.Process.Cmd)
			unkillable++
		}
	}

	if len(results) > 0 {
		summary := fmt.Sprintf("Processed %d zombie(s)", len(results))
		if escalated > 0 {
			summary += fmt.Sprintf(" (%d escalated to SIGKILL)", escalated)
		}
		if unkillable > 0 {
			summary += fmt.Sprintf(" (%d unkillable)", unkillable)
		}
		fmt.Printf("%s %s\n", style.Bold.Render("✓"), summary)
	}

	return nil
}

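The zombie-scan help text above defines the detection rule in prose (not a pane PID, not a child of one, older than 60 seconds). A pure-function sketch of that classification, under the assumption that the pane-PID set would come from `tmux list-panes` in the real implementation (`proc` and `isZombie` are illustrative names, not the util package's API):

```go
package main

import "fmt"

// proc is a hypothetical, simplified view of a running process.
type proc struct {
	PID    int
	Parent int
	Age    int // seconds since the process started
}

// isZombie applies the zombie-scan rules from the command help: a process
// is a zombie if it is neither a pane PID nor a direct child of one, and
// it is older than 60 seconds. (The real check also filters to
// Claude/codex processes, omitted here.)
func isZombie(p proc, panePIDs map[int]bool) bool {
	if panePIDs[p.PID] || panePIDs[p.Parent] {
		return false // part of an active tmux session
	}
	return p.Age > 60
}

func main() {
	panes := map[int]bool{100: true}
	fmt.Println(isZombie(proc{PID: 101, Parent: 100, Age: 300}, panes)) // false: child of a pane
	fmt.Println(isZombie(proc{PID: 200, Parent: 1, Age: 300}, panes))  // true: orphaned and old
}
```

The 60-second floor gives freshly spawned processes time to be adopted by a pane before they can be flagged.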
@@ -134,12 +134,10 @@ func runDoctor(cmd *cobra.Command, args []string) error {
	d.Register(doctor.NewPrefixMismatchCheck())
	d.Register(doctor.NewRoutesCheck())
	d.Register(doctor.NewRigRoutesJSONLCheck())
	d.Register(doctor.NewRoutingModeCheck())
	d.Register(doctor.NewOrphanSessionCheck())
	d.Register(doctor.NewZombieSessionCheck())
	d.Register(doctor.NewOrphanProcessCheck())
	d.Register(doctor.NewWispGCCheck())
	d.Register(doctor.NewCheckMisclassifiedWisps())
	d.Register(doctor.NewBranchCheck())
	d.Register(doctor.NewBeadsSyncOrphanCheck())
	d.Register(doctor.NewCloneDivergenceCheck())

@@ -4,7 +4,6 @@ import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"

	"github.com/spf13/cobra"
@@ -82,14 +81,6 @@ func init() {
}

func runDone(cmd *cobra.Command, args []string) error {
	// Guard: Only polecats should call gt done
	// Crew, deacons, witnesses etc. don't use gt done - they persist across tasks.
	// Polecats are ephemeral workers that self-destruct after completing work.
	actor := os.Getenv("BD_ACTOR")
	if actor != "" && !isPolecatActor(actor) {
		return fmt.Errorf("gt done is for polecats only (you are %s)\nPolecats are ephemeral workers that self-destruct after completing work.\nOther roles persist across tasks and don't use gt done.", actor)
	}

	// Handle --phase-complete flag (overrides --status)
	var exitType string
	if donePhaseComplete {
@@ -268,29 +259,19 @@ func runDone(cmd *cobra.Command, args []string) error {
		return fmt.Errorf("cannot complete: uncommitted changes would be lost\nCommit your changes first, or use --status DEFERRED to exit without completing\nUncommitted: %s", workStatus.String())
	}

	// Check if branch has commits ahead of origin/default
	// If not, work may have been pushed directly to main - that's fine, just skip MR
	// Check that branch has commits ahead of origin/default (not local default)
	// This ensures we compare against the remote, not a potentially stale local copy
	originDefault := "origin/" + defaultBranch
	aheadCount, err := g.CommitsAhead(originDefault, "HEAD")
	if err != nil {
		// Fallback to local branch comparison if origin not available
		aheadCount, err = g.CommitsAhead(defaultBranch, branch)
		if err != nil {
			// Can't determine - assume work exists and continue
			style.PrintWarning("could not check commits ahead of %s: %v", defaultBranch, err)
			aheadCount = 1
			return fmt.Errorf("checking commits ahead of %s: %w", defaultBranch, err)
		}
	}

	// If no commits ahead, work was likely pushed directly to main (or already merged)
	// This is valid - skip MR creation but still complete successfully
	if aheadCount == 0 {
		fmt.Printf("%s Branch has no commits ahead of %s\n", style.Bold.Render("→"), originDefault)
		fmt.Printf("  Work was likely pushed directly to main or already merged.\n")
		fmt.Printf("  Skipping MR creation - completing without merge request.\n\n")

		// Skip straight to witness notification (no MR needed)
		goto notifyWitness
		return fmt.Errorf("branch '%s' has 0 commits ahead of %s; nothing to merge\nMake and commit changes first, or use --status DEFERRED to exit without completing", branch, originDefault)
	}

	// CRITICAL: Push branch BEFORE creating MR bead (hq-6dk53, hq-a4ksk)
@@ -420,7 +401,6 @@ func runDone(cmd *cobra.Command, args []string) error {
		fmt.Printf("  Branch: %s\n", branch)
	}

notifyWitness:
	// Notify Witness about completion
	// Use town-level beads for cross-agent mail
	townRouter := mail.NewRouter(townRoot)
@@ -482,28 +462,27 @@ notifyWitness:
	// This is the self-cleaning model - polecats clean up after themselves
	// "done means gone" - both worktree and session are terminated
	selfCleanAttempted := false
	if roleInfo, err := GetRoleWithContext(cwd, townRoot); err == nil && roleInfo.Role == RolePolecat {
		selfCleanAttempted = true
	if exitType == ExitCompleted {
		if roleInfo, err := GetRoleWithContext(cwd, townRoot); err == nil && roleInfo.Role == RolePolecat {
			selfCleanAttempted = true

		// Step 1: Nuke the worktree (only for COMPLETED - other statuses preserve work)
		if exitType == ExitCompleted {
			// Step 1: Nuke the worktree
			if err := selfNukePolecat(roleInfo, townRoot); err != nil {
				// Non-fatal: Witness will clean up if we fail
				style.PrintWarning("worktree nuke failed: %v (Witness will clean up)", err)
			} else {
				fmt.Printf("%s Worktree nuked\n", style.Bold.Render("✓"))
			}
		}

		// Step 2: Kill our own session (this terminates Claude and the shell)
		// This is the last thing we do - the process will be killed when tmux session dies
		// All exit types kill the session - "done means gone"
		fmt.Printf("%s Terminating session (done means gone)\n", style.Bold.Render("→"))
		if err := selfKillSession(townRoot, roleInfo); err != nil {
			// If session kill fails, fall through to os.Exit
			style.PrintWarning("session kill failed: %v", err)
			// Step 2: Kill our own session (this terminates Claude and the shell)
			// This is the last thing we do - the process will be killed when tmux session dies
			fmt.Printf("%s Terminating session (done means gone)\n", style.Bold.Render("→"))
			if err := selfKillSession(townRoot, roleInfo); err != nil {
				// If session kill fails, fall through to os.Exit
				style.PrintWarning("session kill failed: %v", err)
			}
			// If selfKillSession succeeds, we won't reach here (process killed by tmux)
		}
		// If selfKillSession succeeds, we won't reach here (process killed by tmux)
	}

	// Fallback exit for non-polecats or if self-clean failed
@@ -603,19 +582,6 @@ func updateAgentStateOnDone(cwd, townRoot, exitType, _ string) { // issueID unused
	hookedBeadID := agentBead.HookBead
	// Only close if the hooked bead exists and is still in "hooked" status
	if hookedBead, err := bd.Show(hookedBeadID); err == nil && hookedBead.Status == beads.StatusHooked {
		// BUG FIX: Close attached molecule (wisp) BEFORE closing hooked bead.
		// When using formula-on-bead (gt sling formula --on bead), the base bead
		// has attached_molecule pointing to the wisp. Without this fix, gt done
		// only closed the hooked bead, leaving the wisp orphaned.
		// Order matters: wisp closes -> unblocks base bead -> base bead closes.
		attachment := beads.ParseAttachmentFields(hookedBead)
		if attachment != nil && attachment.AttachedMolecule != "" {
			if err := bd.Close(attachment.AttachedMolecule); err != nil {
				// Non-fatal: warn but continue
				fmt.Fprintf(os.Stderr, "Warning: couldn't close attached molecule %s: %v\n", attachment.AttachedMolecule, err)
			}
		}

		if err := bd.Close(hookedBeadID); err != nil {
			// Non-fatal: warn but continue
			fmt.Fprintf(os.Stderr, "Warning: couldn't close hooked bead %s: %v\n", hookedBeadID, err)
@@ -740,14 +706,6 @@ func selfNukePolecat(roleInfo RoleInfo, _ string) error {
	return nil
}

// isPolecatActor checks if a BD_ACTOR value represents a polecat.
// Polecat actors have format: rigname/polecats/polecatname
// Non-polecat actors have formats like: gastown/crew/name, rigname/witness, etc.
func isPolecatActor(actor string) bool {
	parts := strings.Split(actor, "/")
	return len(parts) >= 2 && parts[1] == "polecats"
}

// selfKillSession terminates the polecat's own tmux session after logging the event.
// This completes the self-cleaning model: "done means gone" - both worktree and session.
//
@@ -787,12 +745,9 @@ func selfKillSession(townRoot string, roleInfo RoleInfo) error {

	// Kill our own tmux session with proper process cleanup
	// This will terminate Claude and all child processes, completing the self-cleaning cycle.
	// We use KillSessionWithProcessesExcluding to ensure no orphaned processes are left behind,
	// while excluding our own PID to avoid killing ourselves before cleanup completes.
	// The tmux kill-session at the end will terminate us along with the session.
	// We use KillSessionWithProcesses to ensure no orphaned processes are left behind.
	t := tmux.NewTmux()
	myPID := strconv.Itoa(os.Getpid())
	if err := t.KillSessionWithProcessesExcluding(sessionName, []string{myPID}); err != nil {
	if err := t.KillSessionWithProcesses(sessionName); err != nil {
		return fmt.Errorf("killing session %s: %w", sessionName, err)
	}

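The BD_ACTOR guard in `runDone` hinges on the path-like actor format documented above (`rigname/polecats/polecatname`). A self-contained sketch of that check, reproducing the function as shown in the diff:

```go
package main

import (
	"fmt"
	"strings"
)

// isPolecatActor mirrors the guard from gt done: polecat actors use the
// form rigname/polecats/polecatname, so the second path segment decides.
func isPolecatActor(actor string) bool {
	parts := strings.Split(actor, "/")
	return len(parts) >= 2 && parts[1] == "polecats"
}

func main() {
	fmt.Println(isPolecatActor("testrig/polecats/furiosa")) // true
	fmt.Println(isPolecatActor("gastown/crew/george"))      // false
	fmt.Println(isPolecatActor("polecats/name"))            // false: no rig prefix
}
```

Checking the second segment rather than substring-matching "polecats" is what lets an actor literally named `myrig/polecats/witness` still classify as a polecat while `testrig/witness` does not.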
@@ -253,11 +253,6 @@ func TestDoneCircularRedirectProtection(t *testing.T) {
// This is critical because branch names like "polecat/furiosa-mkb0vq9f" don't
// contain the actual issue ID (test-845.1), but the agent's hook does.
func TestGetIssueFromAgentHook(t *testing.T) {
	// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
	// ("sql: database is closed" during auto-flush). This blocks tests
	// that need to create issues. See internal issue for tracking.
	t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

	tests := []struct {
		name        string
		agentBeadID string
@@ -341,39 +336,3 @@ func TestGetIssueFromAgentHook(t *testing.T) {
		})
	}
}

// TestIsPolecatActor verifies that isPolecatActor correctly identifies
// polecat actors vs other roles based on the BD_ACTOR format.
func TestIsPolecatActor(t *testing.T) {
	tests := []struct {
		actor string
		want  bool
	}{
		// Polecats: rigname/polecats/polecatname
		{"testrig/polecats/furiosa", true},
		{"testrig/polecats/nux", true},
		{"myrig/polecats/witness", true}, // even if named "witness", still a polecat

		// Non-polecats
		{"gastown/crew/george", false},
		{"gastown/crew/max", false},
		{"testrig/witness", false},
		{"testrig/deacon", false},
		{"testrig/mayor", false},
		{"gastown/refinery", false},

		// Edge cases
		{"", false},
		{"single", false},
		{"polecats/name", false}, // needs rig prefix
	}

	for _, tt := range tests {
		t.Run(tt.actor, func(t *testing.T) {
			got := isPolecatActor(tt.actor)
			if got != tt.want {
				t.Errorf("isPolecatActor(%q) = %v, want %v", tt.actor, got, tt.want)
			}
		})
	}
}

@@ -4,7 +4,6 @@ import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"
@@ -112,6 +111,9 @@ func runDown(cmd *cobra.Command, args []string) error {

	rigs := discoverRigs(townRoot)

	// Pre-fetch all sessions once for O(1) lookups (avoids N+1 subprocess calls)
	sessionSet, _ := t.GetSessionSet() // Ignore error - empty set is safe fallback

	// Phase 0.5: Stop polecats if --polecats
	if downPolecats {
		if downDryRun {
@@ -168,12 +170,12 @@ func runDown(cmd *cobra.Command, args []string) error {
	for _, rigName := range rigs {
		sessionName := fmt.Sprintf("gt-%s-refinery", rigName)
		if downDryRun {
			if running, _ := t.HasSession(sessionName); running {
			if sessionSet.Has(sessionName) {
				printDownStatus(fmt.Sprintf("Refinery (%s)", rigName), true, "would stop")
			}
			continue
		}
		wasRunning, err := stopSession(t, sessionName)
		wasRunning, err := stopSessionWithCache(t, sessionName, sessionSet)
		if err != nil {
			printDownStatus(fmt.Sprintf("Refinery (%s)", rigName), false, err.Error())
			allOK = false
@@ -188,12 +190,12 @@ func runDown(cmd *cobra.Command, args []string) error {
	for _, rigName := range rigs {
		sessionName := fmt.Sprintf("gt-%s-witness", rigName)
		if downDryRun {
			if running, _ := t.HasSession(sessionName); running {
			if sessionSet.Has(sessionName) {
				printDownStatus(fmt.Sprintf("Witness (%s)", rigName), true, "would stop")
			}
			continue
		}
		wasRunning, err := stopSession(t, sessionName)
		wasRunning, err := stopSessionWithCache(t, sessionName, sessionSet)
		if err != nil {
			printDownStatus(fmt.Sprintf("Witness (%s)", rigName), false, err.Error())
			allOK = false
@@ -207,12 +209,12 @@ func runDown(cmd *cobra.Command, args []string) error {
	// Phase 3: Stop town-level sessions (Mayor, Boot, Deacon)
	for _, ts := range session.TownSessions() {
		if downDryRun {
			if running, _ := t.HasSession(ts.SessionID); running {
			if sessionSet.Has(ts.SessionID) {
				printDownStatus(ts.Name, true, "would stop")
			}
			continue
		}
		stopped, err := session.StopTownSession(t, ts, downForce)
		stopped, err := session.StopTownSessionWithCache(t, ts, downForce, sessionSet)
		if err != nil {
			printDownStatus(ts.Name, false, err.Error())
			allOK = false
@@ -397,6 +399,23 @@ func stopSession(t *tmux.Tmux, sessionName string) (bool, error) {
	return true, t.KillSessionWithProcesses(sessionName)
}

// stopSessionWithCache is like stopSession but uses a pre-fetched SessionSet
// for O(1) existence check instead of spawning a subprocess.
func stopSessionWithCache(t *tmux.Tmux, sessionName string, cache *tmux.SessionSet) (bool, error) {
	if !cache.Has(sessionName) {
		return false, nil // Already stopped
	}

	// Try graceful shutdown first (Ctrl-C, best-effort interrupt)
	if !downForce {
		_ = t.SendKeysRaw(sessionName, "C-c")
		time.Sleep(100 * time.Millisecond)
	}

	// Kill the session (with explicit process termination to prevent orphans)
	return true, t.KillSessionWithProcesses(sessionName)
}

// acquireShutdownLock prevents concurrent shutdowns.
// Returns the lock (caller must defer Unlock()) or error if lock held.
func acquireShutdownLock(townRoot string) (*flock.Flock, error) {
@@ -455,65 +474,5 @@ func verifyShutdown(t *tmux.Tmux, townRoot string) []string {
		}
	}

	// Check for orphaned Claude/node processes
	// These can be left behind if tmux sessions were killed but child processes didn't terminate
	if pids := findOrphanedClaudeProcesses(townRoot); len(pids) > 0 {
		respawned = append(respawned, fmt.Sprintf("orphaned Claude processes (PIDs: %v)", pids))
	}

	return respawned
}

// findOrphanedClaudeProcesses finds Claude/node processes that are running in the
// town directory but aren't associated with any active tmux session.
// This can happen when tmux sessions are killed but child processes don't terminate.
func findOrphanedClaudeProcesses(townRoot string) []int {
	// Use pgrep to find all claude/node processes
	cmd := exec.Command("pgrep", "-l", "node")
	output, err := cmd.Output()
	if err != nil {
		return nil // pgrep found no processes or failed
	}

	var orphaned []int
	lines := strings.Split(string(output), "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		// Format: "PID command"
		parts := strings.Fields(line)
		if len(parts) < 2 {
			continue
		}
		pidStr := parts[0]
		var pid int
		if _, err := fmt.Sscanf(pidStr, "%d", &pid); err != nil {
			continue
		}

		// Check if this process is running in the town directory
		if isProcessInTown(pid, townRoot) {
			orphaned = append(orphaned, pid)
		}
	}

	return orphaned
}

// isProcessInTown checks if a process is running in the given town directory.
// Uses ps to check the process's working directory.
func isProcessInTown(pid int, townRoot string) bool {
	// Use ps to get the process's working directory
	cmd := exec.Command("ps", "-o", "command=", "-p", fmt.Sprintf("%d", pid))
	output, err := cmd.Output()
	if err != nil {
		return false
	}

	// Check if the command line includes the town path
	command := string(output)
	return strings.Contains(command, townRoot)
}

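The `stopSessionWithCache` change above replaces per-session `tmux has-session` subprocesses with one pre-fetched set and O(1) map lookups. A minimal stand-in for that pattern (`sessionSet` here is an illustrative sketch, not the actual `tmux.SessionSet` API):

```go
package main

import "fmt"

// sessionSet is a hypothetical stand-in for tmux.SessionSet: populated once
// (e.g. from a single `tmux list-sessions` call), then queried with O(1)
// map lookups instead of a subprocess per session.
type sessionSet struct{ names map[string]bool }

func newSessionSet(names ...string) *sessionSet {
	s := &sessionSet{names: map[string]bool{}}
	for _, n := range names {
		s.names[n] = true
	}
	return s
}

// Has reports whether the named session existed at fetch time.
func (s *sessionSet) Has(name string) bool { return s.names[name] }

func main() {
	// In gt down, this set would be filled from one tmux invocation.
	set := newSessionSet("gt-rig1-witness", "gt-rig1-refinery")
	fmt.Println(set.Has("gt-rig1-witness")) // true
	fmt.Println(set.Has("gt-rig2-witness")) // false
}
```

The trade-off is staleness: the cache reflects sessions at fetch time, which is acceptable for a one-shot shutdown pass but not for long-lived monitoring.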
@@ -138,11 +138,6 @@ func runGitInit(cmd *cobra.Command, args []string) error {
|
||||
fmt.Printf(" ✓ Git repository already exists\n")
|
||||
}
|
||||
|
||||
// Install pre-checkout hook to prevent accidental branch switches
|
||||
if err := InstallPreCheckoutHook(hqRoot); err != nil {
|
||||
fmt.Printf(" %s Could not install pre-checkout hook: %v\n", style.Dim.Render("⚠"), err)
|
||||
}
|
||||
|
||||
// Create GitHub repo if requested
|
||||
if gitInitGitHub != "" {
|
||||
if err := createGitHubRepo(hqRoot, gitInitGitHub, !gitInitPublic); err != nil {
|
||||
@@ -228,12 +223,6 @@ func createGitHubRepo(hqRoot, repo string, private bool) error {
|
||||
}
|
||||
fmt.Printf(" → Creating %s GitHub repository %s...\n", visibility, repo)
|
||||
|
||||
// Ensure there's at least one commit before pushing.
|
||||
// gh repo create --push fails on empty repos with no commits.
|
||||
if err := ensureInitialCommit(hqRoot); err != nil {
|
||||
return fmt.Errorf("creating initial commit: %w", err)
|
||||
}
|
||||
|
||||
// Build gh repo create command
|
||||
args := []string{"repo", "create", repo, "--source", hqRoot}
|
||||
if private {
|
||||
@@ -258,33 +247,6 @@ func createGitHubRepo(hqRoot, repo string, private bool) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// ensureInitialCommit creates an initial commit if the repo has no commits.
|
||||
// gh repo create --push requires at least one commit to push.
|
||||
func ensureInitialCommit(hqRoot string) error {
|
||||
// Check if commits exist
|
||||
cmd := exec.Command("git", "rev-parse", "HEAD")
|
||||
cmd.Dir = hqRoot
|
||||
if cmd.Run() == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Stage and commit
|
||||
addCmd := exec.Command("git", "add", ".")
|
||||
addCmd.Dir = hqRoot
|
||||
if err := addCmd.Run(); err != nil {
|
||||
return fmt.Errorf("git add: %w", err)
|
||||
}
|
||||
|
||||
commitCmd := exec.Command("git", "commit", "-m", "Initial Gas Town HQ")
|
||||
commitCmd.Dir = hqRoot
|
||||
if output, err := commitCmd.CombinedOutput(); err != nil {
|
||||
return fmt.Errorf("git commit failed: %s", strings.TrimSpace(string(output)))
|
||||
}
|
||||
|
||||
fmt.Printf(" ✓ Created initial commit\n")
|
||||
return nil
|
||||
}
|
||||
|
||||
// InitGitForHarness is the shared implementation for git initialization.
|
||||
// It can be called from both 'gt git-init' and 'gt install --git'.
|
||||
// Note: Function name kept for backwards compatibility.
@@ -11,7 +11,6 @@ import (
	"github.com/steveyegge/gastown/internal/config"
	"github.com/steveyegge/gastown/internal/constants"
	"github.com/steveyegge/gastown/internal/events"
	"github.com/steveyegge/gastown/internal/mail"
	"github.com/steveyegge/gastown/internal/session"
	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/tmux"
@@ -204,13 +203,6 @@ func runHandoff(cmd *cobra.Command, args []string) error {
		_ = os.WriteFile(markerPath, []byte(currentSession), 0644)
	}

	// Kill all processes in the pane before respawning to prevent orphan leaks
	// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
	if err := t.KillPaneProcesses(pane); err != nil {
		// Non-fatal but log the warning
		style.PrintWarning("could not kill pane processes: %v", err)
	}

	// Use exec to respawn the pane - this kills us and restarts
	return t.RespawnPane(pane, restartCmd)
}
@@ -391,20 +383,7 @@ func buildRestartCommand(sessionName string) (string, error) {
	// 3. export Claude-related env vars (not inherited by fresh shell)
	// 4. run claude with the startup beacon (triggers immediate context loading)
	// Use exec to ensure clean process replacement.
	//
	// Check if current session is using a non-default agent (GT_AGENT env var).
	// If so, preserve it across handoff by using the override variant.
	currentAgent := os.Getenv("GT_AGENT")
	var runtimeCmd string
	if currentAgent != "" {
		var err error
		runtimeCmd, err = config.GetRuntimeCommandWithPromptAndAgentOverride("", beacon, currentAgent)
		if err != nil {
			return "", fmt.Errorf("resolving agent config: %w", err)
		}
	} else {
		runtimeCmd = config.GetRuntimeCommandWithPrompt("", beacon)
	}
	runtimeCmd := config.GetRuntimeCommandWithPrompt("", beacon)

	// Build environment exports - role vars first, then Claude vars
	var exports []string
@@ -418,15 +397,6 @@ func buildRestartCommand(sessionName string) (string, error) {
		}
	}

	// Propagate GT_ROOT so subsequent handoffs can use it as fallback
	// when cwd-based detection fails (broken state recovery)
	exports = append(exports, "GT_ROOT="+townRoot)

	// Preserve GT_AGENT across handoff so agent override persists
	if currentAgent != "" {
		exports = append(exports, "GT_AGENT="+currentAgent)
	}

	// Add Claude-related env vars from current environment
	for _, name := range claudeEnvVars {
		if val := os.Getenv(name); val != "" {
@@ -509,33 +479,14 @@ func sessionToGTRole(sessionName string) string {
}

// detectTownRootFromCwd walks up from the current directory to find the town root.
// Falls back to GT_TOWN_ROOT or GT_ROOT env vars if cwd detection fails (broken state recovery).
func detectTownRootFromCwd() string {
	// Use workspace.FindFromCwd which handles both primary (mayor/town.json)
	// and secondary (mayor/ directory) markers
	townRoot, err := workspace.FindFromCwd()
	if err == nil && townRoot != "" {
		return townRoot
	if err != nil {
		return ""
	}

	// Fallback: try environment variables for town root
	// GT_TOWN_ROOT is set by shell integration, GT_ROOT is set by session manager
	// This enables handoff to work even when cwd detection fails due to
	// detached HEAD, wrong branch, deleted worktree, etc.
	for _, envName := range []string{"GT_TOWN_ROOT", "GT_ROOT"} {
		if envRoot := os.Getenv(envName); envRoot != "" {
			// Verify it's actually a workspace
			if _, statErr := os.Stat(filepath.Join(envRoot, workspace.PrimaryMarker)); statErr == nil {
				return envRoot
			}
			// Try secondary marker too
			if info, statErr := os.Stat(filepath.Join(envRoot, workspace.SecondaryMarker)); statErr == nil && info.IsDir() {
				return envRoot
			}
		}
	}

	return ""
	return townRoot
}
// handoffRemoteSession respawns a different session and optionally switches to it.
@@ -567,13 +518,6 @@ func handoffRemoteSession(t *tmux.Tmux, targetSession, restartCmd string) error
		return nil
	}

	// Kill all processes in the pane before respawning to prevent orphan leaks
	// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
	if err := t.KillPaneProcesses(targetPane); err != nil {
		// Non-fatal but log the warning
		style.PrintWarning("could not kill pane processes: %v", err)
	}

	// Clear scrollback history before respawn (resets copy-mode from [0/N] to [0/0])
	if err := t.ClearHistory(targetPane); err != nil {
		// Non-fatal - continue with respawn even if clear fails
@@ -633,9 +577,6 @@ func sendHandoffMail(subject, message string) (string, error) {
		return "", fmt.Errorf("detecting agent identity: %w", err)
	}

	// Normalize identity to match mailbox query format
	agentID = mail.AddressToIdentity(agentID)

	// Detect town root for beads location
	townRoot := detectTownRootFromCwd()
	if townRoot == "" {
@@ -1,124 +0,0 @@
package cmd

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/steveyegge/gastown/internal/workspace"
)

func TestDetectTownRootFromCwd_EnvFallback(t *testing.T) {
	// Save original env vars and restore after test
	origTownRoot := os.Getenv("GT_TOWN_ROOT")
	origRoot := os.Getenv("GT_ROOT")
	defer func() {
		os.Setenv("GT_TOWN_ROOT", origTownRoot)
		os.Setenv("GT_ROOT", origRoot)
	}()

	// Create a temp directory that looks like a valid town
	tmpTown := t.TempDir()
	mayorDir := filepath.Join(tmpTown, "mayor")
	if err := os.MkdirAll(mayorDir, 0755); err != nil {
		t.Fatalf("creating mayor dir: %v", err)
	}
	townJSON := filepath.Join(mayorDir, "town.json")
	if err := os.WriteFile(townJSON, []byte(`{"name": "test-town"}`), 0644); err != nil {
		t.Fatalf("creating town.json: %v", err)
	}

	// Clear both env vars initially
	os.Setenv("GT_TOWN_ROOT", "")
	os.Setenv("GT_ROOT", "")

	t.Run("uses GT_TOWN_ROOT when cwd detection fails", func(t *testing.T) {
		// Set GT_TOWN_ROOT to our temp town
		os.Setenv("GT_TOWN_ROOT", tmpTown)
		os.Setenv("GT_ROOT", "")

		// Save cwd, cd to a non-town directory, and restore after
		origCwd, _ := os.Getwd()
		os.Chdir(os.TempDir())
		defer os.Chdir(origCwd)

		result := detectTownRootFromCwd()
		if result != tmpTown {
			t.Errorf("detectTownRootFromCwd() = %q, want %q (should use GT_TOWN_ROOT fallback)", result, tmpTown)
		}
	})

	t.Run("uses GT_ROOT when GT_TOWN_ROOT not set", func(t *testing.T) {
		// Set only GT_ROOT
		os.Setenv("GT_TOWN_ROOT", "")
		os.Setenv("GT_ROOT", tmpTown)

		// Save cwd, cd to a non-town directory, and restore after
		origCwd, _ := os.Getwd()
		os.Chdir(os.TempDir())
		defer os.Chdir(origCwd)

		result := detectTownRootFromCwd()
		if result != tmpTown {
			t.Errorf("detectTownRootFromCwd() = %q, want %q (should use GT_ROOT fallback)", result, tmpTown)
		}
	})

	t.Run("prefers GT_TOWN_ROOT over GT_ROOT", func(t *testing.T) {
		// Create another temp town for GT_ROOT
		anotherTown := t.TempDir()
		anotherMayor := filepath.Join(anotherTown, "mayor")
		os.MkdirAll(anotherMayor, 0755)
		os.WriteFile(filepath.Join(anotherMayor, "town.json"), []byte(`{"name": "other-town"}`), 0644)

		// Set both env vars
		os.Setenv("GT_TOWN_ROOT", tmpTown)
		os.Setenv("GT_ROOT", anotherTown)

		// Save cwd, cd to a non-town directory, and restore after
		origCwd, _ := os.Getwd()
		os.Chdir(os.TempDir())
		defer os.Chdir(origCwd)

		result := detectTownRootFromCwd()
		if result != tmpTown {
			t.Errorf("detectTownRootFromCwd() = %q, want %q (should prefer GT_TOWN_ROOT)", result, tmpTown)
		}
	})

	t.Run("ignores invalid GT_TOWN_ROOT", func(t *testing.T) {
		// Set GT_TOWN_ROOT to non-existent path, GT_ROOT to valid
		os.Setenv("GT_TOWN_ROOT", "/nonexistent/path/to/town")
		os.Setenv("GT_ROOT", tmpTown)

		// Save cwd, cd to a non-town directory, and restore after
		origCwd, _ := os.Getwd()
		os.Chdir(os.TempDir())
		defer os.Chdir(origCwd)

		result := detectTownRootFromCwd()
		if result != tmpTown {
			t.Errorf("detectTownRootFromCwd() = %q, want %q (should skip invalid GT_TOWN_ROOT and use GT_ROOT)", result, tmpTown)
		}
	})

	t.Run("uses secondary marker when primary missing", func(t *testing.T) {
		// Create a temp town with only mayor/ directory (no town.json)
		secondaryTown := t.TempDir()
		mayorOnlyDir := filepath.Join(secondaryTown, workspace.SecondaryMarker)
		os.MkdirAll(mayorOnlyDir, 0755)

		os.Setenv("GT_TOWN_ROOT", secondaryTown)
		os.Setenv("GT_ROOT", "")

		// Save cwd, cd to a non-town directory, and restore after
		origCwd, _ := os.Getwd()
		os.Chdir(os.TempDir())
		defer os.Chdir(origCwd)

		result := detectTownRootFromCwd()
		if result != secondaryTown {
			t.Errorf("detectTownRootFromCwd() = %q, want %q (should accept secondary marker)", result, secondaryTown)
		}
	})
}
@@ -16,7 +16,6 @@ import (

var hookCmd = &cobra.Command{
	Use:     "hook [bead-id]",
	Aliases: []string{"work"},
	GroupID: GroupWork,
	Short:   "Show or attach work on your hook",
	Long: `Show what's on your hook, or attach new work.
@@ -1,267 +0,0 @@
package cmd

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/spf13/cobra"
	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/workspace"
)

var (
	installRole    string
	installAllRigs bool
	installDryRun  bool
)

var hooksInstallCmd = &cobra.Command{
	Use:   "install <hook-name>",
	Short: "Install a hook from the registry",
	Long: `Install a hook from the registry to worktrees.

By default, installs to the current worktree. Use --role to install
to all worktrees of a specific role in the current rig.

Examples:
  gt hooks install pr-workflow-guard                     # Install to current worktree
  gt hooks install pr-workflow-guard --role crew         # Install to all crew in current rig
  gt hooks install session-prime --role crew --all-rigs  # Install to all crew everywhere
  gt hooks install pr-workflow-guard --dry-run           # Preview what would be installed`,
	Args: cobra.ExactArgs(1),
	RunE: runHooksInstall,
}

func init() {
	hooksCmd.AddCommand(hooksInstallCmd)
	hooksInstallCmd.Flags().StringVar(&installRole, "role", "", "Install to all worktrees of this role (crew, polecat, witness, refinery)")
	hooksInstallCmd.Flags().BoolVar(&installAllRigs, "all-rigs", false, "Install across all rigs (requires --role)")
	hooksInstallCmd.Flags().BoolVar(&installDryRun, "dry-run", false, "Preview changes without writing files")
}

func runHooksInstall(cmd *cobra.Command, args []string) error {
	hookName := args[0]

	townRoot, err := workspace.FindFromCwd()
	if err != nil {
		return fmt.Errorf("not in a Gas Town workspace: %w", err)
	}

	// Load registry
	registry, err := LoadRegistry(townRoot)
	if err != nil {
		return err
	}

	// Find the hook
	hookDef, ok := registry.Hooks[hookName]
	if !ok {
		return fmt.Errorf("hook %q not found in registry", hookName)
	}

	if !hookDef.Enabled {
		fmt.Printf("%s Hook %q is disabled in registry. Use --force to install anyway.\n",
			style.Warning.Render("Warning:"), hookName)
	}

	// Determine target worktrees
	targets, err := determineTargets(townRoot, installRole, installAllRigs, hookDef.Roles)
	if err != nil {
		return err
	}

	if len(targets) == 0 {
		// No role specified, install to current worktree
		cwd, err := os.Getwd()
		if err != nil {
			return err
		}
		targets = []string{cwd}
	}

	// Install to each target
	installed := 0
	for _, target := range targets {
		if err := installHookTo(target, hookName, hookDef, installDryRun); err != nil {
			fmt.Printf("%s Failed to install to %s: %v\n", style.Error.Render("Error:"), target, err)
			continue
		}
		installed++
	}

	if installDryRun {
		fmt.Printf("\n%s Would install %q to %d worktree(s)\n", style.Dim.Render("Dry run:"), hookName, installed)
	} else {
		fmt.Printf("\n%s Installed %q to %d worktree(s)\n", style.Success.Render("Done:"), hookName, installed)
	}

	return nil
}

// determineTargets finds all worktree paths matching the role criteria.
func determineTargets(townRoot, role string, allRigs bool, allowedRoles []string) ([]string, error) {
	if role == "" {
		return nil, nil // Will use current directory
	}

	// Check if role is allowed for this hook
	roleAllowed := false
	for _, r := range allowedRoles {
		if r == role {
			roleAllowed = true
			break
		}
	}
	if !roleAllowed {
		return nil, fmt.Errorf("hook is not applicable to role %q (allowed: %s)", role, strings.Join(allowedRoles, ", "))
	}

	var targets []string

	// Find rigs to scan
	var rigs []string
	if allRigs {
		entries, err := os.ReadDir(townRoot)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			if e.IsDir() && !strings.HasPrefix(e.Name(), ".") && e.Name() != "mayor" && e.Name() != "deacon" && e.Name() != "hooks" {
				rigs = append(rigs, e.Name())
			}
		}
	} else {
		// Find current rig from cwd
		cwd, err := os.Getwd()
		if err != nil {
			return nil, err
		}
		relPath, err := filepath.Rel(townRoot, cwd)
		if err != nil {
			return nil, err
		}
		parts := strings.Split(relPath, string(filepath.Separator))
		if len(parts) > 0 {
			rigs = []string{parts[0]}
		}
	}

	// Find worktrees for the role in each rig
	for _, rig := range rigs {
		rigPath := filepath.Join(townRoot, rig)

		switch role {
		case "crew":
			crewDir := filepath.Join(rigPath, "crew")
			if entries, err := os.ReadDir(crewDir); err == nil {
				for _, e := range entries {
					if e.IsDir() && !strings.HasPrefix(e.Name(), ".") {
						targets = append(targets, filepath.Join(crewDir, e.Name()))
					}
				}
			}
		case "polecat":
			polecatsDir := filepath.Join(rigPath, "polecats")
			if entries, err := os.ReadDir(polecatsDir); err == nil {
				for _, e := range entries {
					if e.IsDir() && !strings.HasPrefix(e.Name(), ".") {
						targets = append(targets, filepath.Join(polecatsDir, e.Name()))
					}
				}
			}
		case "witness":
			witnessPath := filepath.Join(rigPath, "witness")
			if _, err := os.Stat(witnessPath); err == nil {
				targets = append(targets, witnessPath)
			}
		case "refinery":
			refineryPath := filepath.Join(rigPath, "refinery")
			if _, err := os.Stat(refineryPath); err == nil {
				targets = append(targets, refineryPath)
			}
		}
	}

	return targets, nil
}
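The manual role-membership loop at the top of determineTargets can be expressed with `slices.Contains` from the Go 1.21+ standard library. A sketch; `roleAllowed` as a standalone function is illustrative, not part of the deleted file:

```go
package main

import (
	"fmt"
	"slices"
)

// roleAllowed reports whether role appears in the hook's allowed-roles list,
// replacing the manual for/break loop with a single stdlib call.
func roleAllowed(role string, allowedRoles []string) bool {
	return slices.Contains(allowedRoles, role)
}

func main() {
	allowed := []string{"crew", "polecat"}
	fmt.Println(roleAllowed("crew", allowed), roleAllowed("witness", allowed)) // true false
}
```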
// installHookTo installs a hook to a specific worktree.
func installHookTo(worktreePath, hookName string, hookDef HookDefinition, dryRun bool) error {
	settingsPath := filepath.Join(worktreePath, ".claude", "settings.json")

	// Load existing settings or create new
	var settings ClaudeSettings
	if data, err := os.ReadFile(settingsPath); err == nil {
		if err := json.Unmarshal(data, &settings); err != nil {
			return fmt.Errorf("parsing existing settings: %w", err)
		}
	}

	// Initialize maps if needed
	if settings.Hooks == nil {
		settings.Hooks = make(map[string][]ClaudeHookMatcher)
	}
	if settings.EnabledPlugins == nil {
		settings.EnabledPlugins = make(map[string]bool)
	}

	// Build the hook entries
	for _, matcher := range hookDef.Matchers {
		hookEntry := ClaudeHookMatcher{
			Matcher: matcher,
			Hooks: []ClaudeHook{
				{Type: "command", Command: hookDef.Command},
			},
		}

		// Check if this exact matcher already exists
		exists := false
		for _, existing := range settings.Hooks[hookDef.Event] {
			if existing.Matcher == matcher {
				exists = true
				break
			}
		}

		if !exists {
			settings.Hooks[hookDef.Event] = append(settings.Hooks[hookDef.Event], hookEntry)
		}
	}

	// Ensure beads plugin is disabled (standard for Gas Town)
	settings.EnabledPlugins["beads@beads-marketplace"] = false

	// Pretty print relative path
	relPath := worktreePath
	if home, err := os.UserHomeDir(); err == nil {
		if rel, err := filepath.Rel(home, worktreePath); err == nil && !strings.HasPrefix(rel, "..") {
			relPath = "~/" + rel
		}
	}

	if dryRun {
		fmt.Printf("  %s %s\n", style.Dim.Render("Would install to:"), relPath)
		return nil
	}

	// Create directory if needed
	if err := os.MkdirAll(filepath.Dir(settingsPath), 0755); err != nil {
		return fmt.Errorf("creating .claude directory: %w", err)
	}

	// Write settings
	data, err := json.MarshalIndent(settings, "", "  ")
	if err != nil {
		return fmt.Errorf("marshaling settings: %w", err)
	}

	if err := os.WriteFile(settingsPath, data, 0600); err != nil {
		return fmt.Errorf("writing settings: %w", err)
	}

	fmt.Printf("  %s %s\n", style.Success.Render("Installed to:"), relPath)
	return nil
}
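Based on the struct fields used in installHookTo, the merged `.claude/settings.json` it writes would look roughly like this. The key names, matcher, and command below are inferred from the ClaudeSettings usage and are hypothetical, not a confirmed schema:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "gt hooks run pr-workflow-guard" }
        ]
      }
    ]
  },
  "enabledPlugins": {
    "beads@beads-marketplace": false
  }
}
```

The matcher-deduplication loop above ensures re-running install does not append a second identical entry to this array.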
@@ -1,165 +0,0 @@
package cmd

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/BurntSushi/toml"
	"github.com/spf13/cobra"
	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/workspace"
)

// HookRegistry represents the hooks/registry.toml structure.
type HookRegistry struct {
	Hooks map[string]HookDefinition `toml:"hooks"`
}

// HookDefinition represents a single hook definition in the registry.
type HookDefinition struct {
	Description string   `toml:"description"`
	Event       string   `toml:"event"`
	Matchers    []string `toml:"matchers"`
	Command     string   `toml:"command"`
	Roles       []string `toml:"roles"`
	Scope       string   `toml:"scope"`
	Enabled     bool     `toml:"enabled"`
}
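Given the toml tags on HookRegistry and HookDefinition, a registry.toml entry would have this shape. The hook name, command, and field values here are hypothetical examples, not entries from the actual registry:

```toml
[hooks.pr-workflow-guard]
description = "Block actions that bypass the PR workflow"
event = "PreToolUse"
matchers = ["Bash"]
command = "gt hooks run pr-workflow-guard"
roles = ["crew", "polecat"]
scope = "worktree"
enabled = true
```

Each `[hooks.<name>]` table becomes one key in the `Hooks map[string]HookDefinition` after toml.Decode.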
var (
	hooksListAll bool
)

var hooksListCmd = &cobra.Command{
	Use:   "list",
	Short: "List available hooks from the registry",
	Long: `List all hooks defined in the hook registry.

The registry is at ~/gt/hooks/registry.toml and defines hooks that can be
installed for different roles (crew, polecat, witness, etc.).

Examples:
  gt hooks list        # Show enabled hooks
  gt hooks list --all  # Show all hooks including disabled`,
	RunE: runHooksList,
}

func init() {
	hooksCmd.AddCommand(hooksListCmd)
	hooksListCmd.Flags().BoolVarP(&hooksListAll, "all", "a", false, "Show all hooks including disabled")
	hooksListCmd.Flags().BoolVarP(&hooksVerbose, "verbose", "v", false, "Show hook commands and matchers")
}

// LoadRegistry loads the hook registry from the town's hooks directory.
func LoadRegistry(townRoot string) (*HookRegistry, error) {
	registryPath := filepath.Join(townRoot, "hooks", "registry.toml")

	data, err := os.ReadFile(registryPath)
	if err != nil {
		if os.IsNotExist(err) {
			return nil, fmt.Errorf("hook registry not found at %s", registryPath)
		}
		return nil, fmt.Errorf("reading registry: %w", err)
	}

	var registry HookRegistry
	if _, err := toml.Decode(string(data), &registry); err != nil {
		return nil, fmt.Errorf("parsing registry: %w", err)
	}

	return &registry, nil
}

func runHooksList(cmd *cobra.Command, args []string) error {
	townRoot, err := workspace.FindFromCwd()
	if err != nil {
		return fmt.Errorf("not in a Gas Town workspace: %w", err)
	}

	registry, err := LoadRegistry(townRoot)
	if err != nil {
		return err
	}

	if len(registry.Hooks) == 0 {
		fmt.Println(style.Dim.Render("No hooks defined in registry"))
		return nil
	}

	fmt.Printf("\n%s Hook Registry\n", style.Bold.Render("📋"))
	fmt.Printf("Source: %s\n\n", style.Dim.Render(filepath.Join(townRoot, "hooks", "registry.toml")))

	// Group by event type
	byEvent := make(map[string][]struct {
		name string
		def  HookDefinition
	})
	eventOrder := []string{"PreToolUse", "PostToolUse", "SessionStart", "PreCompact", "UserPromptSubmit", "Stop"}

	for name, def := range registry.Hooks {
		if !hooksListAll && !def.Enabled {
			continue
		}
		byEvent[def.Event] = append(byEvent[def.Event], struct {
			name string
			def  HookDefinition
		}{name, def})
	}

	// Add any events not in the predefined order
	for event := range byEvent {
		found := false
		for _, o := range eventOrder {
			if event == o {
				found = true
				break
			}
		}
		if !found {
			eventOrder = append(eventOrder, event)
		}
	}

	count := 0
	for _, event := range eventOrder {
		hooks := byEvent[event]
		if len(hooks) == 0 {
			continue
		}

		fmt.Printf("%s %s\n", style.Bold.Render("▸"), event)

		for _, h := range hooks {
			count++
			statusIcon := "●"
			statusColor := style.Success
			if !h.def.Enabled {
				statusIcon = "○"
				statusColor = style.Dim
			}

			rolesStr := strings.Join(h.def.Roles, ", ")
			scopeStr := h.def.Scope

			fmt.Printf("  %s %s\n", statusColor.Render(statusIcon), style.Bold.Render(h.name))
			fmt.Printf("    %s\n", h.def.Description)
			fmt.Printf("    %s %s %s %s\n",
				style.Dim.Render("roles:"), rolesStr,
				style.Dim.Render("scope:"), scopeStr)

			if hooksVerbose {
				fmt.Printf("    %s %s\n", style.Dim.Render("command:"), h.def.Command)
				for _, m := range h.def.Matchers {
					fmt.Printf("    %s %s\n", style.Dim.Render("matcher:"), m)
				}
			}
		}
		fmt.Println()
	}

	fmt.Printf("%s %d hooks in registry\n", style.Dim.Render("Total:"), count)

	return nil
}
@@ -74,34 +74,6 @@ type VersionChange struct {

// versionChanges contains agent-actionable changes for recent versions
var versionChanges = []VersionChange{
	{
		Version: "0.5.0",
		Date:    "2026-01-22",
		Changes: []string{
			"NEW: gt mail read <index> - Read messages by inbox position",
			"NEW: gt mail hook - Shortcut for gt hook attach from mail",
			"NEW: --body alias for --message in gt mail send/reply",
			"NEW: gt bd alias for gt bead, gt work alias for gt hook",
			"NEW: OpenCode as built-in agent preset (gt config set agent opencode)",
			"NEW: Config-based role definition system",
			"NEW: Deacon icon in mayor status line",
			"NEW: gt hooks - Hook registry and install command",
			"NEW: Squash merge in refinery for cleaner history",
			"CHANGED: Parallel mail inbox queries (~6x speedup)",
			"FIX: Crew session stability - Don't kill pane processes on new sessions",
			"FIX: Auto-recover from stale tmux pane references",
			"FIX: KillPaneProcesses now kills pane process itself, not just descendants",
			"FIX: Convoy ID propagation in refinery and convoy watcher",
			"FIX: Multi-repo routing for custom types and role slots",
		},
	},
	{
		Version: "0.4.0",
		Date:    "2026-01-19",
		Changes: []string{
			"FIX: Orphan cleanup skips valid tmux sessions - Prevents false kills of witnesses/refineries/deacon during startup by checking gt-*/hq-* session membership",
		},
	},
	{
		Version: "0.3.1",
		Date:    "2026-01-17",
@@ -221,30 +221,6 @@ func runInstall(cmd *cobra.Command, args []string) error {
		fmt.Printf(" ✓ Created deacon/.claude/settings.json\n")
	}

	// Create boot directory (deacon/dogs/boot/) for Boot watchdog.
	// This avoids gt doctor warning on fresh install.
	bootDir := filepath.Join(deaconDir, "dogs", "boot")
	if err := os.MkdirAll(bootDir, 0755); err != nil {
		fmt.Printf(" %s Could not create boot directory: %v\n", style.Dim.Render("⚠"), err)
	}

	// Create plugins directory for town-level patrol plugins.
	// This avoids gt doctor warning on fresh install.
	pluginsDir := filepath.Join(absPath, "plugins")
	if err := os.MkdirAll(pluginsDir, 0755); err != nil {
		fmt.Printf(" %s Could not create plugins directory: %v\n", style.Dim.Render("⚠"), err)
	} else {
		fmt.Printf(" ✓ Created plugins/\n")
	}

	// Create daemon.json patrol config.
	// This avoids gt doctor warning on fresh install.
	if err := config.EnsureDaemonPatrolConfig(absPath); err != nil {
		fmt.Printf(" %s Could not create daemon.json: %v\n", style.Dim.Render("⚠"), err)
	} else {
		fmt.Printf(" ✓ Created mayor/daemon.json\n")
	}

	// Initialize git BEFORE beads so that bd can compute repository fingerprint.
	// The fingerprint is required for the daemon to start properly.
	if installGit || installGitHub != "" {
@@ -258,12 +234,6 @@ func runInstall(cmd *cobra.Command, args []string) error {
	// Town beads (hq- prefix) stores mayor mail, cross-rig coordination, and handoffs.
	// Rig beads are separate and have their own prefixes.
	if !installNoBeads {
		// Kill any orphaned bd daemons before initializing beads.
		// Stale daemons can interfere with fresh database creation.
		if killed, _, _ := beads.StopAllBdProcesses(false, true); killed > 0 {
			fmt.Printf(" ✓ Stopped %d orphaned bd daemon(s)\n", killed)
		}

		if err := initTownBeads(absPath); err != nil {
			fmt.Printf(" %s Could not initialize town beads: %v\n", style.Dim.Render("⚠"), err)
		} else {
@@ -278,7 +248,7 @@ func runInstall(cmd *cobra.Command, args []string) error {
		}
	}

	// Create town-level agent beads (Mayor, Deacon).
	// Create town-level agent beads (Mayor, Deacon) and role beads.
	// These use hq- prefix and are stored in town beads for cross-rig coordination.
	if err := initTownAgentBeads(absPath); err != nil {
		fmt.Printf(" %s Could not create town-level agent beads: %v\n", style.Dim.Render("⚠"), err)
@@ -399,19 +369,6 @@ func initTownBeads(townPath string) error {
		}
	}

	// Verify .beads directory was actually created (bd init can exit 0 without creating it)
	beadsDir := filepath.Join(townPath, ".beads")
	if _, statErr := os.Stat(beadsDir); os.IsNotExist(statErr) {
		return fmt.Errorf("bd init succeeded but .beads directory not created (check bd daemon interference)")
	}

	// Explicitly set issue_prefix config (bd init --prefix may not persist it in newer versions).
	prefixSetCmd := exec.Command("bd", "config", "set", "issue_prefix", "hq")
	prefixSetCmd.Dir = townPath
	if prefixOutput, prefixErr := prefixSetCmd.CombinedOutput(); prefixErr != nil {
		return fmt.Errorf("bd config set issue_prefix failed: %s", strings.TrimSpace(string(prefixOutput)))
	}

	// Configure custom types for Gas Town (agent, role, rig, convoy, slot).
	// These were extracted from beads core in v0.46.0 and now require explicit config.
	configCmd := exec.Command("bd", "config", "set", "types.custom", constants.BeadsCustomTypes)
@@ -491,30 +448,58 @@ func ensureCustomTypes(beadsPath string) error {
	return nil
}

// initTownAgentBeads creates town-level agent beads using hq- prefix.
// initTownAgentBeads creates town-level agent and role beads using hq- prefix.
// This creates:
//   - hq-mayor, hq-deacon (agent beads for town-level agents)
//   - hq-mayor-role, hq-deacon-role, hq-witness-role, hq-refinery-role,
//     hq-polecat-role, hq-crew-role (role definition beads)
//
// These beads are stored in town beads (~/gt/.beads/) and are shared across all rigs.
// Rig-level agent beads (witness, refinery) are created by gt rig add in rig beads.
//
// Note: Role definitions are now config-based (internal/config/roles/*.toml),
// not stored as beads. See config-based-roles.md for details.
// ERROR HANDLING ASYMMETRY:
// Agent beads (Mayor, Deacon) use hard fail - installation aborts if creation fails.
// Role beads use soft fail - logs warning and continues if creation fails.
//
// Agent beads use hard fail - installation aborts if creation fails.
// Agent beads are identity beads that track agent state, hooks, and
// Rationale: Agent beads are identity beads that track agent state, hooks, and
// form the foundation of the CV/reputation ledger. Without them, agents cannot
// be properly tracked or coordinated.
// be properly tracked or coordinated. Role beads are documentation templates
// that define role characteristics but are not required for agent operation -
// agents can function without their role bead existing.
func initTownAgentBeads(townPath string) error {
	bd := beads.New(townPath)

	// bd init doesn't enable "custom" issue types by default, but Gas Town uses
	// agent beads during install and runtime. Ensure these types are enabled
	// agent/role beads during install and runtime. Ensure these types are enabled
	// before attempting to create any town-level system beads.
	if err := ensureBeadsCustomTypes(townPath, constants.BeadsCustomTypesList()); err != nil {
	if err := ensureBeadsCustomTypes(townPath, []string{"agent", "role", "rig", "convoy", "slot"}); err != nil {
		return err
	}

	// Role beads (global templates) - use shared definitions from beads package
	for _, role := range beads.AllRoleBeadDefs() {
		// Check if already exists
|
||||
if _, err := bd.Show(role.ID); err == nil {
|
||||
continue // Already exists
|
||||
}
|
||||
|
||||
// Create role bead using the beads API
|
||||
// CreateWithID with Type: "role" automatically adds gt:role label
|
||||
_, err := bd.CreateWithID(role.ID, beads.CreateOptions{
|
||||
Title: role.Title,
|
||||
Type: "role",
|
||||
Description: role.Desc,
|
||||
Priority: -1, // No priority
|
||||
})
|
||||
if err != nil {
|
||||
// Log but continue - role beads are optional
|
||||
fmt.Printf(" %s Could not create role bead %s: %v\n",
|
||||
style.Dim.Render("⚠"), role.ID, err)
|
||||
continue
|
||||
}
|
||||
fmt.Printf(" ✓ Created role bead: %s\n", role.ID)
|
||||
}
|
||||
|
||||
// Town-level agent beads
|
||||
agentDefs := []struct {
|
||||
id string
|
||||
@@ -556,7 +541,7 @@ func initTownAgentBeads(townPath string) error {
|
||||
Rig: "", // Town-level agents have no rig
|
||||
AgentState: "idle",
|
||||
HookBead: "",
|
||||
// Note: RoleBead field removed - role definitions are now config-based
|
||||
RoleBead: beads.RoleBeadIDTown(agent.roleType),
|
||||
}
|
||||
|
||||
if _, err := bd.CreateAgentBead(agent.id, agent.title, fields); err != nil {
|
||||
|
||||
@@ -122,6 +122,46 @@ func TestInstallBeadsHasCorrectPrefix(t *testing.T) {
}
}

// TestInstallTownRoleSlots validates that town-level agent beads
// have their role slot set after install.
func TestInstallTownRoleSlots(t *testing.T) {
// Skip if bd is not available
if _, err := exec.LookPath("bd"); err != nil {
t.Skip("bd not installed, skipping role slot test")
}

tmpDir := t.TempDir()
hqPath := filepath.Join(tmpDir, "test-hq")

gtBinary := buildGT(t)

// Run gt install (includes beads init by default)
cmd := exec.Command(gtBinary, "install", hqPath)
cmd.Env = append(os.Environ(), "HOME="+tmpDir)
output, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("gt install failed: %v\nOutput: %s", err, output)
}

// Log install output for CI debugging
t.Logf("gt install output:\n%s", output)

// Verify beads directory was created
beadsDir := filepath.Join(hqPath, ".beads")
if _, err := os.Stat(beadsDir); os.IsNotExist(err) {
t.Fatalf("beads directory not created at %s", beadsDir)
}

// List beads for debugging
listCmd := exec.Command("bd", "--no-daemon", "list", "--type=agent")
listCmd.Dir = hqPath
listOutput, _ := listCmd.CombinedOutput()
t.Logf("bd list --type=agent output:\n%s", listOutput)

assertSlotValue(t, hqPath, "hq-mayor", "role", "hq-mayor-role")
assertSlotValue(t, hqPath, "hq-deacon", "role", "hq-deacon-role")
}

// TestInstallIdempotent validates that running gt install twice
// on the same directory fails without --force flag.
func TestInstallIdempotent(t *testing.T) {
@@ -287,6 +327,54 @@ func TestInstallNoBeadsFlag(t *testing.T) {
}
}

// buildGT builds the gt binary and returns its path.
// It caches the build across tests in the same run.
var cachedGTBinary string

func buildGT(t *testing.T) string {
t.Helper()

if cachedGTBinary != "" {
// Verify cached binary still exists
if _, err := os.Stat(cachedGTBinary); err == nil {
return cachedGTBinary
}
// Binary was cleaned up, rebuild
cachedGTBinary = ""
}

// Find project root (where go.mod is)
wd, err := os.Getwd()
if err != nil {
t.Fatalf("failed to get working directory: %v", err)
}

// Walk up to find go.mod
projectRoot := wd
for {
if _, err := os.Stat(filepath.Join(projectRoot, "go.mod")); err == nil {
break
}
parent := filepath.Dir(projectRoot)
if parent == projectRoot {
t.Fatal("could not find project root (go.mod)")
}
projectRoot = parent
}

// Build gt binary to a persistent temp location (not per-test)
tmpDir := os.TempDir()
tmpBinary := filepath.Join(tmpDir, "gt-integration-test")
cmd := exec.Command("go", "build", "-o", tmpBinary, "./cmd/gt")
cmd.Dir = projectRoot
if output, err := cmd.CombinedOutput(); err != nil {
t.Fatalf("failed to build gt: %v\nOutput: %s", err, output)
}

cachedGTBinary = tmpBinary
return tmpBinary
}

// assertDirExists checks that the given path exists and is a directory.
func assertDirExists(t *testing.T, path, name string) {
t.Helper()

@@ -21,7 +21,6 @@ var (
mailInboxJSON bool
mailReadJSON bool
mailInboxUnread bool
mailInboxAll bool
mailInboxIdentity string
mailCheckInject bool
mailCheckJSON bool
@@ -139,13 +138,8 @@ var mailInboxCmd = &cobra.Command{
If no address is specified, shows the current context's inbox.
Use --identity for polecats to explicitly specify their identity.

By default, shows all messages. Use --unread to filter to unread only,
or --all to explicitly show all messages (read and unread).

Examples:
gt mail inbox # Current context (auto-detected)
gt mail inbox --all # Explicitly show all messages
gt mail inbox --unread # Show only unread messages
gt mail inbox mayor/ # Mayor's inbox
gt mail inbox greenplace/Toast # Polecat's inbox
gt mail inbox --identity greenplace/Toast # Explicit polecat identity`,
@@ -154,21 +148,15 @@ Examples:
}

var mailReadCmd = &cobra.Command{
Use: "read <message-id|index>",
Use: "read <message-id>",
Short: "Read a message",
Long: `Read a specific message (does not mark as read).

You can specify a message by its ID or by its numeric index from the inbox.
The index corresponds to the number shown in 'gt mail inbox' (1-based).

Examples:
gt mail read hq-abc123 # Read by message ID
gt mail read 3 # Read the 3rd message in inbox

The message ID can be found from 'gt mail inbox'.
Use 'gt mail mark-read' to mark messages as read.`,
Aliases: []string{"show"},
Args: cobra.ExactArgs(1),
RunE: runMailRead,
Args: cobra.ExactArgs(1),
RunE: runMailRead,
}

var mailPeekCmd = &cobra.Command{
@@ -182,16 +170,12 @@ Exits silently with code 1 if no unread messages.`,
}

var mailDeleteCmd = &cobra.Command{
Use: "delete <message-id> [message-id...]",
Short: "Delete messages",
Long: `Delete (acknowledge) one or more messages.
Use: "delete <message-id>",
Short: "Delete a message",
Long: `Delete (acknowledge) a message.

This closes the messages in beads.

Examples:
gt mail delete hq-abc123
gt mail delete hq-abc123 hq-def456 hq-ghi789`,
Args: cobra.MinimumNArgs(1),
This closes the message in beads.`,
Args: cobra.ExactArgs(1),
RunE: runMailDelete,
}

@@ -278,7 +262,7 @@ Examples:
}

var mailReplyCmd = &cobra.Command{
Use: "reply <message-id> [message]",
Use: "reply <message-id>",
Short: "Reply to a message",
Long: `Reply to a specific message.

@@ -287,13 +271,10 @@ This is a convenience command that automatically:
- Prefixes the subject with "Re: " (if not already present)
- Sends to the original sender

The message body can be provided as a positional argument or via -m flag.

Examples:
gt mail reply msg-abc123 "Thanks, working on it now"
gt mail reply msg-abc123 -m "Thanks, working on it now"
gt mail reply msg-abc123 -s "Custom subject" -m "Reply body"`,
Args: cobra.RangeArgs(1, 2),
Args: cobra.ExactArgs(1),
RunE: runMailReply,
}

@@ -437,7 +418,6 @@ func init() {
// Send flags
mailSendCmd.Flags().StringVarP(&mailSubject, "subject", "s", "", "Message subject (required)")
mailSendCmd.Flags().StringVarP(&mailBody, "message", "m", "", "Message body")
mailSendCmd.Flags().StringVar(&mailBody, "body", "", "Alias for --message")
mailSendCmd.Flags().IntVar(&mailPriority, "priority", 2, "Message priority (0=urgent, 1=high, 2=normal, 3=low, 4=backlog)")
mailSendCmd.Flags().BoolVar(&mailUrgent, "urgent", false, "Set priority=0 (urgent)")
mailSendCmd.Flags().StringVar(&mailType, "type", "notification", "Message type (task, scavenge, notification, reply)")
@@ -453,7 +433,6 @@ func init() {
// Inbox flags
mailInboxCmd.Flags().BoolVar(&mailInboxJSON, "json", false, "Output as JSON")
mailInboxCmd.Flags().BoolVarP(&mailInboxUnread, "unread", "u", false, "Show only unread messages")
mailInboxCmd.Flags().BoolVarP(&mailInboxAll, "all", "a", false, "Show all messages (read and unread)")
mailInboxCmd.Flags().StringVar(&mailInboxIdentity, "identity", "", "Explicit identity for inbox (e.g., greenplace/Toast)")
mailInboxCmd.Flags().StringVar(&mailInboxIdentity, "address", "", "Alias for --identity")

@@ -471,8 +450,8 @@ func init() {

// Reply flags
mailReplyCmd.Flags().StringVarP(&mailReplySubject, "subject", "s", "", "Override reply subject (default: Re: <original>)")
mailReplyCmd.Flags().StringVarP(&mailReplyMessage, "message", "m", "", "Reply message body")
mailReplyCmd.Flags().StringVar(&mailReplyMessage, "body", "", "Reply message body (alias for --message)")
mailReplyCmd.Flags().StringVarP(&mailReplyMessage, "message", "m", "", "Reply message body (required)")
_ = mailReplyCmd.MarkFlagRequired("message")

// Search flags
mailSearchCmd.Flags().StringVar(&mailSearchFrom, "from", "", "Filter by sender address")

@@ -352,23 +352,6 @@ func runChannelSubscribe(cmd *cobra.Command, args []string) error {

b := beads.New(townRoot)

// Check channel exists and current subscription status
_, fields, err := b.GetChannelBead(name)
if err != nil {
return fmt.Errorf("getting channel: %w", err)
}
if fields == nil {
return fmt.Errorf("channel not found: %s", name)
}

// Check if already subscribed
for _, s := range fields.Subscribers {
if s == subscriber {
fmt.Printf("%s is already subscribed to channel %q\n", subscriber, name)
return nil
}
}

if err := b.SubscribeToChannel(name, subscriber); err != nil {
return fmt.Errorf("subscribing to channel: %w", err)
}
@@ -392,28 +375,6 @@ func runChannelUnsubscribe(cmd *cobra.Command, args []string) error {

b := beads.New(townRoot)

// Check channel exists and current subscription status
_, fields, err := b.GetChannelBead(name)
if err != nil {
return fmt.Errorf("getting channel: %w", err)
}
if fields == nil {
return fmt.Errorf("channel not found: %s", name)
}

// Check if actually subscribed
found := false
for _, s := range fields.Subscribers {
if s == subscriber {
found = true
break
}
}
if !found {
fmt.Printf("%s is not subscribed to channel %q\n", subscriber, name)
return nil
}

if err := b.UnsubscribeFromChannel(name, subscriber); err != nil {
return fmt.Errorf("unsubscribing from channel: %w", err)
}
@@ -441,13 +402,9 @@ func runChannelSubscribers(cmd *cobra.Command, args []string) error {
}

if channelJSON {
subs := fields.Subscribers
if subs == nil {
subs = []string{}
}
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(subs)
return enc.Encode(fields.Subscribers)
}

if len(fields.Subscribers) == 0 {

@@ -1,58 +0,0 @@
package cmd

import (
"github.com/spf13/cobra"
)

// Flags for mail hook command (mirror of hook command flags)
var (
mailHookSubject string
mailHookMessage string
mailHookDryRun bool
mailHookForce bool
)

var mailHookCmd = &cobra.Command{
Use: "hook <mail-id>",
Short: "Attach mail to your hook (alias for 'gt hook attach')",
Long: `Attach a mail message to your hook.

This is an alias for 'gt hook attach <mail-id>'. It attaches the specified
mail message to your hook so you can work on it.

The hook is the "durability primitive" - work on your hook survives session
restarts, context compaction, and handoffs.

Examples:
gt mail hook msg-abc123 # Attach mail to your hook
gt mail hook msg-abc123 -s "Fix the bug" # With subject for handoff
gt mail hook msg-abc123 --force # Replace existing incomplete work

Related commands:
gt hook <bead> # Attach any bead to your hook
gt hook status # Show what's on your hook
gt unsling # Remove work from hook`,
Args: cobra.ExactArgs(1),
RunE: runMailHook,
}

func init() {
mailHookCmd.Flags().StringVarP(&mailHookSubject, "subject", "s", "", "Subject for handoff mail (optional)")
mailHookCmd.Flags().StringVarP(&mailHookMessage, "message", "m", "", "Message for handoff mail (optional)")
mailHookCmd.Flags().BoolVarP(&mailHookDryRun, "dry-run", "n", false, "Show what would be done")
mailHookCmd.Flags().BoolVarP(&mailHookForce, "force", "f", false, "Replace existing incomplete hooked bead")

mailCmd.AddCommand(mailHookCmd)
}

// runMailHook attaches mail to the hook - delegates to the hook command's logic
func runMailHook(cmd *cobra.Command, args []string) error {
// Copy flags to hook command's globals (they share the same functionality)
hookSubject = mailHookSubject
hookMessage = mailHookMessage
hookDryRun = mailHookDryRun
hookForce = mailHookForce

// Delegate to the hook command's run function
return runHook(cmd, args)
}
@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"os"
"strconv"
"strings"

"github.com/spf13/cobra"
@@ -31,11 +30,6 @@ func getMailbox(address string) (*mail.Mailbox, error) {
}

func runMailInbox(cmd *cobra.Command, args []string) error {
// Check for mutually exclusive flags
if mailInboxAll && mailInboxUnread {
return errors.New("--all and --unread are mutually exclusive")
}

// Determine which inbox to check (priority: --identity flag, positional arg, auto-detect)
address := ""
if mailInboxIdentity != "" {
@@ -52,8 +46,6 @@ func runMailInbox(cmd *cobra.Command, args []string) error {
}

// Get messages
// --all is the default behavior (shows all messages)
// --unread filters to only unread messages
var messages []*mail.Message
if mailInboxUnread {
messages, err = mailbox.ListUnread()
@@ -81,7 +73,7 @@ func runMailInbox(cmd *cobra.Command, args []string) error {
return nil
}

for i, msg := range messages {
for _, msg := range messages {
readMarker := "●"
if msg.Read {
readMarker = "○"
@@ -99,13 +91,11 @@ func runMailInbox(cmd *cobra.Command, args []string) error {
wispMarker = " " + style.Dim.Render("(wisp)")
}

// Show 1-based index for easy reference with 'gt mail read <n>'
indexStr := style.Dim.Render(fmt.Sprintf("%d.", i+1))
fmt.Printf(" %s %s %s%s%s%s\n", indexStr, readMarker, msg.Subject, typeMarker, priorityMarker, wispMarker)
fmt.Printf(" %s from %s\n",
fmt.Printf(" %s %s%s%s%s\n", readMarker, msg.Subject, typeMarker, priorityMarker, wispMarker)
fmt.Printf(" %s from %s\n",
style.Dim.Render(msg.ID),
msg.From)
fmt.Printf(" %s\n",
fmt.Printf(" %s\n",
style.Dim.Render(msg.Timestamp.Format("2006-01-02 15:04")))
}

@@ -114,9 +104,9 @@ func runMailInbox(cmd *cobra.Command, args []string) error {

func runMailRead(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("message ID or index required")
return errors.New("msgID argument required")
}
msgRef := args[0]
msgID := args[0]

// Determine which inbox
address := detectSender()
@@ -126,22 +116,6 @@ func runMailRead(cmd *cobra.Command, args []string) error {
return err
}

// Check if the argument is a numeric index (1-based)
var msgID string
if idx, err := strconv.Atoi(msgRef); err == nil && idx > 0 {
// Numeric index: resolve to message ID by listing inbox
messages, err := mailbox.List()
if err != nil {
return fmt.Errorf("listing messages: %w", err)
}
if idx > len(messages) {
return fmt.Errorf("index %d out of range (inbox has %d messages)", idx, len(messages))
}
msgID = messages[idx-1].ID
} else {
msgID = msgRef
}

msg, err := mailbox.Get(msgID)
if err != nil {
return fmt.Errorf("getting message: %w", err)
@@ -243,6 +217,11 @@ func runMailPeek(cmd *cobra.Command, args []string) error {
}

func runMailDelete(cmd *cobra.Command, args []string) error {
if len(args) == 0 {
return errors.New("msgID argument required")
}
msgID := args[0]

// Determine which inbox
address := detectSender()

@@ -251,32 +230,11 @@ func runMailDelete(cmd *cobra.Command, args []string) error {
return err
}

// Delete all specified messages
deleted := 0
var errors []string
for _, msgID := range args {
if err := mailbox.Delete(msgID); err != nil {
errors = append(errors, fmt.Sprintf("%s: %v", msgID, err))
} else {
deleted++
}
if err := mailbox.Delete(msgID); err != nil {
return fmt.Errorf("deleting message: %w", err)
}

// Report results
if len(errors) > 0 {
fmt.Printf("%s Deleted %d/%d messages\n",
style.Bold.Render("⚠"), deleted, len(args))
for _, e := range errors {
fmt.Printf(" Error: %s\n", e)
}
return fmt.Errorf("failed to delete %d messages", len(errors))
}

if len(args) == 1 {
fmt.Printf("%s Message deleted\n", style.Bold.Render("✓"))
} else {
fmt.Printf("%s Deleted %d messages\n", style.Bold.Render("✓"), deleted)
}
fmt.Printf("%s Message deleted\n", style.Bold.Render("✓"))
return nil
}


@@ -80,22 +80,8 @@ func runMailThread(cmd *cobra.Command, args []string) error {
}

func runMailReply(cmd *cobra.Command, args []string) error {
if mailReplyMessage == "" {
return fmt.Errorf("required flag \"message\" or \"body\" not set")
}
msgID := args[0]

// Get message body from positional arg or flag (positional takes precedence)
messageBody := mailReplyMessage
if len(args) > 1 {
messageBody = args[1]
}

// Validate message is provided
if messageBody == "" {
return fmt.Errorf("message body required: provide as second argument or use -m flag")
}

// All mail uses town beads (two-level architecture)
workDir, err := findMailWorkDir()
if err != nil {
@@ -132,7 +118,7 @@ func runMailReply(cmd *cobra.Command, args []string) error {
From: from,
To: original.From, // Reply to sender
Subject: subject,
Body: messageBody,
Body: mailReplyMessage,
Type: mail.TypeReply,
Priority: mail.PriorityNormal,
ReplyTo: msgID,

@@ -200,13 +200,6 @@ func runMayorAttach(cmd *cobra.Command, args []string) error {
return fmt.Errorf("building startup command: %w", err)
}

// Kill all processes in the pane before respawning to prevent orphan leaks
// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
if err := t.KillPaneProcesses(paneID); err != nil {
// Non-fatal but log the warning
style.PrintWarning("could not kill pane processes: %v", err)
}

if err := t.RespawnPane(paneID, startupCmd); err != nil {
return fmt.Errorf("restarting runtime: %w", err)
}

325 internal/cmd/migrate_agents.go (Normal file)
@@ -0,0 +1,325 @@
package cmd

import (
"fmt"
"path/filepath"
"strings"

"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/workspace"
)

var (
migrateAgentsDryRun bool
migrateAgentsForce bool
)

var migrateAgentsCmd = &cobra.Command{
Use: "migrate-agents",
GroupID: GroupDiag,
Short: "Migrate agent beads to two-level architecture",
Long: `Migrate agent beads from the old single-tier to the two-level architecture.

This command migrates town-level agent beads (Mayor, Deacon) from rig beads
with gt-* prefix to town beads with hq-* prefix:

OLD (rig beads): gt-mayor, gt-deacon
NEW (town beads): hq-mayor, hq-deacon

Rig-level agents (Witness, Refinery, Polecats) remain in rig beads unchanged.

The migration:
1. Detects old gt-mayor/gt-deacon beads in rig beads
2. Creates new hq-mayor/hq-deacon beads in town beads
3. Copies agent state (hook_bead, agent_state, etc.)
4. Adds migration note to old beads (preserves them)

Safety:
- Dry-run mode by default (use --execute to apply changes)
- Old beads are preserved with migration notes
- Validates new beads exist before marking migration complete
- Skips if new beads already exist (idempotent)

Examples:
gt migrate-agents # Dry-run: show what would be migrated
gt migrate-agents --execute # Apply the migration
gt migrate-agents --force # Re-migrate even if new beads exist`,
RunE: runMigrateAgents,
}

func init() {
migrateAgentsCmd.Flags().BoolVar(&migrateAgentsDryRun, "dry-run", true, "Show what would be migrated without making changes (default)")
migrateAgentsCmd.Flags().BoolVar(&migrateAgentsForce, "force", false, "Re-migrate even if new beads already exist")
// Add --execute as inverse of --dry-run for clarity
migrateAgentsCmd.Flags().BoolP("execute", "x", false, "Actually apply the migration (opposite of --dry-run)")
rootCmd.AddCommand(migrateAgentsCmd)
}

// migrationResult holds the result of a single bead migration.
type migrationResult struct {
OldID string
NewID string
Status string // "migrated", "skipped", "error"
Message string
OldFields *beads.AgentFields
WasDryRun bool
}

func runMigrateAgents(cmd *cobra.Command, args []string) error {
// Handle --execute flag
if execute, _ := cmd.Flags().GetBool("execute"); execute {
migrateAgentsDryRun = false
}

// Find town root
townRoot, err := workspace.FindFromCwdOrError()
if err != nil {
return fmt.Errorf("not in a Gas Town workspace: %w", err)
}

// Get town beads path
townBeadsDir := filepath.Join(townRoot, ".beads")

// Load routes to find rig beads
routes, err := beads.LoadRoutes(townBeadsDir)
if err != nil {
return fmt.Errorf("loading routes.jsonl: %w", err)
}

// Find the first rig with gt- prefix (where global agents are currently stored)
var sourceRigPath string
for _, r := range routes {
if strings.TrimSuffix(r.Prefix, "-") == "gt" && r.Path != "." {
sourceRigPath = r.Path
break
}
}

if sourceRigPath == "" {
fmt.Println("No rig with gt- prefix found. Nothing to migrate.")
return nil
}

// Source beads (rig beads where old agent beads are)
sourceBeadsDir := filepath.Join(townRoot, sourceRigPath, ".beads")
sourceBd := beads.New(sourceBeadsDir)

// Target beads (town beads where new agent beads should go)
targetBd := beads.NewWithBeadsDir(townRoot, townBeadsDir)

// Agents to migrate: town-level agents only
agentsToMigrate := []struct {
oldID string
newID string
desc string
}{
{
oldID: beads.MayorBeadID(), // gt-mayor
newID: beads.MayorBeadIDTown(), // hq-mayor
desc: "Mayor - global coordinator, handles cross-rig communication and escalations.",
},
{
oldID: beads.DeaconBeadID(), // gt-deacon
newID: beads.DeaconBeadIDTown(), // hq-deacon
desc: "Deacon (daemon beacon) - receives mechanical heartbeats, runs town plugins and monitoring.",
},
}

// Also migrate role beads
rolesToMigrate := []string{"mayor", "deacon", "witness", "refinery", "polecat", "crew", "dog"}

if migrateAgentsDryRun {
fmt.Println("🔍 DRY RUN: Showing what would be migrated")
fmt.Println(" Use --execute to apply changes")
fmt.Println()
} else {
fmt.Println("🚀 Migrating agent beads to two-level architecture")
fmt.Println()
}

var results []migrationResult

// Migrate agent beads
fmt.Println("Agent Beads:")
for _, agent := range agentsToMigrate {
result := migrateAgentBead(sourceBd, targetBd, agent.oldID, agent.newID, agent.desc, migrateAgentsDryRun, migrateAgentsForce)
results = append(results, result)
printMigrationResult(result)
}

// Migrate role beads
fmt.Println("\nRole Beads:")
for _, role := range rolesToMigrate {
oldID := "gt-" + role + "-role"
newID := beads.RoleBeadIDTown(role) // hq-<role>-role
result := migrateRoleBead(sourceBd, targetBd, oldID, newID, role, migrateAgentsDryRun, migrateAgentsForce)
results = append(results, result)
printMigrationResult(result)
}

// Summary
fmt.Println()
printMigrationSummary(results, migrateAgentsDryRun)

return nil
}

// migrateAgentBead migrates a single agent bead from source to target.
func migrateAgentBead(sourceBd, targetBd *beads.Beads, oldID, newID, desc string, dryRun, force bool) migrationResult {
result := migrationResult{
OldID: oldID,
NewID: newID,
WasDryRun: dryRun,
}

// Check if old bead exists
oldIssue, oldFields, err := sourceBd.GetAgentBead(oldID)
if err != nil {
result.Status = "skipped"
result.Message = "old bead not found"
return result
}
result.OldFields = oldFields

// Check if new bead already exists
if _, err := targetBd.Show(newID); err == nil {
if !force {
result.Status = "skipped"
result.Message = "new bead already exists (use --force to re-migrate)"
return result
}
}

if dryRun {
result.Status = "would migrate"
result.Message = fmt.Sprintf("would copy state from %s", oldIssue.ID)
return result
}

// Create new bead in town beads
newFields := &beads.AgentFields{
RoleType: oldFields.RoleType,
Rig: oldFields.Rig,
AgentState: oldFields.AgentState,
HookBead: oldFields.HookBead,
RoleBead: beads.RoleBeadIDTown(oldFields.RoleType), // Update to hq- role
CleanupStatus: oldFields.CleanupStatus,
ActiveMR: oldFields.ActiveMR,
NotificationLevel: oldFields.NotificationLevel,
}

_, err = targetBd.CreateAgentBead(newID, desc, newFields)
if err != nil {
result.Status = "error"
result.Message = fmt.Sprintf("failed to create: %v", err)
return result
}

// Add migration label to old bead
migrationLabel := fmt.Sprintf("migrated-to:%s", newID)
if err := sourceBd.Update(oldID, beads.UpdateOptions{AddLabels: []string{migrationLabel}}); err != nil {
// Non-fatal: just log it
result.Message = fmt.Sprintf("created but couldn't add migration label: %v", err)
}

result.Status = "migrated"
result.Message = "successfully migrated"
return result
}

// migrateRoleBead migrates a role definition bead.
func migrateRoleBead(sourceBd, targetBd *beads.Beads, oldID, newID, role string, dryRun, force bool) migrationResult {
result := migrationResult{
OldID: oldID,
NewID: newID,
WasDryRun: dryRun,
}

// Check if old bead exists
oldIssue, err := sourceBd.Show(oldID)
if err != nil {
result.Status = "skipped"
result.Message = "old bead not found"
return result
}

// Check if new bead already exists
if _, err := targetBd.Show(newID); err == nil {
if !force {
result.Status = "skipped"
result.Message = "new bead already exists (use --force to re-migrate)"
return result
}
}

if dryRun {
result.Status = "would migrate"
result.Message = fmt.Sprintf("would copy from %s", oldIssue.ID)
return result
}

// Create new role bead in town beads
// Role beads are simple - just copy the description
_, err = targetBd.CreateWithID(newID, beads.CreateOptions{
Title: fmt.Sprintf("Role: %s", role),
Type: "role",
Description: oldIssue.Title, // Use old title as description
})
if err != nil {
result.Status = "error"
result.Message = fmt.Sprintf("failed to create: %v", err)
return result
}

// Add migration label to old bead
migrationLabel := fmt.Sprintf("migrated-to:%s", newID)
if err := sourceBd.Update(oldID, beads.UpdateOptions{AddLabels: []string{migrationLabel}}); err != nil {
// Non-fatal
result.Message = fmt.Sprintf("created but couldn't add migration label: %v", err)
}

result.Status = "migrated"
result.Message = "successfully migrated"
return result
}

func getMigrationStatusIcon(status string) string {
switch status {
case "migrated", "would migrate":
return " ✓"
case "skipped":
return " ⊘"
case "error":
return " ✗"
default:
return " ?"
}
}

func printMigrationResult(r migrationResult) {
fmt.Printf("%s %s → %s: %s\n", getMigrationStatusIcon(r.Status), r.OldID, r.NewID, r.Message)
}

func printMigrationSummary(results []migrationResult, dryRun bool) {
var migrated, skipped, errors int
for _, r := range results {
|
||||
switch r.Status {
|
||||
case "migrated", "would migrate":
|
||||
migrated++
|
||||
case "skipped":
|
||||
skipped++
|
||||
case "error":
|
||||
errors++
|
||||
}
|
||||
}
|
||||
|
||||
if dryRun {
|
||||
fmt.Printf("Summary (dry-run): %d would migrate, %d skipped, %d errors\n", migrated, skipped, errors)
|
||||
if migrated > 0 {
|
||||
fmt.Println("\nRun with --execute to apply these changes.")
|
||||
}
|
||||
} else {
|
||||
fmt.Printf("Summary: %d migrated, %d skipped, %d errors\n", migrated, skipped, errors)
|
||||
}
|
||||
}
|
||||
87 internal/cmd/migrate_agents_test.go Normal file
@@ -0,0 +1,87 @@
package cmd

import (
	"testing"

	"github.com/steveyegge/gastown/internal/beads"
)

func TestMigrationResultStatus(t *testing.T) {
	tests := []struct {
		name     string
		result   migrationResult
		wantIcon string
	}{
		{
			name: "migrated shows checkmark",
			result: migrationResult{
				OldID:   "gt-mayor",
				NewID:   "hq-mayor",
				Status:  "migrated",
				Message: "successfully migrated",
			},
			wantIcon: " ✓",
		},
		{
			name: "would migrate shows checkmark",
			result: migrationResult{
				OldID:   "gt-mayor",
				NewID:   "hq-mayor",
				Status:  "would migrate",
				Message: "would copy state from gt-mayor",
			},
			wantIcon: " ✓",
		},
		{
			name: "skipped shows empty circle",
			result: migrationResult{
				OldID:   "gt-mayor",
				NewID:   "hq-mayor",
				Status:  "skipped",
				Message: "already exists",
			},
			wantIcon: " ⊘",
		},
		{
			name: "error shows X",
			result: migrationResult{
				OldID:   "gt-mayor",
				NewID:   "hq-mayor",
				Status:  "error",
				Message: "failed to create",
			},
			wantIcon: " ✗",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			icon := getMigrationStatusIcon(tt.result.Status)
			if icon != tt.wantIcon {
				t.Errorf("getMigrationStatusIcon(%q) = %q, want %q", tt.result.Status, icon, tt.wantIcon)
			}
		})
	}
}

func TestTownBeadIDHelpers(t *testing.T) {
	tests := []struct {
		name string
		got  string
		want string
	}{
		{"MayorBeadIDTown", beads.MayorBeadIDTown(), "hq-mayor"},
		{"DeaconBeadIDTown", beads.DeaconBeadIDTown(), "hq-deacon"},
		{"DogBeadIDTown", beads.DogBeadIDTown("fido"), "hq-dog-fido"},
		{"RoleBeadIDTown mayor", beads.RoleBeadIDTown("mayor"), "hq-mayor-role"},
		{"RoleBeadIDTown witness", beads.RoleBeadIDTown("witness"), "hq-witness-role"},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if tt.got != tt.want {
				t.Errorf("%s = %q, want %q", tt.name, tt.got, tt.want)
			}
		})
	}
}
@@ -160,14 +160,7 @@ func runMoleculeAwaitSignal(cmd *cobra.Command, args []string) error {
			result.IdleCycles = newIdleCycles
		}
	} else if result.Reason == "signal" && awaitSignalAgentBead != "" {
		// On signal, update last_activity to prove agent is alive
		if err := updateAgentHeartbeat(awaitSignalAgentBead, beadsDir); err != nil {
			if !awaitSignalQuiet {
				fmt.Printf("%s Failed to update agent heartbeat: %v\n",
					style.Dim.Render("⚠"), err)
			}
		}
		// Report current idle cycles (caller should reset)
		// On signal, report current idle cycles (caller should reset)
		result.IdleCycles = idleCycles
	}

@@ -326,14 +319,6 @@ func parseIntSimple(s string) (int, error) {
	return n, nil
}

// updateAgentHeartbeat updates the last_activity timestamp on an agent bead.
// This proves the agent is alive and processing signals.
func updateAgentHeartbeat(agentBead, beadsDir string) error {
	cmd := exec.Command("bd", "agent", "heartbeat", agentBead)
	cmd.Env = append(os.Environ(), "BEADS_DIR="+beadsDir)
	return cmd.Run()
}

// setAgentIdleCycles sets the idle:N label on an agent bead.
// Uses read-modify-write pattern to update only the idle label.
func setAgentIdleCycles(agentBead, beadsDir string, cycles int) error {
@@ -1,476 +0,0 @@
package cmd

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"testing"
)

// TestSlingFormulaOnBeadHooksBaseBead verifies that when using
// "gt sling <formula> --on <bead>", the BASE bead is hooked (not the wisp).
//
// Current bug: The code hooks the wisp (compound root) instead of the base bead.
// This causes lifecycle issues:
//   - Base bead stays open after wisp completes
//   - gt done closes wisp, not the actual work item
//   - Orphaned base beads accumulate
//
// Expected behavior: Hook the base bead, store attached_molecule pointing to wisp.
// gt hook/gt prime can follow attached_molecule to find the workflow steps.
func TestSlingFormulaOnBeadHooksBaseBead(t *testing.T) {
	townRoot := t.TempDir()

	// Minimal workspace marker
	if err := os.MkdirAll(filepath.Join(townRoot, "mayor", "rig"), 0755); err != nil {
		t.Fatalf("mkdir mayor/rig: %v", err)
	}

	// Create routes
	if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
		t.Fatalf("mkdir .beads: %v", err)
	}
	rigDir := filepath.Join(townRoot, "gastown", "mayor", "rig")
	if err := os.MkdirAll(rigDir, 0755); err != nil {
		t.Fatalf("mkdir rigDir: %v", err)
	}
	routes := strings.Join([]string{
		`{"prefix":"gt-","path":"gastown/mayor/rig"}`,
		`{"prefix":"hq-","path":"."}`,
		"",
	}, "\n")
	if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(routes), 0644); err != nil {
		t.Fatalf("write routes.jsonl: %v", err)
	}

	// Stub bd to track which bead gets hooked
	binDir := filepath.Join(townRoot, "bin")
	if err := os.MkdirAll(binDir, 0755); err != nil {
		t.Fatalf("mkdir binDir: %v", err)
	}
	logPath := filepath.Join(townRoot, "bd.log")
	bdScript := `#!/bin/sh
set -e
echo "$*" >> "${BD_LOG}"
if [ "$1" = "--no-daemon" ]; then
	shift
fi
if [ "$1" = "--allow-stale" ]; then
	shift
fi
cmd="$1"
shift || true
case "$cmd" in
	show)
		# Return the base bead info
		echo '[{"id":"gt-abc123","title":"Bug to fix","status":"open","assignee":"","description":""}]'
		;;
	formula)
		echo '{"name":"mol-polecat-work"}'
		;;
	cook)
		exit 0
		;;
	mol)
		sub="$1"
		shift || true
		case "$sub" in
			wisp)
				echo '{"new_epic_id":"gt-wisp-xyz"}'
				;;
			bond)
				echo '{"root_id":"gt-wisp-xyz"}'
				;;
		esac
		;;
	update)
		# Just succeed
		exit 0
		;;
esac
exit 0
`
	bdPath := filepath.Join(binDir, "bd")
	if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
		t.Fatalf("write bd stub: %v", err)
	}

	t.Setenv("BD_LOG", logPath)
	t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
	t.Setenv(EnvGTRole, "mayor")
	t.Setenv("GT_POLECAT", "")
	t.Setenv("GT_CREW", "")
	t.Setenv("TMUX_PANE", "")
	t.Setenv("GT_TEST_NO_NUDGE", "1")

	cwd, err := os.Getwd()
	if err != nil {
		t.Fatalf("getwd: %v", err)
	}
	t.Cleanup(func() { _ = os.Chdir(cwd) })
	if err := os.Chdir(filepath.Join(townRoot, "mayor", "rig")); err != nil {
		t.Fatalf("chdir: %v", err)
	}

	// Save and restore global flag state
	prevOn := slingOnTarget
	prevVars := slingVars
	prevDryRun := slingDryRun
	prevNoConvoy := slingNoConvoy
	t.Cleanup(func() {
		slingOnTarget = prevOn
		slingVars = prevVars
		slingDryRun = prevDryRun
		slingNoConvoy = prevNoConvoy
	})

	slingDryRun = false
	slingNoConvoy = true
	slingVars = nil
	slingOnTarget = "gt-abc123" // The base bead

	if err := runSling(nil, []string{"mol-polecat-work"}); err != nil {
		t.Fatalf("runSling: %v", err)
	}

	logBytes, err := os.ReadFile(logPath)
	if err != nil {
		t.Fatalf("read bd log: %v", err)
	}

	// Find the update command that sets status=hooked
	// Expected: should hook gt-abc123 (base bead)
	// Current bug: hooks gt-wisp-xyz (wisp)
	logLines := strings.Split(string(logBytes), "\n")
	var hookedBeadID string
	for _, line := range logLines {
		if strings.Contains(line, "update") && strings.Contains(line, "--status=hooked") {
			// Extract the bead ID being hooked
			// Format: "update <beadID> --status=hooked ..."
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "update" && i+1 < len(parts) {
					hookedBeadID = parts[i+1]
					break
				}
			}
			break
		}
	}

	if hookedBeadID == "" {
		t.Fatalf("no hooked bead found in log:\n%s", string(logBytes))
	}

	// The BASE bead (gt-abc123) should be hooked, not the wisp (gt-wisp-xyz)
	if hookedBeadID != "gt-abc123" {
		t.Errorf("wrong bead hooked: got %q, want %q (base bead)\n"+
			"Current behavior hooks the wisp instead of the base bead.\n"+
			"This causes orphaned base beads when gt done closes only the wisp.\n"+
			"Log:\n%s", hookedBeadID, "gt-abc123", string(logBytes))
	}
}

// TestSlingFormulaOnBeadSetsAttachedMoleculeInBaseBead verifies that when using
// "gt sling <formula> --on <bead>", the attached_molecule field is set in the
// BASE bead's description (pointing to the wisp), not in the wisp itself.
//
// Current bug: attached_molecule is stored as a self-reference in the wisp.
// This is semantically meaningless (wisp points to itself) and breaks
// compound resolution from the base bead.
//
// Expected behavior: Store attached_molecule in the base bead pointing to wisp.
// This enables:
//   - Compound resolution: base bead -> attached_molecule -> wisp
//   - gt hook/gt prime: read base bead, follow attached_molecule to show wisp steps
func TestSlingFormulaOnBeadSetsAttachedMoleculeInBaseBead(t *testing.T) {
	townRoot := t.TempDir()

	// Minimal workspace marker
	if err := os.MkdirAll(filepath.Join(townRoot, "mayor", "rig"), 0755); err != nil {
		t.Fatalf("mkdir mayor/rig: %v", err)
	}

	// Create routes
	if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
		t.Fatalf("mkdir .beads: %v", err)
	}
	rigDir := filepath.Join(townRoot, "gastown", "mayor", "rig")
	if err := os.MkdirAll(rigDir, 0755); err != nil {
		t.Fatalf("mkdir rigDir: %v", err)
	}
	routes := strings.Join([]string{
		`{"prefix":"gt-","path":"gastown/mayor/rig"}`,
		`{"prefix":"hq-","path":"."}`,
		"",
	}, "\n")
	if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(routes), 0644); err != nil {
		t.Fatalf("write routes.jsonl: %v", err)
	}

	// Stub bd to track which bead gets attached_molecule set
	binDir := filepath.Join(townRoot, "bin")
	if err := os.MkdirAll(binDir, 0755); err != nil {
		t.Fatalf("mkdir binDir: %v", err)
	}
	logPath := filepath.Join(townRoot, "bd.log")
	bdScript := `#!/bin/sh
set -e
echo "$*" >> "${BD_LOG}"
if [ "$1" = "--no-daemon" ]; then
	shift
fi
if [ "$1" = "--allow-stale" ]; then
	shift
fi
cmd="$1"
shift || true
case "$cmd" in
	show)
		# Return bead info without attached_molecule initially
		echo '[{"id":"gt-abc123","title":"Bug to fix","status":"open","assignee":"","description":""}]'
		;;
	formula)
		echo '{"name":"mol-polecat-work"}'
		;;
	cook)
		exit 0
		;;
	mol)
		sub="$1"
		shift || true
		case "$sub" in
			wisp)
				echo '{"new_epic_id":"gt-wisp-xyz"}'
				;;
			bond)
				echo '{"root_id":"gt-wisp-xyz"}'
				;;
		esac
		;;
	update)
		# Just succeed
		exit 0
		;;
esac
exit 0
`
	bdPath := filepath.Join(binDir, "bd")
	if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
		t.Fatalf("write bd stub: %v", err)
	}

	t.Setenv("BD_LOG", logPath)
	t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
	t.Setenv(EnvGTRole, "mayor")
	t.Setenv("GT_POLECAT", "")
	t.Setenv("GT_CREW", "")
	t.Setenv("TMUX_PANE", "")
	t.Setenv("GT_TEST_NO_NUDGE", "1")

	cwd, err := os.Getwd()
	if err != nil {
		t.Fatalf("getwd: %v", err)
	}
	t.Cleanup(func() { _ = os.Chdir(cwd) })
	if err := os.Chdir(filepath.Join(townRoot, "mayor", "rig")); err != nil {
		t.Fatalf("chdir: %v", err)
	}

	// Save and restore global flag state
	prevOn := slingOnTarget
	prevVars := slingVars
	prevDryRun := slingDryRun
	prevNoConvoy := slingNoConvoy
	t.Cleanup(func() {
		slingOnTarget = prevOn
		slingVars = prevVars
		slingDryRun = prevDryRun
		slingNoConvoy = prevNoConvoy
	})

	slingDryRun = false
	slingNoConvoy = true
	slingVars = nil
	slingOnTarget = "gt-abc123" // The base bead

	if err := runSling(nil, []string{"mol-polecat-work"}); err != nil {
		t.Fatalf("runSling: %v", err)
	}

	logBytes, err := os.ReadFile(logPath)
	if err != nil {
		t.Fatalf("read bd log: %v", err)
	}

	// Find update commands that set attached_molecule
	// Expected: "update gt-abc123 --description=...attached_molecule: gt-wisp-xyz..."
	// Current bug: "update gt-wisp-xyz --description=...attached_molecule: gt-wisp-xyz..."
	logLines := strings.Split(string(logBytes), "\n")
	var attachedMoleculeTarget string
	for _, line := range logLines {
		if strings.Contains(line, "update") && strings.Contains(line, "attached_molecule") {
			// Extract the bead ID being updated
			parts := strings.Fields(line)
			for i, part := range parts {
				if part == "update" && i+1 < len(parts) {
					attachedMoleculeTarget = parts[i+1]
					break
				}
			}
			break
		}
	}

	if attachedMoleculeTarget == "" {
		t.Fatalf("no attached_molecule update found in log:\n%s", string(logBytes))
	}

	// attached_molecule should be set on the BASE bead, not the wisp
	if attachedMoleculeTarget != "gt-abc123" {
		t.Errorf("attached_molecule set on wrong bead: got %q, want %q (base bead)\n"+
			"Current behavior stores attached_molecule in the wisp as a self-reference.\n"+
			"This breaks compound resolution (base bead has no pointer to wisp).\n"+
			"Log:\n%s", attachedMoleculeTarget, "gt-abc123", string(logBytes))
	}
}

// TestDoneClosesAttachedMolecule verifies that gt done closes both the hooked
// bead AND its attached molecule (wisp).
//
// Current bug: gt done only closes the hooked bead. If base bead is hooked
// with attached_molecule pointing to wisp, the wisp becomes orphaned.
//
// Expected behavior: gt done should:
//  1. Check for attached_molecule in hooked bead
//  2. Close the attached molecule (wisp) first
//  3. Close the hooked bead (base bead)
//
// This ensures no orphaned wisps remain after work completes.
func TestDoneClosesAttachedMolecule(t *testing.T) {
	townRoot := t.TempDir()

	// Create rig structure - use simple rig name that matches routes lookup
	rigPath := filepath.Join(townRoot, "gastown")
	if err := os.MkdirAll(rigPath, 0755); err != nil {
		t.Fatalf("mkdir rig: %v", err)
	}
	if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
		t.Fatalf("mkdir .beads: %v", err)
	}

	// Create routes - path first part must match GT_RIG for prefix lookup
	routes := strings.Join([]string{
		`{"prefix":"gt-","path":"gastown"}`,
		"",
	}, "\n")
	if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(routes), 0644); err != nil {
		t.Fatalf("write routes.jsonl: %v", err)
	}

	// Stub bd to track close calls
	binDir := filepath.Join(townRoot, "bin")
	if err := os.MkdirAll(binDir, 0755); err != nil {
		t.Fatalf("mkdir binDir: %v", err)
	}
	closesPath := filepath.Join(townRoot, "closes.log")

	// The stub simulates:
	//   - Agent bead gt-agent-nux with hook_bead = gt-abc123 (base bead)
	//   - Base bead gt-abc123 with attached_molecule: gt-wisp-xyz, status=hooked
	//   - Wisp gt-wisp-xyz (the attached molecule)
	bdScript := fmt.Sprintf(`#!/bin/sh
echo "$*" >> "%s/bd.log"
# Strip --no-daemon and --allow-stale
while [ "$1" = "--no-daemon" ] || [ "$1" = "--allow-stale" ]; do
	shift
done
cmd="$1"
shift || true
case "$cmd" in
	show)
		beadID="$1"
		case "$beadID" in
			gt-gastown-polecat-nux)
				echo '[{"id":"gt-gastown-polecat-nux","title":"Polecat nux","status":"open","hook_bead":"gt-abc123","agent_state":"working"}]'
				;;
			gt-abc123)
				echo '[{"id":"gt-abc123","title":"Bug to fix","status":"hooked","description":"attached_molecule: gt-wisp-xyz"}]'
				;;
			gt-wisp-xyz)
				echo '[{"id":"gt-wisp-xyz","title":"mol-polecat-work","status":"open","ephemeral":true}]'
				;;
			*)
				echo '[]'
				;;
		esac
		;;
	close)
		echo "$1" >> "%s"
		;;
	agent|update|slot)
		exit 0
		;;
esac
exit 0
`, townRoot, closesPath)

	bdPath := filepath.Join(binDir, "bd")
	if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
		t.Fatalf("write bd stub: %v", err)
	}

	t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
	t.Setenv("GT_ROLE", "polecat")
	t.Setenv("GT_RIG", "gastown")
	t.Setenv("GT_POLECAT", "nux")
	t.Setenv("GT_CREW", "")
	t.Setenv("TMUX_PANE", "")

	cwd, err := os.Getwd()
	if err != nil {
		t.Fatalf("getwd: %v", err)
	}
	t.Cleanup(func() { _ = os.Chdir(cwd) })
	if err := os.Chdir(rigPath); err != nil {
		t.Fatalf("chdir: %v", err)
	}

	// Call the unexported function directly (same package)
	// updateAgentStateOnDone(cwd, townRoot, exitType, issueID)
	updateAgentStateOnDone(rigPath, townRoot, ExitCompleted, "")

	// Read the close log to see what got closed
	closesBytes, err := os.ReadFile(closesPath)
	if err != nil {
		// No closes happened at all - that's a failure
		t.Fatalf("no beads were closed (closes.log doesn't exist)")
	}
	closes := string(closesBytes)
	closeLines := strings.Split(strings.TrimSpace(closes), "\n")

	// Check that attached molecule gt-wisp-xyz was closed
	foundWisp := false
	foundBase := false
	for _, line := range closeLines {
		if strings.Contains(line, "gt-wisp-xyz") {
			foundWisp = true
		}
		if strings.Contains(line, "gt-abc123") {
			foundBase = true
		}
	}

	if !foundWisp {
		t.Errorf("attached molecule gt-wisp-xyz was NOT closed\n"+
			"gt done should close the attached_molecule before closing the hooked bead.\n"+
			"This leaves orphaned wisps after work completes.\n"+
			"Beads closed: %v", closeLines)
	}

	if !foundBase {
		t.Errorf("hooked bead gt-abc123 was NOT closed\n"+
			"Beads closed: %v", closeLines)
	}
}
@@ -322,12 +322,6 @@ func handleStepContinue(cwd, townRoot, _ string, nextStep *beads.Issue, dryRun b
|
||||
|
||||
t := tmux.NewTmux()
|
||||
|
||||
// Kill all processes in the pane before respawning to prevent process leaks
|
||||
if err := t.KillPaneProcesses(pane); err != nil {
|
||||
// Non-fatal but log the warning
|
||||
style.PrintWarning("could not kill pane processes: %v", err)
|
||||
}
|
||||
|
||||
// Clear history before respawn
|
||||
if err := t.ClearHistory(pane); err != nil {
|
||||
// Non-fatal
|
||||
|
||||
@@ -48,9 +48,9 @@ func runMQList(cmd *cobra.Command, args []string) error {
|
||||
if err != nil {
|
||||
return fmt.Errorf("querying ready MRs: %w", err)
|
||||
}
|
||||
// Filter to only merge-request label (issue_type field is deprecated)
|
||||
// Filter to only merge-request type
|
||||
for _, issue := range allReady {
|
||||
if beads.HasLabel(issue, "gt:merge-request") {
|
||||
if issue.Type == "merge-request" {
|
||||
issues = append(issues, issue)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -740,64 +740,3 @@ func TestPolecatCleanupTimeoutConstant(t *testing.T) {
|
||||
t.Errorf("expectedMaxCleanupWait = %v, want 5m", expectedMaxCleanupWait)
|
||||
}
|
||||
}
|
||||
|
||||
// TestMRFilteringByLabel verifies that MRs are identified by their gt:merge-request
|
||||
// label rather than the deprecated issue_type field. This is the fix for #816 where
|
||||
// MRs created by `gt done` have issue_type='task' but correct gt:merge-request label.
|
||||
func TestMRFilteringByLabel(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
issue *beads.Issue
|
||||
wantIsMR bool
|
||||
}{
|
||||
{
|
||||
name: "MR with correct label and wrong type (bug #816 scenario)",
|
||||
issue: &beads.Issue{
|
||||
ID: "mr-1",
|
||||
Title: "Merge: test-branch",
|
||||
Type: "task", // Wrong type (default from bd create)
|
||||
Labels: []string{"gt:merge-request"}, // Correct label
|
||||
},
|
||||
wantIsMR: true,
|
||||
},
|
||||
{
|
||||
name: "MR with correct label and correct type",
|
||||
issue: &beads.Issue{
|
||||
ID: "mr-2",
|
||||
Title: "Merge: another-branch",
|
||||
Type: "merge-request",
|
||||
Labels: []string{"gt:merge-request"},
|
||||
},
|
||||
wantIsMR: true,
|
||||
},
|
||||
{
|
||||
name: "Task without MR label",
|
||||
issue: &beads.Issue{
|
||||
ID: "task-1",
|
||||
Title: "Regular task",
|
||||
Type: "task",
|
||||
Labels: []string{"other-label"},
|
||||
},
|
||||
wantIsMR: false,
|
||||
},
|
||||
{
|
||||
name: "Issue with no labels",
|
||||
issue: &beads.Issue{
|
||||
ID: "issue-1",
|
||||
Title: "No labels",
|
||||
Type: "task",
|
||||
},
|
||||
wantIsMR: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
got := beads.HasLabel(tt.issue, "gt:merge-request")
|
||||
if got != tt.wantIsMR {
|
||||
t.Errorf("HasLabel(%q, \"gt:merge-request\") = %v, want %v",
|
||||
tt.issue.ID, got, tt.wantIsMR)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -13,7 +13,6 @@ import (
|
||||
|
||||
"github.com/spf13/cobra"
|
||||
"github.com/steveyegge/gastown/internal/style"
|
||||
"github.com/steveyegge/gastown/internal/util"
|
||||
"github.com/steveyegge/gastown/internal/workspace"
|
||||
)
|
||||
|
||||
@@ -49,8 +48,7 @@ var (
|
||||
orphansKillForce bool
|
||||
|
||||
// Process orphan flags
|
||||
orphansProcsForce bool
|
||||
orphansProcsAggressive bool
|
||||
orphansProcsForce bool
|
||||
)
|
||||
|
||||
// Commit orphan kill command
|
||||
@@ -91,16 +89,10 @@ var orphansProcsCmd = &cobra.Command{
|
||||
These are processes that survived session termination and are now
|
||||
parented to init/launchd. They consume resources and should be killed.
|
||||
|
||||
Use --aggressive to detect ALL orphaned Claude processes by cross-referencing
|
||||
against active tmux sessions. Any Claude process NOT in a gt-* or hq-* session
|
||||
is considered an orphan. This catches processes that have been reparented to
|
||||
something other than init (PPID != 1).
|
||||
|
||||
Examples:
|
||||
gt orphans procs # List orphaned Claude processes (PPID=1 only)
|
||||
gt orphans procs list # Same as above
|
||||
gt orphans procs --aggressive # List ALL orphaned processes (tmux verification)
|
||||
gt orphans procs kill # Kill orphaned processes`,
|
||||
gt orphans procs # List orphaned Claude processes
|
||||
gt orphans procs list # Same as above
|
||||
gt orphans procs kill # Kill orphaned processes`,
|
||||
RunE: runOrphansListProcesses, // Default to list
|
||||
}
|
||||
|
||||
@@ -112,17 +104,12 @@ var orphansProcsListCmd = &cobra.Command{
|
||||
These are processes that survived session termination and are now
|
||||
parented to init/launchd. They consume resources and should be killed.
|
||||
|
||||
Use --aggressive to detect ALL orphaned Claude processes by cross-referencing
|
||||
against active tmux sessions. Any Claude process NOT in a gt-* or hq-* session
|
||||
is considered an orphan.
|
||||
|
||||
Excludes:
|
||||
- tmux server processes
|
||||
- Claude.app desktop application processes
|
||||
|
||||
Examples:
|
||||
gt orphans procs list # Show orphans with PPID=1
|
||||
gt orphans procs list --aggressive # Show ALL orphans (tmux verification)`,
|
||||
gt orphans procs list # Show all orphan Claude processes`,
|
||||
RunE: runOrphansListProcesses,
|
||||
}
|
||||
|
||||
@@ -133,12 +120,10 @@ var orphansProcsKillCmd = &cobra.Command{
|
||||
|
||||
Without flags, prompts for confirmation before killing.
|
||||
Use -f/--force to kill without confirmation.
|
||||
Use --aggressive to kill ALL orphaned processes (not just PPID=1).
|
||||
|
||||
Examples:
|
||||
gt orphans procs kill # Kill with confirmation
|
||||
gt orphans procs kill -f # Force kill without confirmation
|
||||
gt orphans procs kill --aggressive # Kill ALL orphans (tmux verification)`,
|
||||
gt orphans procs kill # Kill with confirmation
|
||||
gt orphans procs kill -f # Force kill without confirmation`,
|
||||
RunE: runOrphansKillProcesses,
|
||||
}
|
||||
|
||||
@@ -155,9 +140,6 @@ func init() {
|
||||
// Process orphan kill command flags
|
||||
orphansProcsKillCmd.Flags().BoolVarP(&orphansProcsForce, "force", "f", false, "Kill without confirmation")
|
||||
|
||||
// Aggressive flag for all procs commands (persistent so it applies to subcommands)
|
||||
orphansProcsCmd.PersistentFlags().BoolVar(&orphansProcsAggressive, "aggressive", false, "Use tmux session verification to find ALL orphans (not just PPID=1)")
|
||||
|
||||
// Wire up subcommands
|
||||
orphansProcsCmd.AddCommand(orphansProcsListCmd)
|
||||
orphansProcsCmd.AddCommand(orphansProcsKillCmd)
|
||||
@@ -467,12 +449,6 @@ func runOrphansKill(cmd *cobra.Command, args []string) error {
|
||||
// Kill orphaned processes
|
||||
if len(procOrphans) > 0 {
|
||||
fmt.Printf("\nKilling orphaned processes...\n")
|
||||
// Use SIGKILL with --force for immediate termination, SIGTERM otherwise
|
||||
signal := syscall.SIGTERM
|
||||
if orphansKillForce {
|
||||
signal = syscall.SIGKILL
|
||||
}
|
||||
|
||||
var killed, failed int
|
||||
for _, o := range procOrphans {
|
||||
proc, err := os.FindProcess(o.PID)
|
||||
@@ -482,7 +458,7 @@ func runOrphansKill(cmd *cobra.Command, args []string) error {
|
||||
continue
|
||||
}
|
	if err := proc.Signal(signal); err != nil {
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		if err == os.ErrProcessDone {
			fmt.Printf(" %s PID %d: already terminated\n", style.Dim.Render("○"), o.PID)
			continue

@@ -603,22 +579,17 @@ func isExcludedProcess(args string) bool {

// runOrphansListProcesses lists orphaned Claude processes
func runOrphansListProcesses(cmd *cobra.Command, args []string) error {
	if orphansProcsAggressive {
		return runOrphansListProcessesAggressive()
	}

	orphans, err := findOrphanProcesses()
	if err != nil {
		return fmt.Errorf("finding orphan processes: %w", err)
	}

	if len(orphans) == 0 {
		fmt.Printf("%s No orphaned Claude processes found (PPID=1)\n", style.Bold.Render("✓"))
		fmt.Printf("%s Use --aggressive to find orphans via tmux session verification\n", style.Dim.Render("Hint:"))
		fmt.Printf("%s No orphaned Claude processes found\n", style.Bold.Render("✓"))
		return nil
	}

	fmt.Printf("%s Found %d orphaned Claude process(es) with PPID=1:\n\n", style.Warning.Render("⚠"), len(orphans))
	fmt.Printf("%s Found %d orphaned Claude process(es):\n\n", style.Warning.Render("⚠"), len(orphans))

	for _, o := range orphans {
		// Truncate args for display

@@ -630,72 +601,24 @@ func runOrphansListProcesses(cmd *cobra.Command, args []string) error {

	}

	fmt.Printf("\n%s\n", style.Dim.Render("Use 'gt orphans procs kill' to terminate these processes"))
	fmt.Printf("%s\n", style.Dim.Render("Use --aggressive to find more orphans via tmux session verification"))

	return nil
}

// runOrphansListProcessesAggressive lists orphans using tmux session verification.
// This finds ALL Claude processes not in any gt-* or hq-* tmux session.
func runOrphansListProcessesAggressive() error {
	zombies, err := util.FindZombieClaudeProcesses()
	if err != nil {
		return fmt.Errorf("finding zombie processes: %w", err)
	}

	if len(zombies) == 0 {
		fmt.Printf("%s No orphaned Claude processes found (aggressive mode)\n", style.Bold.Render("✓"))
		return nil
	}

	fmt.Printf("%s Found %d orphaned Claude process(es) not in any tmux session:\n\n", style.Warning.Render("⚠"), len(zombies))

	for _, z := range zombies {
		ageStr := formatProcessAge(z.Age)
		fmt.Printf(" %s %s (age: %s, tty: %s)\n",
			style.Bold.Render(fmt.Sprintf("PID %d", z.PID)),
			z.Cmd,
			style.Dim.Render(ageStr),
			z.TTY)
	}

	fmt.Printf("\n%s\n", style.Dim.Render("Use 'gt orphans procs kill --aggressive' to terminate these processes"))

	return nil
}

// formatProcessAge formats seconds into a human-readable age string
func formatProcessAge(seconds int) string {
	if seconds < 60 {
		return fmt.Sprintf("%ds", seconds)
	}
	if seconds < 3600 {
		return fmt.Sprintf("%dm%ds", seconds/60, seconds%60)
	}
	hours := seconds / 3600
	mins := (seconds % 3600) / 60
	return fmt.Sprintf("%dh%dm", hours, mins)
}
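The age formatter above has no dependencies beyond `fmt`, so its behavior is easy to check in isolation. A standalone copy for illustration (the function body is verbatim from the diff; only the `main` driver is added):

```go
package main

import "fmt"

// formatProcessAge formats seconds into a human-readable age string
// (copied verbatim from the diff above for standalone illustration).
func formatProcessAge(seconds int) string {
	if seconds < 60 {
		return fmt.Sprintf("%ds", seconds)
	}
	if seconds < 3600 {
		return fmt.Sprintf("%dm%ds", seconds/60, seconds%60)
	}
	hours := seconds / 3600
	mins := (seconds % 3600) / 60
	return fmt.Sprintf("%dh%dm", hours, mins)
}

func main() {
	fmt.Println(formatProcessAge(45))   // 45s
	fmt.Println(formatProcessAge(125))  // 2m5s
	fmt.Println(formatProcessAge(3725)) // 1h2m
}
```

Note the hour branch drops leftover seconds, so ages above an hour are reported at minute granularity.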

// runOrphansKillProcesses kills orphaned Claude processes
func runOrphansKillProcesses(cmd *cobra.Command, args []string) error {
	if orphansProcsAggressive {
		return runOrphansKillProcessesAggressive()
	}

	orphans, err := findOrphanProcesses()
	if err != nil {
		return fmt.Errorf("finding orphan processes: %w", err)
	}

	if len(orphans) == 0 {
		fmt.Printf("%s No orphaned Claude processes found (PPID=1)\n", style.Bold.Render("✓"))
		fmt.Printf("%s Use --aggressive to find orphans via tmux session verification\n", style.Dim.Render("Hint:"))
		fmt.Printf("%s No orphaned Claude processes found\n", style.Bold.Render("✓"))
		return nil
	}

	// Show what we're about to kill
	fmt.Printf("%s Found %d orphaned Claude process(es) with PPID=1:\n\n", style.Warning.Render("⚠"), len(orphans))
	fmt.Printf("%s Found %d orphaned Claude process(es):\n\n", style.Warning.Render("⚠"), len(orphans))
	for _, o := range orphans {
		displayArgs := o.Args
		if len(displayArgs) > 80 {

@@ -718,12 +641,6 @@ func runOrphansKillProcesses(cmd *cobra.Command, args []string) error {

	}

	// Kill the processes
	// Use SIGKILL with --force for immediate termination, SIGTERM otherwise
	signal := syscall.SIGTERM
	if orphansProcsForce {
		signal = syscall.SIGKILL
	}

	var killed, failed int
	for _, o := range orphans {
		proc, err := os.FindProcess(o.PID)

@@ -733,7 +650,8 @@ func runOrphansKillProcesses(cmd *cobra.Command, args []string) error {

			continue
		}

		if err := proc.Signal(signal); err != nil {
		// Send SIGTERM first for graceful shutdown
		if err := proc.Signal(syscall.SIGTERM); err != nil {
			// Process may have already exited
			if err == os.ErrProcessDone {
				fmt.Printf(" %s PID %d: already terminated\n", style.Dim.Render("○"), o.PID)

@@ -756,80 +674,3 @@ func runOrphansKillProcesses(cmd *cobra.Command, args []string) error {

	return nil
}

// runOrphansKillProcessesAggressive kills orphans using tmux session verification.
// This kills ALL Claude processes not in any gt-* or hq-* tmux session.
func runOrphansKillProcessesAggressive() error {
	zombies, err := util.FindZombieClaudeProcesses()
	if err != nil {
		return fmt.Errorf("finding zombie processes: %w", err)
	}

	if len(zombies) == 0 {
		fmt.Printf("%s No orphaned Claude processes found (aggressive mode)\n", style.Bold.Render("✓"))
		return nil
	}

	// Show what we're about to kill
	fmt.Printf("%s Found %d orphaned Claude process(es) not in any tmux session:\n\n", style.Warning.Render("⚠"), len(zombies))
	for _, z := range zombies {
		ageStr := formatProcessAge(z.Age)
		fmt.Printf(" %s %s (age: %s, tty: %s)\n",
			style.Bold.Render(fmt.Sprintf("PID %d", z.PID)),
			z.Cmd,
			style.Dim.Render(ageStr),
			z.TTY)
	}
	fmt.Println()

	// Confirm unless --force
	if !orphansProcsForce {
		fmt.Printf("Kill these %d process(es)? [y/N] ", len(zombies))
		var response string
		_, _ = fmt.Scanln(&response)
		response = strings.ToLower(strings.TrimSpace(response))
		if response != "y" && response != "yes" {
			fmt.Println("Aborted")
			return nil
		}
	}

	// Kill the processes
	// Use SIGKILL with --force for immediate termination, SIGTERM otherwise
	signal := syscall.SIGTERM
	if orphansProcsForce {
		signal = syscall.SIGKILL
	}

	var killed, failed int
	for _, z := range zombies {
		proc, err := os.FindProcess(z.PID)
		if err != nil {
			fmt.Printf(" %s PID %d: %v\n", style.Error.Render("✗"), z.PID, err)
			failed++
			continue
		}

		if err := proc.Signal(signal); err != nil {
			// Process may have already exited
			if err == os.ErrProcessDone {
				fmt.Printf(" %s PID %d: already terminated\n", style.Dim.Render("○"), z.PID)
				continue
			}
			fmt.Printf(" %s PID %d: %v\n", style.Error.Render("✗"), z.PID, err)
			failed++
			continue
		}

		fmt.Printf(" %s PID %d killed\n", style.Bold.Render("✓"), z.PID)
		killed++
	}

	fmt.Printf("\n%s %d killed", style.Bold.Render("Summary:"), killed)
	if failed > 0 {
		fmt.Printf(", %d failed", failed)
	}
	fmt.Println()

	return nil
}

@@ -103,7 +103,7 @@ func findActivePatrol(cfg PatrolConfig) (patrolID, patrolLine string, found bool

// Returns the patrol ID or an error.
func autoSpawnPatrol(cfg PatrolConfig) (string, error) {
	// Find the proto ID for the patrol molecule
	cmdCatalog := exec.Command("gt", "formula", "list")
	cmdCatalog := exec.Command("bd", "--no-daemon", "mol", "catalog")
	cmdCatalog.Dir = cfg.BeadsDir
	var stdoutCatalog, stderrCatalog bytes.Buffer
	cmdCatalog.Stdout = &stdoutCatalog

@@ -112,20 +112,20 @@ func autoSpawnPatrol(cfg PatrolConfig) (string, error) {

	if err := cmdCatalog.Run(); err != nil {
		errMsg := strings.TrimSpace(stderrCatalog.String())
		if errMsg != "" {
			return "", fmt.Errorf("failed to list formulas: %s", errMsg)
			return "", fmt.Errorf("failed to list molecule catalog: %s", errMsg)
		}
		return "", fmt.Errorf("failed to list formulas: %w", err)
		return "", fmt.Errorf("failed to list molecule catalog: %w", err)
	}

	// Find patrol molecule in formula list
	// Format: "formula-name description"
	// Find patrol molecule in catalog
	var protoID string
	catalogLines := strings.Split(stdoutCatalog.String(), "\n")
	for _, line := range catalogLines {
		if strings.Contains(line, cfg.PatrolMolName) {
			parts := strings.Fields(line)
			if len(parts) > 0 {
				protoID = parts[0]
				// Strip trailing colon from ID (catalog format: "gt-xxx: title")
				protoID = strings.TrimSuffix(parts[0], ":")
				break
			}
		}

@@ -196,7 +196,7 @@ func outputPatrolContext(cfg PatrolConfig) {

		fmt.Printf("⚠ %s\n", err.Error())
	} else {
		fmt.Println(style.Dim.Render(err.Error()))
		fmt.Println(style.Dim.Render(fmt.Sprintf("Run `gt formula list` to troubleshoot.")))
		fmt.Println(style.Dim.Render(fmt.Sprintf("Run `bd mol catalog` to troubleshoot.")))
		return
	}
} else {

@@ -591,13 +591,10 @@ func runPolecatIdentityRename(cmd *cobra.Command, args []string) error {

	// Create new identity bead with inherited fields
	newFields := &beads.AgentFields{
		RoleType:          "polecat",
		Rig:               rigName,
		AgentState:        oldFields.AgentState,
		HookBead:          oldFields.HookBead,
		CleanupStatus:     oldFields.CleanupStatus,
		ActiveMR:          oldFields.ActiveMR,
		NotificationLevel: oldFields.NotificationLevel,
		RoleType:      "polecat",
		Rig:           rigName,
		AgentState:    oldFields.AgentState,
		CleanupStatus: oldFields.CleanupStatus,
	}

	newTitle := fmt.Sprintf("Polecat %s in %s", newName, rigName)

@@ -501,23 +501,6 @@ func checkSlungWork(ctx RoleContext) bool {

	}
	fmt.Println()

	// Check for attached molecule and show execution prompt
	// This was missing for hooked beads (only worked for pinned beads).
	// With formula-on-bead, the base bead is hooked with attached_molecule pointing to wisp.
	attachment := beads.ParseAttachmentFields(hookedBead)
	if attachment != nil && attachment.AttachedMolecule != "" {
		fmt.Printf("%s\n\n", style.Bold.Render("## 🎯 ATTACHED MOLECULE"))
		fmt.Printf("Molecule: %s\n", attachment.AttachedMolecule)
		if attachment.AttachedArgs != "" {
			fmt.Printf("\n%s\n", style.Bold.Render("📋 ARGS (use these to guide execution):"))
			fmt.Printf(" %s\n", attachment.AttachedArgs)
		}
		fmt.Println()

		// Show current step from molecule
		showMoleculeExecutionPrompt(ctx.WorkDir, attachment.AttachedMolecule)
	}

	return true
}


@@ -374,11 +374,6 @@ func TestDetectSessionState(t *testing.T) {

	})

	t.Run("autonomous_state_hooked_bead", func(t *testing.T) {
		// Skip: bd CLI 0.47.2 has a bug where database writes don't commit
		// ("sql: database is closed" during auto-flush). This blocks tests
		// that need to create issues. See internal issue for tracking.
		t.Skip("bd CLI 0.47.2 bug: database writes don't commit")

		// Skip if bd CLI is not available
		if _, err := exec.LookPath("bd"); err != nil {
			t.Skip("bd binary not found in PATH")

@@ -1,7 +1,6 @@

package cmd

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"

@@ -133,10 +132,7 @@ func runReady(cmd *cobra.Command, args []string) error {

			} else {
				// Filter out formula scaffolds (gt-579)
				formulaNames := getFormulaNames(townBeadsPath)
				filtered := filterFormulaScaffolds(issues, formulaNames)
				// Defense-in-depth: also filter wisps that shouldn't appear in ready work
				wispIDs := getWispIDs(townBeadsPath)
				src.Issues = filterWisps(filtered, wispIDs)
				src.Issues = filterFormulaScaffolds(issues, formulaNames)
			}
			sources = append(sources, src)
		}()

@@ -160,10 +156,7 @@ func runReady(cmd *cobra.Command, args []string) error {

			} else {
				// Filter out formula scaffolds (gt-579)
				formulaNames := getFormulaNames(rigBeadsPath)
				filtered := filterFormulaScaffolds(issues, formulaNames)
				// Defense-in-depth: also filter wisps that shouldn't appear in ready work
				wispIDs := getWispIDs(rigBeadsPath)
				src.Issues = filterWisps(filtered, wispIDs)
				src.Issues = filterFormulaScaffolds(issues, formulaNames)
			}
			sources = append(sources, src)
		}(r)

@@ -353,56 +346,3 @@ func filterFormulaScaffolds(issues []*beads.Issue, formulaNames map[string]bool)

	}
	return filtered
}

// getWispIDs reads the issues.jsonl and returns a set of IDs that are wisps.
// Wisps are ephemeral issues (wisp: true flag) that shouldn't appear in ready work.
// This is a defense-in-depth exclusion - bd ready should already filter wisps,
// but we double-check at the display layer to ensure operational work doesn't leak.
func getWispIDs(beadsPath string) map[string]bool {
	beadsDir := beads.ResolveBeadsDir(beadsPath)
	issuesPath := filepath.Join(beadsDir, "issues.jsonl")
	file, err := os.Open(issuesPath)
	if err != nil {
		return nil // No issues file
	}
	defer file.Close()

	wispIDs := make(map[string]bool)
	scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		line := scanner.Text()
		if line == "" {
			continue
		}

		var issue struct {
			ID   string `json:"id"`
			Wisp bool   `json:"wisp"`
		}
		if err := json.Unmarshal([]byte(line), &issue); err != nil {
			continue
		}

		if issue.Wisp {
			wispIDs[issue.ID] = true
		}
	}

	return wispIDs
}

// filterWisps removes wisp issues from the list.
// Wisps are ephemeral operational work that shouldn't appear in ready work.
func filterWisps(issues []*beads.Issue, wispIDs map[string]bool) []*beads.Issue {
	if wispIDs == nil || len(wispIDs) == 0 {
		return issues
	}

	filtered := make([]*beads.Issue, 0, len(issues))
	for _, issue := range issues {
		if !wispIDs[issue.ID] {
			filtered = append(filtered, issue)
		}
	}
	return filtered
}
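The wisp filter above only inspects issue IDs, so its logic can be sketched without the `beads` package. A minimal standalone version, where the local `Issue` struct is a stand-in for `beads.Issue` (an assumption for illustration, not the real type):

```go
package main

import "fmt"

// Issue is a minimal stand-in for beads.Issue, carrying only the ID
// field that filterWisps actually inspects.
type Issue struct {
	ID string
}

// filterWisps removes wisp issues from the list (same logic as the diff
// above; len() on a nil map is 0, so the nil case is covered too).
func filterWisps(issues []*Issue, wispIDs map[string]bool) []*Issue {
	if len(wispIDs) == 0 {
		return issues
	}
	filtered := make([]*Issue, 0, len(issues))
	for _, issue := range issues {
		if !wispIDs[issue.ID] {
			filtered = append(filtered, issue)
		}
	}
	return filtered
}

func main() {
	issues := []*Issue{{ID: "gt-1"}, {ID: "gt-2"}, {ID: "gt-3"}}
	wisps := map[string]bool{"gt-2": true} // gt-2 is ephemeral
	for _, i := range filterWisps(issues, wisps) {
		fmt.Println(i.ID)
	}
	// gt-1
	// gt-3
}
```

An empty or nil wisp set returns the input slice unchanged, which is the fast path when no issues.jsonl exists.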

@@ -337,14 +337,6 @@ func runRefineryStop(cmd *cobra.Command, args []string) error {

	return nil
}

// RefineryStatusOutput is the JSON output format for refinery status.
type RefineryStatusOutput struct {
	Running     bool   `json:"running"`
	RigName     string `json:"rig_name"`
	Session     string `json:"session,omitempty"`
	QueueLength int    `json:"queue_length"`
}

func runRefineryStatus(cmd *cobra.Command, args []string) error {
	rigName := ""
	if len(args) > 0 {

@@ -356,42 +348,58 @@ func runRefineryStatus(cmd *cobra.Command, args []string) error {

		return err
	}

	// ZFC: tmux is source of truth for running state
	running, _ := mgr.IsRunning()
	sessionInfo, _ := mgr.Status() // may be nil if not running

	// Get queue from beads
	queue, _ := mgr.Queue()
	queueLen := len(queue)
	ref, err := mgr.Status()
	if err != nil {
		return fmt.Errorf("getting status: %w", err)
	}

	// JSON output
	if refineryStatusJSON {
		output := RefineryStatusOutput{
			Running:     running,
			RigName:     rigName,
			QueueLength: queueLen,
		}
		if sessionInfo != nil {
			output.Session = sessionInfo.Name
		}
		enc := json.NewEncoder(os.Stdout)
		enc.SetIndent("", "  ")
		return enc.Encode(output)
		return enc.Encode(ref)
	}

	// Human-readable output
	fmt.Printf("%s Refinery: %s\n\n", style.Bold.Render("⚙"), rigName)

	if running {
		fmt.Printf(" State: %s\n", style.Bold.Render("● running"))
		if sessionInfo != nil {
			fmt.Printf(" Session: %s\n", sessionInfo.Name)
		}
	} else {
		fmt.Printf(" State: %s\n", style.Dim.Render("○ stopped"))
	stateStr := string(ref.State)
	switch ref.State {
	case refinery.StateRunning:
		stateStr = style.Bold.Render("● running")
	case refinery.StateStopped:
		stateStr = style.Dim.Render("○ stopped")
	case refinery.StatePaused:
		stateStr = style.Dim.Render("⏸ paused")
	}
	fmt.Printf(" State: %s\n", stateStr)

	if ref.StartedAt != nil {
		fmt.Printf(" Started: %s\n", ref.StartedAt.Format("2006-01-02 15:04:05"))
	}

	fmt.Printf("\n Queue: %d pending\n", queueLen)
	if ref.CurrentMR != nil {
		fmt.Printf("\n %s\n", style.Bold.Render("Currently Processing:"))
		fmt.Printf(" Branch: %s\n", ref.CurrentMR.Branch)
		fmt.Printf(" Worker: %s\n", ref.CurrentMR.Worker)
		if ref.CurrentMR.IssueID != "" {
			fmt.Printf(" Issue: %s\n", ref.CurrentMR.IssueID)
		}
	}

	// Get queue length
	queue, _ := mgr.Queue()
	pendingCount := 0
	for _, item := range queue {
		if item.Position > 0 { // Not currently processing
			pendingCount++
		}
	}
	fmt.Printf("\n Queue: %d pending\n", pendingCount)

	if ref.LastMergeAt != nil {
		fmt.Printf(" Last merge: %s\n", ref.LastMergeAt.Format("2006-01-02 15:04:05"))
	}

	return nil
}

@@ -977,7 +977,8 @@ func runRigShutdown(cmd *cobra.Command, args []string) error {

	// 2. Stop the refinery
	refMgr := refinery.NewManager(r)
	if running, _ := refMgr.IsRunning(); running {
	refStatus, err := refMgr.Status()
	if err == nil && refStatus.State == refinery.StateRunning {
		fmt.Printf(" Stopping refinery...\n")
		if err := refMgr.Stop(); err != nil {
			errors = append(errors, fmt.Sprintf("refinery: %v", err))

@@ -986,7 +987,8 @@ func runRigShutdown(cmd *cobra.Command, args []string) error {

	// 3. Stop the witness
	witMgr := witness.NewManager(r)
	if running, _ := witMgr.IsRunning(); running {
	witStatus, err := witMgr.Status()
	if err == nil && witStatus.State == witness.StateRunning {
		fmt.Printf(" Stopping witness...\n")
		if err := witMgr.Stop(); err != nil {
			errors = append(errors, fmt.Sprintf("witness: %v", err))

@@ -1073,10 +1075,16 @@ func runRigStatus(cmd *cobra.Command, args []string) error {

	// Witness status
	fmt.Printf("%s\n", style.Bold.Render("Witness"))
	witnessSession := fmt.Sprintf("gt-%s-witness", rigName)
	witnessRunning, _ := t.HasSession(witnessSession)
	witMgr := witness.NewManager(r)
	witnessRunning, _ := witMgr.IsRunning()
	witStatus, _ := witMgr.Status()
	if witnessRunning {
		fmt.Printf(" %s running\n", style.Success.Render("●"))
		fmt.Printf(" %s running", style.Success.Render("●"))
		if witStatus != nil && witStatus.StartedAt != nil {
			fmt.Printf(" (uptime: %s)", formatDuration(time.Since(*witStatus.StartedAt)))
		}
		fmt.Printf("\n")
	} else {
		fmt.Printf(" %s stopped\n", style.Dim.Render("○"))
	}

@@ -1084,10 +1092,16 @@ func runRigStatus(cmd *cobra.Command, args []string) error {

	// Refinery status
	fmt.Printf("%s\n", style.Bold.Render("Refinery"))
	refinerySession := fmt.Sprintf("gt-%s-refinery", rigName)
	refineryRunning, _ := t.HasSession(refinerySession)
	refMgr := refinery.NewManager(r)
	refineryRunning, _ := refMgr.IsRunning()
	refStatus, _ := refMgr.Status()
	if refineryRunning {
		fmt.Printf(" %s running\n", style.Success.Render("●"))
		fmt.Printf(" %s running", style.Success.Render("●"))
		if refStatus != nil && refStatus.StartedAt != nil {
			fmt.Printf(" (uptime: %s)", formatDuration(time.Since(*refStatus.StartedAt)))
		}
		fmt.Printf("\n")
		// Show queue size
		queue, err := refMgr.Queue()
		if err == nil && len(queue) > 0 {

@@ -1240,7 +1254,8 @@ func runRigStop(cmd *cobra.Command, args []string) error {

	// 2. Stop the refinery
	refMgr := refinery.NewManager(r)
	if running, _ := refMgr.IsRunning(); running {
	refStatus, err := refMgr.Status()
	if err == nil && refStatus.State == refinery.StateRunning {
		fmt.Printf(" Stopping refinery...\n")
		if err := refMgr.Stop(); err != nil {
			errors = append(errors, fmt.Sprintf("refinery: %v", err))

@@ -1249,7 +1264,8 @@ func runRigStop(cmd *cobra.Command, args []string) error {

	// 3. Stop the witness
	witMgr := witness.NewManager(r)
	if running, _ := witMgr.IsRunning(); running {
	witStatus, err := witMgr.Status()
	if err == nil && witStatus.State == witness.StateRunning {
		fmt.Printf(" Stopping witness...\n")
		if err := witMgr.Stop(); err != nil {
			errors = append(errors, fmt.Sprintf("witness: %v", err))

@@ -1371,7 +1387,8 @@ func runRigRestart(cmd *cobra.Command, args []string) error {

	// 2. Stop the refinery
	refMgr := refinery.NewManager(r)
	if running, _ := refMgr.IsRunning(); running {
	refStatus, err := refMgr.Status()
	if err == nil && refStatus.State == refinery.StateRunning {
		fmt.Printf(" Stopping refinery...\n")
		if err := refMgr.Stop(); err != nil {
			stopErrors = append(stopErrors, fmt.Sprintf("refinery: %v", err))

@@ -1380,7 +1397,8 @@ func runRigRestart(cmd *cobra.Command, args []string) error {

	// 3. Stop the witness
	witMgr := witness.NewManager(r)
	if running, _ := witMgr.IsRunning(); running {
	witStatus, err := witMgr.Status()
	if err == nil && witStatus.State == witness.StateRunning {
		fmt.Printf(" Stopping witness...\n")
		if err := witMgr.Stop(); err != nil {
			stopErrors = append(stopErrors, fmt.Sprintf("witness: %v", err))

@@ -30,11 +30,10 @@ for fast lookups by the shell hook.

Output format (to stdout):
  export GT_TOWN_ROOT=/path/to/town
  export GT_ROOT=/path/to/town
  export GT_RIG=rigname

Or if not in a rig:
  unset GT_TOWN_ROOT GT_ROOT GT_RIG`,
  unset GT_TOWN_ROOT GT_RIG`,
	Args: cobra.MaximumNArgs(1),
	RunE: runRigDetect,
}

@@ -64,11 +63,9 @@ func runRigDetect(cmd *cobra.Command, args []string) error {

	if rigName != "" {
		fmt.Printf("export GT_TOWN_ROOT=%q\n", townRoot)
		fmt.Printf("export GT_ROOT=%q\n", townRoot)
		fmt.Printf("export GT_RIG=%q\n", rigName)
	} else {
		fmt.Printf("export GT_TOWN_ROOT=%q\n", townRoot)
		fmt.Printf("export GT_ROOT=%q\n", townRoot)
		fmt.Println("unset GT_RIG")
	}

@@ -108,7 +105,7 @@ func detectRigFromPath(townRoot, absPath string) string {

}

func outputNotInRig() error {
	fmt.Println("unset GT_TOWN_ROOT GT_ROOT GT_RIG")
	fmt.Println("unset GT_TOWN_ROOT GT_RIG")
	return nil
}

@@ -132,11 +129,11 @@ func updateRigCache(repoRoot, townRoot, rigName string) error {

	var value string
	if rigName != "" {
		value = fmt.Sprintf("export GT_TOWN_ROOT=%q; export GT_ROOT=%q; export GT_RIG=%q", townRoot, townRoot, rigName)
		value = fmt.Sprintf("export GT_TOWN_ROOT=%q; export GT_RIG=%q", townRoot, rigName)
	} else if townRoot != "" {
		value = fmt.Sprintf("export GT_TOWN_ROOT=%q; export GT_ROOT=%q; unset GT_RIG", townRoot, townRoot)
		value = fmt.Sprintf("export GT_TOWN_ROOT=%q; unset GT_RIG", townRoot)
	} else {
		value = "unset GT_TOWN_ROOT GT_ROOT GT_RIG"
		value = "unset GT_TOWN_ROOT GT_RIG"
	}

	existing[repoRoot] = value

@@ -165,19 +165,6 @@ func sanitizeRigName(name string) string {

}

func findOrCreateTown() (string, error) {
	// Priority 1: GT_TOWN_ROOT env var (explicit user preference)
	if townRoot := os.Getenv("GT_TOWN_ROOT"); townRoot != "" {
		if isValidTown(townRoot) {
			return townRoot, nil
		}
	}

	// Priority 2: Try to find from cwd (supports multiple town installations)
	if townRoot, err := workspace.FindFromCwd(); err == nil && townRoot != "" {
		return townRoot, nil
	}

	// Priority 3: Fall back to well-known locations
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err

@@ -189,17 +176,11 @@ func findOrCreateTown() (string, error) {

	}

	for _, path := range candidates {
		if isValidTown(path) {
		mayorDir := filepath.Join(path, "mayor")
		if _, err := os.Stat(mayorDir); err == nil {
			return path, nil
		}
	}

	return "", fmt.Errorf("no Gas Town found - run 'gt install ~/gt' first")
}

// isValidTown checks if a path is a valid Gas Town installation.
func isValidTown(path string) bool {
	mayorDir := filepath.Join(path, "mayor")
	_, err := os.Stat(mayorDir)
	return err == nil
}
|
||||
|
||||
@@ -1,113 +0,0 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestFindOrCreateTown(t *testing.T) {
|
||||
// Save original env and restore after test
|
||||
origTownRoot := os.Getenv("GT_TOWN_ROOT")
|
||||
defer os.Setenv("GT_TOWN_ROOT", origTownRoot)
|
||||
|
||||
t.Run("respects GT_TOWN_ROOT when set", func(t *testing.T) {
|
||||
// Create a valid town in temp dir
|
||||
tmpTown := t.TempDir()
|
||||
mayorDir := filepath.Join(tmpTown, "mayor")
|
||||
if err := os.MkdirAll(mayorDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir mayor: %v", err)
|
||||
}
|
||||
|
||||
os.Setenv("GT_TOWN_ROOT", tmpTown)
|
||||
|
||||
result, err := findOrCreateTown()
|
||||
if err != nil {
|
||||
t.Fatalf("findOrCreateTown() error = %v", err)
|
||||
}
|
||||
if result != tmpTown {
|
||||
t.Errorf("findOrCreateTown() = %q, want %q", result, tmpTown)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("ignores invalid GT_TOWN_ROOT", func(t *testing.T) {
|
||||
// Set GT_TOWN_ROOT to a non-existent path
|
||||
os.Setenv("GT_TOWN_ROOT", "/nonexistent/path/to/town")
|
||||
|
||||
// Create a valid town at ~/gt for fallback
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
t.Skip("cannot get home dir")
|
||||
}
|
||||
|
||||
gtPath := filepath.Join(home, "gt")
|
||||
mayorDir := filepath.Join(gtPath, "mayor")
|
||||
|
||||
// Skip if ~/gt doesn't exist (don't want to create it in user's home)
|
||||
if _, err := os.Stat(mayorDir); os.IsNotExist(err) {
|
||||
t.Skip("~/gt/mayor does not exist, skipping fallback test")
|
||||
}
|
||||
|
||||
result, err := findOrCreateTown()
|
||||
if err != nil {
|
||||
t.Fatalf("findOrCreateTown() error = %v", err)
|
||||
}
|
||||
// Should fall back to ~/gt since GT_TOWN_ROOT is invalid
|
||||
if result != gtPath {
|
||||
t.Logf("findOrCreateTown() = %q (fell back to valid town)", result)
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("GT_TOWN_ROOT takes priority over fallback", func(t *testing.T) {
|
||||
// Create two valid towns
|
||||
tmpTown1 := t.TempDir()
|
||||
tmpTown2 := t.TempDir()
|
||||
|
||||
if err := os.MkdirAll(filepath.Join(tmpTown1, "mayor"), 0755); err != nil {
|
||||
t.Fatalf("mkdir mayor1: %v", err)
|
||||
}
|
||||
if err := os.MkdirAll(filepath.Join(tmpTown2, "mayor"), 0755); err != nil {
|
||||
t.Fatalf("mkdir mayor2: %v", err)
|
||||
}
|
||||
|
||||
// Set GT_TOWN_ROOT to tmpTown1
|
||||
os.Setenv("GT_TOWN_ROOT", tmpTown1)
|
||||
|
||||
result, err := findOrCreateTown()
|
||||
if err != nil {
|
||||
t.Fatalf("findOrCreateTown() error = %v", err)
|
||||
}
|
||||
// Should use GT_TOWN_ROOT, not any other valid town
|
||||
if result != tmpTown1 {
|
||||
t.Errorf("findOrCreateTown() = %q, want %q (GT_TOWN_ROOT should take priority)", result, tmpTown1)
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
func TestIsValidTown(t *testing.T) {
|
||||
t.Run("valid town has mayor directory", func(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
mayorDir := filepath.Join(tmpDir, "mayor")
|
||||
if err := os.MkdirAll(mayorDir, 0755); err != nil {
|
||||
t.Fatalf("mkdir: %v", err)
|
||||
}
|
||||
|
||||
if !isValidTown(tmpDir) {
|
||||
t.Error("isValidTown() = false, want true")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("invalid town missing mayor directory", func(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
|
||||
if isValidTown(tmpDir) {
|
||||
t.Error("isValidTown() = true, want false")
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("nonexistent path is invalid", func(t *testing.T) {
|
||||
if isValidTown("/nonexistent/path") {
|
||||
t.Error("isValidTown() = true, want false")
|
||||
}
|
||||
})
|
||||
}
|
||||
@@ -100,23 +100,6 @@ Examples:
|
||||
RunE: runRoleEnv,
|
||||
}
|
||||
|
||||
var roleDefCmd = &cobra.Command{
|
||||
Use: "def <role>",
|
||||
Short: "Display role definition (session, health, env config)",
|
||||
Long: `Display the effective role definition after all overrides are applied.
|
||||
|
||||
Role configuration is layered:
|
||||
1. Built-in defaults (embedded in binary)
|
||||
2. Town-level overrides (~/.gt/roles/<role>.toml)
|
||||
3. Rig-level overrides (<rig>/roles/<role>.toml)
|
||||
|
||||
Examples:
|
||||
gt role def witness # Show witness role definition
|
||||
gt role def crew # Show crew role definition`,
|
||||
Args: cobra.ExactArgs(1),
|
||||
RunE: runRoleDef,
|
||||
}
|
||||
|
||||
// Flags for role home command
|
||||
var (
|
||||
roleRig string
|
||||
@@ -130,7 +113,6 @@ func init() {
|
||||
roleCmd.AddCommand(roleDetectCmd)
|
||||
roleCmd.AddCommand(roleListCmd)
|
||||
roleCmd.AddCommand(roleEnvCmd)
|
||||
roleCmd.AddCommand(roleDefCmd)
|
||||
|
||||
// Add --rig and --polecat flags to home command for overrides
|
||||
roleHomeCmd.Flags().StringVar(&roleRig, "rig", "", "Rig name (required for rig-specific roles)")
|
||||
@@ -544,83 +526,3 @@ func runRoleEnv(cmd *cobra.Command, args []string) error {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func runRoleDef(cmd *cobra.Command, args []string) error {
|
||||
roleName := args[0]
|
||||
|
||||
// Validate role name
|
||||
validRoles := config.AllRoles()
|
||||
isValid := false
|
||||
for _, r := range validRoles {
|
||||
if r == roleName {
|
||||
isValid = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !isValid {
|
||||
return fmt.Errorf("unknown role %q - valid roles: %s", roleName, strings.Join(validRoles, ", "))
|
||||
}
|
||||
|
||||
// Determine town root and rig path
|
||||
townRoot, _ := workspace.FindFromCwd()
|
||||
	rigPath := ""
	if townRoot != "" {
		// Try to get rig path if we're in a rig directory
		if rigInfo, err := GetRole(); err == nil && rigInfo.Rig != "" {
			rigPath = filepath.Join(townRoot, rigInfo.Rig)
		}
	}

	// Load role definition with overrides
	def, err := config.LoadRoleDefinition(townRoot, rigPath, roleName)
	if err != nil {
		return fmt.Errorf("loading role definition: %w", err)
	}

	// Display role info
	fmt.Printf("%s %s\n", style.Bold.Render("Role:"), def.Role)
	fmt.Printf("%s %s\n", style.Bold.Render("Scope:"), def.Scope)
	fmt.Println()

	// Session config
	fmt.Println(style.Bold.Render("[session]"))
	fmt.Printf(" pattern = %q\n", def.Session.Pattern)
	fmt.Printf(" work_dir = %q\n", def.Session.WorkDir)
	fmt.Printf(" needs_pre_sync = %v\n", def.Session.NeedsPreSync)
	if def.Session.StartCommand != "" {
		fmt.Printf(" start_command = %q\n", def.Session.StartCommand)
	}
	fmt.Println()

	// Environment variables
	if len(def.Env) > 0 {
		fmt.Println(style.Bold.Render("[env]"))
		envKeys := make([]string, 0, len(def.Env))
		for k := range def.Env {
			envKeys = append(envKeys, k)
		}
		sort.Strings(envKeys)
		for _, k := range envKeys {
			fmt.Printf(" %s = %q\n", k, def.Env[k])
		}
		fmt.Println()
	}

	// Health config
	fmt.Println(style.Bold.Render("[health]"))
	fmt.Printf(" ping_timeout = %q\n", def.Health.PingTimeout.String())
	fmt.Printf(" consecutive_failures = %d\n", def.Health.ConsecutiveFailures)
	fmt.Printf(" kill_cooldown = %q\n", def.Health.KillCooldown.String())
	fmt.Printf(" stuck_threshold = %q\n", def.Health.StuckThreshold.String())
	fmt.Println()

	// Prompts
	if def.Nudge != "" {
		fmt.Printf("%s %s\n", style.Bold.Render("Nudge:"), def.Nudge)
	}
	if def.PromptTemplate != "" {
		fmt.Printf("%s %s\n", style.Bold.Render("Template:"), def.PromptTemplate)
	}

	return nil
}

@@ -2,7 +2,6 @@ package cmd

import (
	"bufio"
	"context"
	"encoding/json"
	"fmt"
	"os"
@@ -12,10 +11,7 @@ import (
	"strings"
	"time"

	"github.com/gofrs/flock"
	"github.com/spf13/cobra"
	"github.com/steveyegge/gastown/internal/config"
	"github.com/steveyegge/gastown/internal/constants"
	"github.com/steveyegge/gastown/internal/events"
	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/workspace"
@@ -195,24 +191,8 @@ func runSeanceTalk(sessionID, prompt string) error {
	// Expand short IDs if needed (user might provide partial)
	// For now, require full ID or let claude --resume handle it

	// Clean up any orphaned symlinks from previous interrupted sessions
	cleanupOrphanedSessionSymlinks()

	fmt.Printf("%s Summoning session %s...\n\n", style.Bold.Render("🔮"), sessionID)

	// Find the session in another account and symlink it to the current account
	// This allows Claude to load sessions from any account while keeping
	// the forked session in the current account
	townRoot, _ := workspace.FindFromCwd()
	cleanup, err := symlinkSessionToCurrentAccount(townRoot, sessionID)
	if err != nil {
		// Not fatal - session might already be in current account
		fmt.Printf("%s\n", style.Dim.Render("Note: "+err.Error()))
	}
	if cleanup != nil {
		defer cleanup()
	}

	// Build the command
	args := []string{"--fork-session", "--resume", sessionID}

@@ -307,427 +287,3 @@ func formatEventTime(ts string) string {
	}
	return t.Local().Format("2006-01-02 15:04")
}

// sessionsIndex represents the structure of sessions-index.json files.
// We use json.RawMessage for entries to preserve all fields when copying.
type sessionsIndex struct {
	Version int               `json:"version"`
	Entries []json.RawMessage `json:"entries"`
}

// sessionsIndexEntry is a minimal struct to extract just the sessionId from an entry.
type sessionsIndexEntry struct {
	SessionID string `json:"sessionId"`
}

// sessionLocation contains the location info for a session.
type sessionLocation struct {
	configDir  string // The account's config directory
	projectDir string // The project directory name (e.g., "-Users-jv-gt-gastown-crew-propane")
}

// sessionsIndexLockTimeout is how long to wait for the index lock.
const sessionsIndexLockTimeout = 5 * time.Second

// lockSessionsIndex acquires an exclusive lock on the sessions index file.
// Returns the lock (caller must unlock) or error if lock cannot be acquired.
// The lock file is created adjacent to the index file with a .lock suffix.
func lockSessionsIndex(indexPath string) (*flock.Flock, error) {
	lockPath := indexPath + ".lock"

	// Ensure the directory exists
	if err := os.MkdirAll(filepath.Dir(lockPath), 0755); err != nil {
		return nil, fmt.Errorf("creating lock directory: %w", err)
	}

	lock := flock.New(lockPath)
	ctx, cancel := context.WithTimeout(context.Background(), sessionsIndexLockTimeout)
	defer cancel()

	locked, err := lock.TryLockContext(ctx, 100*time.Millisecond)
	if err != nil {
		return nil, fmt.Errorf("acquiring lock: %w", err)
	}
	if !locked {
		return nil, fmt.Errorf("timeout waiting for sessions index lock")
	}

	return lock, nil
}

// findSessionLocation searches all account config directories for a session.
// Returns the config directory and project directory that contain the session.
func findSessionLocation(townRoot, sessionID string) *sessionLocation {
	if townRoot == "" {
		return nil
	}

	// Load accounts config
	accountsPath := constants.MayorAccountsPath(townRoot)
	cfg, err := config.LoadAccountsConfig(accountsPath)
	if err != nil {
		return nil
	}

	// Search each account's config directory
	for _, acct := range cfg.Accounts {
		if acct.ConfigDir == "" {
			continue
		}

		// Expand ~ in path
		configDir := acct.ConfigDir
		if strings.HasPrefix(configDir, "~/") {
			home, _ := os.UserHomeDir()
			configDir = filepath.Join(home, configDir[2:])
		}

		// Search all sessions-index.json files in this account
		projectsDir := filepath.Join(configDir, "projects")
		if _, err := os.Stat(projectsDir); os.IsNotExist(err) {
			continue
		}

		// Walk through project directories
		entries, err := os.ReadDir(projectsDir)
		if err != nil {
			continue
		}

		for _, entry := range entries {
			if !entry.IsDir() {
				continue
			}

			indexPath := filepath.Join(projectsDir, entry.Name(), "sessions-index.json")
			if _, err := os.Stat(indexPath); os.IsNotExist(err) {
				continue
			}

			// Read and parse the sessions index
			data, err := os.ReadFile(indexPath)
			if err != nil {
				continue
			}

			var index sessionsIndex
			if err := json.Unmarshal(data, &index); err != nil {
				continue
			}

			// Check if this index contains our session
			for _, rawEntry := range index.Entries {
				var e sessionsIndexEntry
				if json.Unmarshal(rawEntry, &e) == nil && e.SessionID == sessionID {
					return &sessionLocation{
						configDir:  configDir,
						projectDir: entry.Name(),
					}
				}
			}
		}
	}

	return nil
}

// symlinkSessionToCurrentAccount finds a session in any account and symlinks
// it to the current account so Claude can access it.
// Returns a cleanup function to remove the symlink after use.
func symlinkSessionToCurrentAccount(townRoot, sessionID string) (cleanup func(), err error) {
	// Find where the session lives
	loc := findSessionLocation(townRoot, sessionID)
	if loc == nil {
		return nil, fmt.Errorf("session not found in any account")
	}

	// Get current account's config directory (resolve ~/.claude symlink)
	home, err := os.UserHomeDir()
	if err != nil {
		return nil, fmt.Errorf("getting home directory: %w", err)
	}

	claudeDir := filepath.Join(home, ".claude")
	currentConfigDir, err := filepath.EvalSymlinks(claudeDir)
	if err != nil {
		// ~/.claude might not be a symlink, use it directly
		currentConfigDir = claudeDir
	}

	// If session is already in current account, nothing to do
	if loc.configDir == currentConfigDir {
		return nil, nil
	}

	// Source: the session file in the other account
	sourceSessionFile := filepath.Join(loc.configDir, "projects", loc.projectDir, sessionID+".jsonl")

	// Check source exists
	if _, err := os.Stat(sourceSessionFile); os.IsNotExist(err) {
		return nil, fmt.Errorf("session file not found: %s", sourceSessionFile)
	}

	// Target: the project directory in current account
	currentProjectDir := filepath.Join(currentConfigDir, "projects", loc.projectDir)

	// Create project directory if it doesn't exist
	if err := os.MkdirAll(currentProjectDir, 0755); err != nil {
		return nil, fmt.Errorf("creating project directory: %w", err)
	}

	// Symlink the specific session file
	targetSessionFile := filepath.Join(currentProjectDir, sessionID+".jsonl")

	// Check if target session file already exists
	if info, err := os.Lstat(targetSessionFile); err == nil {
		if info.Mode()&os.ModeSymlink != 0 {
			// Already a symlink - check if it points to the right place
			existing, _ := os.Readlink(targetSessionFile)
			if existing == sourceSessionFile {
				// Already symlinked correctly, no cleanup needed
				return nil, nil
			}
			// Different symlink, remove it
			_ = os.Remove(targetSessionFile)
		} else {
			// Real file exists - session already in current account
			return nil, nil
		}
	}

	// Create the symlink to the session file
	if err := os.Symlink(sourceSessionFile, targetSessionFile); err != nil {
		return nil, fmt.Errorf("creating symlink: %w", err)
	}

	// Also need to update/create sessions-index.json so Claude can find the session
	// Read source index to get the session entry
	sourceIndexPath := filepath.Join(loc.configDir, "projects", loc.projectDir, "sessions-index.json")
	sourceIndexData, err := os.ReadFile(sourceIndexPath)
	if err != nil {
		// Clean up the symlink we just created
		_ = os.Remove(targetSessionFile)
		return nil, fmt.Errorf("reading source sessions index: %w", err)
	}

	var sourceIndex sessionsIndex
	if err := json.Unmarshal(sourceIndexData, &sourceIndex); err != nil {
		_ = os.Remove(targetSessionFile)
		return nil, fmt.Errorf("parsing source sessions index: %w", err)
	}

	// Find the session entry (as raw JSON to preserve all fields)
	var sessionEntry json.RawMessage
	for _, rawEntry := range sourceIndex.Entries {
		var e sessionsIndexEntry
		if json.Unmarshal(rawEntry, &e) == nil && e.SessionID == sessionID {
			sessionEntry = rawEntry
			break
		}
	}

	if sessionEntry == nil {
		_ = os.Remove(targetSessionFile)
		return nil, fmt.Errorf("session not found in source index")
	}

	// Read or create target index (with file locking to prevent race conditions)
	targetIndexPath := filepath.Join(currentProjectDir, "sessions-index.json")

	// Acquire lock for read-modify-write operation
	lock, err := lockSessionsIndex(targetIndexPath)
	if err != nil {
		_ = os.Remove(targetSessionFile)
		return nil, fmt.Errorf("locking sessions index: %w", err)
	}
	defer func() { _ = lock.Unlock() }()

	var targetIndex sessionsIndex
	if targetIndexData, err := os.ReadFile(targetIndexPath); err == nil {
		_ = json.Unmarshal(targetIndexData, &targetIndex)
	} else {
		targetIndex.Version = 1
	}

	// Check if session already in target index
	sessionInIndex := false
	for _, rawEntry := range targetIndex.Entries {
		var e sessionsIndexEntry
		if json.Unmarshal(rawEntry, &e) == nil && e.SessionID == sessionID {
			sessionInIndex = true
			break
		}
	}

	// Add session to target index if not present
	indexModified := false
	if !sessionInIndex {
		targetIndex.Entries = append(targetIndex.Entries, sessionEntry)
		indexModified = true

		// Write updated index
		targetIndexData, err := json.MarshalIndent(targetIndex, "", " ")
		if err != nil {
			_ = os.Remove(targetSessionFile)
			return nil, fmt.Errorf("encoding target sessions index: %w", err)
		}
		if err := os.WriteFile(targetIndexPath, targetIndexData, 0600); err != nil {
			_ = os.Remove(targetSessionFile)
			return nil, fmt.Errorf("writing target sessions index: %w", err)
		}
	}

	// Return cleanup function
	cleanup = func() {
		_ = os.Remove(targetSessionFile)
		// If we modified the index, remove the entry we added
		if indexModified {
			// Acquire lock for read-modify-write operation
			cleanupLock, lockErr := lockSessionsIndex(targetIndexPath)
			if lockErr != nil {
				// Best effort cleanup - proceed without lock
				return
			}
			defer func() { _ = cleanupLock.Unlock() }()

			// Re-read index, remove our entry, write it back
			if data, err := os.ReadFile(targetIndexPath); err == nil {
				var idx sessionsIndex
				if json.Unmarshal(data, &idx) == nil {
					newEntries := make([]json.RawMessage, 0, len(idx.Entries))
					for _, rawEntry := range idx.Entries {
						var e sessionsIndexEntry
						if json.Unmarshal(rawEntry, &e) == nil && e.SessionID != sessionID {
							newEntries = append(newEntries, rawEntry)
						}
					}
					idx.Entries = newEntries
					if newData, err := json.MarshalIndent(idx, "", " "); err == nil {
						_ = os.WriteFile(targetIndexPath, newData, 0600)
					}
				}
			}
		}
	}

	return cleanup, nil
}

// cleanupOrphanedSessionSymlinks removes stale session symlinks from the current account.
// This handles cases where a previous seance was interrupted (e.g., SIGKILL) and
// couldn't run its cleanup function. Call this at the start of seance operations.
func cleanupOrphanedSessionSymlinks() {
	home, err := os.UserHomeDir()
	if err != nil {
		return
	}

	claudeDir := filepath.Join(home, ".claude")
	currentConfigDir, err := filepath.EvalSymlinks(claudeDir)
	if err != nil {
		currentConfigDir = claudeDir
	}

	projectsDir := filepath.Join(currentConfigDir, "projects")
	if _, err := os.Stat(projectsDir); os.IsNotExist(err) {
		return
	}

	// Walk through project directories
	projectEntries, err := os.ReadDir(projectsDir)
	if err != nil {
		return
	}

	for _, projEntry := range projectEntries {
		if !projEntry.IsDir() {
			continue
		}

		projPath := filepath.Join(projectsDir, projEntry.Name())
		files, err := os.ReadDir(projPath)
		if err != nil {
			continue
		}

		var orphanedSessionIDs []string

		for _, f := range files {
			if !strings.HasSuffix(f.Name(), ".jsonl") {
				continue
			}

			filePath := filepath.Join(projPath, f.Name())
			info, err := os.Lstat(filePath)
			if err != nil {
				continue
			}

			// Only check symlinks
			if info.Mode()&os.ModeSymlink == 0 {
				continue
			}

			// Check if symlink target exists
			target, err := os.Readlink(filePath)
			if err != nil {
				continue
			}

			if _, err := os.Stat(target); os.IsNotExist(err) {
				// Target doesn't exist - this is an orphaned symlink
				sessionID := strings.TrimSuffix(f.Name(), ".jsonl")
				orphanedSessionIDs = append(orphanedSessionIDs, sessionID)
				_ = os.Remove(filePath)
			}
		}

		// Clean up orphaned entries from sessions-index.json
		if len(orphanedSessionIDs) > 0 {
			indexPath := filepath.Join(projPath, "sessions-index.json")

			// Acquire lock for read-modify-write operation
			lock, lockErr := lockSessionsIndex(indexPath)
			if lockErr != nil {
				// Best effort cleanup - skip this project if lock fails
				continue
			}

			data, err := os.ReadFile(indexPath)
			if err != nil {
				_ = lock.Unlock()
				continue
			}

			var index sessionsIndex
			if err := json.Unmarshal(data, &index); err != nil {
				_ = lock.Unlock()
				continue
			}

			// Build a set of orphaned IDs for fast lookup
			orphanedSet := make(map[string]bool)
			for _, id := range orphanedSessionIDs {
				orphanedSet[id] = true
			}

			// Filter out orphaned entries
			newEntries := make([]json.RawMessage, 0, len(index.Entries))
			for _, rawEntry := range index.Entries {
				var e sessionsIndexEntry
				if json.Unmarshal(rawEntry, &e) == nil && !orphanedSet[e.SessionID] {
					newEntries = append(newEntries, rawEntry)
				}
			}

			if len(newEntries) != len(index.Entries) {
				index.Entries = newEntries
				if newData, err := json.MarshalIndent(index, "", " "); err == nil {
					_ = os.WriteFile(indexPath, newData, 0600)
				}
			}

			_ = lock.Unlock()
		}
	}
}

@@ -1,366 +0,0 @@
package cmd

import (
	"encoding/json"
	"os"
	"path/filepath"
	"testing"

	"github.com/steveyegge/gastown/internal/config"
)

// setupSeanceTestEnv creates a test environment with multiple accounts and sessions.
func setupSeanceTestEnv(t *testing.T) (townRoot, fakeHome string, cleanup func()) {
	t.Helper()

	// Create fake home directory
	fakeHome = t.TempDir()

	// Create town root
	townRoot = t.TempDir()

	// Create mayor directory structure
	mayorDir := filepath.Join(townRoot, "mayor")
	if err := os.MkdirAll(mayorDir, 0755); err != nil {
		t.Fatalf("mkdir mayor: %v", err)
	}

	// Create two account config directories
	account1Dir := filepath.Join(fakeHome, "claude-config-account1")
	account2Dir := filepath.Join(fakeHome, "claude-config-account2")
	if err := os.MkdirAll(account1Dir, 0755); err != nil {
		t.Fatalf("mkdir account1: %v", err)
	}
	if err := os.MkdirAll(account2Dir, 0755); err != nil {
		t.Fatalf("mkdir account2: %v", err)
	}

	// Create accounts.json pointing to both accounts
	accountsCfg := &config.AccountsConfig{
		Version: 1,
		Default: "account1",
		Accounts: map[string]config.Account{
			"account1": {Email: "test1@example.com", ConfigDir: account1Dir},
			"account2": {Email: "test2@example.com", ConfigDir: account2Dir},
		},
	}
	accountsPath := filepath.Join(mayorDir, "accounts.json")
	if err := config.SaveAccountsConfig(accountsPath, accountsCfg); err != nil {
		t.Fatalf("save accounts.json: %v", err)
	}

	// Create ~/.claude symlink pointing to account1 (current account)
	claudeDir := filepath.Join(fakeHome, ".claude")
	if err := os.Symlink(account1Dir, claudeDir); err != nil {
		t.Fatalf("symlink .claude: %v", err)
	}

	// Set up HOME env var
	oldHome := os.Getenv("HOME")
	os.Setenv("HOME", fakeHome)

	cleanup = func() {
		os.Setenv("HOME", oldHome)
	}

	return townRoot, fakeHome, cleanup
}

// createTestSession creates a mock session file and index entry.
func createTestSession(t *testing.T, configDir, projectName, sessionID string) {
	t.Helper()

	projectDir := filepath.Join(configDir, "projects", projectName)
	if err := os.MkdirAll(projectDir, 0755); err != nil {
		t.Fatalf("mkdir project: %v", err)
	}

	// Create session file
	sessionFile := filepath.Join(projectDir, sessionID+".jsonl")
	if err := os.WriteFile(sessionFile, []byte(`{"type":"test"}`), 0600); err != nil {
		t.Fatalf("write session file: %v", err)
	}

	// Create or update sessions-index.json
	indexPath := filepath.Join(projectDir, "sessions-index.json")
	var index sessionsIndex
	if data, err := os.ReadFile(indexPath); err == nil {
		_ = json.Unmarshal(data, &index)
	} else {
		index.Version = 1
	}

	// Add session entry
	entry := map[string]interface{}{
		"sessionId": sessionID,
		"name": "Test Session",
		"lastAccessed": "2026-01-22T00:00:00Z",
	}
	entryJSON, _ := json.Marshal(entry)
	index.Entries = append(index.Entries, entryJSON)

	indexData, _ := json.MarshalIndent(index, "", " ")
	if err := os.WriteFile(indexPath, indexData, 0600); err != nil {
		t.Fatalf("write sessions-index.json: %v", err)
	}
}

func TestFindSessionLocation(t *testing.T) {
	t.Run("finds session in account1", func(t *testing.T) {
		townRoot, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		account1Dir := filepath.Join(fakeHome, "claude-config-account1")
		createTestSession(t, account1Dir, "test-project", "session-abc123")

		loc := findSessionLocation(townRoot, "session-abc123")
		if loc == nil {
			t.Fatal("expected to find session, got nil")
		}
		if loc.configDir != account1Dir {
			t.Errorf("expected configDir %s, got %s", account1Dir, loc.configDir)
		}
		if loc.projectDir != "test-project" {
			t.Errorf("expected projectDir test-project, got %s", loc.projectDir)
		}
	})

	t.Run("finds session in account2", func(t *testing.T) {
		townRoot, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		account2Dir := filepath.Join(fakeHome, "claude-config-account2")
		createTestSession(t, account2Dir, "other-project", "session-xyz789")

		loc := findSessionLocation(townRoot, "session-xyz789")
		if loc == nil {
			t.Fatal("expected to find session, got nil")
		}
		if loc.configDir != account2Dir {
			t.Errorf("expected configDir %s, got %s", account2Dir, loc.configDir)
		}
		if loc.projectDir != "other-project" {
			t.Errorf("expected projectDir other-project, got %s", loc.projectDir)
		}
	})

	t.Run("returns nil for nonexistent session", func(t *testing.T) {
		townRoot, _, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		loc := findSessionLocation(townRoot, "session-notfound")
		if loc != nil {
			t.Errorf("expected nil for nonexistent session, got %+v", loc)
		}
	})

	t.Run("returns nil for empty townRoot", func(t *testing.T) {
		loc := findSessionLocation("", "session-abc")
		if loc != nil {
			t.Errorf("expected nil for empty townRoot, got %+v", loc)
		}
	})
}

func TestSymlinkSessionToCurrentAccount(t *testing.T) {
	t.Run("creates symlink for session in other account", func(t *testing.T) {
		townRoot, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		// Create session in account2 (not the current account)
		account2Dir := filepath.Join(fakeHome, "claude-config-account2")
		createTestSession(t, account2Dir, "cross-project", "session-cross123")

		// Call symlinkSessionToCurrentAccount
		cleanupFn, err := symlinkSessionToCurrentAccount(townRoot, "session-cross123")
		if err != nil {
			t.Fatalf("symlinkSessionToCurrentAccount failed: %v", err)
		}
		if cleanupFn == nil {
			t.Fatal("expected cleanup function, got nil")
		}

		// Verify symlink was created in current account (account1)
		account1Dir := filepath.Join(fakeHome, "claude-config-account1")
		symlinkPath := filepath.Join(account1Dir, "projects", "cross-project", "session-cross123.jsonl")

		info, err := os.Lstat(symlinkPath)
		if err != nil {
			t.Fatalf("symlink not found: %v", err)
		}
		if info.Mode()&os.ModeSymlink == 0 {
			t.Error("expected symlink, got regular file")
		}

		// Verify sessions-index.json was updated
		indexPath := filepath.Join(account1Dir, "projects", "cross-project", "sessions-index.json")
		data, err := os.ReadFile(indexPath)
		if err != nil {
			t.Fatalf("reading index: %v", err)
		}

		var index sessionsIndex
		if err := json.Unmarshal(data, &index); err != nil {
			t.Fatalf("parsing index: %v", err)
		}

		found := false
		for _, entry := range index.Entries {
			var e sessionsIndexEntry
			if json.Unmarshal(entry, &e) == nil && e.SessionID == "session-cross123" {
				found = true
				break
			}
		}
		if !found {
			t.Error("session not found in target index")
		}

		// Test cleanup
		cleanupFn()

		// Verify symlink was removed
		if _, err := os.Lstat(symlinkPath); !os.IsNotExist(err) {
			t.Error("symlink should have been removed after cleanup")
		}

		// Verify session was removed from index
		data, _ = os.ReadFile(indexPath)
		_ = json.Unmarshal(data, &index)
		for _, entry := range index.Entries {
			var e sessionsIndexEntry
			if json.Unmarshal(entry, &e) == nil && e.SessionID == "session-cross123" {
				t.Error("session should have been removed from index after cleanup")
			}
		}
	})

	t.Run("returns nil cleanup for session in current account", func(t *testing.T) {
		townRoot, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		// Create session in account1 (the current account)
		account1Dir := filepath.Join(fakeHome, "claude-config-account1")
		createTestSession(t, account1Dir, "local-project", "session-local456")

		cleanupFn, err := symlinkSessionToCurrentAccount(townRoot, "session-local456")
		if err != nil {
			t.Fatalf("unexpected error: %v", err)
		}
		if cleanupFn != nil {
			t.Error("expected nil cleanup for session in current account")
		}
	})

	t.Run("returns error for nonexistent session", func(t *testing.T) {
		townRoot, _, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		_, err := symlinkSessionToCurrentAccount(townRoot, "session-notfound")
		if err == nil {
			t.Error("expected error for nonexistent session")
		}
	})
}

func TestCleanupOrphanedSessionSymlinks(t *testing.T) {
	t.Run("removes orphaned symlinks", func(t *testing.T) {
		_, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		account1Dir := filepath.Join(fakeHome, "claude-config-account1")
		projectDir := filepath.Join(account1Dir, "projects", "orphan-project")
		if err := os.MkdirAll(projectDir, 0755); err != nil {
			t.Fatalf("mkdir project: %v", err)
		}

		// Create an orphaned symlink (target doesn't exist)
		orphanSymlink := filepath.Join(projectDir, "orphan-session.jsonl")
		nonexistentTarget := filepath.Join(fakeHome, "nonexistent", "session.jsonl")
		if err := os.Symlink(nonexistentTarget, orphanSymlink); err != nil {
			t.Fatalf("create orphan symlink: %v", err)
		}

		// Create a sessions-index.json with the orphaned entry
		index := sessionsIndex{
			Version: 1,
			Entries: []json.RawMessage{
				json.RawMessage(`{"sessionId":"orphan-session","name":"Orphan"}`),
			},
		}
		indexPath := filepath.Join(projectDir, "sessions-index.json")
		data, _ := json.MarshalIndent(index, "", " ")
		if err := os.WriteFile(indexPath, data, 0600); err != nil {
			t.Fatalf("write index: %v", err)
		}

		// Run cleanup
		cleanupOrphanedSessionSymlinks()

		// Verify orphan symlink was removed
		if _, err := os.Lstat(orphanSymlink); !os.IsNotExist(err) {
			t.Error("orphaned symlink should have been removed")
		}

		// Verify entry was removed from index
		data, _ = os.ReadFile(indexPath)
		var updatedIndex sessionsIndex
		_ = json.Unmarshal(data, &updatedIndex)
		if len(updatedIndex.Entries) != 0 {
			t.Errorf("expected 0 entries after cleanup, got %d", len(updatedIndex.Entries))
		}
	})

	t.Run("preserves valid symlinks", func(t *testing.T) {
		_, fakeHome, cleanup := setupSeanceTestEnv(t)
		defer cleanup()

		account1Dir := filepath.Join(fakeHome, "claude-config-account1")
		account2Dir := filepath.Join(fakeHome, "claude-config-account2")

		// Create a real session in account2
		createTestSession(t, account2Dir, "valid-project", "valid-session")

		// Create project dir in account1
		projectDir := filepath.Join(account1Dir, "projects", "valid-project")
		if err := os.MkdirAll(projectDir, 0755); err != nil {
			t.Fatalf("mkdir project: %v", err)
		}

		// Create a valid symlink pointing to the real session
		validSymlink := filepath.Join(projectDir, "valid-session.jsonl")
		realTarget := filepath.Join(account2Dir, "projects", "valid-project", "valid-session.jsonl")
		if err := os.Symlink(realTarget, validSymlink); err != nil {
			t.Fatalf("create valid symlink: %v", err)
		}

		// Create index with valid entry
		index := sessionsIndex{
			Version: 1,
			Entries: []json.RawMessage{
				json.RawMessage(`{"sessionId":"valid-session","name":"Valid"}`),
			},
		}
		indexPath := filepath.Join(projectDir, "sessions-index.json")
		data, _ := json.MarshalIndent(index, "", " ")
		if err := os.WriteFile(indexPath, data, 0600); err != nil {
			t.Fatalf("write index: %v", err)
		}

		// Run cleanup
		cleanupOrphanedSessionSymlinks()

		// Verify valid symlink was preserved
		if _, err := os.Lstat(validSymlink); err != nil {
			t.Error("valid symlink should have been preserved")
		}

		// Verify entry was preserved in index
		data, _ = os.ReadFile(indexPath)
		var updatedIndex sessionsIndex
		_ = json.Unmarshal(data, &updatedIndex)
		if len(updatedIndex.Entries) != 1 {
			t.Errorf("expected 1 entry preserved, got %d", len(updatedIndex.Entries))
		}
	})
}
@@ -1,6 +1,7 @@
package cmd

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
@@ -10,7 +11,6 @@ import (
	"github.com/spf13/cobra"
	"github.com/steveyegge/gastown/internal/beads"
	"github.com/steveyegge/gastown/internal/events"
	"github.com/steveyegge/gastown/internal/mail"
	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/workspace"
)
@@ -82,13 +82,12 @@ Batch Slinging:
}

var (
	slingSubject string
	slingMessage string
	slingDryRun bool
	slingOnTarget string // --on flag: target bead when slinging a formula
	slingVars []string // --var flag: formula variables (key=value)
	slingArgs string // --args flag: natural language instructions for executor
	slingHookRawBead bool // --hook-raw-bead: hook raw bead without default formula (expert mode)
	slingSubject string
	slingMessage string
	slingDryRun bool
	slingOnTarget string // --on flag: target bead when slinging a formula
	slingVars []string // --var flag: formula variables (key=value)
	slingArgs string // --args flag: natural language instructions for executor

	// Flags migrated for polecat spawning (used by sling for work assignment)
	slingCreate bool // --create: create polecat if it doesn't exist
@@ -112,7 +111,6 @@ func init() {
	slingCmd.Flags().StringVar(&slingAccount, "account", "", "Claude Code account handle to use")
	slingCmd.Flags().StringVar(&slingAgent, "agent", "", "Override agent/runtime for this sling (e.g., claude, gemini, codex, or custom alias)")
	slingCmd.Flags().BoolVar(&slingNoConvoy, "no-convoy", false, "Skip auto-convoy creation for single-issue sling")
	slingCmd.Flags().BoolVar(&slingHookRawBead, "hook-raw-bead", false, "Hook raw bead without default formula (expert mode)")

	rootCmd.AddCommand(slingCmd)
}
@@ -149,7 +147,6 @@ func runSling(cmd *cobra.Command, args []string) error {
	// Determine mode based on flags and argument types
	var beadID string
	var formulaName string
	attachedMoleculeID := ""

	if slingOnTarget != "" {
		// Formula-on-bead mode: gt sling <formula> --on <bead>
@@ -191,8 +188,7 @@ func runSling(cmd *cobra.Command, args []string) error {
	// Determine target agent (self or specified)
	var targetAgent string
	var targetPane string
	var hookWorkDir string // Working directory for running bd hook commands
	var hookSetAtomically bool // True if hook was set during polecat spawn (skip redundant update)
	var hookWorkDir string // Working directory for running bd hook commands

	if len(args) > 1 {
		target := args[1]
@@ -249,7 +245,6 @@ func runSling(cmd *cobra.Command, args []string) error {
	targetAgent = spawnInfo.AgentID()
	targetPane = spawnInfo.Pane
	hookWorkDir = spawnInfo.ClonePath // Run bd commands from polecat's worktree
	hookSetAtomically = true // Hook was set during spawn (GH #gt-mzyk5)

	// Wake witness and refinery to monitor the new polecat
	wakeRigAgents(rigName)
@@ -281,7 +276,6 @@ func runSling(cmd *cobra.Command, args []string) error {
	targetAgent = spawnInfo.AgentID()
	targetPane = spawnInfo.Pane
	hookWorkDir = spawnInfo.ClonePath
	hookSetAtomically = true // Hook was set during spawn (GH #gt-mzyk5)

	// Wake witness and refinery to monitor the new polecat
	wakeRigAgents(rigName)
@@ -317,63 +311,17 @@ func runSling(cmd *cobra.Command, args []string) error {
		fmt.Printf("%s Slinging %s to %s...\n", style.Bold.Render("🎯"), beadID, targetAgent)
	}

	// Check if bead is already assigned (guard against accidental re-sling)
	// Check if bead is already pinned (guard against accidental re-sling)
|
||||
info, err := getBeadInfo(beadID)
|
||||
if err != nil {
|
||||
return fmt.Errorf("checking bead status: %w", err)
|
||||
}
|
||||
if (info.Status == "pinned" || info.Status == "hooked") && !slingForce {
|
||||
if info.Status == "pinned" && !slingForce {
|
||||
assignee := info.Assignee
|
||||
if assignee == "" {
|
||||
assignee = "(unknown)"
|
||||
}
|
||||
return fmt.Errorf("bead %s is already %s to %s\nUse --force to re-sling", beadID, info.Status, assignee)
|
||||
}
|
||||
|
||||
// Handle --force when bead is already hooked: send shutdown to old polecat and unhook
|
||||
if info.Status == "hooked" && slingForce && info.Assignee != "" {
|
||||
fmt.Printf("%s Bead already hooked to %s, forcing reassignment...\n", style.Warning.Render("⚠"), info.Assignee)
|
||||
|
||||
// Determine requester identity from env vars, fall back to "gt-sling"
|
||||
requester := "gt-sling"
|
||||
if polecat := os.Getenv("GT_POLECAT"); polecat != "" {
|
||||
requester = polecat
|
||||
} else if user := os.Getenv("USER"); user != "" {
|
||||
requester = user
|
||||
}
|
||||
|
||||
// Extract rig name from assignee (e.g., "gastown/polecats/Toast" -> "gastown")
|
||||
assigneeParts := strings.Split(info.Assignee, "/")
|
||||
if len(assigneeParts) >= 3 && assigneeParts[1] == "polecats" {
|
||||
oldRigName := assigneeParts[0]
|
||||
oldPolecatName := assigneeParts[2]
|
||||
|
||||
// Send LIFECYCLE:Shutdown to witness - will auto-nuke if clean,
|
||||
// otherwise create cleanup wisp for manual intervention
|
||||
if townRoot != "" {
|
||||
router := mail.NewRouter(townRoot)
|
||||
shutdownMsg := &mail.Message{
|
||||
From: "gt-sling",
|
||||
To: fmt.Sprintf("%s/witness", oldRigName),
|
||||
Subject: fmt.Sprintf("LIFECYCLE:Shutdown %s", oldPolecatName),
|
||||
Body: fmt.Sprintf("Reason: work_reassigned\nRequestedBy: %s\nBead: %s\nNewAssignee: %s", requester, beadID, targetAgent),
|
||||
Type: mail.TypeTask,
|
||||
Priority: mail.PriorityHigh,
|
||||
}
|
||||
if err := router.Send(shutdownMsg); err != nil {
|
||||
fmt.Printf("%s Could not send shutdown to witness: %v\n", style.Dim.Render("Warning:"), err)
|
||||
} else {
|
||||
fmt.Printf("%s Sent LIFECYCLE:Shutdown to %s/witness for %s\n", style.Bold.Render("→"), oldRigName, oldPolecatName)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Unhook the bead from old owner (set status back to open)
|
||||
unhookCmd := exec.Command("bd", "--no-daemon", "update", beadID, "--status=open", "--assignee=")
|
||||
unhookCmd.Dir = beads.ResolveHookDir(townRoot, beadID, "")
|
||||
if err := unhookCmd.Run(); err != nil {
|
||||
fmt.Printf("%s Could not unhook bead from old owner: %v\n", style.Dim.Render("Warning:"), err)
|
||||
}
|
||||
return fmt.Errorf("bead %s is already pinned to %s\nUse --force to re-sling", beadID, assignee)
|
||||
}
|
||||
|
||||
// Auto-convoy: check if issue is already tracked by a convoy
|
||||
@@ -399,14 +347,6 @@ func runSling(cmd *cobra.Command, args []string) error {
|
||||
}
|
||||
}
|
||||
|
||||
// Issue #288: Auto-apply mol-polecat-work when slinging bare bead to polecat.
|
||||
// This ensures polecats get structured work guidance through formula-on-bead.
|
||||
// Use --hook-raw-bead to bypass for expert/debugging scenarios.
|
||||
if formulaName == "" && !slingHookRawBead && strings.Contains(targetAgent, "/polecats/") {
|
||||
formulaName = "mol-polecat-work"
|
||||
fmt.Printf(" Auto-applying %s for polecat work...\n", formulaName)
|
||||
}
|
||||
|
||||
if slingDryRun {
|
||||
if formulaName != "" {
|
||||
fmt.Printf("Would instantiate formula %s:\n", formulaName)
|
||||
@@ -434,30 +374,75 @@ func runSling(cmd *cobra.Command, args []string) error {
|
||||
if formulaName != "" {
|
||||
fmt.Printf(" Instantiating formula %s...\n", formulaName)
|
||||
|
||||
result, err := InstantiateFormulaOnBead(formulaName, beadID, info.Title, hookWorkDir, townRoot, false)
|
||||
if err != nil {
|
||||
return fmt.Errorf("instantiating formula %s: %w", formulaName, err)
|
||||
// Route bd mutations (wisp/bond) to the correct beads context for the target bead.
|
||||
// Some bd mol commands don't support prefix routing, so we must run them from the
|
||||
// rig directory that owns the bead's database.
|
||||
formulaWorkDir := beads.ResolveHookDir(townRoot, beadID, hookWorkDir)
|
||||
|
||||
// Step 1: Cook the formula (ensures proto exists)
|
||||
// Cook runs from rig directory to access the correct formula database
|
||||
cookCmd := exec.Command("bd", "--no-daemon", "cook", formulaName)
|
||||
cookCmd.Dir = formulaWorkDir
|
||||
cookCmd.Stderr = os.Stderr
|
||||
if err := cookCmd.Run(); err != nil {
|
||||
return fmt.Errorf("cooking formula %s: %w", formulaName, err)
|
||||
}
|
||||
|
||||
// Step 2: Create wisp with feature and issue variables from bead
|
||||
// Run from rig directory so wisp is created in correct database
|
||||
featureVar := fmt.Sprintf("feature=%s", info.Title)
|
||||
issueVar := fmt.Sprintf("issue=%s", beadID)
|
||||
wispArgs := []string{"--no-daemon", "mol", "wisp", formulaName, "--var", featureVar, "--var", issueVar, "--json"}
|
||||
wispCmd := exec.Command("bd", wispArgs...)
|
||||
wispCmd.Dir = formulaWorkDir
|
||||
wispCmd.Env = append(os.Environ(), "GT_ROOT="+townRoot)
|
||||
wispCmd.Stderr = os.Stderr
|
||||
wispOut, err := wispCmd.Output()
|
||||
if err != nil {
|
||||
return fmt.Errorf("creating wisp for formula %s: %w", formulaName, err)
|
||||
}
|
||||
|
||||
// Parse wisp output to get the root ID
|
||||
wispRootID, err := parseWispIDFromJSON(wispOut)
|
||||
if err != nil {
|
||||
return fmt.Errorf("parsing wisp output: %w", err)
|
||||
}
|
||||
fmt.Printf("%s Formula wisp created: %s\n", style.Bold.Render("✓"), wispRootID)
|
||||
|
||||
// Step 3: Bond wisp to original bead (creates compound)
|
||||
// Use --no-daemon for mol bond (requires direct database access)
|
||||
bondArgs := []string{"--no-daemon", "mol", "bond", wispRootID, beadID, "--json"}
|
||||
bondCmd := exec.Command("bd", bondArgs...)
|
||||
bondCmd.Dir = formulaWorkDir
|
||||
bondCmd.Stderr = os.Stderr
|
||||
bondOut, err := bondCmd.Output()
|
||||
if err != nil {
|
||||
return fmt.Errorf("bonding formula to bead: %w", err)
|
||||
}
|
||||
|
||||
// Parse bond output - the wisp root becomes the compound root
|
||||
// After bonding, we hook the wisp root (which now contains the original bead)
|
||||
var bondResult struct {
|
||||
RootID string `json:"root_id"`
|
||||
}
|
||||
if err := json.Unmarshal(bondOut, &bondResult); err != nil {
|
||||
// Fallback: use wisp root as the compound root
|
||||
fmt.Printf("%s Could not parse bond output, using wisp root\n", style.Dim.Render("Warning:"))
|
||||
} else if bondResult.RootID != "" {
|
||||
wispRootID = bondResult.RootID
|
||||
}
|
||||
|
||||
fmt.Printf("%s Formula wisp created: %s\n", style.Bold.Render("✓"), result.WispRootID)
|
||||
fmt.Printf("%s Formula bonded to %s\n", style.Bold.Render("✓"), beadID)
|
||||
|
||||
// Record attached molecule - will be stored in BASE bead (not wisp).
|
||||
// The base bead is hooked, and its attached_molecule points to the wisp.
|
||||
// This enables:
|
||||
// - gt hook/gt prime: read base bead, follow attached_molecule to show wisp steps
|
||||
// - gt done: close attached_molecule (wisp) first, then close base bead
|
||||
// - Compound resolution: base bead -> attached_molecule -> wisp
|
||||
attachedMoleculeID = result.WispRootID
|
||||
// Record the attached molecule in the wisp's description.
|
||||
// This is required for gt hook to recognize the molecule attachment.
|
||||
if err := storeAttachedMoleculeInBead(wispRootID, wispRootID); err != nil {
|
||||
// Warn but don't fail - polecat can still work through steps
|
||||
fmt.Printf("%s Could not store attached_molecule: %v\n", style.Dim.Render("Warning:"), err)
|
||||
}
|
||||
|
||||
// NOTE: We intentionally keep beadID as the ORIGINAL base bead, not the wisp.
|
||||
// The base bead is hooked so that:
|
||||
// 1. gt done closes both the base bead AND the attached molecule (wisp)
|
||||
// 2. The base bead's attached_molecule field points to the wisp for compound resolution
|
||||
// Previously, this line incorrectly set beadID = wispRootID, causing:
|
||||
// - Wisp hooked instead of base bead
|
||||
// - attached_molecule stored as self-reference in wisp (meaningless)
|
||||
// - Base bead left orphaned after gt done
|
||||
// Update beadID to hook the compound root instead of bare bead
|
||||
beadID = wispRootID
|
||||
}
|
||||
|
||||
// Hook the bead using bd update.
|
@@ -476,11 +461,15 @@ func runSling(cmd *cobra.Command, args []string) error {
    _ = events.LogFeed(events.TypeSling, actor, events.SlingPayload(beadID, targetAgent))

    // Update agent bead's hook_bead field (ZFC: agents track their current work)
    // Skip if hook was already set atomically during polecat spawn - avoids "agent bead not found"
    // error when polecat redirect setup fails (GH #gt-mzyk5: agent bead created in rig beads
    // but updateAgentHookBead looks in polecat's local beads if redirect is missing).
    if !hookSetAtomically {
        updateAgentHookBead(targetAgent, beadID, hookWorkDir, townBeadsDir)
    updateAgentHookBead(targetAgent, beadID, hookWorkDir, townBeadsDir)

    // Auto-attach mol-polecat-work to polecat agent beads
    // This ensures polecats have the standard work molecule attached for guidance
    if strings.Contains(targetAgent, "/polecats/") {
        if err := attachPolecatWorkMolecule(targetAgent, hookWorkDir, townRoot); err != nil {
            // Warn but don't fail - polecat will still work without molecule
            fmt.Printf("%s Could not attach work molecule: %v\n", style.Dim.Render("Warning:"), err)
        }
    }

    // Store dispatcher in bead description (enables completion notification to dispatcher)
@@ -499,18 +488,6 @@ func runSling(cmd *cobra.Command, args []string) error {
        }
    }

    // Record the attached molecule in the BASE bead's description.
    // This field points to the wisp (compound root) and enables:
    // - gt hook/gt prime: follow attached_molecule to show molecule steps
    // - gt done: close attached_molecule (wisp) before closing hooked bead
    // - Compound resolution: base bead -> attached_molecule -> wisp
    if attachedMoleculeID != "" {
        if err := storeAttachedMoleculeInBead(beadID, attachedMoleculeID); err != nil {
            // Warn but don't fail - polecat can still work through steps
            fmt.Printf("%s Could not store attached_molecule: %v\n", style.Dim.Render("Warning:"), err)
        }
    }

    // Try to inject the "start now" prompt (graceful if no tmux)
    if targetPane == "" {
        fmt.Printf("%s No pane to nudge (agent will discover work via gt prime)\n", style.Dim.Render("○"))

@@ -1,378 +0,0 @@
package cmd

import (
    "os"
    "path/filepath"
    "strings"
    "testing"
)

// TestInstantiateFormulaOnBead verifies the helper function works correctly.
// This tests the formula-on-bead pattern used by issue #288.
func TestInstantiateFormulaOnBead(t *testing.T) {
    townRoot := t.TempDir()

    // Minimal workspace marker
    if err := os.MkdirAll(filepath.Join(townRoot, "mayor", "rig"), 0755); err != nil {
        t.Fatalf("mkdir mayor/rig: %v", err)
    }

    // Create routes.jsonl
    if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
        t.Fatalf("mkdir .beads: %v", err)
    }
    rigDir := filepath.Join(townRoot, "gastown", "mayor", "rig")
    if err := os.MkdirAll(rigDir, 0755); err != nil {
        t.Fatalf("mkdir rigDir: %v", err)
    }
    routes := strings.Join([]string{
        `{"prefix":"gt-","path":"gastown/mayor/rig"}`,
        `{"prefix":"hq-","path":"."}`,
        "",
    }, "\n")
    if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(routes), 0644); err != nil {
        t.Fatalf("write routes.jsonl: %v", err)
    }

    // Create stub bd that logs all commands
    binDir := filepath.Join(townRoot, "bin")
    if err := os.MkdirAll(binDir, 0755); err != nil {
        t.Fatalf("mkdir binDir: %v", err)
    }
    logPath := filepath.Join(townRoot, "bd.log")
    bdScript := `#!/bin/sh
set -e
echo "CMD:$*" >> "${BD_LOG}"
if [ "$1" = "--no-daemon" ]; then
shift
fi
cmd="$1"
shift || true
case "$cmd" in
show)
echo '[{"title":"Fix bug ABC","status":"open","assignee":"","description":""}]'
;;
formula)
echo '{"name":"mol-polecat-work"}'
;;
cook)
;;
mol)
sub="$1"
shift || true
case "$sub" in
wisp)
echo '{"new_epic_id":"gt-wisp-288"}'
;;
bond)
echo '{"root_id":"gt-wisp-288"}'
;;
esac
;;
update)
;;
esac
exit 0
`
    bdPath := filepath.Join(binDir, "bd")
    if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
        t.Fatalf("write bd stub: %v", err)
    }

    t.Setenv("BD_LOG", logPath)
    t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))

    cwd, err := os.Getwd()
    if err != nil {
        t.Fatalf("getwd: %v", err)
    }
    t.Cleanup(func() { _ = os.Chdir(cwd) })
    if err := os.Chdir(filepath.Join(townRoot, "mayor", "rig")); err != nil {
        t.Fatalf("chdir: %v", err)
    }

    // Test the helper function directly
    result, err := InstantiateFormulaOnBead("mol-polecat-work", "gt-abc123", "Test Bug Fix", "", townRoot, false)
    if err != nil {
        t.Fatalf("InstantiateFormulaOnBead failed: %v", err)
    }

    if result.WispRootID == "" {
        t.Error("WispRootID should not be empty")
    }
    if result.BeadToHook == "" {
        t.Error("BeadToHook should not be empty")
    }

    // Verify commands were logged
    logBytes, err := os.ReadFile(logPath)
    if err != nil {
        t.Fatalf("read log: %v", err)
    }
    logContent := string(logBytes)

    if !strings.Contains(logContent, "cook mol-polecat-work") {
        t.Errorf("cook command not found in log:\n%s", logContent)
    }
    if !strings.Contains(logContent, "mol wisp mol-polecat-work") {
        t.Errorf("mol wisp command not found in log:\n%s", logContent)
    }
    if !strings.Contains(logContent, "mol bond") {
        t.Errorf("mol bond command not found in log:\n%s", logContent)
    }
}

// TestInstantiateFormulaOnBeadSkipCook verifies the skipCook optimization.
func TestInstantiateFormulaOnBeadSkipCook(t *testing.T) {
    townRoot := t.TempDir()

    // Minimal workspace marker
    if err := os.MkdirAll(filepath.Join(townRoot, "mayor", "rig"), 0755); err != nil {
        t.Fatalf("mkdir mayor/rig: %v", err)
    }

    // Create routes.jsonl
    if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
        t.Fatalf("mkdir .beads: %v", err)
    }
    routes := `{"prefix":"gt-","path":"."}`
    if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(routes), 0644); err != nil {
        t.Fatalf("write routes.jsonl: %v", err)
    }

    // Create stub bd
    binDir := filepath.Join(townRoot, "bin")
    if err := os.MkdirAll(binDir, 0755); err != nil {
        t.Fatalf("mkdir binDir: %v", err)
    }
    logPath := filepath.Join(townRoot, "bd.log")
    bdScript := `#!/bin/sh
echo "CMD:$*" >> "${BD_LOG}"
if [ "$1" = "--no-daemon" ]; then shift; fi
cmd="$1"; shift || true
case "$cmd" in
mol)
sub="$1"; shift || true
case "$sub" in
wisp) echo '{"new_epic_id":"gt-wisp-skip"}';;
bond) echo '{"root_id":"gt-wisp-skip"}';;
esac;;
esac
exit 0
`
    if err := os.WriteFile(filepath.Join(binDir, "bd"), []byte(bdScript), 0755); err != nil {
        t.Fatalf("write bd stub: %v", err)
    }

    t.Setenv("BD_LOG", logPath)
    t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))

    cwd, _ := os.Getwd()
    t.Cleanup(func() { _ = os.Chdir(cwd) })
    _ = os.Chdir(townRoot)

    // Test with skipCook=true
    _, err := InstantiateFormulaOnBead("mol-polecat-work", "gt-test", "Test", "", townRoot, true)
    if err != nil {
        t.Fatalf("InstantiateFormulaOnBead failed: %v", err)
    }

    logBytes, _ := os.ReadFile(logPath)
    logContent := string(logBytes)

    // Verify cook was NOT called when skipCook=true
    if strings.Contains(logContent, "cook") {
        t.Errorf("cook should be skipped when skipCook=true, but was called:\n%s", logContent)
    }

    // Verify wisp and bond were still called
    if !strings.Contains(logContent, "mol wisp") {
        t.Errorf("mol wisp should still be called")
    }
    if !strings.Contains(logContent, "mol bond") {
        t.Errorf("mol bond should still be called")
    }
}

// TestCookFormula verifies the CookFormula helper.
func TestCookFormula(t *testing.T) {
    townRoot := t.TempDir()

    binDir := filepath.Join(townRoot, "bin")
    if err := os.MkdirAll(binDir, 0755); err != nil {
        t.Fatalf("mkdir binDir: %v", err)
    }
    logPath := filepath.Join(townRoot, "bd.log")
    bdScript := `#!/bin/sh
echo "CMD:$*" >> "${BD_LOG}"
exit 0
`
    if err := os.WriteFile(filepath.Join(binDir, "bd"), []byte(bdScript), 0755); err != nil {
        t.Fatalf("write bd stub: %v", err)
    }

    t.Setenv("BD_LOG", logPath)
    t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))

    err := CookFormula("mol-polecat-work", townRoot)
    if err != nil {
        t.Fatalf("CookFormula failed: %v", err)
    }

    logBytes, _ := os.ReadFile(logPath)
    if !strings.Contains(string(logBytes), "cook mol-polecat-work") {
        t.Errorf("cook command not found in log")
    }
}

// TestSlingHookRawBeadFlag verifies --hook-raw-bead flag exists.
func TestSlingHookRawBeadFlag(t *testing.T) {
    // Verify the flag variable exists and works
    prevValue := slingHookRawBead
    t.Cleanup(func() { slingHookRawBead = prevValue })

    slingHookRawBead = true
    if !slingHookRawBead {
        t.Error("slingHookRawBead flag should be true")
    }

    slingHookRawBead = false
    if slingHookRawBead {
        t.Error("slingHookRawBead flag should be false")
    }
}

// TestAutoApplyLogic verifies the auto-apply detection logic.
// When formulaName is empty and target contains "/polecats/", mol-polecat-work should be applied.
func TestAutoApplyLogic(t *testing.T) {
    tests := []struct {
        name string
        formulaName string
        hookRawBead bool
        targetAgent string
        wantAutoApply bool
    }{
        {
            name: "bare bead to polecat - should auto-apply",
            formulaName: "",
            hookRawBead: false,
            targetAgent: "gastown/polecats/Toast",
            wantAutoApply: true,
        },
        {
            name: "bare bead with --hook-raw-bead - should not auto-apply",
            formulaName: "",
            hookRawBead: true,
            targetAgent: "gastown/polecats/Toast",
            wantAutoApply: false,
        },
        {
            name: "formula already specified - should not auto-apply",
            formulaName: "mol-review",
            hookRawBead: false,
            targetAgent: "gastown/polecats/Toast",
            wantAutoApply: false,
        },
        {
            name: "non-polecat target - should not auto-apply",
            formulaName: "",
            hookRawBead: false,
            targetAgent: "gastown/witness",
            wantAutoApply: false,
        },
        {
            name: "mayor target - should not auto-apply",
            formulaName: "",
            hookRawBead: false,
            targetAgent: "mayor",
            wantAutoApply: false,
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // This mirrors the logic in sling.go
            shouldAutoApply := tt.formulaName == "" && !tt.hookRawBead && strings.Contains(tt.targetAgent, "/polecats/")

            if shouldAutoApply != tt.wantAutoApply {
                t.Errorf("auto-apply logic: got %v, want %v", shouldAutoApply, tt.wantAutoApply)
            }
        })
    }
}

// TestFormulaOnBeadPassesVariables verifies that feature and issue variables are passed.
func TestFormulaOnBeadPassesVariables(t *testing.T) {
    townRoot := t.TempDir()

    // Minimal workspace
    if err := os.MkdirAll(filepath.Join(townRoot, "mayor", "rig"), 0755); err != nil {
        t.Fatalf("mkdir: %v", err)
    }
    if err := os.MkdirAll(filepath.Join(townRoot, ".beads"), 0755); err != nil {
        t.Fatalf("mkdir .beads: %v", err)
    }
    if err := os.WriteFile(filepath.Join(townRoot, ".beads", "routes.jsonl"), []byte(`{"prefix":"gt-","path":"."}`), 0644); err != nil {
        t.Fatalf("write routes: %v", err)
    }

    binDir := filepath.Join(townRoot, "bin")
    if err := os.MkdirAll(binDir, 0755); err != nil {
        t.Fatalf("mkdir binDir: %v", err)
    }
    logPath := filepath.Join(townRoot, "bd.log")
    bdScript := `#!/bin/sh
echo "CMD:$*" >> "${BD_LOG}"
if [ "$1" = "--no-daemon" ]; then shift; fi
cmd="$1"; shift || true
case "$cmd" in
cook) exit 0;;
mol)
sub="$1"; shift || true
case "$sub" in
wisp) echo '{"new_epic_id":"gt-wisp-var"}';;
bond) echo '{"root_id":"gt-wisp-var"}';;
esac;;
esac
exit 0
`
    if err := os.WriteFile(filepath.Join(binDir, "bd"), []byte(bdScript), 0755); err != nil {
        t.Fatalf("write bd: %v", err)
    }

    t.Setenv("BD_LOG", logPath)
    t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))

    cwd, _ := os.Getwd()
    t.Cleanup(func() { _ = os.Chdir(cwd) })
    _ = os.Chdir(townRoot)

    _, err := InstantiateFormulaOnBead("mol-polecat-work", "gt-abc123", "My Cool Feature", "", townRoot, false)
    if err != nil {
        t.Fatalf("InstantiateFormulaOnBead: %v", err)
    }

    logBytes, _ := os.ReadFile(logPath)
    logContent := string(logBytes)

    // Find mol wisp line
    var wispLine string
    for _, line := range strings.Split(logContent, "\n") {
        if strings.Contains(line, "mol wisp") {
            wispLine = line
            break
        }
    }

    if wispLine == "" {
        t.Fatalf("mol wisp command not found:\n%s", logContent)
    }

    if !strings.Contains(wispLine, "feature=My Cool Feature") {
        t.Errorf("mol wisp missing feature variable:\n%s", wispLine)
    }

    if !strings.Contains(wispLine, "issue=gt-abc123") {
        t.Errorf("mol wisp missing issue variable:\n%s", wispLine)
    }
}
@@ -23,21 +23,14 @@ func runBatchSling(beadIDs []string, rigName string, townBeadsDir string) error

    if slingDryRun {
        fmt.Printf("%s Batch slinging %d beads to rig '%s':\n", style.Bold.Render("🎯"), len(beadIDs), rigName)
        fmt.Printf(" Would cook mol-polecat-work formula once\n")
        for _, beadID := range beadIDs {
            fmt.Printf(" Would spawn polecat and apply mol-polecat-work to: %s\n", beadID)
            fmt.Printf(" Would spawn polecat for: %s\n", beadID)
        }
        return nil
    }

    fmt.Printf("%s Batch slinging %d beads to rig '%s'...\n", style.Bold.Render("🎯"), len(beadIDs), rigName)

    // Issue #288: Auto-apply mol-polecat-work for batch sling
    // Cook once before the loop for efficiency
    townRoot := filepath.Dir(townBeadsDir)
    formulaName := "mol-polecat-work"
    formulaCooked := false

    // Track results for summary
    type slingResult struct {
        beadID string
@@ -98,34 +91,10 @@ func runBatchSling(beadIDs []string, rigName string, townBeadsDir string) error
            }
        }

        // Issue #288: Apply mol-polecat-work via formula-on-bead pattern
        // Cook once (lazy), then instantiate for each bead
        if !formulaCooked {
            workDir := beads.ResolveHookDir(townRoot, beadID, hookWorkDir)
            if err := CookFormula(formulaName, workDir); err != nil {
                fmt.Printf(" %s Could not cook formula %s: %v\n", style.Dim.Render("Warning:"), formulaName, err)
                // Fall back to raw hook if formula cook fails
            } else {
                formulaCooked = true
            }
        }

        beadToHook := beadID
        attachedMoleculeID := ""
        if formulaCooked {
            result, err := InstantiateFormulaOnBead(formulaName, beadID, info.Title, hookWorkDir, townRoot, true)
            if err != nil {
                fmt.Printf(" %s Could not apply formula: %v (hooking raw bead)\n", style.Dim.Render("Warning:"), err)
            } else {
                fmt.Printf(" %s Formula %s applied\n", style.Bold.Render("✓"), formulaName)
                beadToHook = result.BeadToHook
                attachedMoleculeID = result.WispRootID
            }
        }

        // Hook the bead (or wisp compound if formula was applied)
        hookCmd := exec.Command("bd", "--no-daemon", "update", beadToHook, "--status=hooked", "--assignee="+targetAgent)
        hookCmd.Dir = beads.ResolveHookDir(townRoot, beadToHook, hookWorkDir)
        // Hook the bead. See: https://github.com/steveyegge/gastown/issues/148
        townRoot := filepath.Dir(townBeadsDir)
        hookCmd := exec.Command("bd", "--no-daemon", "update", beadID, "--status=hooked", "--assignee="+targetAgent)
        hookCmd.Dir = beads.ResolveHookDir(townRoot, beadID, hookWorkDir)
        hookCmd.Stderr = os.Stderr
        if err := hookCmd.Run(); err != nil {
            results = append(results, slingResult{beadID: beadID, polecat: spawnInfo.PolecatName, success: false, errMsg: "hook failed"})
@@ -137,16 +106,14 @@ func runBatchSling(beadIDs []string, rigName string, townBeadsDir string) error

        // Log sling event
        actor := detectActor()
        _ = events.LogFeed(events.TypeSling, actor, events.SlingPayload(beadToHook, targetAgent))
        _ = events.LogFeed(events.TypeSling, actor, events.SlingPayload(beadID, targetAgent))

        // Update agent bead state
        updateAgentHookBead(targetAgent, beadToHook, hookWorkDir, townBeadsDir)
        updateAgentHookBead(targetAgent, beadID, hookWorkDir, townBeadsDir)

        // Store attached molecule in the hooked bead
        if attachedMoleculeID != "" {
            if err := storeAttachedMoleculeInBead(beadToHook, attachedMoleculeID); err != nil {
                fmt.Printf(" %s Could not store attached_molecule: %v\n", style.Dim.Render("Warning:"), err)
            }
        // Auto-attach mol-polecat-work molecule to polecat agent bead
        if err := attachPolecatWorkMolecule(targetAgent, hookWorkDir, townRoot); err != nil {
            fmt.Printf(" %s Could not attach work molecule: %v\n", style.Dim.Render("Warning:"), err)
        }

        // Store args if provided

@@ -42,7 +42,6 @@ func isTrackedByConvoy(beadID string) string {
        JOIN issues i ON d.issue_id = i.id
        WHERE d.type = 'tracks'
        AND i.issue_type = 'convoy'
        AND i.status = 'open'
        AND (d.depends_on_id = '%s' OR d.depends_on_id LIKE '%%:%s')
        LIMIT 1
    `, beadID, beadID)

||||
@@ -209,7 +209,13 @@ func runSlingFormula(args []string) error {
|
||||
}
|
||||
|
||||
fmt.Printf("%s Wisp created: %s\n", style.Bold.Render("✓"), wispRootID)
|
||||
attachedMoleculeID := wispRootID
|
||||
|
||||
// Record the attached molecule in the wisp's description.
|
||||
// This is required for gt hook to recognize the molecule attachment.
|
||||
if err := storeAttachedMoleculeInBead(wispRootID, wispRootID); err != nil {
|
||||
// Warn but don't fail - polecat can still work through steps
|
||||
fmt.Printf("%s Could not store attached_molecule: %v\n", style.Dim.Render("Warning:"), err)
|
||||
}
|
||||
|
||||
// Step 3: Hook the wisp bead using bd update.
|
||||
// See: https://github.com/steveyegge/gastown/issues/148
|
||||
@@ -246,25 +252,12 @@ func runSlingFormula(args []string) error {
|
||||
}
|
||||
}
|
||||
|
||||
// Record the attached molecule after other description updates to avoid overwrite.
|
||||
if attachedMoleculeID != "" {
|
||||
if err := storeAttachedMoleculeInBead(wispRootID, attachedMoleculeID); err != nil {
|
||||
// Warn but don't fail - polecat can still work through steps
|
||||
fmt.Printf("%s Could not store attached_molecule: %v\n", style.Dim.Render("Warning:"), err)
|
||||
}
|
||||
}
|
||||
|
||||
// Step 4: Nudge to start (graceful if no tmux)
|
||||
if targetPane == "" {
|
||||
fmt.Printf("%s No pane to nudge (agent will discover work via gt prime)\n", style.Dim.Render("○"))
|
||||
return nil
|
||||
}
|
||||
|
||||
// Skip nudge during tests to prevent agent self-interruption
|
||||
if os.Getenv("GT_TEST_NO_NUDGE") != "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
var prompt string
|
||||
if slingArgs != "" {
|
||||
prompt = fmt.Sprintf("Formula %s slung. Args: %s. Run `gt hook` to see your hook, then execute using these args.", formulaName, slingArgs)
|
||||
|
||||
@@ -9,7 +9,9 @@ import (
"time"

"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/config"
"github.com/steveyegge/gastown/internal/constants"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/tmux"
"github.com/steveyegge/gastown/internal/workspace"
)
@@ -93,16 +95,12 @@ func storeArgsInBead(beadID, args string) error {
// Parse the bead
var issues []beads.Issue
if err := json.Unmarshal(out, &issues); err != nil {
if os.Getenv("GT_TEST_ATTACHED_MOLECULE_LOG") == "" {
return fmt.Errorf("parsing bead: %w", err)
}
return fmt.Errorf("parsing bead: %w", err)
}
issue := &beads.Issue{}
if len(issues) > 0 {
issue = &issues[0]
} else if os.Getenv("GT_TEST_ATTACHED_MOLECULE_LOG") == "" {
if len(issues) == 0 {
return fmt.Errorf("bead not found")
}
issue := &issues[0]

// Get or create attachment fields
fields := beads.ParseAttachmentFields(issue)
@@ -115,9 +113,6 @@ func storeArgsInBead(beadID, args string) error {

// Update the description
newDesc := beads.SetAttachmentFields(issue, fields)
if logPath := os.Getenv("GT_TEST_ATTACHED_MOLECULE_LOG"); logPath != "" {
_ = os.WriteFile(logPath, []byte(newDesc), 0644)
}

// Update the bead
updateCmd := exec.Command("bd", "--no-daemon", "update", beadID, "--description="+newDesc)
@@ -182,30 +177,23 @@ func storeAttachedMoleculeInBead(beadID, moleculeID string) error {
if moleculeID == "" {
return nil
}
logPath := os.Getenv("GT_TEST_ATTACHED_MOLECULE_LOG")
if logPath != "" {
_ = os.WriteFile(logPath, []byte("called"), 0644)

// Get the bead to preserve existing description content
showCmd := exec.Command("bd", "show", beadID, "--json")
out, err := showCmd.Output()
if err != nil {
return fmt.Errorf("fetching bead: %w", err)
}

issue := &beads.Issue{}
if logPath == "" {
// Get the bead to preserve existing description content
showCmd := exec.Command("bd", "show", beadID, "--json")
out, err := showCmd.Output()
if err != nil {
return fmt.Errorf("fetching bead: %w", err)
}

// Parse the bead
var issues []beads.Issue
if err := json.Unmarshal(out, &issues); err != nil {
return fmt.Errorf("parsing bead: %w", err)
}
if len(issues) == 0 {
return fmt.Errorf("bead not found")
}
issue = &issues[0]
// Parse the bead
var issues []beads.Issue
if err := json.Unmarshal(out, &issues); err != nil {
return fmt.Errorf("parsing bead: %w", err)
}
if len(issues) == 0 {
return fmt.Errorf("bead not found")
}
issue := &issues[0]

// Get or create attachment fields
fields := beads.ParseAttachmentFields(issue)
@@ -221,9 +209,6 @@ func storeAttachedMoleculeInBead(beadID, moleculeID string) error {

// Update the description
newDesc := beads.SetAttachmentFields(issue, fields)
if logPath != "" {
_ = os.WriteFile(logPath, []byte(newDesc), 0644)
}

// Update the bead
updateCmd := exec.Command("bd", "update", beadID, "--description="+newDesc)
@@ -343,9 +328,6 @@ func detectActor() string {
// Rig-level agents use the rig's configured prefix (default "gt-").
// townRoot is needed to look up the rig's configured prefix.
func agentIDToBeadID(agentID, townRoot string) string {
// Normalize: strip trailing slash (resolveSelfTarget returns "mayor/" not "mayor")
agentID = strings.TrimSuffix(agentID, "/")

// Handle simple cases (town-level agents with hq- prefix)
if agentID == "mayor" {
return beads.MayorBeadIDTown()
@@ -415,17 +397,11 @@ func updateAgentHookBead(agentID, beadID, workDir, townBeadsDir string) {
return
}

// Resolve the correct working directory for the agent bead.
// Agent beads with rig-level prefixes (e.g., go-) live in rig databases,
// not the town database. Use prefix-based resolution to find the correct path.
// This fixes go-19z: bd slot commands failing for go-* prefixed beads.
agentWorkDir := beads.ResolveHookDir(townRoot, agentBeadID, bdWorkDir)

// Run from agentWorkDir WITHOUT BEADS_DIR to enable redirect-based routing.
// Run from workDir WITHOUT BEADS_DIR to enable redirect-based routing.
// Set hook_bead to the slung work (gt-zecmc: removed agent_state update).
// Agent liveness is observable from tmux - no need to record it in bead.
// For cross-database scenarios, slot set may fail gracefully (warning only).
bd := beads.New(agentWorkDir)
bd := beads.New(bdWorkDir)
if err := bd.SetHookBead(agentBeadID, beadID); err != nil {
// Log warning instead of silent ignore - helps debug cross-beads issues
fmt.Fprintf(os.Stderr, "Warning: couldn't set agent %s hook: %v\n", agentBeadID, err)
@@ -459,86 +435,57 @@ func isPolecatTarget(target string) bool {
return len(parts) >= 3 && parts[1] == "polecats"
}

// FormulaOnBeadResult contains the result of instantiating a formula on a bead.
type FormulaOnBeadResult struct {
WispRootID string // The wisp root ID (compound root after bonding)
BeadToHook string // The bead ID to hook (BASE bead, not wisp - lifecycle fix)
}

// InstantiateFormulaOnBead creates a wisp from a formula, bonds it to a bead.
// This is the formula-on-bead pattern used by issue #288 for auto-applying mol-polecat-work.
// attachPolecatWorkMolecule attaches the mol-polecat-work molecule to a polecat's agent bead.
// This ensures all polecats have the standard work molecule attached for guidance.
// The molecule is attached by storing it in the agent bead's description using attachment fields.
//
// Parameters:
// - formulaName: the formula to instantiate (e.g., "mol-polecat-work")
// - beadID: the base bead to bond the wisp to
// - title: the bead title (used for --var feature=<title>)
// - hookWorkDir: working directory for bd commands (polecat's worktree)
// - townRoot: the town root directory
// - skipCook: if true, skip cooking (for batch mode optimization where cook happens once)
//
// Returns the wisp root ID which should be hooked.
func InstantiateFormulaOnBead(formulaName, beadID, title, hookWorkDir, townRoot string, skipCook bool) (*FormulaOnBeadResult, error) {
// Route bd mutations (wisp/bond) to the correct beads context for the target bead.
formulaWorkDir := beads.ResolveHookDir(townRoot, beadID, hookWorkDir)
// Per issue #288: gt sling should auto-attach mol-polecat-work when slinging to polecats.
func attachPolecatWorkMolecule(targetAgent, hookWorkDir, townRoot string) error {
// Parse the polecat name from targetAgent (format: "rig/polecats/name")
parts := strings.Split(targetAgent, "/")
if len(parts) != 3 || parts[1] != "polecats" {
return fmt.Errorf("invalid polecat agent format: %s", targetAgent)
}
rigName := parts[0]
polecatName := parts[2]

// Step 1: Cook the formula (ensures proto exists)
if !skipCook {
cookCmd := exec.Command("bd", "--no-daemon", "cook", formulaName)
cookCmd.Dir = formulaWorkDir
cookCmd.Stderr = os.Stderr
if err := cookCmd.Run(); err != nil {
return nil, fmt.Errorf("cooking formula %s: %w", formulaName, err)
}
// Get the polecat's agent bead ID
// Format: "<prefix>-<rig>-polecat-<name>" (e.g., "gt-gastown-polecat-Toast")
prefix := config.GetRigPrefix(townRoot, rigName)
agentBeadID := beads.PolecatBeadIDWithPrefix(prefix, rigName, polecatName)

// Resolve the rig directory for running bd commands.
// Use ResolveHookDir to ensure we run bd from the correct rig directory
// (not from the polecat's worktree, which doesn't have a .beads directory).
// This fixes issue #197: polecat fails to hook when slinging with molecule.
rigDir := beads.ResolveHookDir(townRoot, prefix+"-"+polecatName, hookWorkDir)

b := beads.New(rigDir)

// Check if molecule is already attached (avoid duplicate attach)
attachment, err := b.GetAttachment(agentBeadID)
if err == nil && attachment != nil && attachment.AttachedMolecule != "" {
// Already has a molecule attached - skip
return nil
}

// Step 2: Create wisp with feature and issue variables from bead
featureVar := fmt.Sprintf("feature=%s", title)
issueVar := fmt.Sprintf("issue=%s", beadID)
wispArgs := []string{"--no-daemon", "mol", "wisp", formulaName, "--var", featureVar, "--var", issueVar, "--json"}
wispCmd := exec.Command("bd", wispArgs...)
wispCmd.Dir = formulaWorkDir
wispCmd.Env = append(os.Environ(), "GT_ROOT="+townRoot)
wispCmd.Stderr = os.Stderr
wispOut, err := wispCmd.Output()
if err != nil {
return nil, fmt.Errorf("creating wisp for formula %s: %w", formulaName, err)
}

// Parse wisp output to get the root ID
wispRootID, err := parseWispIDFromJSON(wispOut)
if err != nil {
return nil, fmt.Errorf("parsing wisp output: %w", err)
}

// Step 3: Bond wisp to original bead (creates compound)
bondArgs := []string{"--no-daemon", "mol", "bond", wispRootID, beadID, "--json"}
bondCmd := exec.Command("bd", bondArgs...)
bondCmd.Dir = formulaWorkDir
bondCmd.Stderr = os.Stderr
bondOut, err := bondCmd.Output()
if err != nil {
return nil, fmt.Errorf("bonding formula to bead: %w", err)
}

// Parse bond output - the wisp root becomes the compound root
var bondResult struct {
RootID string `json:"root_id"`
}
if err := json.Unmarshal(bondOut, &bondResult); err == nil && bondResult.RootID != "" {
wispRootID = bondResult.RootID
}

return &FormulaOnBeadResult{
WispRootID: wispRootID,
BeadToHook: beadID, // Hook the BASE bead (lifecycle fix: wisp is attached_molecule)
}, nil
}

// CookFormula cooks a formula to ensure its proto exists.
// This is useful for batch mode where we cook once before processing multiple beads.
func CookFormula(formulaName, workDir string) error {
cookCmd := exec.Command("bd", "--no-daemon", "cook", formulaName)
cookCmd.Dir = workDir
// Cook the mol-polecat-work formula to ensure the proto exists
// This is safe to run multiple times - cooking is idempotent
cookCmd := exec.Command("bd", "--no-daemon", "cook", "mol-polecat-work")
cookCmd.Dir = rigDir
cookCmd.Stderr = os.Stderr
return cookCmd.Run()
if err := cookCmd.Run(); err != nil {
return fmt.Errorf("cooking mol-polecat-work formula: %w", err)
}

// Attach the molecule to the polecat's agent bead
// The molecule ID is the formula name "mol-polecat-work"
moleculeID := "mol-polecat-work"
_, err = b.AttachMolecule(agentBeadID, moleculeID)
if err != nil {
return fmt.Errorf("attaching molecule %s to %s: %w", moleculeID, agentBeadID, err)
}

fmt.Printf("%s Attached %s to %s\n", style.Bold.Render("✓"), moleculeID, agentBeadID)
return nil
}
||||
@@ -3,39 +3,10 @@ package cmd
import (
"os"
"path/filepath"
"runtime"
"strings"
"testing"
)

func writeBDStub(t *testing.T, binDir string, unixScript string, windowsScript string) string {
t.Helper()

var path string
if runtime.GOOS == "windows" {
path = filepath.Join(binDir, "bd.cmd")
if err := os.WriteFile(path, []byte(windowsScript), 0644); err != nil {
t.Fatalf("write bd stub: %v", err)
}
return path
}

path = filepath.Join(binDir, "bd")
if err := os.WriteFile(path, []byte(unixScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}
return path
}

func containsVarArg(line, key, value string) bool {
plain := "--var " + key + "=" + value
if strings.Contains(line, plain) {
return true
}
quoted := "--var \"" + key + "=" + value + "\""
return strings.Contains(line, quoted)
}

func TestParseWispIDFromJSON(t *testing.T) {
tests := []struct {
name string
@@ -249,6 +220,7 @@ func TestSlingFormulaOnBeadRoutesBDCommandsToTargetRig(t *testing.T) {
t.Fatalf("mkdir binDir: %v", err)
}
logPath := filepath.Join(townRoot, "bd.log")
bdPath := filepath.Join(binDir, "bd")
bdScript := `#!/bin/sh
set -e
echo "$(pwd)|$*" >> "${BD_LOG}"
@@ -284,41 +256,11 @@ case "$cmd" in
esac
exit 0
`
bdScriptWindows := `@echo off
setlocal enableextensions
echo %CD%^|%*>>"%BD_LOG%"
set "cmd=%1"
set "sub=%2"
if "%cmd%"=="--no-daemon" (
set "cmd=%2"
set "sub=%3"
)
if "%cmd%"=="show" (
echo [{"title":"Test issue","status":"open","assignee":"","description":""}]
exit /b 0
)
if "%cmd%"=="formula" (
echo {"name":"test-formula"}
exit /b 0
)
if "%cmd%"=="cook" exit /b 0
if "%cmd%"=="mol" (
if "%sub%"=="wisp" (
echo {"new_epic_id":"gt-wisp-xyz"}
exit /b 0
)
if "%sub%"=="bond" (
echo {"root_id":"gt-wisp-xyz"}
exit /b 0
)
)
exit /b 0
`
_ = writeBDStub(t, binDir, bdScript, bdScriptWindows)
if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}

t.Setenv("BD_LOG", logPath)
attachedLogPath := filepath.Join(townRoot, "attached-molecule.log")
t.Setenv("GT_TEST_ATTACHED_MOLECULE_LOG", attachedLogPath)
t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
t.Setenv(EnvGTRole, "mayor")
t.Setenv("GT_POLECAT", "")
@@ -439,6 +381,7 @@ func TestSlingFormulaOnBeadPassesFeatureAndIssueVars(t *testing.T) {
t.Fatalf("mkdir binDir: %v", err)
}
logPath := filepath.Join(townRoot, "bd.log")
bdPath := filepath.Join(binDir, "bd")
// The stub returns a specific title so we can verify it appears in --var feature=
bdScript := `#!/bin/sh
set -e
@@ -475,41 +418,11 @@ case "$cmd" in
esac
exit 0
`
bdScriptWindows := `@echo off
setlocal enableextensions
echo ARGS:%*>>"%BD_LOG%"
set "cmd=%1"
set "sub=%2"
if "%cmd%"=="--no-daemon" (
set "cmd=%2"
set "sub=%3"
)
if "%cmd%"=="show" (
echo [{^"title^":^"My Test Feature^",^"status^":^"open^",^"assignee^":^"^",^"description^":^"^"}]
exit /b 0
)
if "%cmd%"=="formula" (
echo {^"name^":^"mol-review^"}
exit /b 0
)
if "%cmd%"=="cook" exit /b 0
if "%cmd%"=="mol" (
if "%sub%"=="wisp" (
echo {^"new_epic_id^":^"gt-wisp-xyz^"}
exit /b 0
)
if "%sub%"=="bond" (
echo {^"root_id^":^"gt-wisp-xyz^"}
exit /b 0
)
)
exit /b 0
`
_ = writeBDStub(t, binDir, bdScript, bdScriptWindows)
if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}

t.Setenv("BD_LOG", logPath)
attachedLogPath := filepath.Join(townRoot, "attached-molecule.log")
t.Setenv("GT_TEST_ATTACHED_MOLECULE_LOG", attachedLogPath)
t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
t.Setenv(EnvGTRole, "mayor")
t.Setenv("GT_POLECAT", "")
@@ -569,12 +482,12 @@ exit /b 0
}

// Verify --var feature=<title> is present
if !containsVarArg(wispLine, "feature", "My Test Feature") {
if !strings.Contains(wispLine, "--var feature=My Test Feature") {
t.Errorf("mol wisp missing --var feature=<title>\ngot: %s", wispLine)
}

// Verify --var issue=<beadID> is present
if !containsVarArg(wispLine, "issue", "gt-abc123") {
if !strings.Contains(wispLine, "--var issue=gt-abc123") {
t.Errorf("mol wisp missing --var issue=<beadID>\ngot: %s", wispLine)
}
}
@@ -597,6 +510,7 @@ func TestVerifyBeadExistsAllowStale(t *testing.T) {
if err := os.MkdirAll(binDir, 0755); err != nil {
t.Fatalf("mkdir binDir: %v", err)
}
bdPath := filepath.Join(binDir, "bd")
bdScript := `#!/bin/sh
# Check for --allow-stale flag
allow_stale=false
@@ -621,24 +535,9 @@ fi
echo '[{"title":"Test bead","status":"open","assignee":""}]'
exit 0
`
bdScriptWindows := `@echo off
setlocal enableextensions
set "allow=false"
for %%A in (%*) do (
if "%%~A"=="--allow-stale" set "allow=true"
)
if "%1"=="--no-daemon" (
if "%allow%"=="true" (
echo [{"title":"Test bead","status":"open","assignee":""}]
exit /b 0
)
echo {"error":"Database out of sync with JSONL."}
exit /b 1
)
echo [{"title":"Test bead","status":"open","assignee":""}]
exit /b 0
`
_ = writeBDStub(t, binDir, bdScript, bdScriptWindows)
if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}

t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))

@@ -674,6 +573,7 @@ func TestSlingWithAllowStale(t *testing.T) {
if err := os.MkdirAll(binDir, 0755); err != nil {
t.Fatalf("mkdir binDir: %v", err)
}
bdPath := filepath.Join(binDir, "bd")
bdScript := `#!/bin/sh
# Check for --allow-stale flag
allow_stale=false
@@ -708,40 +608,14 @@ case "$cmd" in
esac
exit 0
`
bdScriptWindows := `@echo off
setlocal enableextensions
set "allow=false"
for %%A in (%*) do (
if "%%~A"=="--allow-stale" set "allow=true"
)
set "cmd=%1"
if "%cmd%"=="--no-daemon" (
set "cmd=%2"
if "%cmd%"=="show" (
if "%allow%"=="true" (
echo [{"title":"Synced bead","status":"open","assignee":""}]
exit /b 0
)
echo {"error":"Database out of sync"}
exit /b 1
)
exit /b 0
)
set "cmd=%1"
if "%cmd%"=="show" (
echo [{"title":"Synced bead","status":"open","assignee":""}]
exit /b 0
)
if "%cmd%"=="update" exit /b 0
exit /b 0
`
_ = writeBDStub(t, binDir, bdScript, bdScriptWindows)
if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}

t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
t.Setenv(EnvGTRole, "crew")
t.Setenv("GT_CREW", "jv")
t.Setenv("GT_POLECAT", "")
t.Setenv("TMUX_PANE", "") // Prevent inheriting real tmux pane from test runner

cwd, err := os.Getwd()
if err != nil {
@@ -763,9 +637,6 @@ exit /b 0
slingDryRun = true
slingNoConvoy = true

// Prevent real tmux nudge from firing during tests (causes agent self-interruption)
t.Setenv("GT_TEST_NO_NUDGE", "1")

// EXPECTED: gt sling should use daemon mode and succeed
// ACTUAL: verifyBeadExists uses --no-daemon and fails with sync error
beadID := "jv-v599"
@@ -872,6 +743,7 @@ func TestSlingFormulaOnBeadSetsAttachedMolecule(t *testing.T) {
t.Fatalf("mkdir binDir: %v", err)
}
logPath := filepath.Join(townRoot, "bd.log")
bdPath := filepath.Join(binDir, "bd")
// The stub logs all commands to a file for verification
bdScript := `#!/bin/sh
set -e
@@ -911,47 +783,15 @@ case "$cmd" in
esac
exit 0
`
bdScriptWindows := `@echo off
setlocal enableextensions
echo %CD%^|%*>>"%BD_LOG%"
set "cmd=%1"
set "sub=%2"
if "%cmd%"=="--no-daemon" (
set "cmd=%2"
set "sub=%3"
)
if "%cmd%"=="show" (
echo [{^"title^":^"Bug to fix^",^"status^":^"open^",^"assignee^":^"^",^"description^":^"^"}]
exit /b 0
)
if "%cmd%"=="formula" (
echo {^"name^":^"mol-polecat-work^"}
exit /b 0
)
if "%cmd%"=="cook" exit /b 0
if "%cmd%"=="mol" (
if "%sub%"=="wisp" (
echo {^"new_epic_id^":^"gt-wisp-xyz^"}
exit /b 0
)
if "%sub%"=="bond" (
echo {^"root_id^":^"gt-wisp-xyz^"}
exit /b 0
)
)
if "%cmd%"=="update" exit /b 0
exit /b 0
`
_ = writeBDStub(t, binDir, bdScript, bdScriptWindows)
if err := os.WriteFile(bdPath, []byte(bdScript), 0755); err != nil {
t.Fatalf("write bd stub: %v", err)
}

t.Setenv("BD_LOG", logPath)
attachedLogPath := filepath.Join(townRoot, "attached-molecule.log")
t.Setenv("GT_TEST_ATTACHED_MOLECULE_LOG", attachedLogPath)
t.Setenv("PATH", binDir+string(os.PathListSeparator)+os.Getenv("PATH"))
t.Setenv(EnvGTRole, "mayor")
t.Setenv("GT_POLECAT", "")
t.Setenv("GT_CREW", "")
t.Setenv("TMUX_PANE", "") // Prevent inheriting real tmux pane from test runner

cwd, err := os.Getwd()
if err != nil {
@@ -979,9 +819,6 @@ exit /b 0
slingVars = nil
slingOnTarget = "gt-abc123" // The bug bead we're applying formula to

// Prevent real tmux nudge from firing during tests (causes agent self-interruption)
t.Setenv("GT_TEST_NO_NUDGE", "1")

if err := runSling(nil, []string{"mol-polecat-work"}); err != nil {
t.Fatalf("runSling: %v", err)
}
@@ -1017,20 +854,8 @@ exit /b 0
}

if !foundAttachedMolecule {
if descBytes, err := os.ReadFile(attachedLogPath); err == nil {
if strings.Contains(string(descBytes), "attached_molecule") {
foundAttachedMolecule = true
}
}
}

if !foundAttachedMolecule {
attachedLog := "<missing>"
if descBytes, err := os.ReadFile(attachedLogPath); err == nil {
attachedLog = string(descBytes)
}
t.Errorf("after mol bond, expected update with attached_molecule in description\n"+
"This is required for gt hook to recognize the molecule attachment.\n"+
"Log output:\n%s\nAttached log:\n%s", string(logBytes), attachedLog)
"Log output:\n%s", string(logBytes))
}
}
@@ -30,20 +30,18 @@ import (
)

var (
startAll bool
startAgentOverride string
startCrewRig string
startCrewAccount string
startCrewAgentOverride string
shutdownGraceful bool
shutdownWait int
shutdownAll bool
shutdownForce bool
shutdownYes bool
shutdownPolecatsOnly bool
shutdownNuclear bool
shutdownCleanupOrphans bool
shutdownCleanupOrphansGrace int
startAll bool
startAgentOverride string
startCrewRig string
startCrewAccount string
startCrewAgentOverride string
shutdownGraceful bool
shutdownWait int
shutdownAll bool
shutdownForce bool
shutdownYes bool
shutdownPolecatsOnly bool
shutdownNuclear bool
)

var startCmd = &cobra.Command{
@@ -92,9 +90,7 @@ Shutdown levels (progressively more aggressive):

Use --force or --yes to skip confirmation prompt.
Use --graceful to allow agents time to save state before killing.
Use --nuclear to force cleanup even if polecats have uncommitted work (DANGER).
Use --cleanup-orphans to kill orphaned Claude processes (TTY-less, older than 60s).
Use --cleanup-orphans-grace-secs to set the grace period (default 60s).`,
Use --nuclear to force cleanup even if polecats have uncommitted work (DANGER).`,
RunE: runShutdown,
}

@@ -141,10 +137,6 @@ func init() {
"Only stop polecats (minimal shutdown)")
shutdownCmd.Flags().BoolVar(&shutdownNuclear, "nuclear", false,
"Force cleanup even if polecats have uncommitted work (DANGER: may lose work)")
shutdownCmd.Flags().BoolVar(&shutdownCleanupOrphans, "cleanup-orphans", false,
"Clean up orphaned Claude processes (TTY-less processes older than 60s)")
shutdownCmd.Flags().IntVar(&shutdownCleanupOrphansGrace, "cleanup-orphans-grace-secs", 60,
"Grace period in seconds between SIGTERM and SIGKILL when cleaning orphans (default 60)")

rootCmd.AddCommand(startCmd)
rootCmd.AddCommand(shutdownCmd)
@@ -448,14 +440,6 @@ func runShutdown(cmd *cobra.Command, args []string) error {

if len(toStop) == 0 {
fmt.Printf("%s Gas Town was not running\n", style.Dim.Render("○"))

// Still check for orphaned daemons even if no sessions are running
if townRoot != "" {
fmt.Println()
fmt.Println("Checking for orphaned daemon...")
stopDaemonIfRunning(townRoot)
}

return nil
}

@@ -579,20 +563,14 @@ func runGracefulShutdown(t *tmux.Tmux, gtSessions []string, townRoot string) err
deaconSession := getDeaconSessionName()
stopped := killSessionsInOrder(t, gtSessions, mayorSession, deaconSession)

// Phase 5: Cleanup orphaned Claude processes if requested
if shutdownCleanupOrphans {
fmt.Printf("\nPhase 5: Cleaning up orphaned Claude processes...\n")
cleanupOrphanedClaude(shutdownCleanupOrphansGrace)
}

// Phase 6: Cleanup polecat worktrees and branches
fmt.Printf("\nPhase 6: Cleaning up polecats...\n")
// Phase 5: Cleanup polecat worktrees and branches
fmt.Printf("\nPhase 5: Cleaning up polecats...\n")
if townRoot != "" {
cleanupPolecats(townRoot)
}

// Phase 7: Stop the daemon
fmt.Printf("\nPhase 7: Stopping daemon...\n")
// Phase 6: Stop the daemon
fmt.Printf("\nPhase 6: Stopping daemon...\n")
if townRoot != "" {
stopDaemonIfRunning(townRoot)
}
@@ -609,13 +587,6 @@ func runImmediateShutdown(t *tmux.Tmux, gtSessions []string, townRoot string) er
deaconSession := getDeaconSessionName()
stopped := killSessionsInOrder(t, gtSessions, mayorSession, deaconSession)

// Cleanup orphaned Claude processes if requested
if shutdownCleanupOrphans {
fmt.Println()
fmt.Println("Cleaning up orphaned Claude processes...")
cleanupOrphanedClaude(shutdownCleanupOrphansGrace)
}

// Cleanup polecat worktrees and branches
if townRoot != "" {
fmt.Println()
@@ -641,9 +612,6 @@ func runImmediateShutdown(t *tmux.Tmux, gtSessions []string, townRoot string) er
// 2. Everything except Mayor
// 3. Mayor last
// mayorSession and deaconSession are the dynamic session names for the current town.
//
// Returns the count of sessions that were successfully stopped (verified by checking
// if the session no longer exists after the kill attempt).
func killSessionsInOrder(t *tmux.Tmux, sessions []string, mayorSession, deaconSession string) int {
stopped := 0

@@ -657,31 +625,10 @@ func killSessionsInOrder(t *tmux.Tmux, sessions []string, mayorSession, deaconSe
return false
}

// Helper to kill a session and verify it was stopped
killAndVerify := func(sess string) bool {
// Check if session exists before attempting to kill
exists, _ := t.HasSession(sess)
if !exists {
return false // Session already gone
}

// Attempt to kill the session and its processes
_ = t.KillSessionWithProcesses(sess)

// Verify the session is actually gone (ignore error, check existence)
// KillSessionWithProcesses might return an error even if it successfully
// killed the processes and the session auto-closed
stillExists, _ := t.HasSession(sess)
if !stillExists {
fmt.Printf("  %s %s stopped\n", style.Bold.Render("✓"), sess)
return true
}
return false
}

// 1. Stop Deacon first
if inList(deaconSession) {
if killAndVerify(deaconSession) {
if err := t.KillSessionWithProcesses(deaconSession); err == nil {
fmt.Printf("  %s %s stopped\n", style.Bold.Render("✓"), deaconSession)
stopped++
}
}
@@ -691,14 +638,16 @@ func killSessionsInOrder(t *tmux.Tmux, sessions []string, mayorSession, deaconSe
if sess == deaconSession || sess == mayorSession {
continue
}
if killAndVerify(sess) {
if err := t.KillSessionWithProcesses(sess); err == nil {
fmt.Printf("  %s %s stopped\n", style.Bold.Render("✓"), sess)
stopped++
}
}

// 3. Stop Mayor last
if inList(mayorSession) {
if killAndVerify(mayorSession) {
if err := t.KillSessionWithProcesses(mayorSession); err == nil {
fmt.Printf("  %s %s stopped\n", style.Bold.Render("✓"), mayorSession)
stopped++
}
}
@@ -803,48 +752,16 @@ func cleanupPolecats(townRoot string) {

// stopDaemonIfRunning stops the daemon if it is running.
// This prevents the daemon from restarting agents after shutdown.
// Uses robust detection with fallback to process search.
func stopDaemonIfRunning(townRoot string) {
// Primary detection: PID file
running, pid, err := daemon.IsRunning(townRoot)

if err != nil {
// Detection error - report it but continue with fallback
fmt.Printf("  %s Daemon detection warning: %s\n", style.Bold.Render("⚠"), err.Error())
}

running, _, _ := daemon.IsRunning(townRoot)
if running {
// PID file points to live daemon - stop it
if err := daemon.StopDaemon(townRoot); err != nil {
fmt.Printf("  %s Failed to stop daemon (PID %d): %s\n",
style.Bold.Render("✗"), pid, err.Error())
fmt.Printf("  %s Daemon: %s\n", style.Dim.Render("○"), err.Error())
} else {
fmt.Printf("  %s Daemon stopped (was PID %d)\n", style.Bold.Render("✓"), pid)
fmt.Printf("  %s Daemon stopped\n", style.Bold.Render("✓"))
}
} else {
fmt.Printf("  %s Daemon not tracked by PID file\n", style.Dim.Render("○"))
}

// Fallback: Search for orphaned daemon processes
orphaned, err := daemon.FindOrphanedDaemons()
if err != nil {
fmt.Printf("  %s Warning: failed to search for orphaned daemons: %v\n",
style.Dim.Render("○"), err)
return
}

if len(orphaned) > 0 {
fmt.Printf("  %s Found %d orphaned daemon process(es): %v\n",
style.Bold.Render("⚠"), len(orphaned), orphaned)

killed, err := daemon.KillOrphanedDaemons()
if err != nil {
fmt.Printf("  %s Failed to kill orphaned daemons: %v\n",
style.Bold.Render("✗"), err)
} else if killed > 0 {
fmt.Printf("  %s Killed %d orphaned daemon(s)\n",
style.Bold.Render("✓"), killed)
}
fmt.Printf("  %s Daemon not running\n", style.Dim.Render("○"))
}
}
@@ -1,88 +0,0 @@
//go:build !windows

package cmd

import (
	"fmt"
	"syscall"
	"time"

	"github.com/steveyegge/gastown/internal/style"
	"github.com/steveyegge/gastown/internal/util"
)

// cleanupOrphanedClaude finds and kills orphaned Claude processes with a grace period.
// This is a simpler synchronous implementation that:
// 1. Finds orphaned processes (TTY-less, older than 60s, not in Gas Town sessions)
// 2. Sends SIGTERM to all of them
// 3. Waits for the grace period
// 4. Sends SIGKILL to any that are still alive
func cleanupOrphanedClaude(graceSecs int) {
	// Find orphaned processes
	orphans, err := util.FindOrphanedClaudeProcesses()
	if err != nil {
		fmt.Printf(" %s Warning: %v\n", style.Bold.Render("⚠"), err)
		return
	}

	if len(orphans) == 0 {
		fmt.Printf(" %s No orphaned processes found\n", style.Dim.Render("○"))
		return
	}

	// Send SIGTERM to all orphans
	var termPIDs []int
	for _, orphan := range orphans {
		if err := syscall.Kill(orphan.PID, syscall.SIGTERM); err != nil {
			if err != syscall.ESRCH {
				fmt.Printf(" %s PID %d: failed to send SIGTERM: %v\n",
					style.Bold.Render("⚠"), orphan.PID, err)
			}
			continue
		}
		termPIDs = append(termPIDs, orphan.PID)
		fmt.Printf(" %s PID %d: sent SIGTERM (waiting %ds before SIGKILL)\n",
			style.Bold.Render("→"), orphan.PID, graceSecs)
	}

	if len(termPIDs) == 0 {
		return
	}

	// Wait for grace period
	fmt.Printf(" %s Waiting %d seconds for processes to terminate gracefully...\n",
		style.Dim.Render("⏳"), graceSecs)
	time.Sleep(time.Duration(graceSecs) * time.Second)

	// Check which processes are still alive and send SIGKILL
	var killedCount, alreadyDeadCount int
	for _, pid := range termPIDs {
		// Check if process still exists
		if err := syscall.Kill(pid, 0); err != nil {
			// Process is gone (either died from SIGTERM or doesn't exist)
			alreadyDeadCount++
			continue
		}

		// Process still alive - send SIGKILL
		if err := syscall.Kill(pid, syscall.SIGKILL); err != nil {
			if err != syscall.ESRCH {
				fmt.Printf(" %s PID %d: failed to send SIGKILL: %v\n",
					style.Bold.Render("⚠"), pid, err)
			}
			continue
		}
		killedCount++
		fmt.Printf(" %s PID %d: sent SIGKILL (did not respond to SIGTERM)\n",
			style.Bold.Render("✓"), pid)
	}

	if alreadyDeadCount > 0 {
		fmt.Printf(" %s %d process(es) terminated gracefully from SIGTERM\n",
			style.Bold.Render("✓"), alreadyDeadCount)
	}
	if killedCount == 0 && alreadyDeadCount > 0 {
		fmt.Printf(" %s All processes cleaned up successfully\n",
			style.Bold.Render("✓"))
	}
}
@@ -1,16 +0,0 @@
//go:build windows

package cmd

import (
	"fmt"

	"github.com/steveyegge/gastown/internal/style"
)

// cleanupOrphanedClaude is a Windows stub.
// Orphan cleanup requires Unix-specific signals (SIGTERM/SIGKILL).
func cleanupOrphanedClaude(graceSecs int) {
	fmt.Printf(" %s Orphan cleanup not supported on Windows\n",
		style.Dim.Render("○"))
}
@@ -184,9 +184,10 @@ func runMayorStatusLine(t *tmux.Tmux) error {

	// Track per-rig status for LED indicators and sorting
	type rigStatus struct {
		hasWitness  bool
		hasRefinery bool
		opState     string // "OPERATIONAL", "PARKED", or "DOCKED"
		hasWitness   bool
		hasRefinery  bool
		polecatCount int
		opState      string // "OPERATIONAL", "PARKED", or "DOCKED"
	}
	rigStatuses := make(map[string]*rigStatus)

@@ -201,13 +202,12 @@ func runMayorStatusLine(t *tmux.Tmux) error {
		working int
	}
	healthByType := map[AgentType]*agentHealth{
		AgentPolecat:  {},
		AgentWitness:  {},
		AgentRefinery: {},
		AgentDeacon:   {},
	}

	// Track deacon presence (just icon, no count)
	hasDeacon := false

	// Single pass: track rig status AND agent health
	for _, s := range sessions {
		agent := categorizeSession(s)
@@ -215,8 +215,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
			continue
		}

		// Track rig-level status (witness/refinery presence)
		// Polecats are not tracked in tmux - they're a GC concern, not a display concern
		// Track rig-level status (witness/refinery/polecat presence)
		if agent.Rig != "" && registeredRigs[agent.Rig] {
			if rigStatuses[agent.Rig] == nil {
				rigStatuses[agent.Rig] = &rigStatus{}
@@ -226,6 +225,8 @@ func runMayorStatusLine(t *tmux.Tmux) error {
				rigStatuses[agent.Rig].hasWitness = true
			case AgentRefinery:
				rigStatuses[agent.Rig].hasRefinery = true
			case AgentPolecat:
				rigStatuses[agent.Rig].polecatCount++
			}
		}

@@ -237,11 +238,6 @@ func runMayorStatusLine(t *tmux.Tmux) error {
				health.working++
			}
		}

		// Track deacon presence (just the icon, no count)
		if agent.Type == AgentDeacon {
			hasDeacon = true
		}
	}

	// Get operational state for each rig
@@ -258,11 +254,9 @@ func runMayorStatusLine(t *tmux.Tmux) error {
	var parts []string

	// Add per-agent-type health in consistent order
	// Format: "1/3 👁️" = 1 working out of 3 total
	// Format: "1/10 😺" = 1 working out of 10 total
	// Only show agent types that have sessions
	// Note: Polecats excluded - idle state is misleading noise
	// Deacon gets just an icon (no count) - shown separately below
	agentOrder := []AgentType{AgentWitness, AgentRefinery}
	agentOrder := []AgentType{AgentPolecat, AgentWitness, AgentRefinery, AgentDeacon}
	var agentParts []string
	for _, agentType := range agentOrder {
		health := healthByType[agentType]
@@ -276,11 +270,6 @@ func runMayorStatusLine(t *tmux.Tmux) error {
		parts = append(parts, strings.Join(agentParts, " "))
	}

	// Add deacon icon if running (just presence, no count)
	if hasDeacon {
		parts = append(parts, AgentTypeIcons[AgentDeacon])
	}

	// Build rig status display with LED indicators
	// 🟢 = both witness and refinery running (fully active)
	// 🟡 = one of witness/refinery running (partially active)
@@ -298,7 +287,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
		rigs = append(rigs, rigInfo{name: rigName, status: status})
	}

	// Sort by: 1) running state, 2) operational state, 3) alphabetical
	// Sort by: 1) running state, 2) polecat count (desc), 3) operational state, 4) alphabetical
	sort.Slice(rigs, func(i, j int) bool {
		isRunningI := rigs[i].status.hasWitness || rigs[i].status.hasRefinery
		isRunningJ := rigs[j].status.hasWitness || rigs[j].status.hasRefinery
@@ -308,7 +297,12 @@ func runMayorStatusLine(t *tmux.Tmux) error {
			return isRunningI
		}

		// Secondary sort: operational state (for non-running rigs: OPERATIONAL < PARKED < DOCKED)
		// Secondary sort: polecat count (descending)
		if rigs[i].status.polecatCount != rigs[j].status.polecatCount {
			return rigs[i].status.polecatCount > rigs[j].status.polecatCount
		}

		// Tertiary sort: operational state (for non-running rigs: OPERATIONAL < PARKED < DOCKED)
		stateOrder := map[string]int{"OPERATIONAL": 0, "PARKED": 1, "DOCKED": 2}
		stateI := stateOrder[rigs[i].status.opState]
		stateJ := stateOrder[rigs[j].status.opState]
@@ -316,7 +310,7 @@ func runMayorStatusLine(t *tmux.Tmux) error {
			return stateI < stateJ
		}

		// Tertiary sort: alphabetical
		// Quaternary sort: alphabetical
		return rigs[i].name < rigs[j].name
	})

@@ -358,12 +352,17 @@ func runMayorStatusLine(t *tmux.Tmux) error {
			}
		}

		// Show polecat count if > 0
		// All icons get 1 space, Park gets 2
		space := " "
		if led == "🅿️" {
			space = "  "
		}
		rigParts = append(rigParts, led+space+rig.name)
		display := led + space + rig.name
		if status.polecatCount > 0 {
			display += fmt.Sprintf("(%d)", status.polecatCount)
		}
		rigParts = append(rigParts, display)
	}

	if len(rigParts) > 0 {
@@ -422,6 +421,7 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
	}

	rigs := make(map[string]bool)
	polecatCount := 0
	for _, s := range sessions {
		agent := categorizeSession(s)
		if agent == nil {
@@ -431,13 +431,16 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
		if agent.Rig != "" && registeredRigs[agent.Rig] {
			rigs[agent.Rig] = true
		}
		if agent.Type == AgentPolecat && registeredRigs[agent.Rig] {
			polecatCount++
		}
	}
	rigCount := len(rigs)

	// Build status
	// Note: Polecats excluded - they're ephemeral and idle detection is a GC concern
	var parts []string
	parts = append(parts, fmt.Sprintf("%d rigs", rigCount))
	parts = append(parts, fmt.Sprintf("%d 😺", polecatCount))

	// Priority 1: Check for hooked work (town beads for deacon)
	hookedWork := ""
@@ -463,8 +466,7 @@ func runDeaconStatusLine(t *tmux.Tmux) error {
}

// runWitnessStatusLine outputs status for a witness session.
// Shows: crew count, hook or mail preview
// Note: Polecats excluded - they're ephemeral and idle detection is a GC concern
// Shows: polecat count, crew count, hook or mail preview
func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {
	if rigName == "" {
		// Try to extract from session name: gt-<rig>-witness
@@ -481,20 +483,25 @@ func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {
		townRoot, _ = workspace.Find(paneDir)
	}

	// Count crew in this rig (crew are persistent, worth tracking)
	// Count polecats and crew in this rig
	sessions, err := t.ListSessions()
	if err != nil {
		return nil // Silent fail
	}

	polecatCount := 0
	crewCount := 0
	for _, s := range sessions {
		agent := categorizeSession(s)
		if agent == nil {
			continue
		}
		if agent.Rig == rigName && agent.Type == AgentCrew {
			crewCount++
		if agent.Rig == rigName {
			if agent.Type == AgentPolecat {
				polecatCount++
			} else if agent.Type == AgentCrew {
				crewCount++
			}
		}
	}

@@ -502,6 +509,7 @@ func runWitnessStatusLine(t *tmux.Tmux, rigName string) error {

	// Build status
	var parts []string
	parts = append(parts, fmt.Sprintf("%d 😺", polecatCount))
	if crewCount > 0 {
		parts = append(parts, fmt.Sprintf("%d crew", crewCount))
	}

@@ -1,7 +1,6 @@
package cmd

import (
	"path/filepath"
	"testing"
)

@@ -43,7 +42,7 @@ func TestExpandOutputPath(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got := expandOutputPath(tt.directory, tt.pattern, tt.reviewID, tt.legID)
			if filepath.ToSlash(got) != tt.want {
			if got != tt.want {
				t.Errorf("expandOutputPath() = %q, want %q", got, tt.want)
			}
		})

@@ -1,35 +0,0 @@
package cmd

import (
	"github.com/spf13/cobra"
)

var tapCmd = &cobra.Command{
	Use:   "tap",
	Short: "Claude Code hook handlers",
	Long: `Hook handlers for Claude Code PreToolUse and PostToolUse events.

These commands are called by Claude Code hooks to implement policies,
auditing, and input transformation. They tap into the tool execution
flow to guard, audit, inject, or check.

Subcommands:
  guard  - Block forbidden operations (PreToolUse, exit 2)
  audit  - Log/record tool executions (PostToolUse) [planned]
  inject - Modify tool inputs (PreToolUse, updatedInput) [planned]
  check  - Validate after execution (PostToolUse) [planned]

Hook configuration in .claude/settings.json:
  {
    "PreToolUse": [{
      "matcher": "Bash(gh pr create*)",
      "hooks": [{"command": "gt tap guard pr-workflow"}]
    }]
  }

See ~/gt/docs/HOOKS.md for full documentation.`,
}

func init() {
	rootCmd.AddCommand(tapCmd)
}
@@ -1,116 +0,0 @@
package cmd

import (
	"fmt"
	"os"
	"strings"

	"github.com/spf13/cobra"
)

var tapGuardCmd = &cobra.Command{
	Use:   "guard",
	Short: "Block forbidden operations (PreToolUse hook)",
	Long: `Block forbidden operations via Claude Code PreToolUse hooks.

Guard commands exit with code 2 to BLOCK tool execution when a policy
is violated. They're called before the tool runs, preventing the
forbidden operation entirely.

Available guards:
  pr-workflow - Block PR creation and feature branches

Example hook configuration:
  {
    "PreToolUse": [{
      "matcher": "Bash(gh pr create*)",
      "hooks": [{"command": "gt tap guard pr-workflow"}]
    }]
  }`,
}

var tapGuardPRWorkflowCmd = &cobra.Command{
	Use:   "pr-workflow",
	Short: "Block PR creation and feature branches",
	Long: `Block PR workflow operations in Gas Town.

Gas Town workers push directly to main. PRs add friction that breaks
the autonomous execution model (GUPP principle).

This guard blocks:
  - gh pr create
  - git checkout -b (feature branches)
  - git switch -c (feature branches)

Exit codes:
  0 - Operation allowed (not in Gas Town agent context)
  2 - Operation BLOCKED (in agent context)

The guard only blocks when running as a Gas Town agent (crew, polecat,
witness, etc.). Humans running outside Gas Town can still use PRs.`,
	RunE: runTapGuardPRWorkflow,
}

func init() {
	tapCmd.AddCommand(tapGuardCmd)
	tapGuardCmd.AddCommand(tapGuardPRWorkflowCmd)
}

func runTapGuardPRWorkflow(cmd *cobra.Command, args []string) error {
	// Check if we're in a Gas Town agent context
	if !isGasTownAgentContext() {
		// Not in a Gas Town managed context - allow the operation
		return nil
	}

	// We're in a Gas Town context - block PR operations
	fmt.Fprintln(os.Stderr, "")
	fmt.Fprintln(os.Stderr, "╔══════════════════════════════════════════════════════════════════╗")
	fmt.Fprintln(os.Stderr, "║ ❌ PR WORKFLOW BLOCKED ║")
	fmt.Fprintln(os.Stderr, "╠══════════════════════════════════════════════════════════════════╣")
	fmt.Fprintln(os.Stderr, "║ Gas Town workers push directly to main. PRs are forbidden. ║")
	fmt.Fprintln(os.Stderr, "║ ║")
	fmt.Fprintln(os.Stderr, "║ Instead of: gh pr create / git checkout -b / git switch -c ║")
	fmt.Fprintln(os.Stderr, "║ Do this: git add . && git commit && git push origin main ║")
	fmt.Fprintln(os.Stderr, "║ ║")
	fmt.Fprintln(os.Stderr, "║ Why? PRs add friction that breaks autonomous execution. ║")
	fmt.Fprintln(os.Stderr, "║ See: ~/gt/docs/PRIMING.md (GUPP principle) ║")
	fmt.Fprintln(os.Stderr, "╚══════════════════════════════════════════════════════════════════╝")
	fmt.Fprintln(os.Stderr, "")
	os.Exit(2) // Exit 2 = BLOCK in Claude Code hooks

	return nil
}

// isGasTownAgentContext returns true if we're running as a Gas Town managed agent.
func isGasTownAgentContext() bool {
	// Check environment variables set by Gas Town session management
	envVars := []string{
		"GT_POLECAT",
		"GT_CREW",
		"GT_WITNESS",
		"GT_REFINERY",
		"GT_MAYOR",
		"GT_DEACON",
	}
	for _, env := range envVars {
		if os.Getenv(env) != "" {
			return true
		}
	}

	// Also check if we're in a crew or polecat worktree by path
	cwd, err := os.Getwd()
	if err != nil {
		return false
	}

	agentPaths := []string{"/crew/", "/polecats/"}
	for _, path := range agentPaths {
		if strings.Contains(cwd, path) {
			return true
		}
	}

	return false
}
@@ -1,61 +0,0 @@
package cmd

import (
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"testing"
)

// buildGT builds the gt binary and returns its path.
// It caches the build across tests in the same run.
var cachedGTBinary string

func buildGT(t *testing.T) string {
	t.Helper()

	if cachedGTBinary != "" {
		// Verify cached binary still exists
		if _, err := os.Stat(cachedGTBinary); err == nil {
			return cachedGTBinary
		}
		// Binary was cleaned up, rebuild
		cachedGTBinary = ""
	}

	// Find project root (where go.mod is)
	wd, err := os.Getwd()
	if err != nil {
		t.Fatalf("failed to get working directory: %v", err)
	}

	// Walk up to find go.mod
	projectRoot := wd
	for {
		if _, err := os.Stat(filepath.Join(projectRoot, "go.mod")); err == nil {
			break
		}
		parent := filepath.Dir(projectRoot)
		if parent == projectRoot {
			t.Fatal("could not find project root (go.mod)")
		}
		projectRoot = parent
	}

	// Build gt binary to a persistent temp location (not per-test)
	tmpDir := os.TempDir()
	binaryName := "gt-integration-test"
	if runtime.GOOS == "windows" {
		binaryName += ".exe"
	}
	tmpBinary := filepath.Join(tmpDir, binaryName)
	cmd := exec.Command("go", "build", "-o", tmpBinary, "./cmd/gt")
	cmd.Dir = projectRoot
	if output, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("failed to build gt: %v\nOutput: %s", err, output)
	}

	cachedGTBinary = tmpBinary
	return tmpBinary
}
@@ -12,7 +12,7 @@ import (

// Version information - set at build time via ldflags
var (
	Version = "0.5.0"
	Version = "0.3.1"
	// Build can be set via ldflags at compile time
	Build = "dev"
	// Commit and Branch - the git revision the binary was built from (optional ldflag)

@@ -192,13 +192,12 @@ func runWitnessStop(cmd *cobra.Command, args []string) error {
		return err
	}

	// Kill tmux session if it exists.
	// Use KillSessionWithProcesses to ensure all descendant processes are killed.
	// Kill tmux session if it exists
	t := tmux.NewTmux()
	sessionName := witnessSessionName(rigName)
	running, _ := t.HasSession(sessionName)
	if running {
		if err := t.KillSessionWithProcesses(sessionName); err != nil {
		if err := t.KillSession(sessionName); err != nil {
			style.PrintWarning("failed to kill session: %v", err)
		}
	}
@@ -219,65 +218,65 @@ func runWitnessStop(cmd *cobra.Command, args []string) error {
	return nil
}

// WitnessStatusOutput is the JSON output format for witness status.
type WitnessStatusOutput struct {
	Running           bool     `json:"running"`
	RigName           string   `json:"rig_name"`
	Session           string   `json:"session,omitempty"`
	MonitoredPolecats []string `json:"monitored_polecats,omitempty"`
}

func runWitnessStatus(cmd *cobra.Command, args []string) error {
	rigName := args[0]

	// Get rig for polecat info
	_, r, err := getRig(rigName)
	mgr, err := getWitnessManager(rigName)
	if err != nil {
		return err
	}

	mgr := witness.NewManager(r)
	w, err := mgr.Status()
	if err != nil {
		return fmt.Errorf("getting status: %w", err)
	}

	// ZFC: tmux is source of truth for running state
	running, _ := mgr.IsRunning()
	sessionInfo, _ := mgr.Status() // may be nil if not running
	// Check actual tmux session state (more reliable than state file)
	t := tmux.NewTmux()
	sessionName := witnessSessionName(rigName)
	sessionRunning, _ := t.HasSession(sessionName)

	// Polecats come from rig config, not state file
	polecats := r.Polecats
	// Reconcile state: tmux session is the source of truth for background mode
	if sessionRunning && w.State != witness.StateRunning {
		w.State = witness.StateRunning
	} else if !sessionRunning && w.State == witness.StateRunning {
		w.State = witness.StateStopped
	}

	// JSON output
	if witnessStatusJSON {
		output := WitnessStatusOutput{
			Running:           running,
			RigName:           rigName,
			MonitoredPolecats: polecats,
		}
		if sessionInfo != nil {
			output.Session = sessionInfo.Name
		}
		enc := json.NewEncoder(os.Stdout)
		enc.SetIndent("", " ")
		return enc.Encode(output)
		return enc.Encode(w)
	}

	// Human-readable output
	fmt.Printf("%s Witness: %s\n\n", style.Bold.Render(AgentTypeIcons[AgentWitness]), rigName)

	if running {
		fmt.Printf(" State: %s\n", style.Bold.Render("● running"))
		if sessionInfo != nil {
			fmt.Printf(" Session: %s\n", sessionInfo.Name)
		}
	} else {
		fmt.Printf(" State: %s\n", style.Dim.Render("○ stopped"))
	stateStr := string(w.State)
	switch w.State {
	case witness.StateRunning:
		stateStr = style.Bold.Render("● running")
	case witness.StateStopped:
		stateStr = style.Dim.Render("○ stopped")
	case witness.StatePaused:
		stateStr = style.Dim.Render("⏸ paused")
	}
	fmt.Printf(" State: %s\n", stateStr)
	if sessionRunning {
		fmt.Printf(" Session: %s\n", sessionName)
	}

	if w.StartedAt != nil {
		fmt.Printf(" Started: %s\n", w.StartedAt.Format("2006-01-02 15:04:05"))
	}

	// Show monitored polecats
	fmt.Printf("\n %s\n", style.Bold.Render("Monitored Polecats:"))
	if len(polecats) == 0 {
	if len(w.MonitoredPolecats) == 0 {
		fmt.Printf(" %s\n", style.Dim.Render("(none)"))
	} else {
		for _, p := range polecats {
		for _, p := range w.MonitoredPolecats {
			fmt.Printf(" • %s\n", p)
		}
	}

Some files were not shown because too many files have changed in this diff.