42 Commits

Author SHA1 Message Date
furiosa
1335b8b28f feat(statusline): lower max rigs to 3 and add alias support
- Add Alias field to RigEntry struct for short display names
- Limit displayed rigs to 3 (was unlimited, causing overflow)
- Use alias in statusline when configured (e.g., gcr instead of google_cookie_retrieval)
- Show +N overflow indicator when more rigs exist

Closes: hq-5j33zz
2026-01-25 14:43:42 -08:00
gastown/crew/diesel
003fd1a741 fix(tmux): stabilize flaky tests with WaitForShellReady
Some checks failed
CI / Check for .beads changes (push) Has been skipped
CI / Check embedded formulas (push) Failing after 22s
CI / Test (push) Failing after 1m30s
CI / Lint (push) Failing after 24s
CI / Integration Tests (push) Failing after 43s
CI / Coverage Report (push) Has been skipped
Windows CI / Windows Build and Unit Tests (push) Has been cancelled
TestIsAgentRunning and TestEnsureSessionFresh_ZombieSession were flaky
because they checked the pane command immediately after NewSession,
before the shell had fully initialized. Added WaitForShellReady calls
to wait for shell readiness before assertions.

Closes: gt-jzwx

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 12:25:36 -08:00
diesel
601efd658d fix(hook): remove unused workspace import after rebase
Some checks failed
CI / Check for .beads changes (push) Has been skipped
CI / Check embedded formulas (push) Failing after 20s
CI / Test (push) Failing after 1m26s
CI / Lint (push) Failing after 23s
CI / Integration Tests (push) Failing after 37s
CI / Coverage Report (push) Has been skipped
Windows CI / Windows Build and Unit Tests (push) Has been cancelled
2026-01-25 11:24:49 -08:00
diesel
633612c29a fix(hooks): use portable shebang for NixOS compatibility
Some checks failed
CI / Check for .beads changes (push) Has been skipped
CI / Check embedded formulas (push) Failing after 16s
CI / Test (push) Failing after 21s
CI / Lint (push) Failing after 22s
CI / Integration Tests (push) Failing after 21s
CI / Coverage Report (push) Has been skipped
Windows CI / Windows Build and Unit Tests (push) Has been cancelled
2026-01-25 11:20:02 -08:00
onyx
4ac03662d6 perf(git): cache git rev-parse results within sessions
Some checks failed
CI / Check for .beads changes (push) Has been skipped
CI / Test (push) Has been cancelled
CI / Coverage Report (push) Has been cancelled
CI / Check embedded formulas (push) Has been cancelled
CI / Lint (push) Has been cancelled
CI / Integration Tests (push) Has been cancelled
Windows CI / Windows Build and Unit Tests (push) Has been cancelled
Multiple gt commands call git rev-parse --show-toplevel, adding ~50ms per
invocation. Results rarely change within a session, and multiple
agents calling git concurrently contend on .git/index.lock.

Add cached RepoRoot() and RepoRootFrom() functions to the git package
and update all callers to use them. This ensures a single git subprocess
call per process for the common case of checking the current directory's
repo root.

Files updated:
- internal/git/git.go: Add RepoRoot() and RepoRootFrom()
- internal/cmd/prime.go: Use cached git.RepoRoot()
- internal/cmd/molecule_status.go: Use cached git.RepoRoot()
- internal/cmd/sling_helpers.go: Use cached git.RepoRoot()
- internal/cmd/rig_quick_add.go: Use git.RepoRootFrom() for path arg
- internal/version/stale.go: Use cached git.RepoRoot()

Closes: bd-2zd.5
2026-01-25 11:17:45 -08:00
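A sketch of the caching approach this commit describes, assuming a simple per-directory memo map; the actual RepoRoot/RepoRootFrom implementations in internal/git may differ:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"sync"
)

// repoRootCache memoizes `git rev-parse --show-toplevel` per starting
// directory, so a process pays the ~50ms subprocess cost at most once.
var (
	repoRootMu    sync.Mutex
	repoRootCache = map[string]string{}
)

func repoRootFrom(dir string) (string, error) {
	repoRootMu.Lock()
	defer repoRootMu.Unlock()
	if root, ok := repoRootCache[dir]; ok {
		return root, nil // cache hit: no git subprocess
	}
	cmd := exec.Command("git", "rev-parse", "--show-toplevel")
	cmd.Dir = dir
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("git rev-parse in %s: %w", dir, err)
	}
	root := strings.TrimSpace(string(out))
	repoRootCache[dir] = root
	return root, nil
}

func main() {
	// Simulate a prior lookup, then show the cached path is returned.
	repoRootCache["/tmp/example"] = "/tmp/example"
	root, _ := repoRootFrom("/tmp/example")
	fmt.Println(root)
}
```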
furiosa
a4921e5eaa fix(sling): use --no-daemon consistently in bd calls (h-3f96b)
storeDispatcherInBead and storeAttachedMoleculeInBead were calling
bd show/update without --no-daemon, while all other sling operations
used --no-daemon. This inconsistency could cause daemon socket hangs
if the daemon was in a bad state during sling operations.

Changes:
- Add --no-daemon --allow-stale to bd show calls in both functions
- Add --no-daemon to bd update calls in both functions
- Add empty stdout check for bd --no-daemon exit 0 bug

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:17:16 -08:00
furiosa
02f2aa1fca feat: add delegation patterns to crew role template (sc-fpqcf)
Add new 'Delegating Work' section with:
- Delegation checklist (execution vs thinking)
- Polecat vs Crew decision table
- Sling pattern examples with mail-back
- Completion notification gap documentation (sc-g7bl3)
- Escalation protocol for blocked work
2026-01-25 11:17:16 -08:00
nux
defc97216f feat(crew): add crew configuration to rigs.json for cross-machine sync
Add CrewRegistryConfig to RigEntry allowing crew members to be defined
in rigs.json and synced across machines. The new `gt crew sync` command
creates missing crew members from the configuration.

Configuration example:
  "rigs": {
    "gastown": {
      "crew": {
        "theme": "mad-max",
        "members": ["diesel", "chrome", "nitro"]
      }
    }
  }

Closes: gt-tu4
2026-01-25 11:17:16 -08:00
furiosa
f01ae03d40 docs(templates): add goals workflow documentation (gt-3wb)
- Add 'Goals Workflow' section to mayor template explaining:
  - Goals = epics assigned to crew members
  - Requirements gathering happens first
  - Crew as long-term goal owners/liaisons
  - The full pattern from assignment to completion

- Update crew template with 'Goal Owner' section:
  - Explicit goal ownership pattern
  - Requirements gathering step
  - Reframed coordination loop
2026-01-25 11:17:16 -08:00
diesel
05b716f4a3 perf(goals): optimize gt goals from 6s to <50ms via direct SQLite (gt-aps.3)
Replace bd subprocess spawns with direct SQLite queries:
- queryEpicsInDir: direct sqlite3 query vs bd list subprocess
- getLinkedConvoys: direct JOIN query vs bd dep list + getIssueDetails loop
- computeGoalLastMovement: reuse epic.UpdatedAt vs separate bd show call

Also includes mailbox optimization from earlier session:
- Consolidated multiple parallel queries into single bd list --all query
- Filters in Go instead of spawning O(identities × statuses) bd processes

177x improvement (6.2s → 35ms) by eliminating subprocess overhead.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:17:16 -08:00
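The mailbox part of this optimization, fetching once and filtering in Go rather than spawning one bd process per (identity, status) pair, can be sketched as below. The issue struct and field names are illustrative:

```go
package main

import "fmt"

type issue struct {
	ID       string
	Assignee string
	Status   string
}

// filterIssues makes one pass over a single `bd list --all` result instead
// of spawning O(identities × statuses) bd subprocesses.
func filterIssues(all []issue, identities, statuses map[string]bool) []issue {
	var out []issue
	for _, is := range all {
		if identities[is.Assignee] && statuses[is.Status] {
			out = append(out, is)
		}
	}
	return out
}

func main() {
	all := []issue{
		{"gt-1", "diesel", "in_progress"},
		{"gt-2", "nux", "closed"},
		{"gt-3", "diesel", "open"},
	}
	got := filterIssues(all,
		map[string]bool{"diesel": true},
		map[string]bool{"open": true, "in_progress": true})
	fmt.Println(len(got), got[0].ID)
}
```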
obsidian
6f0282f1c6 perf(rpc): use bd daemon protocol to reduce subprocess overhead
Replace bd subprocess calls in gt commands with daemon RPC when available.
Each subprocess call has ~40ms overhead for Go binary startup, so using
the daemon's Unix socket protocol significantly reduces latency.

Changes:
- Add RPC client to beads package (beads_rpc.go)
- Modify List/Show/Update/Close methods to try RPC first, fall back to subprocess
- Replace runBdPrime() with direct content output (avoids bd subprocess)
- Replace checkPendingEscalations() to use beads.List() with RPC
- Replace hook.go bd subprocess calls with beads package methods

The RPC client:
- Connects to daemon via Unix socket at .beads/bd.sock
- Uses JSON-based request/response protocol (same as bd daemon)
- Falls back gracefully to subprocess if daemon unavailable
- Lazy-initializes connection on first use

Performance improvement targets (from bd-2zd.2):
- gt prime < 100ms (was 5.8s with subprocess chain)
- gt hook < 100ms (was ~323ms)

Closes: bd-2zd.2
2026-01-25 11:17:16 -08:00
furiosa
bd0f30cfdd fix(handoff): don't kill pane processes before respawn (hq-bv7ef)
The previous approach using KillPaneProcessesExcluding/KillPaneProcesses
killed the pane's main process (Claude/node) before calling RespawnPane.
This caused the pane to close (since tmux's remain-on-exit is off by default),
which then made RespawnPane fail because the target pane no longer exists.

The respawn-pane -k flag handles killing atomically - it kills the old process
and starts the new one in a single operation without closing the pane in between.
If orphan processes remain (e.g., Claude ignoring SIGHUP), they will be cleaned
up when the new session starts or by periodic cleanup processes.

This fixes both self-handoff and remote handoff paths.

Fixes: hq-bv7ef

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:16:23 -08:00
furiosa
f376d07e12 fix(ready): use rig root for beads resolution, not mayor/rig
The ready command was using constants.RigMayorPath(r.Path) which returns
<rig>/mayor/rig, but this fails for rigs where the source repo doesn't
have tracked beads. In those cases, rig-level beads are stored at
<rig>/.beads directly.

Using r.Path (rig root) allows ResolveBeadsDir to properly handle both:
- Tracked beads: follows <rig>/.beads/redirect to mayor/rig/.beads
- Local beads: uses <rig>/.beads directly

Fixes "no beads database found" errors for google_cookie_retrieval and
home_assistant_blueprints rigs.

Closes: hq-c90jd

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
9809e0dfc4 fix(plugin): don't record false success when gt plugin run only prints instructions
The `gt plugin run` command was recording a "success" run even though it
only prints plugin instructions for an agent/user to execute - it doesn't
actually run the plugin.

This poisoned the cooldown gate: CountRunsSince counted these false
successes, preventing actual executions from running because the gate
appeared to have recent successful runs.

Remove the recording from `gt plugin run`. The actual plugin execution
(by whatever follows the printed instructions) should record the result.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
551da0b159 fix(version): add file-based caching to prevent bd version contention
Under high concurrency (17+ agents), the bd version check spawns
multiple git subprocesses per invocation, causing timeouts when
85-120+ git processes compete for resources.

This fix:
- Caches successful version checks to ~/.cache/gastown/beads-version.json
- Uses cached results for 24 hours to avoid subprocess spawning
- On timeout, uses stale cache if available or gracefully degrades
- Prints warning when using cached/degraded path

Fixes: https://github.com/steveyegge/gastown/issues/503

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
0d665e6cd7 fix(hooks): remove Stop hook that caused 30s timeouts (gt-quoj)
The Stop hook with `gt costs record` was causing 30-second timeouts
on every session stop due to beads socket connection issues. Since
cost tracking is disabled anyway (Claude Code doesn't expose session
costs), this hook provided no value.

Changes:
- Remove Stop hook from settings-autonomous.json and settings-interactive.json
- Remove Stop hook validation from claude_settings_check.go
- Update tests to not expect Stop hook

The cost tracking infrastructure remains in costs.go for future use
when Claude Code exposes session costs via API or environment variable.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
77531e6f28 feat(goals): show assignee for each bead in gt goals output
Add assignee display to both list and single-goal views. In list view,
assignee appears on the second line when present. In single-goal view,
it appears as a dedicated field after priority. JSON output also includes
the assignee field.

Closes: gt-libj

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
propane
90a3c58f18 fix(statusline): filter in_progress beads by identity in getCurrentWork
The getCurrentWork function was returning ANY in_progress bead from the
workspace rather than only beads assigned to the current agent. This caused
crew workers to see wisps assigned to polecats in their status bar.

Changes:
- Add identity parameter to getCurrentWork function
- Add identity guard (return empty if identity is empty)
- Filter by Assignee in the beads query

This complements the earlier getHookedWork fix and ensures both hooked
AND in_progress beads are filtered by the agent's identity.

Fixes gt-zxnr (additional fix).

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
propane
a4e8700173 fix(statusline): ensure crew sessions have correct hook display
Root cause: tmux statusline showed wrong hook for all java crewmembers
because GT_CREW env var wasn't set in tmux session environment.

Changes:
- statusline.go: Add early return in getHookedWork() when identity is empty
  to prevent returning ALL hooked beads regardless of assignee
- crew_at.go: Call SetEnvironment in the restart path so sessions created
  before GT_CREW was set pick it up on restart

Fixes gt-zxnr.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
diesel
8b2dc39e88 fix(handoff): prevent race condition when killing pane processes
KillPaneProcesses was killing ALL processes in the pane, including the
gt handoff process itself. This created a race condition where the
process could be killed before RespawnPane executes, causing the pane
to close prematurely and requiring manual reattach.

Added KillPaneProcessesExcluding() function that excludes specified PIDs
from being killed. The handoff command now passes its own PID to avoid
the race condition.

Fixes: gt-85qd

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
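The core of a KillPaneProcessesExcluding-style helper is PID filtering, sketched below; the real function presumably also sends signals, which is omitted here:

```go
package main

import "fmt"

// excludePIDs returns pids with the given exclusions removed, so a command
// like gt handoff never ends up signaling its own process.
func excludePIDs(pids []int, exclude ...int) []int {
	skip := make(map[int]bool, len(exclude))
	for _, p := range exclude {
		skip[p] = true
	}
	var kept []int
	for _, p := range pids {
		if !skip[p] {
			kept = append(kept, p)
		}
	}
	return kept
}

func main() {
	self := 4242 // hypothetical PID of the running handoff process
	fmt.Println(excludePIDs([]int{100, 4242, 7}, self))
}
```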
furiosa
fa9087c5d7 fix(molecules): cascade-close child wisps on molecule completion (gt-zbnr)
When deacon patrol molecules completed, their child step wisps were not being
closed automatically. This caused orphan wisp accumulation - 143+ orphaned
wisps were found in one cleanup session.

The fix ensures that when a molecule completes (via gt done or gt mol step done),
all descendant step issues are recursively closed before the molecule itself.

Changes:
- done.go: Added closeDescendants() call in updateAgentStateOnDone before
  closing the attached molecule
- molecule_step.go: Added closeDescendants() call in handleMoleculeComplete
  for all roles (not just polecats)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
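The cascade-close can be sketched as a depth-first walk that closes children before their parent. The in-memory parent→children map stands in for the real bead dependency graph:

```go
package main

import "fmt"

// closeDescendants closes every step under root, children first and the
// root last, matching "descendants are closed before the molecule itself".
func closeDescendants(children map[string][]string, root string, closed map[string]bool) {
	for _, child := range children[root] {
		closeDescendants(children, child, closed)
	}
	closed[root] = true
}

func main() {
	children := map[string][]string{
		"mol-1":  {"step-a", "step-b"},
		"step-a": {"step-a1"},
	}
	closed := map[string]bool{}
	closeDescendants(children, "mol-1", closed)
	fmt.Println(len(closed), closed["step-a1"])
}
```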
kerosene
cce261c97b feat(deacon): make patrol loop explicit and continuous
The Deacon patrol formula now clearly documents the continuous loop:
1. Execute patrol steps (inbox-check through context-check)
2. Squash wisp, wait for activity via await-signal (15min max)
3. Create new patrol wisp and hook it
4. Repeat from step 1

Changes:
- Formula description emphasizes CONTINUOUS EXECUTION with flow diagram
- loop-or-exit step renamed to "Continuous patrol loop" with explicit
  instructions for creating/hooking new wisps after await-signal
- plugin-run step now clearly shows gt plugin list + gt dog dispatch
- Deacon role template updated to match formula changes
- Formula version bumped to 9

Fixes gt-fm2c: Deacon needs continuous patrol loop for plugin dispatch

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
diesel
0d065921b6 fix(goals): query epics from all rigs, not just default
gt goals was only querying the default beads location (town-level
with hq- prefix), missing epics from rig-level beads (j-, sc-, etc.).

Now iterates over all rig directories with .beads/ subdirectories
and aggregates epics, deduplicating by ID.
2026-01-25 11:15:29 -08:00
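The aggregation step, merging epics gathered from each rig's .beads directory and deduplicating by ID, might look like this; the epic struct is illustrative:

```go
package main

import "fmt"

type epic struct {
	ID    string
	Title string
}

// mergeEpics aggregates per-rig epic lists and deduplicates by ID,
// keeping the first occurrence of each.
func mergeEpics(perRig ...[]epic) []epic {
	seen := map[string]bool{}
	var out []epic
	for _, rig := range perRig {
		for _, e := range rig {
			if !seen[e.ID] {
				seen[e.ID] = true
				out = append(out, e)
			}
		}
	}
	return out
}

func main() {
	town := []epic{{"hq-1", "town epic"}}
	rig := []epic{{"j-1", "rig epic"}, {"hq-1", "town epic"}}
	fmt.Println(len(mergeEpics(town, rig)))
}
```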
diesel
13303d4e83 fix(goals): filter out wisp molecules from gt goals output
Wisp molecules (gt-wisp-* IDs, mol-* titles) are transient operational
beads for witness/refinery/polecat patrol, not strategic goals that
need human attention. These are now filtered by default.

Add --include-wisp flag to show them when debugging.

Fixes gt-ysmj

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
gastown/crew/kerosene
a19452eb60 feat: add bd tree to Key Commands in gt prime output
- Add `bd tree <id>` to Key Commands in bd prime template (beads.go)
- Add `bd tree <issue>` to prime_output.go for mayor/polecat/crew roles
- Helps agents understand bead ancestry, siblings, and dependencies

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
gastown/crew/octane
9a53d85c43 feat(convoy): add epic filtering flags to convoy list
Add three new flags for filtering convoys by epic relationship:
- --orphans: show only convoys without a parent epic
- --epic <id>: show only convoys under a specific epic
- --by-epic: group convoys by parent epic

These support the Goals Layer feature (Phase 3) for hierarchical
focus management.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
nux
873d6e2c1a feat(goals): implement goals list with staleness computation
Implements gt goals command to show epics sorted by staleness × priority.

Features:
- List all open epics with staleness indicators (🟢/🟡/🔴)
- Sort by attention score (priority × staleness hours)
- Show specific goal details with description and linked convoys
- JSON output support
- Priority and status filtering

Staleness thresholds:
- 🟢 active: moved in last hour
- 🟡 stale: no movement for 1+ hours
- 🔴 stuck: no movement for 4+ hours

Closes: gt-vix

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
c863e9ac65 feat(cmd): add gt goals command skeleton
Create goals.go with basic command structure for viewing strategic
goals (epics) with staleness indicators. Includes --json, --status,
and --priority flags. Implementation stubs return not-yet-implemented
errors.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
rictus
b37d8ecc79 feat(mayor): add delegation hierarchy guidance to role template
Add explicit guidance on the Mayor → Crew → Polecats delegation model:
- Crew are coordinators for epics/goals needing decomposition
- Polecats are executors for well-defined tasks
- Include decision framework table for work type routing

Closes: gt-9jd
2026-01-25 11:15:29 -08:00
kerosene
77e4f82494 feat: add overseer experience commands (gt focus, gt attention)
Implements the Overseer Experience epic (gt-k0kn):

- gt focus: Shows stalest high-priority goals, sorted by priority × staleness
- gt attention: Shows blocked items, PRs awaiting review, stuck workers
- gt status: Now includes GOALS and ATTENTION summary sections
- gt convoy list: Added --orphans, --epic, --by-epic flags

These commands reduce Mayor bottleneck by giving the overseer direct
visibility into system state without needing to ask Mayor.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
rictus
3e4cb71c89 feat(mayor): add delegation hierarchy guidance to role template
Add explicit guidance on the Mayor → Crew → Polecats delegation model:
- Crew are coordinators for epics/goals needing decomposition
- Polecats are executors for well-defined tasks
- Include decision framework table for work type routing

Closes: gt-9jd
2026-01-25 11:15:29 -08:00
slit
1eee064efa docs(crew): add coordinator role guidance to crew template
Adds clear guidance that crew members are coordinators, not implementers:
- Lists 4 key responsibilities: Research, Decompose, Sling, Review
- Clarifies "goal-specific mayor" model - own outcomes via delegation
- Documents when to implement directly vs delegate (trivial fixes, spikes, etc.)

Closes: gt-gig

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
nux
8b7a5e6f91 feat(sling): implement --convoy flag logic (gt-9o4)
Add --convoy flag to gt sling that allows adding an issue to an existing
convoy instead of creating a new one. When specified:
- Validates the convoy exists and is open
- Adds tracking relation between convoy and issue
- Skips auto-convoy creation

Changes:
- Add slingConvoy variable and --convoy flag registration
- Add addToExistingConvoy() helper function in sling_convoy.go
- Modify auto-convoy logic to check slingConvoy first

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
furiosa
37ac676232 feat(sling): register --epic and --convoy flags
Add flag variable declarations and Cobra flag registrations for:
- --epic: link auto-created convoy to parent epic
- --convoy: add to existing convoy instead of creating new

Closes: gt-n3o

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
0be6a39bb7 fix: enforce fresh context between molecule steps
Change molecule step completion instructions to use `gt mol step done`
instead of `bd close`. This ensures polecats get fresh context between
each step, which is critical for multi-step review workflows like
shiny-enterprise where each refinement pass should have unbiased attention.

The `gt mol step done` command already:
1. Closes the step
2. Finds the next ready step
3. Respawns the pane for fresh context

But polecats were being instructed to use `bd close` directly, which
skipped the respawn and let them run through entire workflows in a
single session with accumulated context.

Updated:
- prime_molecule.go: step completion instructions
- mol-polecat-work.formula.toml
- mol-polecat-code-review.formula.toml
- mol-polecat-review-pr.formula.toml

Fixes: hq-0kx7ra
2026-01-25 11:15:29 -08:00
riker
c3fb9c6027 fix(dog): properly set identity for dog sessions
Three fixes to make dog dispatch work end-to-end:

1. Add BuildDogStartupCommand in loader.go
   - Similar to BuildPolecatStartupCommand/BuildCrewStartupCommand
   - Passes AgentName to AgentEnv so BD_ACTOR is exported in startup command

2. Use BuildDogStartupCommand in dog.go
   - Removes ineffective SetEnvironment calls (env vars set after shell starts
     don't propagate to already-running processes)

3. Add "dog" case in mail_identity.go detectSenderFromRole
   - Dogs now use BD_ACTOR for mail identity
   - Without this, dogs fell through to "overseer" and couldn't find their mail

Tested: dog alpha now correctly sees inbox as deacon/dogs/alpha

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
riker
2590e3de3b fix(dog): spawn session and set BD_ACTOR for dog dispatch
Recovered from reflog - these commits were lost during a rebase/force-push.

Dogs are directories with state files but no sessions. When `gt dog dispatch`
assigned work and sent mail, nothing executed because no session existed.

Changes:
1. Spawn tmux session after dispatch (gt-<town>-deacon-<dogname>)
2. Set BD_ACTOR=deacon/dogs/<name> so dogs can find their mail
3. Add dog case to AgentEnv for proper identity

Session spawn is non-blocking - if it fails, mail was sent and human can
manually start the session.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
ec9a07cec1 feat(dog): add 'gt dog done' command for dogs to mark themselves idle
Dogs can now reset their own state to idle after completing work:

  gt dog done        # Auto-detect from BD_ACTOR
  gt dog done alpha  # Explicit name

This solves the issue where dog sessions would complete work but remain in
"working" state because nothing processed the DOG_DONE mail. Now dogs can
explicitly mark themselves idle before handing off.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
1c8d0891e1 fix(session): increase ClaudeStartTimeout from 60s to 120s
Fixes intermittent 'timeout waiting for runtime prompt' errors that occur
when Claude takes longer than 60s to start under load or on slower machines.

Resolves: hq-j2wl

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 11:15:29 -08:00
eed6284e3c feat(security): add GIT_AUTHOR_EMAIL per agent type
Phase 1 of agent security model: Set distinct email addresses for each
agent type to improve audit trail clarity.

Email format:
- Town-level: {role}@gastown.local (mayor, deacon, boot)
- Rig-level: {rig}-{role}@gastown.local (witness, refinery)
- Named agents: {rig}-{role}-{name}@gastown.local (polecat, crew)

This makes git log filtering by agent type trivial and provides a
foundation for per-agent key separation in future phases.

Refs: hq-biot
2026-01-25 11:15:29 -08:00
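The three email formats above can be rendered by one helper; empty fields select the shorter forms. This is a sketch of the scheme, not the repo's actual function:

```go
package main

import "fmt"

// agentEmail builds the audit-trail address for an agent:
// town-level roles, rig-level roles, and named agents.
func agentEmail(rig, role, name string) string {
	switch {
	case rig == "":
		return fmt.Sprintf("%s@gastown.local", role)
	case name == "":
		return fmt.Sprintf("%s-%s@gastown.local", rig, role)
	default:
		return fmt.Sprintf("%s-%s-%s@gastown.local", rig, role, name)
	}
}

func main() {
	fmt.Println(agentEmail("", "mayor", ""))
	fmt.Println(agentEmail("gastown", "witness", ""))
	fmt.Println(agentEmail("gastown", "polecat", "nux"))
}
```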
35dbe8ee24 ci: disable block-internal-prs for fork workflow
We use PRs for human review before merging in our fork.
2026-01-25 11:15:29 -08:00
ad6322b414 feat(mayor): add escalation check to startup protocol
Mayor now checks `gt escalate list` between hook and mail checks at startup.
This ensures pending escalations from other agents are handled promptly.

Other roles (witness, refinery, polecat, crew, deacon) are unaffected -
they create escalations but don't handle them at startup.
2026-01-25 11:15:29 -08:00
51 changed files with 3339 additions and 513 deletions


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# Block PRs by preventing pushes to arbitrary feature branches.
# Gas Town agents push to main (crew) or polecat/* branches (polecats).
# PRs are for external contributors only.


@@ -1,51 +0,0 @@
name: Block Internal PRs
on:
  pull_request:
    types: [opened, reopened]
jobs:
  block-internal-prs:
    name: Block Internal PRs
    # Only run if PR is from the same repo (not a fork)
    if: github.event.pull_request.head.repo.full_name == github.repository
    runs-on: ubuntu-latest
    steps:
      - name: Close PR and comment
        uses: actions/github-script@v7
        with:
          script: |
            const prNumber = context.issue.number;
            const branch = context.payload.pull_request.head.ref;
            const body = [
              '**Internal PRs are not allowed.**',
              '',
              'Gas Town agents push directly to main. PRs are for external contributors only.',
              '',
              'To land your changes:',
              '```bash',
              'git checkout main',
              'git merge ' + branch,
              'git push origin main',
              'git push origin --delete ' + branch,
              '```',
              '',
              'See CLAUDE.md: "Crew workers push directly to main. No feature branches. NEVER create PRs."'
            ].join('\n');
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: prNumber,
              body: body
            });
            await github.rest.pulls.update({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: prNumber,
              state: 'closed'
            });
            core.setFailed('Internal PR blocked. Push directly to main instead.');


@@ -119,6 +119,12 @@ type Beads struct {
// Populated on first call to getTownRoot() to avoid filesystem walk on every operation.
townRoot string
searchedRoot bool
// RPC client for daemon communication (lazy-initialized).
// When available, RPC is preferred over subprocess for performance.
rpcClient *rpcClient
rpcChecked bool
rpcAvailable bool
}
// New creates a new Beads wrapper for the given directory.
@@ -287,7 +293,14 @@ func filterBeadsEnv(environ []string) []string {
}
// List returns issues matching the given options.
// Uses daemon RPC when available for better performance (~40ms faster).
func (b *Beads) List(opts ListOptions) ([]*Issue, error) {
// Try RPC first (faster when daemon is running)
if issues, err := b.listViaRPC(opts); err == nil {
return issues, nil
}
// Fall back to subprocess
args := []string{"list", "--json"}
if opts.Status != "" {
@@ -400,7 +413,14 @@ func (b *Beads) ReadyWithType(issueType string) ([]*Issue, error) {
}
// Show returns detailed information about an issue.
// Uses daemon RPC when available for better performance (~40ms faster).
func (b *Beads) Show(id string) (*Issue, error) {
// Try RPC first (faster when daemon is running)
if issue, err := b.showViaRPC(id); err == nil {
return issue, nil
}
// Fall back to subprocess
out, err := b.run("show", id, "--json")
if err != nil {
return nil, err
@@ -559,7 +579,14 @@ func (b *Beads) CreateWithID(id string, opts CreateOptions) (*Issue, error) {
}
// Update updates an existing issue.
// Uses daemon RPC when available for better performance (~40ms faster).
func (b *Beads) Update(id string, opts UpdateOptions) error {
// Try RPC first (faster when daemon is running)
if err := b.updateViaRPC(id, opts); err == nil {
return nil
}
// Fall back to subprocess
args := []string{"update", id}
if opts.Title != nil {
@@ -598,15 +625,26 @@ func (b *Beads) Update(id string, opts UpdateOptions) error {
// Close closes one or more issues.
// If a runtime session ID is set in the environment, it is passed to bd close
// for work attribution tracking (see decision 009-session-events-architecture.md).
// Uses daemon RPC when available for better performance (~40ms faster per call).
func (b *Beads) Close(ids ...string) error {
if len(ids) == 0 {
return nil
}
sessionID := runtime.SessionIDFromEnv()
// Try RPC for single-issue closes (faster when daemon is running)
if len(ids) == 1 {
if err := b.closeViaRPC(ids[0], "", sessionID, false); err == nil {
return nil
}
}
// Fall back to subprocess
args := append([]string{"close"}, ids...)
// Pass session ID for work attribution if available
-if sessionID := runtime.SessionIDFromEnv(); sessionID != "" {
+if sessionID != "" {
args = append(args, "--session="+sessionID)
}
@@ -617,16 +655,51 @@ func (b *Beads) Close(ids ...string) error {
// CloseWithReason closes one or more issues with a reason.
// If a runtime session ID is set in the environment, it is passed to bd close
// for work attribution tracking (see decision 009-session-events-architecture.md).
// Uses daemon RPC when available for better performance (~40ms faster per call).
func (b *Beads) CloseWithReason(reason string, ids ...string) error {
if len(ids) == 0 {
return nil
}
sessionID := runtime.SessionIDFromEnv()
// Try RPC for single-issue closes (faster when daemon is running)
if len(ids) == 1 {
if err := b.closeViaRPC(ids[0], reason, sessionID, false); err == nil {
return nil
}
}
// Fall back to subprocess
args := append([]string{"close"}, ids...)
args = append(args, "--reason="+reason)
// Pass session ID for work attribution if available
-if sessionID := runtime.SessionIDFromEnv(); sessionID != "" {
+if sessionID != "" {
args = append(args, "--session="+sessionID)
}
_, err := b.run(args...)
return err
}
// CloseForced closes an issue with force flag and optional reason.
// The force flag bypasses blockers and other validation checks.
// Uses daemon RPC when available for better performance (~40ms faster).
func (b *Beads) CloseForced(id, reason string) error {
sessionID := runtime.SessionIDFromEnv()
// Try RPC first (faster when daemon is running)
if err := b.closeViaRPC(id, reason, sessionID, true); err == nil {
return nil
}
// Fall back to subprocess
args := []string{"close", id, "--force"}
if reason != "" {
args = append(args, "--reason="+reason)
}
if sessionID != "" {
args = append(args, "--session="+sessionID)
}
@@ -747,6 +820,7 @@ This is physics, not politeness. Gas Town is a steam engine - you are a piston.
- ` + "`gt mol status`" + ` - Check your hooked work
- ` + "`gt mail inbox`" + ` - Check for messages
- ` + "`bd ready`" + ` - Find available work (no blockers)
- ` + "`bd tree <id>`" + ` - View bead ancestry, siblings, and dependencies
- ` + "`bd sync`" + ` - Sync beads changes
## Session Close Protocol
@@ -799,3 +873,19 @@ func ProvisionPrimeMDForWorktree(worktreePath string) error {
// Provision PRIME.md in the target directory
return ProvisionPrimeMD(beadsDir)
}
// GetPrimeContent returns the beads workflow context content.
// It checks for a custom PRIME.md file first, otherwise returns the default.
// This eliminates the need to spawn a bd subprocess for gt prime.
func GetPrimeContent(workDir string) string {
beadsDir := ResolveBeadsDir(workDir)
primePath := filepath.Join(beadsDir, "PRIME.md")
// Check for custom PRIME.md
if content, err := os.ReadFile(primePath); err == nil {
return strings.TrimSpace(string(content))
}
// Return default content
return strings.TrimSpace(primeContent)
}

internal/beads/beads_rpc.go (new file, 330 lines)

@@ -0,0 +1,330 @@
package beads
import (
"bufio"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"net"
"os"
"path/filepath"
"time"
)
// MaxUnixSocketPath is the maximum length for Unix socket paths.
const MaxUnixSocketPath = 103
// rpcClient represents an RPC client for the bd daemon.
type rpcClient struct {
conn net.Conn
socketPath string
timeout time.Duration
cwd string
}
// rpcRequest represents an RPC request to the daemon.
type rpcRequest struct {
Operation string `json:"operation"`
Args json.RawMessage `json:"args"`
Cwd string `json:"cwd,omitempty"`
}
// rpcResponse represents an RPC response from the daemon.
type rpcResponse struct {
Success bool `json:"success"`
Data json.RawMessage `json:"data,omitempty"`
Error string `json:"error,omitempty"`
}
// tryConnectRPC attempts to connect to the bd daemon.
// Returns nil if no daemon is running or it fails the health check.
func tryConnectRPC(workspacePath string) *rpcClient {
socketPath := socketPathForWorkspace(workspacePath)
// Check if socket exists
if _, err := os.Stat(socketPath); os.IsNotExist(err) {
return nil
}
conn, err := net.DialTimeout("unix", socketPath, 200*time.Millisecond)
if err != nil {
return nil
}
client := &rpcClient{
conn: conn,
socketPath: socketPath,
timeout: 30 * time.Second,
cwd: workspacePath,
}
// Quick health check
if err := client.ping(); err != nil {
_ = conn.Close()
return nil
}
return client
}
// close closes the RPC connection.
func (c *rpcClient) close() error {
if c.conn != nil {
return c.conn.Close()
}
return nil
}
// execute sends a request and returns the response.
func (c *rpcClient) execute(operation string, args interface{}) (*rpcResponse, error) {
argsJSON, err := json.Marshal(args)
if err != nil {
return nil, fmt.Errorf("marshaling args: %w", err)
}
req := rpcRequest{
Operation: operation,
Args: argsJSON,
Cwd: c.cwd,
}
reqJSON, err := json.Marshal(req)
if err != nil {
return nil, fmt.Errorf("marshaling request: %w", err)
}
if c.timeout > 0 {
deadline := time.Now().Add(c.timeout)
if err := c.conn.SetDeadline(deadline); err != nil {
return nil, fmt.Errorf("setting deadline: %w", err)
}
}
writer := bufio.NewWriter(c.conn)
if _, err := writer.Write(reqJSON); err != nil {
return nil, fmt.Errorf("writing request: %w", err)
}
if err := writer.WriteByte('\n'); err != nil {
return nil, fmt.Errorf("writing newline: %w", err)
}
if err := writer.Flush(); err != nil {
return nil, fmt.Errorf("flushing: %w", err)
}
reader := bufio.NewReader(c.conn)
respLine, err := reader.ReadBytes('\n')
if err != nil {
return nil, fmt.Errorf("reading response: %w", err)
}
var resp rpcResponse
if err := json.Unmarshal(respLine, &resp); err != nil {
return nil, fmt.Errorf("unmarshaling response: %w", err)
}
if !resp.Success {
return &resp, fmt.Errorf("operation failed: %s", resp.Error)
}
return &resp, nil
}
// ping verifies the daemon is alive.
func (c *rpcClient) ping() error {
_, err := c.execute("ping", nil)
return err
}
// socketPathForWorkspace returns the socket path for a workspace.
// This mirrors the logic in beads/internal/rpc/socket_path.go.
func socketPathForWorkspace(workspacePath string) string {
// Compute the "natural" socket path in .beads/
naturalPath := filepath.Join(workspacePath, ".beads", "bd.sock")
// If natural path is short enough, use it
if len(naturalPath) <= MaxUnixSocketPath {
return naturalPath
}
// Path too long - use /tmp with hash
hash := sha256.Sum256([]byte(workspacePath))
hashStr := hex.EncodeToString(hash[:4])
return filepath.Join("/tmp", "beads-"+hashStr, "bd.sock")
}
// getRPCClient returns the RPC client, initializing on first call.
// Returns nil if daemon is not available.
func (b *Beads) getRPCClient() *rpcClient {
if b.rpcChecked {
return b.rpcClient
}
b.rpcChecked = true
// Don't use RPC in isolated mode (tests)
if b.isolated {
return nil
}
// Resolve workspace path for socket discovery
workspacePath := b.beadsDir
if workspacePath == "" {
workspacePath = ResolveBeadsDir(b.workDir)
}
// Get the workspace root (parent of .beads)
if filepath.Base(workspacePath) == ".beads" {
workspacePath = filepath.Dir(workspacePath)
}
b.rpcClient = tryConnectRPC(workspacePath)
b.rpcAvailable = b.rpcClient != nil
return b.rpcClient
}
// closeRPC closes the RPC client if connected.
func (b *Beads) closeRPC() {
if b.rpcClient != nil {
_ = b.rpcClient.close()
b.rpcClient = nil
}
}
// RPC operation argument types
type rpcListArgs struct {
Status string `json:"status,omitempty"`
Assignee string `json:"assignee,omitempty"`
Labels []string `json:"labels,omitempty"`
LabelsAny []string `json:"labels_any,omitempty"`
ExcludeStatus []string `json:"exclude_status,omitempty"`
Priority *int `json:"priority,omitempty"`
ParentID string `json:"parent_id,omitempty"`
NoAssignee bool `json:"no_assignee,omitempty"`
Limit int `json:"limit,omitempty"`
}
type rpcShowArgs struct {
ID string `json:"id"`
}
type rpcUpdateArgs struct {
ID string `json:"id"`
Title *string `json:"title,omitempty"`
Status *string `json:"status,omitempty"`
Priority *int `json:"priority,omitempty"`
Description *string `json:"description,omitempty"`
Assignee *string `json:"assignee,omitempty"`
AddLabels []string `json:"add_labels,omitempty"`
RemoveLabels []string `json:"remove_labels,omitempty"`
SetLabels []string `json:"set_labels,omitempty"`
}
type rpcCloseArgs struct {
ID string `json:"id"`
Reason string `json:"reason,omitempty"`
Session string `json:"session,omitempty"`
Force bool `json:"force,omitempty"`
}
// listViaRPC performs a list operation via the daemon RPC.
func (b *Beads) listViaRPC(opts ListOptions) ([]*Issue, error) {
client := b.getRPCClient()
if client == nil {
return nil, fmt.Errorf("no RPC client")
}
args := rpcListArgs{
Status: opts.Status,
Assignee: opts.Assignee,
ParentID: opts.Parent,
}
// Convert Label to Labels array if set
if opts.Label != "" {
args.Labels = []string{opts.Label}
}
// Handle priority: -1 means no filter
if opts.Priority >= 0 {
args.Priority = &opts.Priority
}
if opts.NoAssignee {
args.NoAssignee = true
}
resp, err := client.execute("list", args)
if err != nil {
return nil, err
}
var issues []*Issue
if err := json.Unmarshal(resp.Data, &issues); err != nil {
return nil, fmt.Errorf("unmarshaling issues: %w", err)
}
return issues, nil
}
// showViaRPC performs a show operation via the daemon RPC.
func (b *Beads) showViaRPC(id string) (*Issue, error) {
client := b.getRPCClient()
if client == nil {
return nil, fmt.Errorf("no RPC client")
}
resp, err := client.execute("show", rpcShowArgs{ID: id})
if err != nil {
return nil, err
}
var issue Issue
if err := json.Unmarshal(resp.Data, &issue); err != nil {
return nil, fmt.Errorf("unmarshaling issue: %w", err)
}
return &issue, nil
}
// updateViaRPC performs an update operation via the daemon RPC.
func (b *Beads) updateViaRPC(id string, opts UpdateOptions) error {
client := b.getRPCClient()
if client == nil {
return fmt.Errorf("no RPC client")
}
args := rpcUpdateArgs{
ID: id,
Title: opts.Title,
Status: opts.Status,
Priority: opts.Priority,
Description: opts.Description,
Assignee: opts.Assignee,
AddLabels: opts.AddLabels,
RemoveLabels: opts.RemoveLabels,
SetLabels: opts.SetLabels,
}
_, err := client.execute("update", args)
return err
}
// closeViaRPC performs a close operation via the daemon RPC.
func (b *Beads) closeViaRPC(id, reason, session string, force bool) error {
client := b.getRPCClient()
if client == nil {
return fmt.Errorf("no RPC client")
}
args := rpcCloseArgs{
ID: id,
Reason: reason,
Session: session,
Force: force,
}
_, err := client.execute("close", args)
return err
}
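The wire protocol used by `execute` above is one JSON object per line in each direction. A minimal sketch of a single round trip, using `net.Pipe` in place of the Unix socket and a toy in-process daemon (the `request`/`response` types here mirror `rpcRequest`/`rpcResponse`; the daemon side is hypothetical, not bd's actual handler):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net"
)

// request and response mirror the rpcRequest/rpcResponse framing:
// one JSON object per line, terminated by '\n'.
type request struct {
	Operation string          `json:"operation"`
	Args      json.RawMessage `json:"args,omitempty"`
}

type response struct {
	Success bool            `json:"success"`
	Data    json.RawMessage `json:"data,omitempty"`
	Error   string          `json:"error,omitempty"`
}

// pingRoundTrip runs one request/response exchange over an in-memory pipe,
// standing in for the Unix socket the real client dials.
func pingRoundTrip() response {
	client, server := net.Pipe()

	// Toy daemon: read one JSON line, answer a ping.
	go func() {
		defer server.Close()
		line, _ := bufio.NewReader(server).ReadBytes('\n')
		var req request
		if json.Unmarshal(line, &req) == nil && req.Operation == "ping" {
			out, _ := json.Marshal(response{Success: true, Data: json.RawMessage(`"pong"`)})
			server.Write(append(out, '\n'))
		}
	}()

	reqJSON, _ := json.Marshal(request{Operation: "ping"})
	client.Write(append(reqJSON, '\n'))
	line, _ := bufio.NewReader(client).ReadBytes('\n')
	var resp response
	json.Unmarshal(line, &resp)
	return resp
}

func main() {
	resp := pingRoundTrip()
	fmt.Println(resp.Success, string(resp.Data)) // true "pong"
}
```

The newline framing keeps both sides simple: no length prefix is needed, and `bufio.Reader.ReadBytes('\n')` recovers exactly one message per call, which is why `execute` errors out if the response cannot be read as a single line.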

View File

@@ -65,17 +65,6 @@
}
]
}
],
"Stop": [
{
"matcher": "",
"hooks": [
{
"type": "command",
"command": "export PATH=\"$HOME/go/bin:$HOME/bin:$PATH\" && gt costs record"
}
]
}
]
}
}

374
internal/cmd/attention.go Normal file
View File

@@ -0,0 +1,374 @@
package cmd
import (
"bytes"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
var attentionJSON bool
var attentionAll bool
var attentionCmd = &cobra.Command{
Use: "attention",
GroupID: GroupWork,
Short: "Show items requiring overseer attention",
Long: `Show what specifically needs the overseer's attention.
Groups items into categories:
REQUIRES DECISION - Issues needing architectural/design choices
REQUIRES REVIEW - PRs and design docs awaiting approval
BLOCKED - Items stuck on unresolved dependencies
Examples:
gt attention # Show attention items
gt attention --all # Include lower-priority items
gt attention --json # Machine-readable output`,
RunE: runAttention,
}
func init() {
attentionCmd.Flags().BoolVar(&attentionJSON, "json", false, "Output as JSON")
attentionCmd.Flags().BoolVar(&attentionAll, "all", false, "Include lower-priority items")
rootCmd.AddCommand(attentionCmd)
}
// AttentionCategory represents a group of items needing attention.
type AttentionCategory string
const (
CategoryDecision AttentionCategory = "REQUIRES_DECISION"
CategoryReview AttentionCategory = "REQUIRES_REVIEW"
CategoryBlocked AttentionCategory = "BLOCKED"
CategoryStuck AttentionCategory = "STUCK_WORKERS"
)
// AttentionItem represents something needing overseer attention.
type AttentionItem struct {
Category AttentionCategory `json:"category"`
Priority int `json:"priority"`
ID string `json:"id"`
Title string `json:"title"`
Context string `json:"context,omitempty"`
DrillDown string `json:"drill_down"`
Source string `json:"source,omitempty"` // "beads", "github", "agent"
Details string `json:"details,omitempty"`
}
// AttentionOutput is the full attention report.
type AttentionOutput struct {
Decisions []AttentionItem `json:"decisions,omitempty"`
Reviews []AttentionItem `json:"reviews,omitempty"`
Blocked []AttentionItem `json:"blocked,omitempty"`
StuckWorkers []AttentionItem `json:"stuck_workers,omitempty"`
}
func runAttention(cmd *cobra.Command, args []string) error {
townRoot, err := workspace.FindFromCwdOrError()
if err != nil {
return fmt.Errorf("not in a Gas Town workspace: %w", err)
}
output := AttentionOutput{}
// Collect items from each source in turn
// 1. Blocked beads
output.Blocked = collectBlockedItems(townRoot)
// 2. Items needing decision (issues with needs-decision label)
output.Decisions = collectDecisionItems(townRoot)
// 3. PRs awaiting review
output.Reviews = collectReviewItems(townRoot)
// 4. Stuck workers (agents marked as stuck)
output.StuckWorkers = collectStuckWorkers(townRoot)
// Sort each category by priority
sortByPriority := func(items []AttentionItem) {
sort.Slice(items, func(i, j int) bool {
return items[i].Priority < items[j].Priority // Lower priority number = higher importance
})
}
sortByPriority(output.Decisions)
sortByPriority(output.Reviews)
sortByPriority(output.Blocked)
sortByPriority(output.StuckWorkers)
if attentionJSON {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(output)
}
return outputAttentionText(output)
}
func collectBlockedItems(townRoot string) []AttentionItem {
var items []AttentionItem
// Query blocked issues from beads
blockedCmd := exec.Command("bd", "blocked", "--json")
var stdout bytes.Buffer
blockedCmd.Stdout = &stdout
if err := blockedCmd.Run(); err != nil {
return items
}
var blocked []struct {
ID string `json:"id"`
Title string `json:"title"`
Priority int `json:"priority"`
BlockedBy []string `json:"blocked_by,omitempty"`
}
if err := json.Unmarshal(stdout.Bytes(), &blocked); err != nil {
return items
}
for _, b := range blocked {
// Skip ephemeral/internal issues
if strings.Contains(b.ID, "wisp") || strings.Contains(b.ID, "-mol-") {
continue
}
if strings.Contains(b.ID, "-agent-") {
continue
}
context := ""
if len(b.BlockedBy) > 0 {
context = fmt.Sprintf("Blocked by: %s", strings.Join(b.BlockedBy, ", "))
}
items = append(items, AttentionItem{
Category: CategoryBlocked,
Priority: b.Priority,
ID: b.ID,
Title: b.Title,
Context: context,
DrillDown: fmt.Sprintf("bd show %s", b.ID),
Source: "beads",
})
}
return items
}
func collectDecisionItems(townRoot string) []AttentionItem {
var items []AttentionItem
// Query issues with needs-decision label
listCmd := exec.Command("bd", "list", "--label=needs-decision", "--status=open", "--json")
var stdout bytes.Buffer
listCmd.Stdout = &stdout
if err := listCmd.Run(); err != nil {
return items
}
var issues []struct {
ID string `json:"id"`
Title string `json:"title"`
Priority int `json:"priority"`
}
if err := json.Unmarshal(stdout.Bytes(), &issues); err != nil {
return items
}
for _, issue := range issues {
items = append(items, AttentionItem{
Category: CategoryDecision,
Priority: issue.Priority,
ID: issue.ID,
Title: issue.Title,
Context: "Needs architectural/design decision",
DrillDown: fmt.Sprintf("bd show %s", issue.ID),
Source: "beads",
})
}
return items
}
func collectReviewItems(townRoot string) []AttentionItem {
var items []AttentionItem
// Query open PRs from GitHub
prCmd := exec.Command("gh", "pr", "list", "--json", "number,title,headRefName,reviewDecision,additions,deletions")
var stdout bytes.Buffer
prCmd.Stdout = &stdout
if err := prCmd.Run(); err != nil {
// gh not available or not in a git repo - skip
return items
}
var prs []struct {
Number int `json:"number"`
Title string `json:"title"`
HeadRefName string `json:"headRefName"`
ReviewDecision string `json:"reviewDecision"`
Additions int `json:"additions"`
Deletions int `json:"deletions"`
}
if err := json.Unmarshal(stdout.Bytes(), &prs); err != nil {
return items
}
for _, pr := range prs {
// Skip PRs that are already approved
if pr.ReviewDecision == "APPROVED" {
continue
}
details := fmt.Sprintf("+%d/-%d lines", pr.Additions, pr.Deletions)
items = append(items, AttentionItem{
Category: CategoryReview,
Priority: 2, // Default P2 for PRs
ID: fmt.Sprintf("PR #%d", pr.Number),
Title: pr.Title,
Context: fmt.Sprintf("Branch: %s", pr.HeadRefName),
DrillDown: fmt.Sprintf("gh pr view %d", pr.Number),
Source: "github",
Details: details,
})
}
return items
}
func collectStuckWorkers(townRoot string) []AttentionItem {
var items []AttentionItem
// Query agent beads with stuck state
// Check each rig's beads for stuck agents
rigDirs, _ := filepath.Glob(filepath.Join(townRoot, "*", "mayor", "rig", ".beads"))
for _, rigBeads := range rigDirs {
rigItems := queryStuckAgents(rigBeads)
items = append(items, rigItems...)
}
return items
}
func queryStuckAgents(beadsPath string) []AttentionItem {
var items []AttentionItem
// Query agents with stuck state
dbPath := filepath.Join(beadsPath, "beads.db")
if _, err := os.Stat(dbPath); err != nil {
return items
}
// Query for agent beads with agent_state = 'stuck'
query := `SELECT id, title, agent_state FROM issues WHERE issue_type = 'agent' AND agent_state = 'stuck'`
queryCmd := exec.Command("sqlite3", "-json", dbPath, query)
var stdout bytes.Buffer
queryCmd.Stdout = &stdout
if err := queryCmd.Run(); err != nil {
return items
}
var agents []struct {
ID string `json:"id"`
Title string `json:"title"`
AgentState string `json:"agent_state"`
}
if err := json.Unmarshal(stdout.Bytes(), &agents); err != nil {
return items
}
for _, agent := range agents {
// Extract agent name from ID (e.g., "gt-gastown-polecat-goose" -> "goose")
parts := strings.Split(agent.ID, "-")
name := parts[len(parts)-1]
items = append(items, AttentionItem{
Category: CategoryStuck,
Priority: 1, // Stuck workers are high priority
ID: agent.ID,
Title: fmt.Sprintf("Worker %s is stuck", name),
Context: "Agent escalated - needs help",
DrillDown: fmt.Sprintf("bd show %s", agent.ID),
Source: "agent",
})
}
return items
}
func outputAttentionText(output AttentionOutput) error {
hasContent := false
// Decisions
if len(output.Decisions) > 0 {
hasContent = true
fmt.Printf("%s (%d items)\n", style.Bold.Render("REQUIRES DECISION"), len(output.Decisions))
for i, item := range output.Decisions {
fmt.Printf("%d. [P%d] %s: %s\n", i+1, item.Priority, item.ID, item.Title)
if item.Context != "" {
fmt.Printf(" %s\n", style.Dim.Render(item.Context))
}
fmt.Printf(" %s\n\n", style.Dim.Render("→ "+item.DrillDown))
}
}
// Reviews
if len(output.Reviews) > 0 {
hasContent = true
fmt.Printf("%s (%d items)\n", style.Bold.Render("REQUIRES REVIEW"), len(output.Reviews))
for i, item := range output.Reviews {
fmt.Printf("%d. [P%d] %s: %s\n", i+1, item.Priority, item.ID, item.Title)
if item.Details != "" {
fmt.Printf(" %s\n", style.Dim.Render(item.Details))
}
if item.Context != "" {
fmt.Printf(" %s\n", style.Dim.Render(item.Context))
}
fmt.Printf(" %s\n\n", style.Dim.Render("→ "+item.DrillDown))
}
}
// Stuck Workers
if len(output.StuckWorkers) > 0 {
hasContent = true
fmt.Printf("%s (%d items)\n", style.Bold.Render("STUCK WORKERS"), len(output.StuckWorkers))
for i, item := range output.StuckWorkers {
fmt.Printf("%d. %s\n", i+1, item.Title)
if item.Context != "" {
fmt.Printf(" %s\n", style.Dim.Render(item.Context))
}
fmt.Printf(" %s\n\n", style.Dim.Render("→ "+item.DrillDown))
}
}
// Blocked
if len(output.Blocked) > 0 {
hasContent = true
fmt.Printf("%s (%d items)\n", style.Bold.Render("BLOCKED"), len(output.Blocked))
for i, item := range output.Blocked {
fmt.Printf("%d. [P%d] %s: %s\n", i+1, item.Priority, item.ID, item.Title)
if item.Context != "" {
fmt.Printf(" %s\n", style.Dim.Render(item.Context))
}
fmt.Printf(" %s\n\n", style.Dim.Render("→ "+item.DrillDown))
}
}
if !hasContent {
fmt.Println("No items requiring attention.")
fmt.Println(style.Dim.Render("All clear - nothing blocked, no pending reviews."))
}
return nil
}

View File

@@ -3,13 +3,18 @@ package cmd
import (
"context"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/steveyegge/gastown/internal/state"
)
// MinBeadsVersion is the minimum required beads version for Gas Town.
@@ -90,6 +95,58 @@ func (v beadsVersion) compare(other beadsVersion) int {
// Pre-compiled regex for beads version parsing
var beadsVersionRe = regexp.MustCompile(`bd version (\d+\.\d+(?:\.\d+)?(?:-\w+)?)`)
// versionCacheTTL is how long a cached version check remains valid.
// 24 hours is reasonable since version upgrades are infrequent.
const versionCacheTTL = 24 * time.Hour
// versionCache stores the result of a beads version check.
type versionCache struct {
Version string `json:"version"`
CheckedAt time.Time `json:"checked_at"`
Valid bool `json:"valid"` // true if version meets minimum requirement
}
// versionCachePath returns the path to the version cache file.
func versionCachePath() string {
return filepath.Join(state.CacheDir(), "beads-version.json")
}
// loadVersionCache reads the cached version check result.
func loadVersionCache() (*versionCache, error) {
data, err := os.ReadFile(versionCachePath())
if err != nil {
return nil, err
}
var cache versionCache
if err := json.Unmarshal(data, &cache); err != nil {
return nil, err
}
return &cache, nil
}
// saveVersionCache writes the version check result to cache.
func saveVersionCache(c *versionCache) error {
dir := state.CacheDir()
if err := os.MkdirAll(dir, 0755); err != nil {
return err
}
data, err := json.MarshalIndent(c, "", " ")
if err != nil {
return err
}
// Atomic write via temp file
tmp := versionCachePath() + ".tmp"
if err := os.WriteFile(tmp, data, 0600); err != nil {
return err
}
return os.Rename(tmp, versionCachePath())
}
// isCacheFresh returns true if the cache is within the TTL.
func (c *versionCache) isCacheFresh() bool {
return time.Since(c.CheckedAt) < versionCacheTTL
}
func getBeadsVersion() (string, error) {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
@@ -132,8 +189,27 @@ func CheckBeadsVersion() error {
}
func checkBeadsVersionInternal() error {
// Try to use cached result first to avoid subprocess spawning
if cache, err := loadVersionCache(); err == nil && cache.isCacheFresh() {
if cache.Valid {
return nil // Cached successful check
}
// Cached failure - still need to check (version might have been upgraded)
}
installedStr, err := getBeadsVersion()
if err != nil {
// On timeout, try to use stale cache or gracefully degrade
if strings.Contains(err.Error(), "timed out") {
if cache, cacheErr := loadVersionCache(); cacheErr == nil && cache.Valid {
// Use stale cache but warn
fmt.Fprintf(os.Stderr, "Warning: bd version check timed out, using cached result (v%s)\n", cache.Version)
return nil
}
// No cache available - gracefully degrade with warning
fmt.Fprintf(os.Stderr, "Warning: bd version check timed out (high system load?), proceeding anyway\n")
return nil
}
return fmt.Errorf("cannot verify beads version: %w", err)
}
@@ -148,7 +224,16 @@ func checkBeadsVersionInternal() error {
return fmt.Errorf("cannot parse required beads version %q: %w", MinBeadsVersion, err)
}
if installed.compare(required) < 0 {
valid := installed.compare(required) >= 0
// Cache the result
_ = saveVersionCache(&versionCache{
Version: installedStr,
CheckedAt: time.Now(),
Valid: valid,
})
if !valid {
return fmt.Errorf("beads version %s is required, but %s is installed\n\nPlease upgrade beads: go install github.com/steveyegge/beads/cmd/bd@latest", MinBeadsVersion, installedStr)
}

View File

@@ -9,6 +9,7 @@ import (
"os"
"os/exec"
"path/filepath"
"sort"
"strconv"
"strings"
"sync"
@@ -69,11 +70,15 @@ var (
convoyListStatus string
convoyListAll bool
convoyListTree bool
convoyListOrphans bool
convoyListEpic string
convoyListByEpic bool
convoyInteractive bool
convoyStrandedJSON bool
convoyCloseReason string
convoyCloseNotify string
convoyCheckDryRun bool
convoyEpic string // --epic: link convoy to parent epic (Goals layer)
)
var convoyCmd = &cobra.Command{
@@ -159,6 +164,9 @@ Examples:
gt convoy list --all # All convoys (open + closed)
gt convoy list --status=closed # Recently landed
gt convoy list --tree # Show convoy + child status tree
gt convoy list --orphans # Convoys with no parent epic
gt convoy list --epic gt-abc # Convoys linked to specific epic
gt convoy list --by-epic # Group convoys by parent epic
gt convoy list --json`,
RunE: runConvoyList,
}
@@ -253,6 +261,9 @@ func init() {
convoyListCmd.Flags().StringVar(&convoyListStatus, "status", "", "Filter by status (open, closed)")
convoyListCmd.Flags().BoolVar(&convoyListAll, "all", false, "Show all convoys (open and closed)")
convoyListCmd.Flags().BoolVar(&convoyListTree, "tree", false, "Show convoy + child status tree")
convoyListCmd.Flags().BoolVar(&convoyListOrphans, "orphans", false, "Show only orphan convoys (no parent epic)")
convoyListCmd.Flags().StringVar(&convoyListEpic, "epic", "", "Show convoys for a specific epic")
convoyListCmd.Flags().BoolVar(&convoyListByEpic, "by-epic", false, "Group convoys by parent epic")
// Interactive TUI flag (on parent command)
convoyCmd.Flags().BoolVarP(&convoyInteractive, "interactive", "i", false, "Interactive tree view")
@@ -1169,6 +1180,16 @@ func showAllConvoyStatus(townBeads string) error {
return nil
}
// convoyListItem holds convoy info for list display.
type convoyListItem struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
CreatedAt string `json:"created_at"`
ParentEpic string `json:"parent_epic,omitempty"`
Description string `json:"description,omitempty"`
}
func runConvoyList(cmd *cobra.Command, args []string) error {
townBeads, err := getTownBeadsDir()
if err != nil {
@@ -1193,16 +1214,59 @@ func runConvoyList(cmd *cobra.Command, args []string) error {
return fmt.Errorf("listing convoys: %w", err)
}
var convoys []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
CreatedAt string `json:"created_at"`
var rawConvoys []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
CreatedAt string `json:"created_at"`
Description string `json:"description"`
}
if err := json.Unmarshal(stdout.Bytes(), &convoys); err != nil {
if err := json.Unmarshal(stdout.Bytes(), &rawConvoys); err != nil {
return fmt.Errorf("parsing convoy list: %w", err)
}
// Convert to convoyListItem and extract parent_epic from description
convoys := make([]convoyListItem, 0, len(rawConvoys))
for _, rc := range rawConvoys {
item := convoyListItem{
ID: rc.ID,
Title: rc.Title,
Status: rc.Status,
CreatedAt: rc.CreatedAt,
Description: rc.Description,
}
// Extract parent_epic from description (format: "Parent-Epic: xxx")
for _, line := range strings.Split(rc.Description, "\n") {
if strings.HasPrefix(line, "Parent-Epic: ") {
item.ParentEpic = strings.TrimPrefix(line, "Parent-Epic: ")
break
}
}
convoys = append(convoys, item)
}
// Apply filtering based on new flags
if convoyListOrphans {
// Filter to only orphan convoys (no parent epic)
filtered := make([]convoyListItem, 0)
for _, c := range convoys {
if c.ParentEpic == "" {
filtered = append(filtered, c)
}
}
convoys = filtered
} else if convoyListEpic != "" {
// Filter to convoys linked to specific epic
filtered := make([]convoyListItem, 0)
for _, c := range convoys {
if c.ParentEpic == convoyListEpic {
filtered = append(filtered, c)
}
}
convoys = filtered
}
if convoyListJSON {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
@@ -1210,33 +1274,81 @@ func runConvoyList(cmd *cobra.Command, args []string) error {
}
if len(convoys) == 0 {
fmt.Println("No convoys found.")
if convoyListOrphans {
fmt.Println("No orphan convoys found.")
} else if convoyListEpic != "" {
fmt.Printf("No convoys found for epic %s.\n", convoyListEpic)
} else {
fmt.Println("No convoys found.")
}
fmt.Println("Create a convoy with: gt convoy create <name> [issues...]")
return nil
}
// Group by epic view
if convoyListByEpic {
return printConvoysByEpic(townBeads, convoys)
}
// Tree view: show convoys with their child issues
if convoyListTree {
return printConvoyTree(townBeads, convoys)
return printConvoyTreeFromItems(townBeads, convoys)
}
fmt.Printf("%s\n\n", style.Bold.Render("Convoys"))
for i, c := range convoys {
status := formatConvoyStatus(c.Status)
fmt.Printf(" %d. 🚚 %s: %s %s\n", i+1, c.ID, c.Title, status)
epicSuffix := ""
if c.ParentEpic != "" {
epicSuffix = style.Dim.Render(fmt.Sprintf(" [%s]", c.ParentEpic))
}
fmt.Printf(" %d. 🚚 %s: %s %s%s\n", i+1, c.ID, c.Title, status, epicSuffix)
}
fmt.Printf("\nUse 'gt convoy status <id>' or 'gt convoy status <n>' for detailed view.\n")
return nil
}
// printConvoyTree displays convoys with their child issues in a tree format.
func printConvoyTree(townBeads string, convoys []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
CreatedAt string `json:"created_at"`
}) error {
// printConvoysByEpic groups and displays convoys by their parent epic.
func printConvoysByEpic(townBeads string, convoys []convoyListItem) error {
// Group convoys by parent epic
byEpic := make(map[string][]convoyListItem)
for _, c := range convoys {
epic := c.ParentEpic
if epic == "" {
epic = "(No Epic)"
}
byEpic[epic] = append(byEpic[epic], c)
}
// Get sorted epic keys (No Epic last)
var epics []string
for epic := range byEpic {
if epic != "(No Epic)" {
epics = append(epics, epic)
}
}
sort.Strings(epics)
if _, ok := byEpic["(No Epic)"]; ok {
epics = append(epics, "(No Epic)")
}
// Print grouped output
for _, epic := range epics {
convoys := byEpic[epic]
fmt.Printf("%s (%d convoys)\n", style.Bold.Render(epic), len(convoys))
for _, c := range convoys {
status := formatConvoyStatus(c.Status)
fmt.Printf(" 🚚 %s: %s %s\n", c.ID, c.Title, status)
}
fmt.Println()
}
return nil
}
// printConvoyTreeFromItems displays convoys with their child issues in a tree format.
func printConvoyTreeFromItems(townBeads string, convoys []convoyListItem) error {
for _, c := range convoys {
// Get tracked issues for this convoy
tracked := getTrackedIssues(townBeads, c.ID)
@@ -1255,7 +1367,11 @@ func printConvoyTree(townBeads string, convoys []struct {
if total > 0 {
progress = fmt.Sprintf(" (%d/%d)", completed, total)
}
fmt.Printf("🚚 %s: %s%s\n", c.ID, c.Title, progress)
epicSuffix := ""
if c.ParentEpic != "" {
epicSuffix = style.Dim.Render(fmt.Sprintf(" [%s]", c.ParentEpic))
}
fmt.Printf("🚚 %s: %s%s%s\n", c.ID, c.Title, progress, epicSuffix)
// Print tracked issues as tree children
for i, t := range tracked {
@@ -1285,6 +1401,40 @@ func printConvoyTree(townBeads string, convoys []struct {
return nil
}
// getEpicTitles fetches titles for the given epic IDs.
func getEpicTitles(epicIDs []string) map[string]string {
result := make(map[string]string)
if len(epicIDs) == 0 {
return result
}
// Use bd show to get epic details (handles routing automatically)
args := append([]string{"show"}, epicIDs...)
args = append(args, "--json")
showCmd := exec.Command("bd", args...)
var stdout bytes.Buffer
showCmd.Stdout = &stdout
if err := showCmd.Run(); err != nil {
return result
}
var issues []struct {
ID string `json:"id"`
Title string `json:"title"`
}
if err := json.Unmarshal(stdout.Bytes(), &issues); err != nil {
return result
}
for _, issue := range issues {
result[issue.ID] = issue.Title
}
return result
}
func formatConvoyStatus(status string) string {
switch status {
case "open":
@@ -1298,6 +1448,61 @@ func formatConvoyStatus(status string) string {
}
}
// getConvoyParentEpics returns a map from convoy ID to parent epic ID.
// Convoys link to epics via child_of dependency type.
// Uses a single batched query for efficiency.
func getConvoyParentEpics(townBeads string, convoyIDs []string) map[string]string {
result := make(map[string]string)
if len(convoyIDs) == 0 {
return result
}
dbPath := filepath.Join(townBeads, "beads.db")
// Build IN clause with properly escaped IDs
var quotedIDs []string
for _, id := range convoyIDs {
safeID := strings.ReplaceAll(id, "'", "''")
quotedIDs = append(quotedIDs, fmt.Sprintf("'%s'", safeID))
}
inClause := strings.Join(quotedIDs, ", ")
// Query child_of dependencies for all convoys at once
query := fmt.Sprintf(
`SELECT issue_id, depends_on_id FROM dependencies WHERE issue_id IN (%s) AND type = 'child_of'`,
inClause)
queryCmd := exec.Command("sqlite3", "-json", dbPath, query)
var stdout bytes.Buffer
queryCmd.Stdout = &stdout
if err := queryCmd.Run(); err != nil {
return result
}
var deps []struct {
IssueID string `json:"issue_id"`
DependsOnID string `json:"depends_on_id"`
}
if err := json.Unmarshal(stdout.Bytes(), &deps); err != nil {
return result
}
for _, dep := range deps {
epicID := dep.DependsOnID
// Handle external reference format: external:rig:issue-id
if strings.HasPrefix(epicID, "external:") {
parts := strings.SplitN(epicID, ":", 3)
if len(parts) == 3 {
epicID = parts[2] // Extract the actual issue ID
}
}
result[dep.IssueID] = epicID
}
return result
}
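Two details of the function above are worth isolating: the SQL single-quote doubling used to build the IN clause safely, and the `external:rig:issue-id` reference format. A minimal sketch (the `quoteIDs` and `resolveEpicID` helpers are local illustrations of the same logic, not exported functions):

```go
package main

import (
	"fmt"
	"strings"
)

// quoteIDs mirrors the IN-clause construction: each embedded single quote
// is doubled, which is how SQL string literals escape quotes.
func quoteIDs(ids []string) string {
	quoted := make([]string, 0, len(ids))
	for _, id := range ids {
		quoted = append(quoted, "'"+strings.ReplaceAll(id, "'", "''")+"'")
	}
	return strings.Join(quoted, ", ")
}

// resolveEpicID strips the "external:rig:" prefix used for
// cross-rig dependency references, yielding the bare issue ID.
func resolveEpicID(dep string) string {
	if strings.HasPrefix(dep, "external:") {
		if parts := strings.SplitN(dep, ":", 3); len(parts) == 3 {
			return parts[2]
		}
	}
	return dep
}

func main() {
	fmt.Println(quoteIDs([]string{"hq-cv-1", "o'brien"}))  // 'hq-cv-1', 'o''brien'
	fmt.Println(resolveEpicID("external:gastown:gt-abc")) // gt-abc
}
```

Batching all convoy IDs into one IN clause means one `sqlite3` subprocess per listing instead of one per convoy, which matters when a town has many convoys.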
// trackedIssueInfo holds info about an issue being tracked by a convoy.
type trackedIssueInfo struct {
ID string `json:"id"`

View File

@@ -250,8 +250,22 @@ func runCrewAt(cmd *cobra.Command, args []string) error {
Topic: "restart",
})
// Ensure tmux session environment is set (for gt status-line to read).
// Sessions created before this was added may be missing GT_CREW, etc.
envVars := config.AgentEnv(config.AgentEnvConfig{
Role: "crew",
Rig: r.Name,
AgentName: name,
TownRoot: townRoot,
RuntimeConfigDir: claudeConfigDir,
BeadsNoDaemon: true,
})
for k, v := range envVars {
_ = t.SetEnvironment(sessionID, k, v)
}
// Use respawn-pane to replace shell with runtime directly
// Export GT_ROLE and BD_ACTOR since tmux SetEnvironment only affects new panes
// Export GT_ROLE and BD_ACTOR in the command since pane inherits from shell, not session env
startupCmd, err := config.BuildCrewStartupCommandWithAgentOverride(r.Name, name, r.Path, beacon, crewAgentOverride)
if err != nil {
return fmt.Errorf("building startup command: %w", err)

204
internal/cmd/crew_sync.go Normal file
View File

@@ -0,0 +1,204 @@
package cmd
import (
"fmt"
"path/filepath"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/config"
"github.com/steveyegge/gastown/internal/crew"
"github.com/steveyegge/gastown/internal/git"
"github.com/steveyegge/gastown/internal/rig"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
var crewSyncCmd = &cobra.Command{
Use: "sync",
Short: "Create missing crew members from rigs.json config",
Long: `Sync crew members from rigs.json configuration.
Creates any crew members defined in rigs.json that don't already exist locally.
This enables sharing crew configuration across machines.
Configuration in mayor/rigs.json:
{
"rigs": {
"gastown": {
"crew": {
"theme": "mad-max",
"members": ["diesel", "chrome", "nitro"]
}
}
}
}
Examples:
gt crew sync # Sync crew in current rig
gt crew sync --rig gastown # Sync crew in specific rig
gt crew sync --dry-run # Show what would be created`,
RunE: runCrewSync,
}
func init() {
crewSyncCmd.Flags().StringVar(&crewRig, "rig", "", "Rig to sync crew in")
crewSyncCmd.Flags().BoolVar(&crewDryRun, "dry-run", false, "Show what would be created without creating")
crewCmd.AddCommand(crewSyncCmd)
}
func runCrewSync(cmd *cobra.Command, args []string) error {
// Find workspace
townRoot, err := workspace.FindFromCwdOrError()
if err != nil {
return fmt.Errorf("not in a Gas Town workspace: %w", err)
}
// Load rigs config
rigsConfigPath := filepath.Join(townRoot, "mayor", "rigs.json")
rigsConfig, err := config.LoadRigsConfig(rigsConfigPath)
if err != nil {
return fmt.Errorf("loading rigs config: %w", err)
}
// Determine rig
rigName := crewRig
if rigName == "" {
rigName, err = inferRigFromCwd(townRoot)
if err != nil {
return fmt.Errorf("could not determine rig (use --rig flag): %w", err)
}
}
// Get rig entry from rigs.json
rigEntry, ok := rigsConfig.Rigs[rigName]
if !ok {
return fmt.Errorf("rig '%s' not found in rigs.json", rigName)
}
// Check if crew config exists
if rigEntry.Crew == nil || len(rigEntry.Crew.Members) == 0 {
fmt.Printf("No crew members configured for rig '%s' in rigs.json\n", rigName)
fmt.Printf("\nTo configure crew, add to mayor/rigs.json:\n")
fmt.Printf(" \"crew\": {\n")
fmt.Printf(" \"theme\": \"mad-max\",\n")
fmt.Printf(" \"members\": [\"diesel\", \"chrome\", \"nitro\"]\n")
fmt.Printf(" }\n")
return nil
}
// Get rig
g := git.NewGit(townRoot)
rigMgr := rig.NewManager(townRoot, rigsConfig, g)
r, err := rigMgr.GetRig(rigName)
if err != nil {
return fmt.Errorf("rig '%s' not found", rigName)
}
// Create crew manager
crewGit := git.NewGit(r.Path)
crewMgr := crew.NewManager(r, crewGit)
bd := beads.New(beads.ResolveBeadsDir(r.Path))
// Get existing crew
existingCrew, err := crewMgr.List()
if err != nil {
return fmt.Errorf("listing existing crew: %w", err)
}
existingNames := make(map[string]bool)
for _, c := range existingCrew {
existingNames[c.Name] = true
}
// Track results
var created []string
var skipped []string
var failed []string
// Process each configured member
for _, name := range rigEntry.Crew.Members {
if existingNames[name] {
skipped = append(skipped, name)
continue
}
if crewDryRun {
fmt.Printf("Would create: %s/%s\n", rigName, name)
created = append(created, name)
continue
}
// Create crew workspace
fmt.Printf("Creating crew workspace %s in %s...\n", name, rigName)
worker, err := crewMgr.Add(name, false) // No feature branch for synced crew
if err != nil {
if err == crew.ErrCrewExists {
skipped = append(skipped, name)
continue
}
style.PrintWarning("creating crew workspace '%s': %v", name, err)
failed = append(failed, name)
continue
}
fmt.Printf("%s Created crew workspace: %s/%s\n",
style.Bold.Render("\u2713"), rigName, name)
fmt.Printf(" Path: %s\n", worker.ClonePath)
fmt.Printf(" Branch: %s\n", worker.Branch)
// Create agent bead for the crew worker
prefix := beads.GetPrefixForRig(townRoot, rigName)
crewID := beads.CrewBeadIDWithPrefix(prefix, rigName, name)
if _, err := bd.Show(crewID); err != nil {
// Agent bead doesn't exist, create it
fields := &beads.AgentFields{
RoleType: "crew",
Rig: rigName,
AgentState: "idle",
}
desc := fmt.Sprintf("Crew worker %s in %s - synced from rigs.json.", name, rigName)
if _, err := bd.CreateAgentBead(crewID, desc, fields); err != nil {
style.PrintWarning("could not create agent bead for %s: %v", name, err)
} else {
fmt.Printf(" Agent bead: %s\n", crewID)
}
}
created = append(created, name)
fmt.Println()
}
// Summary
if crewDryRun {
fmt.Printf("\n%s Dry run complete\n", style.Bold.Render("\u2713"))
if len(created) > 0 {
fmt.Printf(" Would create: %v\n", created)
}
if len(skipped) > 0 {
fmt.Printf(" Already exist: %v\n", skipped)
}
return nil
}
if len(created) > 0 {
fmt.Printf("%s Created %d crew workspace(s): %v\n",
style.Bold.Render("\u2713"), len(created), created)
}
if len(skipped) > 0 {
fmt.Printf("%s Skipped %d (already exist): %v\n",
style.Dim.Render("-"), len(skipped), skipped)
}
if len(failed) > 0 {
fmt.Printf("%s Failed to create %d: %v\n",
style.Warning.Render("!"), len(failed), failed)
}
// Show theme if configured
if rigEntry.Crew.Theme != "" {
fmt.Printf("\nCrew theme: %s\n", rigEntry.Crew.Theme)
}
return nil
}


@@ -182,6 +182,22 @@ Examples:
RunE: runDogDispatch,
}
var dogDoneCmd = &cobra.Command{
Use: "done [name]",
Short: "Mark a dog as idle (work complete)",
Long: `Mark a dog as idle after completing its work.
Dogs call this command after finishing plugin execution to reset their state
to idle, allowing them to receive new work dispatches.
If no name is provided, attempts to detect the current dog from BD_ACTOR.
Examples:
gt dog done alpha # Explicit dog name
gt dog done # Auto-detect from BD_ACTOR (e.g., "deacon/dogs/alpha")`,
RunE: runDogDone,
}
func init() {
// List flags
dogListCmd.Flags().BoolVar(&dogListJSON, "json", false, "Output as JSON")
@@ -212,6 +228,7 @@ func init() {
dogCmd.AddCommand(dogCallCmd)
dogCmd.AddCommand(dogStatusCmd)
dogCmd.AddCommand(dogDispatchCmd)
dogCmd.AddCommand(dogDoneCmd)
rootCmd.AddCommand(dogCmd)
}
@@ -500,6 +517,34 @@ func runDogStatus(cmd *cobra.Command, args []string) error {
return showPackStatus(mgr)
}
func runDogDone(cmd *cobra.Command, args []string) error {
mgr, err := getDogManager()
if err != nil {
return err
}
var name string
if len(args) > 0 {
name = args[0]
} else {
// Try to detect from BD_ACTOR (e.g., "deacon/dogs/alpha")
actor := os.Getenv("BD_ACTOR")
if actor != "" && strings.HasPrefix(actor, "deacon/dogs/") {
name = strings.TrimPrefix(actor, "deacon/dogs/")
}
if name == "" {
return fmt.Errorf("no dog name provided and could not detect from BD_ACTOR")
}
}
if err := mgr.ClearWork(name); err != nil {
return fmt.Errorf("marking dog %s as done: %w", name, err)
}
fmt.Printf("✓ %s marked as idle (ready for new work)\n", name)
return nil
}
func showDogStatus(mgr *dog.Manager, name string) error {
d, err := mgr.Get(name)
if err != nil {
@@ -791,6 +836,35 @@ func runDogDispatch(cmd *cobra.Command, args []string) error {
return fmt.Errorf("sending plugin mail to dog: %w", err)
}
// Spawn a session for the dog to execute the work.
// Without a session, the dog's mail inbox is never checked.
// See: https://github.com/steveyegge/gastown/issues/XXX (dog dispatch doesn't execute)
t := tmux.NewTmux()
townName, err := workspace.GetTownName(townRoot)
if err != nil {
townName = "gt" // fallback
}
dogSessionName := fmt.Sprintf("gt-%s-deacon-%s", townName, targetDog.Name)
// Kill any stale session first
if has, _ := t.HasSession(dogSessionName); has {
_ = t.KillSessionWithProcesses(dogSessionName)
}
// Build startup command with initial prompt to check mail and execute plugin
// Use BuildDogStartupCommand to properly set BD_ACTOR=deacon/dogs/<name> in the startup command
initialPrompt := fmt.Sprintf("I am dog %s. Check my mail inbox with 'gt mail inbox' and execute the plugin instructions I received.", targetDog.Name)
startCmd := config.BuildDogStartupCommand(targetDog.Name, townRoot, targetDog.Path, initialPrompt)
// Create session from dog's directory
if err := t.NewSessionWithCommand(dogSessionName, targetDog.Path, startCmd); err != nil {
if !dogDispatchJSON {
fmt.Printf(" Warning: could not spawn dog session: %v\n", err)
}
// Non-fatal: mail was sent, dog is marked as working, but no session to execute
// The deacon or human can manually start the session later
}
// Success - output result
if dogDispatchJSON {
return json.NewEncoder(os.Stdout).Encode(result)


@@ -608,11 +608,21 @@ func updateAgentStateOnDone(cwd, townRoot, exitType, _ string) { // issueID unus
// has attached_molecule pointing to the wisp. Without this fix, gt done
// only closed the hooked bead, leaving the wisp orphaned.
// Order matters: wisp closes -> unblocks base bead -> base bead closes.
//
// BUG FIX (gt-zbnr): Close child wisps BEFORE closing the molecule itself.
// Deacon patrol molecules have child step wisps that were being orphaned
// when the patrol completed. Now we cascade-close all descendants first.
attachment := beads.ParseAttachmentFields(hookedBead)
if attachment != nil && attachment.AttachedMolecule != "" {
moleculeID := attachment.AttachedMolecule
// Cascade-close all child wisps before closing the molecule
childrenClosed := closeDescendants(bd, moleculeID)
if childrenClosed > 0 {
fmt.Printf(" Closed %d child step issues\n", childrenClosed)
}
if err := bd.Close(moleculeID); err != nil {
// Non-fatal: warn but continue
fmt.Fprintf(os.Stderr, "Warning: couldn't close attached molecule %s: %v\n", moleculeID, err)
}
}
@@ -645,7 +655,7 @@ func updateAgentStateOnDone(cwd, townRoot, exitType, _ string) { // issueID unus
if _, err := bd.Run("agent", "state", agentBeadID, "awaiting-gate"); err != nil {
fmt.Fprintf(os.Stderr, "Warning: couldn't set agent %s to awaiting-gate: %v\n", agentBeadID, err)
}
// ExitCompleted and ExitDeferred don't set state - observable from tmux
}
// ZFC #10: Self-report cleanup status

internal/cmd/focus.go Normal file

@@ -0,0 +1,351 @@
package cmd
import (
"bytes"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strings"
"time"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
var focusJSON bool
var focusAll bool
var focusLimit int
var focusCmd = &cobra.Command{
Use: "focus",
GroupID: GroupWork,
Short: "Show what needs attention (stalest high-priority goals)",
Long: `Show what the overseer should focus on next.
Analyzes active epics (goals) and sorts them by staleness × priority.
Items that haven't moved in a while and have high priority appear first.
Staleness indicators:
🔴 stuck - no movement for 4+ hours (high urgency)
🟡 stale - no movement for 1-4 hours (needs attention)
🟢 active - moved within the last hour (probably fine)
Examples:
gt focus # Top 5 suggestions
gt focus --all # All active goals with staleness
gt focus --limit=10 # Top 10 suggestions
gt focus --json # Machine-readable output`,
RunE: runFocus,
}
func init() {
focusCmd.Flags().BoolVar(&focusJSON, "json", false, "Output as JSON")
focusCmd.Flags().BoolVar(&focusAll, "all", false, "Show all active goals (not just top N)")
focusCmd.Flags().IntVarP(&focusLimit, "limit", "n", 5, "Number of suggestions to show")
rootCmd.AddCommand(focusCmd)
}
// FocusItem represents a goal that needs attention.
type FocusItem struct {
ID string `json:"id"`
Title string `json:"title"`
Priority int `json:"priority"`
Status string `json:"status"`
Staleness string `json:"staleness"` // "active", "stale", "stuck"
StalenessHours float64 `json:"staleness_hours"` // Hours since last movement
Score float64 `json:"score"` // priority × staleness_hours
UpdatedAt string `json:"updated_at"`
Assignee string `json:"assignee,omitempty"`
DrillDown string `json:"drill_down"` // Suggested command
}
func runFocus(cmd *cobra.Command, args []string) error {
// Find town root to query both town and rig beads
townRoot, err := workspace.FindFromCwdOrError()
if err != nil {
return fmt.Errorf("not in a Gas Town workspace: %w", err)
}
// Collect epics from town beads and all rig beads
items, err := collectFocusItems(townRoot)
if err != nil {
return err
}
if len(items) == 0 {
fmt.Println("No active goals found.")
fmt.Println("Goals are epics with open status. Create one with: bd create --type=epic \"Goal name\"")
return nil
}
// Sort by score (highest first)
sort.Slice(items, func(i, j int) bool {
return items[i].Score > items[j].Score
})
// Apply limit
if !focusAll && len(items) > focusLimit {
items = items[:focusLimit]
}
if focusJSON {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(items)
}
return outputFocusText(items)
}
// collectFocusItems gathers epics from all beads databases in the town.
func collectFocusItems(townRoot string) ([]FocusItem, error) {
var items []FocusItem
seenIDs := make(map[string]bool) // Dedupe across databases
// 1. Query town beads (hq-* prefix)
townBeads := filepath.Join(townRoot, ".beads")
if _, err := os.Stat(townBeads); err == nil {
townItems := queryEpicsFromBeads(townBeads)
for _, item := range townItems {
if !seenIDs[item.ID] {
items = append(items, item)
seenIDs[item.ID] = true
}
}
}
// 2. Query each rig's beads (gt-*, bd-*, sc-* etc. prefixes)
rigDirs, _ := filepath.Glob(filepath.Join(townRoot, "*", "mayor", "rig", ".beads"))
for _, rigBeads := range rigDirs {
rigItems := queryEpicsFromBeads(rigBeads)
for _, item := range rigItems {
if !seenIDs[item.ID] {
items = append(items, item)
seenIDs[item.ID] = true
}
}
}
return items, nil
}
// queryEpicsFromBeads queries a beads database for open epics.
func queryEpicsFromBeads(beadsPath string) []FocusItem {
var items []FocusItem
// Use bd to query epics
listCmd := exec.Command("bd", "list", "--type=epic", "--status=open", "--json")
listCmd.Dir = beadsPath
var stdout bytes.Buffer
listCmd.Stdout = &stdout
if err := listCmd.Run(); err != nil {
// bd unavailable or query failed - return what we have (empty)
return items
}
var epics []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
Priority int `json:"priority"`
UpdatedAt string `json:"updated_at"`
Assignee string `json:"assignee,omitempty"`
Labels []string `json:"labels,omitempty"`
Ephemeral bool `json:"ephemeral,omitempty"`
}
if err := json.Unmarshal(stdout.Bytes(), &epics); err != nil {
return items
}
now := time.Now()
for _, epic := range epics {
// Skip ephemeral issues (molecules, wisps, etc.) - these aren't real goals
if epic.Ephemeral {
continue
}
// Also skip by ID pattern - wisps have "wisp" in the ID
if strings.Contains(epic.ID, "wisp") || strings.Contains(epic.ID, "-mol-") {
continue
}
item := FocusItem{
ID: epic.ID,
Title: strings.TrimPrefix(epic.Title, "[EPIC] "),
Priority: epic.Priority,
Status: epic.Status,
UpdatedAt: epic.UpdatedAt,
Assignee: epic.Assignee,
}
// Calculate staleness
if epic.UpdatedAt != "" {
if updated, err := time.Parse(time.RFC3339, epic.UpdatedAt); err == nil {
staleDuration := now.Sub(updated)
item.StalenessHours = staleDuration.Hours()
// Classify staleness
switch {
case staleDuration >= 4*time.Hour:
item.Staleness = "stuck"
case staleDuration >= 1*time.Hour:
item.Staleness = "stale"
default:
item.Staleness = "active"
}
}
}
if item.Staleness == "" {
item.Staleness = "active"
}
// Calculate score: priority × staleness_hours
// P1 = 1, P2 = 2, etc. Lower priority number = higher importance
// Invert so P1 has higher score
priorityWeight := float64(5 - item.Priority) // P1=4, P2=3, P3=2, P4=1
if priorityWeight < 1 {
priorityWeight = 1
}
item.Score = priorityWeight * item.StalenessHours
// Suggest drill-down command
item.DrillDown = fmt.Sprintf("bd show %s", epic.ID)
items = append(items, item)
}
// Also query in_progress and hooked epics
for _, status := range []string{"in_progress", "hooked"} {
extraCmd := exec.Command("bd", "list", "--type=epic", "--status="+status, "--json")
extraCmd.Dir = beadsPath
var extraStdout bytes.Buffer
extraCmd.Stdout = &extraStdout
if err := extraCmd.Run(); err != nil {
continue
}
var extraEpics []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
Priority int `json:"priority"`
UpdatedAt string `json:"updated_at"`
Assignee string `json:"assignee,omitempty"`
Ephemeral bool `json:"ephemeral,omitempty"`
}
if err := json.Unmarshal(extraStdout.Bytes(), &extraEpics); err != nil {
continue
}
for _, epic := range extraEpics {
// Skip ephemeral issues
if epic.Ephemeral {
continue
}
if strings.Contains(epic.ID, "wisp") || strings.Contains(epic.ID, "-mol-") {
continue
}
item := FocusItem{
ID: epic.ID,
Title: strings.TrimPrefix(epic.Title, "[EPIC] "),
Priority: epic.Priority,
Status: epic.Status,
UpdatedAt: epic.UpdatedAt,
Assignee: epic.Assignee,
}
if epic.UpdatedAt != "" {
if updated, err := time.Parse(time.RFC3339, epic.UpdatedAt); err == nil {
staleDuration := now.Sub(updated)
item.StalenessHours = staleDuration.Hours()
switch {
case staleDuration >= 4*time.Hour:
item.Staleness = "stuck"
case staleDuration >= 1*time.Hour:
item.Staleness = "stale"
default:
item.Staleness = "active"
}
}
}
if item.Staleness == "" {
item.Staleness = "active"
}
priorityWeight := float64(5 - item.Priority)
if priorityWeight < 1 {
priorityWeight = 1
}
item.Score = priorityWeight * item.StalenessHours
item.DrillDown = fmt.Sprintf("bd show %s", epic.ID)
items = append(items, item)
}
}
return items
}
func outputFocusText(items []FocusItem) error {
fmt.Printf("%s\n\n", style.Bold.Render("Suggested focus (stalest high-priority first):"))
for i, item := range items {
// Staleness indicator
var indicator string
switch item.Staleness {
case "stuck":
indicator = style.Error.Render("🔴")
case "stale":
indicator = style.Warning.Render("🟡")
default:
indicator = style.Success.Render("🟢")
}
// Priority display
priorityStr := fmt.Sprintf("P%d", item.Priority)
// Format staleness duration
stalenessStr := formatStaleness(item.StalenessHours)
// Main line
fmt.Printf("%d. %s [%s] %s: %s\n", i+1, indicator, priorityStr, item.ID, item.Title)
// Details
if item.Assignee != "" {
// Extract short name from assignee path
parts := strings.Split(item.Assignee, "/")
shortAssignee := parts[len(parts)-1]
fmt.Printf(" Last movement: %s Assignee: %s\n", stalenessStr, shortAssignee)
} else {
fmt.Printf(" Last movement: %s\n", stalenessStr)
}
// Drill-down hint
fmt.Printf(" %s\n\n", style.Dim.Render("→ "+item.DrillDown))
}
return nil
}
// formatStaleness formats staleness duration as human-readable string.
func formatStaleness(hours float64) string {
if hours < 1.0/60.0 { // Less than 1 minute
return "just now"
}
if hours < 1 {
return fmt.Sprintf("%dm ago", int(hours*60))
}
if hours < 24 {
return fmt.Sprintf("%.1fh ago", hours)
}
days := hours / 24
return fmt.Sprintf("%.1fd ago", days)
}

internal/cmd/goals.go Normal file

@@ -0,0 +1,651 @@
package cmd
import (
"bytes"
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"sort"
"strconv"
"strings"
"time"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
// Goal command flags
var (
goalsJSON bool
goalsStatus string
goalsPriority string
goalsIncludeWisp bool
)
var goalsCmd = &cobra.Command{
Use: "goals [goal-id]",
GroupID: GroupWork,
Short: "View strategic goals (epics) with staleness indicators",
Long: `View strategic goals (epics) across the workspace.
Goals are high-level objectives that organize related work items.
This command shows goals with staleness indicators to help identify
stale or neglected strategic initiatives.
Staleness indicators:
🟢 active: movement in last hour
🟡 stale: no movement for 1+ hours
🔴 stuck: no movement for 4+ hours
Goals are sorted by staleness × priority (highest attention needed first).
Examples:
gt goals # List all open goals
gt goals --json # Output as JSON
gt goals --status=all # Show all goals including closed
gt goals gt-abc # Show details for a specific goal`,
RunE: runGoals,
}
func init() {
goalsCmd.Flags().BoolVar(&goalsJSON, "json", false, "Output as JSON")
goalsCmd.Flags().StringVar(&goalsStatus, "status", "open", "Filter by status (open, closed, all)")
goalsCmd.Flags().StringVar(&goalsPriority, "priority", "", "Filter by priority (e.g., P0, P1, P2)")
goalsCmd.Flags().BoolVar(&goalsIncludeWisp, "include-wisp", false, "Include transient wisp molecules (normally hidden)")
rootCmd.AddCommand(goalsCmd)
}
func runGoals(cmd *cobra.Command, args []string) error {
// If arg provided, show specific goal
if len(args) > 0 {
goalID := args[0]
return showGoal(goalID)
}
// Otherwise list all goals
return listGoals()
}
// goalInfo holds computed goal data for display and sorting.
type goalInfo struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
Priority int `json:"priority"`
Assignee string `json:"assignee,omitempty"`
ConvoyCount int `json:"convoy_count"`
LastMovement time.Time `json:"last_movement,omitempty"`
StalenessHrs float64 `json:"staleness_hours"`
StalenessIcon string `json:"staleness_icon"`
Score float64 `json:"score"` // priority × staleness for sorting
}
func showGoal(goalID string) error {
// Get goal details via bd show
showCmd := exec.Command("bd", "show", goalID, "--json")
var stdout bytes.Buffer
showCmd.Stdout = &stdout
if err := showCmd.Run(); err != nil {
return fmt.Errorf("goal '%s' not found", goalID)
}
var goals []struct {
ID string `json:"id"`
Title string `json:"title"`
Description string `json:"description"`
Status string `json:"status"`
Priority int `json:"priority"`
IssueType string `json:"issue_type"`
Assignee string `json:"assignee"`
CreatedAt string `json:"created_at"`
UpdatedAt string `json:"updated_at"`
}
if err := json.Unmarshal(stdout.Bytes(), &goals); err != nil {
return fmt.Errorf("parsing goal data: %w", err)
}
if len(goals) == 0 {
return fmt.Errorf("goal '%s' not found", goalID)
}
goal := goals[0]
// Verify it's an epic
if goal.IssueType != "epic" {
return fmt.Errorf("'%s' is not a goal/epic (type: %s)", goalID, goal.IssueType)
}
// Get linked convoys (no dbPath available for single goal lookup, use fallback)
convoys := getLinkedConvoys(goalID, "")
// Compute staleness
lastMovement := computeGoalLastMovement(goal.UpdatedAt, convoys)
stalenessHrs := time.Since(lastMovement).Hours()
icon := stalenessIcon(stalenessHrs)
if goalsJSON {
out := goalInfo{
ID: goal.ID,
Title: goal.Title,
Status: goal.Status,
Priority: goal.Priority,
Assignee: goal.Assignee,
ConvoyCount: len(convoys),
LastMovement: lastMovement,
StalenessHrs: stalenessHrs,
StalenessIcon: icon,
}
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(out)
}
// Human-readable output
fmt.Printf("%s P%d %s: %s\n\n", icon, goal.Priority, style.Bold.Render(goal.ID), goal.Title)
fmt.Printf(" Status: %s\n", goal.Status)
fmt.Printf(" Priority: P%d\n", goal.Priority)
if goal.Assignee != "" {
fmt.Printf(" Assignee: @%s\n", goal.Assignee)
}
fmt.Printf(" Convoys: %d\n", len(convoys))
fmt.Printf(" Last activity: %s\n", formatLastActivity(lastMovement))
if goal.Description != "" {
fmt.Printf("\n %s\n", style.Bold.Render("Description:"))
// Indent description
for _, line := range strings.Split(goal.Description, "\n") {
fmt.Printf(" %s\n", line)
}
}
if len(convoys) > 0 {
fmt.Printf("\n %s\n", style.Bold.Render("Linked Convoys:"))
for _, c := range convoys {
statusIcon := "○"
if c.Status == "closed" {
statusIcon = "✓"
}
fmt.Printf(" %s %s: %s\n", statusIcon, c.ID, c.Title)
}
}
return nil
}
func listGoals() error {
// Collect epics from all rigs (goals are cross-rig strategic objectives)
epics, err := collectEpicsFromAllRigs()
if err != nil {
return err
}
// Filter out wisp molecules by default (transient/operational, not strategic goals)
// These have IDs like "gt-wisp-*" and are molecule-tracking beads, not human goals
if !goalsIncludeWisp {
filtered := make([]epicRecord, 0)
for _, e := range epics {
if !isWispEpic(e.ID, e.Title) {
filtered = append(filtered, e)
}
}
epics = filtered
}
// Filter by priority if specified
if goalsPriority != "" {
targetPriority := parsePriority(goalsPriority)
filtered := make([]epicRecord, 0)
for _, e := range epics {
if e.Priority == targetPriority {
filtered = append(filtered, e)
}
}
epics = filtered
}
// Build goal info with staleness computation
var goals []goalInfo
for _, epic := range epics {
convoys := getLinkedConvoys(epic.ID, epic.dbPath)
lastMovement := computeGoalLastMovement(epic.UpdatedAt, convoys)
stalenessHrs := time.Since(lastMovement).Hours()
icon := stalenessIcon(stalenessHrs)
// Score = priority_value × staleness_hours
// Lower priority number = higher priority, so invert (4 - priority)
priorityWeight := float64(4 - epic.Priority)
if priorityWeight < 1 {
priorityWeight = 1
}
score := priorityWeight * stalenessHrs
goals = append(goals, goalInfo{
ID: epic.ID,
Title: epic.Title,
Status: epic.Status,
Priority: epic.Priority,
Assignee: epic.Assignee,
ConvoyCount: len(convoys),
LastMovement: lastMovement,
StalenessHrs: stalenessHrs,
StalenessIcon: icon,
Score: score,
})
}
// Sort by score (highest attention needed first)
sort.Slice(goals, func(i, j int) bool {
return goals[i].Score > goals[j].Score
})
if goalsJSON {
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
return enc.Encode(goals)
}
if len(goals) == 0 {
fmt.Println("No goals found.")
fmt.Println("Create a goal with: bd create --type=epic --title=\"Goal name\"")
return nil
}
// Count active (non-closed) goals
activeCount := 0
for _, g := range goals {
if g.Status != "closed" {
activeCount++
}
}
fmt.Printf("%s\n\n", style.Bold.Render(fmt.Sprintf("Goals (%d active, sorted by staleness × priority)", activeCount)))
for _, g := range goals {
// Format: 🔴 P1 sc-xyz: Title
// 3 convoys | stale 6h
priorityStr := fmt.Sprintf("P%d", g.Priority)
fmt.Printf(" %s %s %s: %s\n", g.StalenessIcon, priorityStr, g.ID, g.Title)
// Second line with convoy count, staleness, and assignee (if any)
activityStr := formatActivityShort(g.StalenessHrs)
if g.Assignee != "" {
fmt.Printf(" %d convoy(s) | %s | @%s\n\n", g.ConvoyCount, activityStr, g.Assignee)
} else {
fmt.Printf(" %d convoy(s) | %s\n\n", g.ConvoyCount, activityStr)
}
}
return nil
}
// convoyInfo holds basic convoy info.
type convoyInfo struct {
ID string
Title string
Status string
}
// getLinkedConvoys finds convoys linked to a goal (via parent-child relation).
// dbPath is the path to beads.db containing the goal for direct SQLite queries.
func getLinkedConvoys(goalID, dbPath string) []convoyInfo {
var convoys []convoyInfo
// If no dbPath provided, fall back to bd subprocess (shouldn't happen normally)
if dbPath == "" {
return getLinkedConvoysFallback(goalID)
}
// Query dependencies directly from SQLite
// Children are stored as: depends_on_id = goalID (parent) with type 'blocks'
safeGoalID := strings.ReplaceAll(goalID, "'", "''")
query := fmt.Sprintf(`
SELECT i.id, i.title, i.status
FROM dependencies d
JOIN issues i ON d.issue_id = i.id
WHERE d.depends_on_id = '%s' AND d.type = 'blocks' AND i.issue_type = 'convoy'
`, safeGoalID)
queryCmd := exec.Command("sqlite3", "-json", dbPath, query)
var stdout bytes.Buffer
queryCmd.Stdout = &stdout
if err := queryCmd.Run(); err != nil {
return convoys
}
if stdout.Len() == 0 {
return convoys
}
var results []struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
}
if err := json.Unmarshal(stdout.Bytes(), &results); err != nil {
return convoys
}
for _, r := range results {
convoys = append(convoys, convoyInfo{
ID: r.ID,
Title: r.Title,
Status: r.Status,
})
}
return convoys
}
// getLinkedConvoysFallback uses bd subprocess (for when dbPath is unknown).
func getLinkedConvoysFallback(goalID string) []convoyInfo {
var convoys []convoyInfo
depArgs := []string{"dep", "list", goalID, "--json"}
depCmd := exec.Command("bd", depArgs...)
var stdout bytes.Buffer
depCmd.Stdout = &stdout
if err := depCmd.Run(); err != nil {
return convoys
}
var deps struct {
Children []struct {
ID string `json:"id"`
Type string `json:"type"`
} `json:"children"`
}
if err := json.Unmarshal(stdout.Bytes(), &deps); err != nil {
return convoys
}
for _, child := range deps.Children {
details := getIssueDetails(child.ID)
if details != nil && details.IssueType == "convoy" {
convoys = append(convoys, convoyInfo{
ID: details.ID,
Title: details.Title,
Status: details.Status,
})
}
}
return convoys
}
// computeGoalLastMovement computes when the goal last had activity.
// It looks at:
// 1. The goal's own updated_at (passed directly to avoid re-querying)
// 2. The last activity of any linked convoy's tracked issues
func computeGoalLastMovement(goalUpdatedAt string, convoys []convoyInfo) time.Time {
// Start with the goal's own updated_at
lastMovement := time.Now().Add(-24 * time.Hour) // Default to 24 hours ago
if goalUpdatedAt != "" {
if t, err := time.Parse(time.RFC3339, goalUpdatedAt); err == nil {
lastMovement = t
}
}
// If no convoys, return early (common case - avoids unnecessary work)
if len(convoys) == 0 {
return lastMovement
}
// Check convoy activity
townBeads, err := getTownBeadsDir()
if err != nil {
return lastMovement
}
for _, convoy := range convoys {
tracked := getTrackedIssues(townBeads, convoy.ID)
for _, t := range tracked {
// Get issue's updated_at
details := getIssueDetails(t.ID)
if details == nil {
continue
}
showCmd := exec.Command("bd", "show", t.ID, "--json")
var out bytes.Buffer
showCmd.Stdout = &out
if err := showCmd.Run(); err != nil {
continue
}
var issues []struct {
UpdatedAt string `json:"updated_at"`
}
if err := json.Unmarshal(out.Bytes(), &issues); err != nil {
continue
}
if len(issues) > 0 && issues[0].UpdatedAt != "" {
if t, err := time.Parse(time.RFC3339, issues[0].UpdatedAt); err == nil {
if t.After(lastMovement) {
lastMovement = t
}
}
}
}
}
return lastMovement
}
// stalenessIcon returns the appropriate staleness indicator.
// 🟢 active: moved in last hour
// 🟡 stale: no movement for 1+ hours
// 🔴 stuck: no movement for 4+ hours
func stalenessIcon(hours float64) string {
if hours < 1 {
return "🟢"
}
if hours < 4 {
return "🟡"
}
return "🔴"
}
// formatLastActivity formats the last activity time for display.
func formatLastActivity(t time.Time) string {
if t.IsZero() {
return "unknown"
}
d := time.Since(t)
if d < time.Minute {
return "just now"
}
if d < time.Hour {
return fmt.Sprintf("%d minutes ago", int(d.Minutes()))
}
if d < 24*time.Hour {
return fmt.Sprintf("%d hours ago", int(d.Hours()))
}
return fmt.Sprintf("%d days ago", int(d.Hours()/24))
}
// formatActivityShort returns a short activity string for the list view.
func formatActivityShort(hours float64) string {
if hours < 1 {
mins := int(hours * 60)
if mins < 1 {
return "active just now"
}
return fmt.Sprintf("active %dm ago", mins)
}
if hours < 4 {
return fmt.Sprintf("stale %.0fh", hours)
}
return fmt.Sprintf("stuck %.0fh", hours)
}
// parsePriority converts a priority string (P0, P1, etc.) to an int.
func parsePriority(s string) int {
s = strings.TrimPrefix(strings.ToUpper(s), "P")
if p, err := strconv.Atoi(s); err == nil {
return p
}
return 2 // Default to P2
}
// isWispEpic returns true if the epic is a transient wisp molecule.
// These are operational/infrastructure beads, not strategic goals that need human attention.
// Detection criteria:
// - ID contains "-wisp-" (molecule tracking beads)
// - Title starts with "mol-" (molecule beads)
func isWispEpic(id, title string) bool {
// Check for wisp ID pattern (e.g., "gt-wisp-abc123")
if strings.Contains(id, "-wisp-") {
return true
}
// Check for molecule title pattern
if strings.HasPrefix(title, "mol-") {
return true
}
return false
}
// epicRecord represents an epic from bd list output.
type epicRecord struct {
ID string `json:"id"`
Title string `json:"title"`
Status string `json:"status"`
Priority int `json:"priority"`
UpdatedAt string `json:"updated_at"`
Assignee string `json:"assignee"`
// dbPath is the path to beads.db containing this epic (for direct queries)
dbPath string
}
// collectEpicsFromAllRigs queries all rigs for epics and aggregates them.
// Goals are cross-rig strategic objectives, so we need to query each rig's beads.
func collectEpicsFromAllRigs() ([]epicRecord, error) {
	var allEpics []epicRecord
	seen := make(map[string]bool) // Deduplicate by ID

	// Find the town root
	townRoot, err := workspace.FindFromCwdOrError()
	if err != nil {
		// Not in a Gas Town workspace, fall back to single query
		return queryEpicsInDir("")
	}

	// Also query town-level beads (for hq- prefixed epics)
	townBeadsDir := filepath.Join(townRoot, ".beads")
	if _, err := os.Stat(townBeadsDir); err == nil {
		epics, err := queryEpicsInDir(townRoot)
		if err == nil {
			for _, e := range epics {
				if !seen[e.ID] {
					seen[e.ID] = true
					allEpics = append(allEpics, e)
				}
			}
		}
	}

	// Find all rig directories (they have .beads/ subdirectories)
	entries, err := os.ReadDir(townRoot)
	if err != nil {
		return allEpics, nil // Return what we have
	}
	for _, entry := range entries {
		if !entry.IsDir() {
			continue
		}
		// Skip hidden directories and known non-rig directories
		name := entry.Name()
		if strings.HasPrefix(name, ".") || name == "plugins" || name == "docs" {
			continue
		}
		rigPath := filepath.Join(townRoot, name)
		rigBeadsDir := filepath.Join(rigPath, ".beads")
		// Check if this directory has a beads database
		if _, err := os.Stat(rigBeadsDir); os.IsNotExist(err) {
			continue
		}
		// Query this rig for epics
		epics, err := queryEpicsInDir(rigPath)
		if err != nil {
			// Log but continue - one rig failing shouldn't stop the whole query
			continue
		}
		for _, e := range epics {
			if !seen[e.ID] {
				seen[e.ID] = true
				allEpics = append(allEpics, e)
			}
		}
	}
	return allEpics, nil
}
// queryEpicsInDir queries epics directly from SQLite in the specified directory.
// If dir is empty, uses the current working directory.
func queryEpicsInDir(dir string) ([]epicRecord, error) {
	beadsDir := dir
	if beadsDir == "" {
		var err error
		beadsDir, err = os.Getwd()
		if err != nil {
			return nil, fmt.Errorf("getting working directory: %w", err)
		}
	}

	// Resolve redirects to find the actual beads.db
	resolvedBeads := beads.ResolveBeadsDir(beadsDir)
	dbPath := filepath.Join(resolvedBeads, "beads.db")

	// Check if the database exists
	if _, err := os.Stat(dbPath); os.IsNotExist(err) {
		return nil, nil // No database, no epics
	}

	// Build SQL query for epics
	query := `SELECT id, title, status, priority, updated_at, assignee
	FROM issues
	WHERE issue_type = 'epic'`
	if goalsStatus == "" || goalsStatus == "open" {
		query += ` AND status <> 'closed' AND status <> 'tombstone'`
	} else if goalsStatus != "all" {
		query += fmt.Sprintf(` AND status = '%s'`, strings.ReplaceAll(goalsStatus, "'", "''"))
	} else {
		// --all: exclude tombstones but include everything else
		query += ` AND status <> 'tombstone'`
	}

	queryCmd := exec.Command("sqlite3", "-json", dbPath, query)
	var stdout bytes.Buffer
	queryCmd.Stdout = &stdout
	if err := queryCmd.Run(); err != nil {
		// Database might be empty or have no epics - not an error
		return nil, nil
	}
	// Handle empty result (sqlite3 -json returns nothing for empty sets)
	if stdout.Len() == 0 {
		return nil, nil
	}

	var epics []epicRecord
	if err := json.Unmarshal(stdout.Bytes(), &epics); err != nil {
		return nil, fmt.Errorf("parsing epics: %w", err)
	}
	// Set dbPath on each epic for direct queries later
	for i := range epics {
		epics[i].dbPath = dbPath
	}
	return epics, nil
}
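Taken together, the two small helpers above can be exercised with a standalone sketch; the helper bodies are copied inline so the snippet compiles on its own, independent of the surrounding package.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePriority mirrors the helper above: strip an optional "P"/"p"
// prefix and parse the remainder, defaulting to priority 2.
func parsePriority(s string) int {
	s = strings.TrimPrefix(strings.ToUpper(s), "P")
	if p, err := strconv.Atoi(s); err == nil {
		return p
	}
	return 2
}

// isWispEpic mirrors the wisp-detection criteria above: a "-wisp-"
// ID segment or a "mol-" title prefix marks a transient molecule.
func isWispEpic(id, title string) bool {
	return strings.Contains(id, "-wisp-") || strings.HasPrefix(title, "mol-")
}

func main() {
	fmt.Println(parsePriority("P0"))                 // 0
	fmt.Println(parsePriority("p1"))                 // 1 (case-insensitive)
	fmt.Println(parsePriority("bogus"))              // 2 (default)
	fmt.Println(isWispEpic("gt-wisp-abc123", "fix")) // true
	fmt.Println(isWispEpic("hq-5j33zz", "Ship v2"))  // false
}
```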

View File

@@ -204,22 +204,17 @@ func runHandoff(cmd *cobra.Command, args []string) error {
_ = os.WriteFile(markerPath, []byte(currentSession), 0644)
}
// Set remain-on-exit so the pane survives process death during handoff.
// Without this, killing processes causes tmux to destroy the pane before
// we can respawn it. This is essential for tmux session reuse.
if err := t.SetRemainOnExit(pane, true); err != nil {
style.PrintWarning("could not set remain-on-exit: %v", err)
}
// NOTE: We intentionally do NOT kill pane processes before respawning (hq-bv7ef).
// Previous approach (KillPaneProcessesExcluding) killed the pane's main process,
// which caused the pane to close (remain-on-exit is off by default), making
// RespawnPane fail because the target pane no longer exists.
//
// The respawn-pane -k flag handles killing atomically - it kills the old process
// and starts the new one in a single operation without closing the pane.
// If orphan processes remain (e.g., Claude ignoring SIGHUP), they will be cleaned
// up when the new session starts or when the Witness runs periodic cleanup.
// Kill all processes in the pane before respawning to prevent orphan leaks
// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
if err := t.KillPaneProcesses(pane); err != nil {
// Non-fatal but log the warning
style.PrintWarning("could not kill pane processes: %v", err)
}
// Use exec to respawn the pane - this kills us and restarts
// Note: respawn-pane automatically resets remain-on-exit to off
// Use respawn-pane to atomically kill old process and start new one
return t.RespawnPane(pane, restartCmd)
}
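The comments above lean on tmux's `respawn-pane -k` semantics: kill the pane's current process and start the replacement in one operation, so the pane never closes. A minimal sketch of the argv such a call assembles; `respawnArgs` is a hypothetical helper for illustration, and the real code goes through `t.RespawnPane`.

```go
package main

import "fmt"

// respawnArgs builds the tmux command line that atomically replaces a
// pane's process: -k kills the old process, -t targets the pane, and
// the trailing argument is the new shell command.
func respawnArgs(pane, cmd string) []string {
	return []string{"tmux", "respawn-pane", "-k", "-t", pane, cmd}
}

func main() {
	fmt.Println(respawnArgs("%3", "gt mayor start"))
}
```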
@@ -575,19 +570,10 @@ func handoffRemoteSession(t *tmux.Tmux, targetSession, restartCmd string) error
return nil
}
// Set remain-on-exit so the pane survives process death during handoff.
// Without this, killing processes causes tmux to destroy the pane before
// we can respawn it. This is essential for tmux session reuse.
if err := t.SetRemainOnExit(targetPane, true); err != nil {
style.PrintWarning("could not set remain-on-exit: %v", err)
}
// Kill all processes in the pane before respawning to prevent orphan leaks
// RespawnPane's -k flag only sends SIGHUP which Claude/Node may ignore
if err := t.KillPaneProcesses(targetPane); err != nil {
// Non-fatal but log the warning
style.PrintWarning("could not kill pane processes: %v", err)
}
// NOTE: We intentionally do NOT kill pane processes before respawning (hq-bv7ef).
// Previous approach (KillPaneProcesses) killed the pane's main process, which caused
// the pane to close (remain-on-exit is off by default), making RespawnPane fail.
// The respawn-pane -k flag handles killing atomically without closing the pane.
// Clear scrollback history before respawn (resets copy-mode from [0/N] to [0/0])
if err := t.ClearHistory(targetPane); err != nil {
@@ -595,8 +581,7 @@ func handoffRemoteSession(t *tmux.Tmux, targetSession, restartCmd string) error
style.PrintWarning("could not clear history: %v", err)
}
// Respawn the remote session's pane
// Note: respawn-pane automatically resets remain-on-exit to off
// Respawn the remote session's pane - -k flag atomically kills old process and starts new one
if err := t.RespawnPane(targetPane, restartCmd); err != nil {
return fmt.Errorf("respawning pane: %w", err)
}

View File

@@ -11,9 +11,7 @@ import (
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/events"
"github.com/steveyegge/gastown/internal/runtime"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
var hookCmd = &cobra.Command{
@@ -185,15 +183,8 @@ func runHook(_ *cobra.Command, args []string) error {
fmt.Printf("%s Replacing completed bead %s...\n", style.Dim.Render(""), existing.ID)
if !hookDryRun {
if hasAttachment {
// Close completed molecule bead (use bd close --force for pinned)
closeArgs := []string{"close", existing.ID, "--force",
"--reason=Auto-replaced by gt hook (molecule complete)"}
if sessionID := runtime.SessionIDFromEnv(); sessionID != "" {
closeArgs = append(closeArgs, "--session="+sessionID)
}
closeCmd := exec.Command("bd", closeArgs...)
closeCmd.Stderr = os.Stderr
if err := closeCmd.Run(); err != nil {
// Close completed molecule bead (use force for pinned)
if err := b.CloseForced(existing.ID, "Auto-replaced by gt hook (molecule complete)"); err != nil {
return fmt.Errorf("closing completed bead %s: %w", existing.ID, err)
}
} else {
@@ -234,15 +225,9 @@ func runHook(_ *cobra.Command, args []string) error {
return nil
}
// Hook the bead using bd update (discovery-based approach)
// Run from town root so bd can find routes.jsonl for prefix-based routing.
// This is essential for hooking convoys (hq-* prefix) stored in town beads.
hookCmd := exec.Command("bd", "update", beadID, "--status=hooked", "--assignee="+agentID)
if townRoot, err := workspace.FindFromCwd(); err == nil {
hookCmd.Dir = townRoot
}
hookCmd.Stderr = os.Stderr
if err := hookCmd.Run(); err != nil {
// Hook the bead using beads package (uses RPC when daemon available)
status := beads.StatusHooked
if err := b.Update(beadID, beads.UpdateOptions{Status: &status, Assignee: &agentID}); err != nil {
return fmt.Errorf("hooking bead: %w", err)
}

View File

@@ -129,6 +129,13 @@ func detectSenderFromRole(role string) string {
return fmt.Sprintf("%s/refinery", rig)
}
return detectSenderFromCwd()
case "dog":
// Dogs use BD_ACTOR directly (set by BuildDogStartupCommand)
actor := os.Getenv("BD_ACTOR")
if actor != "" {
return actor
}
return detectSenderFromCwd()
default:
// Unknown role, try cwd detection
return detectSenderFromCwd()

View File

@@ -4,13 +4,13 @@ import (
"encoding/json"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/config"
"github.com/steveyegge/gastown/internal/git"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
@@ -953,13 +953,9 @@ func outputMoleculeCurrent(info MoleculeCurrentInfo) error {
}
// getGitRootForMolStatus returns the git root for hook file lookup.
// Uses cached value to avoid repeated git subprocess calls.
func getGitRootForMolStatus() (string, error) {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
out, err := cmd.Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(out)), nil
return git.RepoRoot()
}
// isTownLevelRole returns true if the agent ID is a town-level role.

View File

@@ -53,13 +53,13 @@ func init() {
// StepDoneResult is the result of a step done operation.
type StepDoneResult struct {
StepID string `json:"step_id"`
MoleculeID string `json:"molecule_id"`
StepClosed bool `json:"step_closed"`
NextStepID string `json:"next_step_id,omitempty"`
StepID string `json:"step_id"`
MoleculeID string `json:"molecule_id"`
StepClosed bool `json:"step_closed"`
NextStepID string `json:"next_step_id,omitempty"`
NextStepTitle string `json:"next_step_title,omitempty"`
Complete bool `json:"complete"`
Action string `json:"action"` // "continue", "done", "no_more_ready"
Complete bool `json:"complete"`
Action string `json:"action"` // "continue", "done", "no_more_ready"
}
func runMoleculeStepDone(cmd *cobra.Command, args []string) error {
@@ -162,9 +162,10 @@ func runMoleculeStepDone(cmd *cobra.Command, args []string) error {
// extractMoleculeIDFromStep extracts the molecule ID from a step ID.
// Step IDs have format: mol-id.N where N is the step number.
// Examples:
// gt-abc.1 -> gt-abc
// gt-xyz.3 -> gt-xyz
// bd-mol-abc.2 -> bd-mol-abc
//
// gt-abc.1 -> gt-abc
// gt-xyz.3 -> gt-xyz
// bd-mol-abc.2 -> bd-mol-abc
func extractMoleculeIDFromStep(stepID string) string {
// Find the last dot
lastDot := strings.LastIndex(stepID, ".")
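Based on the `mol-id.N` format described in the comment above, the extraction can be sketched standalone. The no-dot fallback (return the ID unchanged) is an assumption, since the function body is truncated in this view.

```go
package main

import (
	"fmt"
	"strings"
)

// extractMoleculeID strips the trailing ".N" step number from a step ID,
// per the mol-id.N format. Sketch only: IDs without a dot are assumed to
// be returned unchanged.
func extractMoleculeID(stepID string) string {
	lastDot := strings.LastIndex(stepID, ".")
	if lastDot < 0 {
		return stepID
	}
	return stepID[:lastDot]
}

func main() {
	fmt.Println(extractMoleculeID("gt-abc.1"))     // gt-abc
	fmt.Println(extractMoleculeID("bd-mol-abc.2")) // bd-mol-abc
}
```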
@@ -388,14 +389,26 @@ func handleMoleculeComplete(cwd, townRoot, moleculeID string, dryRun bool) error
}
if dryRun {
fmt.Printf("[dry-run] Would close child steps of %s\n", moleculeID)
fmt.Printf("[dry-run] Would unpin work for %s\n", agentID)
fmt.Printf("[dry-run] Would send POLECAT_DONE to witness\n")
return nil
}
// Unpin the molecule bead (set status to open, will be closed by gt done or manually)
// BUG FIX (gt-zbnr): Close child steps before unpinning/completing.
// Deacon patrol molecules have child step wisps that were being orphaned
// when the patrol completed. Now we cascade-close all descendants first.
workDir, err := findLocalBeadsDir()
if err == nil {
b := beads.New(workDir)
childrenClosed := closeDescendants(b, moleculeID)
if childrenClosed > 0 {
fmt.Printf("%s Closed %d child step issues\n", style.Bold.Render("✓"), childrenClosed)
}
}
// Unpin the molecule bead (set status to open, will be closed by gt done or manually)
if workDir, err := findLocalBeadsDir(); err == nil {
b := beads.New(workDir)
pinnedBeads, err := b.List(beads.ListOptions{
Status: beads.StatusPinned,

View File

@@ -433,19 +433,10 @@ func runPluginRun(cmd *cobra.Command, args []string) error {
fmt.Printf("%s\n", style.Bold.Render("Instructions:"))
fmt.Println(p.Instructions)
// Record the run
recorder := plugin.NewRecorder(townRoot)
beadID, err := recorder.RecordRun(plugin.PluginRunRecord{
PluginName: p.Name,
RigName: p.RigName,
Result: plugin.ResultSuccess, // Manual runs are marked success
Body: "Manual run via gt plugin run",
})
if err != nil {
fmt.Fprintf(os.Stderr, "Warning: failed to record run: %v\n", err)
} else {
fmt.Printf("\n%s Recorded run: %s\n", style.Dim.Render("●"), beadID)
}
// NOTE: We intentionally do NOT record a run here. This command only prints
// instructions for an agent/user to execute - it doesn't actually run the plugin.
// Recording "success" here would poison the cooldown gate, preventing real executions.
// The actual execution (by whatever follows these instructions) should record the result.
return nil
}

View File

@@ -2,7 +2,6 @@ package cmd
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"os"
@@ -12,6 +11,7 @@ import (
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/git"
"github.com/steveyegge/gastown/internal/lock"
"github.com/steveyegge/gastown/internal/state"
"github.com/steveyegge/gastown/internal/style"
@@ -340,29 +340,13 @@ func detectRole(cwd, townRoot string) RoleInfo {
return ctx
}
// runBdPrime runs `bd prime` and outputs the result.
// This provides beads workflow context to the agent.
// runBdPrime outputs beads workflow context directly.
// This replaces the bd subprocess call to eliminate ~40ms startup overhead.
func runBdPrime(workDir string) {
cmd := exec.Command("bd", "prime")
cmd.Dir = workDir
var stdout, stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
// Skip if bd prime fails (beads might not be available)
// But log stderr if present for debugging
if errMsg := strings.TrimSpace(stderr.String()); errMsg != "" {
fmt.Fprintf(os.Stderr, "bd prime: %s\n", errMsg)
}
return
}
output := strings.TrimSpace(stdout.String())
if output != "" {
content := beads.GetPrimeContent(workDir)
if content != "" {
fmt.Println()
fmt.Println(output)
fmt.Println(content)
}
}
@@ -561,13 +545,9 @@ func buildRoleAnnouncement(ctx RoleContext) string {
}
// getGitRoot returns the root of the current git repository.
// Uses cached value to avoid repeated git subprocess calls.
func getGitRoot() (string, error) {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
out, err := cmd.Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(out)), nil
return git.RepoRoot()
}
// getAgentIdentity returns the agent identity string for hook lookup.
@@ -706,34 +686,20 @@ func ensureBeadsRedirect(ctx RoleContext) {
// checkPendingEscalations queries for open escalation beads and displays them prominently.
// This is called on Mayor startup to surface issues needing human attention.
// Uses beads package which leverages RPC when daemon is available.
func checkPendingEscalations(ctx RoleContext) {
// Query for open escalations using bd list with tag filter
cmd := exec.Command("bd", "list", "--status=open", "--tag=escalation", "--json")
cmd.Dir = ctx.WorkDir
var stdout, stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
// Query for open escalations using beads package (uses RPC when available)
b := beads.New(ctx.WorkDir)
escalations, err := b.List(beads.ListOptions{
Status: "open",
Label: "escalation",
Priority: -1,
})
if err != nil || len(escalations) == 0 {
// Silently skip - escalation check is best-effort
return
}
// Parse JSON output
var escalations []struct {
ID string `json:"id"`
Title string `json:"title"`
Priority int `json:"priority"`
Description string `json:"description"`
Created string `json:"created"`
}
if err := json.Unmarshal(stdout.Bytes(), &escalations); err != nil || len(escalations) == 0 {
// No escalations or parse error
return
}
// Count by severity
critical := 0
high := 0

View File

@@ -88,9 +88,9 @@ func showMoleculeExecutionPrompt(workDir, moleculeID string) {
fmt.Println(style.Bold.Render("→ EXECUTE THIS STEP NOW."))
fmt.Println()
fmt.Println("When complete:")
fmt.Printf(" 1. Close the step: bd close %s\n", step.ID)
fmt.Println(" 2. Check for next step: bd ready")
fmt.Println(" 3. Continue until molecule complete")
fmt.Printf(" gt mol step done %s\n", step.ID)
fmt.Println()
fmt.Println("This closes the step and respawns your session with fresh context for the next step.")
} else {
// No next step - molecule may be complete
fmt.Println(style.Bold.Render("✓ MOLECULE COMPLETE"))
@@ -162,11 +162,10 @@ func outputMoleculeContext(ctx RoleContext) {
showMoleculeProgress(b, rootID)
fmt.Println()
fmt.Println("**Molecule Work Loop:**")
fmt.Println("1. Complete current step, then `bd close " + issue.ID + "`")
fmt.Println("2. Check for next steps: `bd ready --parent " + rootID + "`")
fmt.Println("3. Work on next ready step(s)")
fmt.Println("4. When all steps done, run `gt done`")
fmt.Println("**When step complete:**")
fmt.Println(" `gt mol step done " + issue.ID + "`")
fmt.Println()
fmt.Println("This closes the step and respawns with fresh context for the next step.")
break // Only show context for first molecule step found
}
}

View File

@@ -113,6 +113,7 @@ func outputMayorContext(ctx RoleContext) {
fmt.Println("- `gt status` - Show overall town status")
fmt.Println("- `gt rig list` - List all rigs")
fmt.Println("- `bd ready` - Issues ready to work")
fmt.Println("- `bd tree <issue>` - View ancestry, siblings, dependencies")
fmt.Println()
fmt.Println("## Hookable Mail")
fmt.Println("Mail can be hooked for ad-hoc instructions: `gt hook attach <mail-id>`")
@@ -176,6 +177,7 @@ func outputPolecatContext(ctx RoleContext) {
fmt.Println("## Key Commands")
fmt.Println("- `gt mail inbox` - Check your inbox for work assignments")
fmt.Println("- `bd show <issue>` - View your assigned issue")
fmt.Println("- `bd tree <issue>` - View ancestry, siblings, dependencies")
fmt.Println("- `bd close <issue>` - Mark issue complete")
fmt.Println("- `gt done` - Signal work ready for merge")
fmt.Println()
@@ -200,6 +202,7 @@ func outputCrewContext(ctx RoleContext) {
fmt.Println("- `gt mail inbox` - Check your inbox")
fmt.Println("- `bd ready` - Available issues")
fmt.Println("- `bd show <issue>` - View issue details")
fmt.Println("- `bd tree <issue>` - View ancestry, siblings, dependencies")
fmt.Println("- `bd close <issue>` - Mark issue complete")
fmt.Println()
fmt.Println("## Hookable Mail")

View File

@@ -147,8 +147,10 @@ func runReady(cmd *cobra.Command, args []string) error {
wg.Add(1)
go func(r *rig.Rig) {
defer wg.Done()
// Use mayor/rig path where rig-level beads are stored
rigBeadsPath := constants.RigMayorPath(r.Path)
// Use rig root path - ResolveBeadsDir follows redirects to find actual beads.
// For tracked beads: <rig>/.beads/redirect -> mayor/rig/.beads
// For rig-local beads: <rig>/.beads directly
rigBeadsPath := r.Path
rigBeads := beads.New(rigBeadsPath)
issues, err := rigBeads.Ready()

View File

@@ -11,6 +11,7 @@ import (
"strings"
"github.com/spf13/cobra"
"github.com/steveyegge/gastown/internal/git"
"github.com/steveyegge/gastown/internal/style"
"github.com/steveyegge/gastown/internal/workspace"
)
@@ -138,13 +139,7 @@ func runRigQuickAdd(cmd *cobra.Command, args []string) error {
}
func findGitRoot(path string) (string, error) {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
cmd.Dir = path
out, err := cmd.Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(out)), nil
return git.RepoRootFrom(path)
}
func findGitRemoteURL(gitRoot string) (string, error) {

View File

@@ -96,6 +96,8 @@ var (
slingAccount string // --account: Claude Code account handle to use
slingAgent string // --agent: override runtime agent for this sling/spawn
slingNoConvoy bool // --no-convoy: skip auto-convoy creation
slingEpic string // --epic: link auto-created convoy to parent epic
slingConvoy string // --convoy: add to existing convoy instead of creating new
)
func init() {
@@ -112,6 +114,8 @@ func init() {
slingCmd.Flags().StringVar(&slingAccount, "account", "", "Claude Code account handle to use")
slingCmd.Flags().StringVar(&slingAgent, "agent", "", "Override agent/runtime for this sling (e.g., claude, gemini, codex, or custom alias)")
slingCmd.Flags().BoolVar(&slingNoConvoy, "no-convoy", false, "Skip auto-convoy creation for single-issue sling")
slingCmd.Flags().StringVar(&slingEpic, "epic", "", "Link auto-created convoy to parent epic")
slingCmd.Flags().StringVar(&slingConvoy, "convoy", "", "Add to existing convoy instead of creating new")
slingCmd.Flags().BoolVar(&slingHookRawBead, "hook-raw-bead", false, "Hook raw bead without default formula (expert mode)")
rootCmd.AddCommand(slingCmd)
@@ -191,8 +195,8 @@ func runSling(cmd *cobra.Command, args []string) error {
// Determine target agent (self or specified)
var targetAgent string
var targetPane string
var hookWorkDir string // Working directory for running bd hook commands
var hookSetAtomically bool // True if hook was set during polecat spawn (skip redundant update)
var hookWorkDir string // Working directory for running bd hook commands
var hookSetAtomically bool // True if hook was set during polecat spawn (skip redundant update)
if len(args) > 1 {
target := args[1]
@@ -376,16 +380,28 @@ func runSling(cmd *cobra.Command, args []string) error {
}
}
// Auto-convoy: check if issue is already tracked by a convoy
// If not, create one for dashboard visibility (unless --no-convoy is set)
if !slingNoConvoy && formulaName == "" {
// Convoy handling: --convoy adds to existing, otherwise auto-create (unless --no-convoy)
if slingConvoy != "" {
// Use existing convoy specified by --convoy flag
if slingDryRun {
fmt.Printf("Would add to convoy %s\n", slingConvoy)
fmt.Printf("Would add tracking relation to %s\n", beadID)
} else {
if err := addToExistingConvoy(slingConvoy, beadID); err != nil {
return fmt.Errorf("adding to convoy: %w", err)
}
fmt.Printf("%s Added to convoy %s\n", style.Bold.Render("→"), slingConvoy)
}
} else if !slingNoConvoy && formulaName == "" {
// Auto-convoy: check if issue is already tracked by a convoy
// If not, create one for dashboard visibility
existingConvoy := isTrackedByConvoy(beadID)
if existingConvoy == "" {
if slingDryRun {
fmt.Printf("Would create convoy 'Work: %s'\n", info.Title)
fmt.Printf("Would add tracking relation to %s\n", beadID)
} else {
convoyID, err := createAutoConvoy(beadID, info.Title)
convoyID, err := createAutoConvoy(beadID, info.Title, slingEpic)
if err != nil {
// Log warning but don't fail - convoy is optional
fmt.Printf("%s Could not create auto-convoy: %v\n", style.Dim.Render("Warning:"), err)

View File

@@ -87,7 +87,7 @@ func runBatchSling(beadIDs []string, rigName string, townBeadsDir string) error
if !slingNoConvoy {
existingConvoy := isTrackedByConvoy(beadID)
if existingConvoy == "" {
convoyID, err := createAutoConvoy(beadID, info.Title)
convoyID, err := createAutoConvoy(beadID, info.Title, slingEpic)
if err != nil {
fmt.Printf(" %s Could not create auto-convoy: %v\n", style.Dim.Render("Warning:"), err)
} else {

View File

@@ -58,8 +58,9 @@ func isTrackedByConvoy(beadID string) string {
}
// createAutoConvoy creates an auto-convoy for a single issue and tracks it.
// If epicID is provided, links the convoy to the parent epic.
// Returns the created convoy ID.
func createAutoConvoy(beadID, beadTitle string) (string, error) {
func createAutoConvoy(beadID, beadTitle string, epicID string) (string, error) {
townRoot, err := workspace.FindFromCwd()
if err != nil {
return "", fmt.Errorf("finding town root: %w", err)
@@ -74,6 +75,9 @@ func createAutoConvoy(beadID, beadTitle string) (string, error) {
// Create convoy with title "Work: <issue-title>"
convoyTitle := fmt.Sprintf("Work: %s", beadTitle)
description := fmt.Sprintf("Auto-created convoy tracking %s", beadID)
if epicID != "" {
description += fmt.Sprintf("\nParent-Epic: %s", epicID)
}
createArgs := []string{
"create",
@@ -106,9 +110,61 @@ func createAutoConvoy(beadID, beadTitle string) (string, error) {
fmt.Printf("%s Could not add tracking relation: %v\n", style.Dim.Render("Warning:"), err)
}
// Link convoy to parent epic if specified (Goals layer)
if epicID != "" {
epicDepArgs := []string{"--no-daemon", "dep", "add", convoyID, epicID, "--type=child_of"}
epicDepCmd := exec.Command("bd", epicDepArgs...)
epicDepCmd.Dir = townBeads
epicDepCmd.Stderr = os.Stderr
if err := epicDepCmd.Run(); err != nil {
// Epic link failed - log warning but continue
fmt.Printf("%s Could not link convoy to epic: %v\n", style.Dim.Render("Warning:"), err)
}
}
return convoyID, nil
}
// addToExistingConvoy adds a bead to an existing convoy by creating a tracking relation.
// Returns an error if the convoy doesn't exist or the tracking relation fails.
func addToExistingConvoy(convoyID, beadID string) error {
townRoot, err := workspace.FindFromCwd()
if err != nil {
return fmt.Errorf("finding town root: %w", err)
}
townBeads := filepath.Join(townRoot, ".beads")
dbPath := filepath.Join(townBeads, "beads.db")
// Verify convoy exists and is open
// Escape single quotes so the interpolated ID can't break the SQL
// (matches the escaping used for goalsStatus in queryEpicsInDir)
query := fmt.Sprintf(`
SELECT id FROM issues
WHERE id = '%s'
AND issue_type = 'convoy'
AND status = 'open'
`, strings.ReplaceAll(convoyID, "'", "''"))
queryCmd := exec.Command("sqlite3", dbPath, query)
out, err := queryCmd.Output()
if err != nil || strings.TrimSpace(string(out)) == "" {
return fmt.Errorf("convoy %s not found or not open", convoyID)
}
// Add tracking relation: convoy tracks the issue
trackBeadID := formatTrackBeadID(beadID)
depArgs := []string{"--no-daemon", "dep", "add", convoyID, trackBeadID, "--type=tracks"}
depCmd := exec.Command("bd", depArgs...)
depCmd.Dir = townBeads
depCmd.Stderr = os.Stderr
if err := depCmd.Run(); err != nil {
return fmt.Errorf("adding tracking relation: %w", err)
}
return nil
}
// formatTrackBeadID formats a bead ID for use in convoy tracking dependencies.
// Cross-rig beads (non-hq- prefixed) are formatted as external references
// so the bd tool can resolve them when running from HQ context.

View File

@@ -10,6 +10,7 @@ import (
"github.com/steveyegge/gastown/internal/beads"
"github.com/steveyegge/gastown/internal/constants"
"github.com/steveyegge/gastown/internal/git"
"github.com/steveyegge/gastown/internal/tmux"
"github.com/steveyegge/gastown/internal/workspace"
)
@@ -137,12 +138,18 @@ func storeDispatcherInBead(beadID, dispatcher string) error {
}
// Get the bead to preserve existing description content
showCmd := exec.Command("bd", "show", beadID, "--json")
// Use --no-daemon for consistency with other sling operations (see h-3f96b)
showCmd := exec.Command("bd", "--no-daemon", "show", beadID, "--json", "--allow-stale")
out, err := showCmd.Output()
if err != nil {
return fmt.Errorf("fetching bead: %w", err)
}
// Handle bd --no-daemon exit 0 bug: empty stdout means not found
if len(out) == 0 {
return fmt.Errorf("bead not found")
}
// Parse the bead
var issues []beads.Issue
if err := json.Unmarshal(out, &issues); err != nil {
@@ -165,8 +172,8 @@ func storeDispatcherInBead(beadID, dispatcher string) error {
// Update the description
newDesc := beads.SetAttachmentFields(issue, fields)
// Update the bead
updateCmd := exec.Command("bd", "update", beadID, "--description="+newDesc)
// Update the bead (use --no-daemon for consistency)
updateCmd := exec.Command("bd", "--no-daemon", "update", beadID, "--description="+newDesc)
updateCmd.Stderr = os.Stderr
if err := updateCmd.Run(); err != nil {
return fmt.Errorf("updating bead description: %w", err)
@@ -190,12 +197,18 @@ func storeAttachedMoleculeInBead(beadID, moleculeID string) error {
issue := &beads.Issue{}
if logPath == "" {
// Get the bead to preserve existing description content
showCmd := exec.Command("bd", "show", beadID, "--json")
// Use --no-daemon for consistency with other sling operations (see h-3f96b)
showCmd := exec.Command("bd", "--no-daemon", "show", beadID, "--json", "--allow-stale")
out, err := showCmd.Output()
if err != nil {
return fmt.Errorf("fetching bead: %w", err)
}
// Handle bd --no-daemon exit 0 bug: empty stdout means not found
if len(out) == 0 {
return fmt.Errorf("bead not found")
}
// Parse the bead
var issues []beads.Issue
if err := json.Unmarshal(out, &issues); err != nil {
@@ -225,8 +238,8 @@ func storeAttachedMoleculeInBead(beadID, moleculeID string) error {
_ = os.WriteFile(logPath, []byte(newDesc), 0644)
}
// Update the bead
updateCmd := exec.Command("bd", "update", beadID, "--description="+newDesc)
// Update the bead (use --no-daemon for consistency)
updateCmd := exec.Command("bd", "--no-daemon", "update", beadID, "--description="+newDesc)
updateCmd.Stderr = os.Stderr
if err := updateCmd.Run(); err != nil {
return fmt.Errorf("updating bead description: %w", err)
@@ -319,13 +332,13 @@ func ensureAgentReady(sessionName string) error {
}
// detectCloneRoot finds the root of the current git clone.
// Uses cached value to avoid repeated git subprocess calls.
func detectCloneRoot() (string, error) {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
out, err := cmd.Output()
root, err := git.RepoRoot()
if err != nil {
return "", fmt.Errorf("not in a git repository")
}
return strings.TrimSpace(string(out)), nil
return root, nil
}
// detectActor returns the current agent's actor string for event logging.

View File

@@ -6,6 +6,7 @@ import (
"os"
"os/signal"
"path/filepath"
"sort"
"strings"
"sync"
"syscall"
@@ -439,6 +440,55 @@ func outputStatusText(status TownStatus) error {
fmt.Println()
}
// Goals summary (top 3 stalest high-priority)
goals, _ := collectFocusItems(status.Location)
// Sort by score (highest first)
sort.Slice(goals, func(i, j int) bool {
return goals[i].Score > goals[j].Score
})
if len(goals) > 0 {
fmt.Printf("%s (%d active)\n", style.Bold.Render("GOALS"), len(goals))
// Show top 3
showCount := 3
if len(goals) < showCount {
showCount = len(goals)
}
for i := 0; i < showCount; i++ {
g := goals[i]
var indicator string
switch g.Staleness {
case "stuck":
indicator = style.Error.Render("🔴")
case "stale":
indicator = style.Warning.Render("🟡")
default:
indicator = style.Success.Render("🟢")
}
fmt.Printf(" %s P%d %s: %s\n", indicator, g.Priority, g.ID, truncateWithEllipsis(g.Title, 40))
}
if len(goals) > showCount {
fmt.Printf(" %s\n", style.Dim.Render(fmt.Sprintf("... and %d more (gt focus)", len(goals)-showCount)))
}
fmt.Println()
}
// Attention summary (blocked items, reviews)
attention := collectAttentionSummary(status.Location)
if attention.Total > 0 {
fmt.Printf("%s (%d items)\n", style.Bold.Render("ATTENTION"), attention.Total)
if attention.Blocked > 0 {
fmt.Printf(" • %d blocked issue(s)\n", attention.Blocked)
}
if attention.Reviews > 0 {
fmt.Printf(" • %d PR(s) awaiting review\n", attention.Reviews)
}
if attention.Stuck > 0 {
fmt.Printf(" • %d stuck worker(s)\n", attention.Stuck)
}
fmt.Printf(" %s\n", style.Dim.Render("→ gt attention for details"))
fmt.Println()
}
// Role icons - uses centralized emojis from constants package
roleIcons := map[string]string{
constants.RoleMayor: constants.EmojiMayor,
@@ -1232,3 +1282,36 @@ func getAgentHook(b *beads.Beads, role, agentAddress, roleType string) AgentHook
return hook
}
// AttentionSummary holds counts of items needing attention for status display.
type AttentionSummary struct {
Blocked int
Reviews int
Stuck int
Decisions int
Total int
}
// collectAttentionSummary gathers counts of items needing attention.
func collectAttentionSummary(townRoot string) AttentionSummary {
summary := AttentionSummary{}
// Count blocked items (reuse logic from attention.go)
blocked := collectBlockedItems(townRoot)
summary.Blocked = len(blocked)
// Count reviews
reviews := collectReviewItems(townRoot)
summary.Reviews = len(reviews)
// Count stuck workers
stuck := collectStuckWorkers(townRoot)
summary.Stuck = len(stuck)
// Count decisions
decisions := collectDecisionItems(townRoot)
summary.Decisions = len(decisions)
summary.Total = summary.Blocked + summary.Reviews + summary.Stuck + summary.Decisions
return summary
}

View File

@@ -115,7 +115,7 @@ func runWorkerStatusLine(t *tmux.Tmux, session, rigName, polecat, crew, issue st
// Priority 2: Fall back to GT_ISSUE env var or in_progress beads
currentWork := issue
if currentWork == "" && hookedWork == "" && session != "" {
currentWork = getCurrentWork(t, session, 40)
currentWork = getCurrentWork(t, session, identity, 40)
}
// Show hooked work (takes precedence)
@@ -171,13 +171,17 @@ func runMayorStatusLine(t *tmux.Tmux) error {
townRoot, _ = workspace.Find(paneDir)
}
// Load registered rigs to validate against
// Load registered rigs to validate against and get aliases
registeredRigs := make(map[string]bool)
rigAliases := make(map[string]string)
if townRoot != "" {
rigsConfigPath := filepath.Join(townRoot, "mayor", "rigs.json")
if rigsConfig, err := config.LoadRigsConfig(rigsConfigPath); err == nil {
for rigName := range rigsConfig.Rigs {
for rigName, entry := range rigsConfig.Rigs {
registeredRigs[rigName] = true
if entry.Alias != "" {
rigAliases[rigName] = entry.Alias
}
}
}
}
@@ -291,11 +295,16 @@ func runMayorStatusLine(t *tmux.Tmux) error {
// Create sortable rig list
type rigInfo struct {
name string
alias string
status *rigStatus
}
var rigs []rigInfo
for rigName, status := range rigStatuses {
rigs = append(rigs, rigInfo{name: rigName, status: status})
ri := rigInfo{name: rigName, status: status}
if alias, ok := rigAliases[rigName]; ok {
ri.alias = alias
}
rigs = append(rigs, ri)
}
// Sort by: 1) running state, 2) operational state, 3) alphabetical
@@ -321,9 +330,16 @@ func runMayorStatusLine(t *tmux.Tmux) error {
})
// Build display with group separators
// Limit to maxRigs to prevent statusline overflow
maxRigs := 3
var rigParts []string
var lastGroup string
displayCount := 0
for _, rig := range rigs {
if displayCount >= maxRigs {
break
}
isRunning := rig.status.hasWitness || rig.status.hasRefinery
var currentGroup string
if isRunning {
@@ -363,7 +379,19 @@ func runMayorStatusLine(t *tmux.Tmux) error {
if led == "🅿️" {
space = " "
}
rigParts = append(rigParts, led+space+rig.name)
// Use alias if available, otherwise use full name
displayName := rig.name
if rig.alias != "" {
displayName = rig.alias
}
rigParts = append(rigParts, led+space+displayName)
displayCount++
}
// Show overflow indicator if there are more rigs
if len(rigs) > maxRigs {
rigParts = append(rigParts, fmt.Sprintf("+%d", len(rigs)-maxRigs))
}
if len(rigParts) > 0 {
@@ -713,6 +741,12 @@ func getMailPreviewWithRoot(identity string, maxLen int, townRoot string) (int,
// beadsDir should be the directory containing .beads (for rig-level) or
// empty to use the town root (for town-level roles).
func getHookedWork(identity string, maxLen int, beadsDir string) string {
// Guard: identity must be non-empty to filter by assignee.
// Without identity, the query would return ALL hooked beads regardless of assignee.
if identity == "" {
return ""
}
// If no beadsDir specified, use town root
if beadsDir == "" {
var err error
@@ -743,9 +777,15 @@ func getHookedWork(identity string, maxLen int, beadsDir string) string {
return display
}
// getCurrentWork returns a truncated title of the first in_progress issue.
// getCurrentWork returns a truncated title of the first in_progress issue assigned to this agent.
// Uses the pane's working directory to find the beads.
func getCurrentWork(t *tmux.Tmux, session string, maxLen int) string {
func getCurrentWork(t *tmux.Tmux, session, identity string, maxLen int) string {
// Guard: identity must be non-empty to filter by assignee.
// Without identity, the query would return ALL in_progress beads regardless of assignee.
if identity == "" {
return ""
}
// Get the pane's working directory
workDir, err := t.GetPaneWorkDir(session)
if err != nil || workDir == "" {
@@ -758,10 +798,11 @@ func getCurrentWork(t *tmux.Tmux, session string, maxLen int) string {
return ""
}
// Query beads for in_progress issues
// Query beads for in_progress issues assigned to this agent
b := beads.New(workDir)
issues, err := b.List(beads.ListOptions{
Status: "in_progress",
Assignee: identity,
Priority: -1,
})
if err != nil || len(issues) == 0 {

View File

@@ -49,36 +49,48 @@ func AgentEnv(cfg AgentEnvConfig) map[string]string {
case "mayor":
env["BD_ACTOR"] = "mayor"
env["GIT_AUTHOR_NAME"] = "mayor"
env["GIT_AUTHOR_EMAIL"] = "mayor@gastown.local"
case "deacon":
env["BD_ACTOR"] = "deacon"
env["GIT_AUTHOR_NAME"] = "deacon"
env["GIT_AUTHOR_EMAIL"] = "deacon@gastown.local"
case "boot":
env["BD_ACTOR"] = "deacon-boot"
env["GIT_AUTHOR_NAME"] = "boot"
env["GIT_AUTHOR_EMAIL"] = "boot@gastown.local"
case "dog":
env["BD_ACTOR"] = fmt.Sprintf("deacon/dogs/%s", cfg.AgentName)
env["GIT_AUTHOR_NAME"] = fmt.Sprintf("dog-%s", cfg.AgentName)
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("dog-%s@gastown.local", cfg.AgentName)
case "witness":
env["GT_RIG"] = cfg.Rig
env["BD_ACTOR"] = fmt.Sprintf("%s/witness", cfg.Rig)
env["GIT_AUTHOR_NAME"] = fmt.Sprintf("%s/witness", cfg.Rig)
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-witness@gastown.local", cfg.Rig)
case "refinery":
env["GT_RIG"] = cfg.Rig
env["BD_ACTOR"] = fmt.Sprintf("%s/refinery", cfg.Rig)
env["GIT_AUTHOR_NAME"] = fmt.Sprintf("%s/refinery", cfg.Rig)
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-refinery@gastown.local", cfg.Rig)
case "polecat":
env["GT_RIG"] = cfg.Rig
env["GT_POLECAT"] = cfg.AgentName
env["BD_ACTOR"] = fmt.Sprintf("%s/polecats/%s", cfg.Rig, cfg.AgentName)
env["GIT_AUTHOR_NAME"] = cfg.AgentName
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-polecat-%s@gastown.local", cfg.Rig, cfg.AgentName)
case "crew":
env["GT_RIG"] = cfg.Rig
env["GT_CREW"] = cfg.AgentName
env["BD_ACTOR"] = fmt.Sprintf("%s/crew/%s", cfg.Rig, cfg.AgentName)
env["GIT_AUTHOR_NAME"] = cfg.AgentName
env["GIT_AUTHOR_EMAIL"] = fmt.Sprintf("%s-crew-%s@gastown.local", cfg.Rig, cfg.AgentName)
}
// Only set GT_ROOT if provided
@@ -121,7 +133,7 @@ func AgentEnvSimple(role, rig, agentName string) map[string]string {
// ShellQuote returns a shell-safe quoted string.
// Values containing special characters are wrapped in single quotes.
// Single quotes within the value are escaped using the '\'' idiom.
// Single quotes within the value are escaped using the '\'' idiom.
func ShellQuote(s string) string {
// Check if quoting is needed (contains shell special chars)
needsQuoting := false
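The `'\''` idiom referenced above works by closing the single-quoted string, emitting a backslash-escaped quote, and reopening the string. A minimal sketch of just the escaping step (not the actual ShellQuote implementation, which also skips quoting entirely when no special characters are present):

```go
package main

import (
	"fmt"
	"strings"
)

// quote wraps s in single quotes, escaping embedded single quotes
// with the '\'' idiom: close the quote, emit \', then reopen.
func quote(s string) string {
	return "'" + strings.ReplaceAll(s, "'", `'\''`) + "'"
}

func main() {
	fmt.Println(quote("it's a test")) // 'it'\''s a test'
}
```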

View File

@@ -14,6 +14,7 @@ func TestAgentEnv_Mayor(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "mayor")
assertEnv(t, env, "BD_ACTOR", "mayor")
assertEnv(t, env, "GIT_AUTHOR_NAME", "mayor")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "mayor@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")
@@ -31,6 +32,7 @@ func TestAgentEnv_Witness(t *testing.T) {
assertEnv(t, env, "GT_RIG", "myrig")
assertEnv(t, env, "BD_ACTOR", "myrig/witness")
assertEnv(t, env, "GIT_AUTHOR_NAME", "myrig/witness")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-witness@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
}
@@ -49,6 +51,7 @@ func TestAgentEnv_Polecat(t *testing.T) {
assertEnv(t, env, "GT_POLECAT", "Toast")
assertEnv(t, env, "BD_ACTOR", "myrig/polecats/Toast")
assertEnv(t, env, "GIT_AUTHOR_NAME", "Toast")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-polecat-Toast@gastown.local")
assertEnv(t, env, "BEADS_AGENT_NAME", "myrig/Toast")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -68,6 +71,7 @@ func TestAgentEnv_Crew(t *testing.T) {
assertEnv(t, env, "GT_CREW", "emma")
assertEnv(t, env, "BD_ACTOR", "myrig/crew/emma")
assertEnv(t, env, "GIT_AUTHOR_NAME", "emma")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-crew-emma@gastown.local")
assertEnv(t, env, "BEADS_AGENT_NAME", "myrig/emma")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -85,6 +89,7 @@ func TestAgentEnv_Refinery(t *testing.T) {
assertEnv(t, env, "GT_RIG", "myrig")
assertEnv(t, env, "BD_ACTOR", "myrig/refinery")
assertEnv(t, env, "GIT_AUTHOR_NAME", "myrig/refinery")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "myrig-refinery@gastown.local")
assertEnv(t, env, "BEADS_NO_DAEMON", "1")
}
@@ -98,6 +103,7 @@ func TestAgentEnv_Deacon(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "deacon")
assertEnv(t, env, "BD_ACTOR", "deacon")
assertEnv(t, env, "GIT_AUTHOR_NAME", "deacon")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "deacon@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")
@@ -113,6 +119,24 @@ func TestAgentEnv_Boot(t *testing.T) {
assertEnv(t, env, "GT_ROLE", "boot")
assertEnv(t, env, "BD_ACTOR", "deacon-boot")
assertEnv(t, env, "GIT_AUTHOR_NAME", "boot")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "boot@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")
}
func TestAgentEnv_Dog(t *testing.T) {
t.Parallel()
env := AgentEnv(AgentEnvConfig{
Role: "dog",
AgentName: "alpha",
TownRoot: "/town",
})
assertEnv(t, env, "GT_ROLE", "dog")
assertEnv(t, env, "BD_ACTOR", "deacon/dogs/alpha")
assertEnv(t, env, "GIT_AUTHOR_NAME", "dog-alpha")
assertEnv(t, env, "GIT_AUTHOR_EMAIL", "dog-alpha@gastown.local")
assertEnv(t, env, "GT_ROOT", "/town")
assertNotSet(t, env, "GT_RIG")
assertNotSet(t, env, "BEADS_NO_DAEMON")

View File

@@ -1457,6 +1457,17 @@ func BuildPolecatStartupCommandWithAgentOverride(rigName, polecatName, rigPath,
return BuildStartupCommandWithAgentOverride(envVars, rigPath, prompt, agentOverride)
}
// BuildDogStartupCommand builds the startup command for a deacon dog.
// Sets GT_ROLE, BD_ACTOR, GIT_AUTHOR_NAME, and GT_ROOT.
func BuildDogStartupCommand(dogName, townRoot, dogPath, prompt string) string {
envVars := AgentEnv(AgentEnvConfig{
Role: "dog",
AgentName: dogName,
TownRoot: townRoot,
})
return BuildStartupCommand(envVars, dogPath, prompt)
}
// BuildCrewStartupCommand builds the startup command for a crew member.
// Sets GT_ROLE, GT_RIG, GT_CREW, BD_ACTOR, GIT_AUTHOR_NAME, and GT_ROOT.
func BuildCrewStartupCommand(rigName, crewName, rigPath, prompt string) string {

View File

@@ -163,10 +163,12 @@ type RigsConfig struct {
// RigEntry represents a single rig in the registry.
type RigEntry struct {
GitURL string `json:"git_url"`
LocalRepo string `json:"local_repo,omitempty"`
AddedAt time.Time `json:"added_at"`
BeadsConfig *BeadsConfig `json:"beads,omitempty"`
GitURL string `json:"git_url"`
LocalRepo string `json:"local_repo,omitempty"`
AddedAt time.Time `json:"added_at"`
BeadsConfig *BeadsConfig `json:"beads,omitempty"`
Crew *CrewRegistryConfig `json:"crew,omitempty"`
Alias string `json:"alias,omitempty"` // Short display name for statusline
}
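A rigs.json entry using these fields might look like the following sketch (URL and timestamp are illustrative; the rig name and `gcr` alias come from the commit message):

```json
{
  "rigs": {
    "google_cookie_retrieval": {
      "git_url": "git@example.com:org/google_cookie_retrieval.git",
      "added_at": "2026-01-25T14:43:42-08:00",
      "alias": "gcr"
    }
  }
}
```

With `alias` set, the mayor statusline displays `gcr` instead of the full rig name.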
// BeadsConfig represents beads configuration for a rig.
@@ -175,6 +177,18 @@ type BeadsConfig struct {
Prefix string `json:"prefix"` // issue prefix
}
// CrewRegistryConfig represents crew configuration for a rig in rigs.json.
// This enables cross-machine sync of crew member definitions.
type CrewRegistryConfig struct {
// Theme selects the naming theme for crew members (e.g., "mad-max", "minerals").
// Used when displaying crew member names and for consistency across machines.
Theme string `json:"theme,omitempty"`
// Members lists the crew member names to create on this rig.
// Use `gt crew sync` to create missing members from this list.
Members []string `json:"members,omitempty"`
}
// CurrentTownVersion is the current schema version for TownConfig.
// Version 2: Added Owner and PublicName fields for federation identity.
const CurrentTownVersion = 2

View File

@@ -10,8 +10,8 @@ const (
ShutdownNotifyDelay = 500 * time.Millisecond
// ClaudeStartTimeout is how long to wait for Claude to start in a session.
// Increased to 60s because Claude can take 30s+ on slower machines.
ClaudeStartTimeout = 60 * time.Second
// Increased to 120s because Claude can take 60s+ on slower machines or under load.
ClaudeStartTimeout = 120 * time.Second
// ShellReadyTimeout is how long to wait for shell prompt after command.
ShellReadyTimeout = 5 * time.Second

View File

@@ -335,8 +335,8 @@ func (c *ClaudeSettingsCheck) checkSettings(path, _ string) []string {
// All templates should have:
// 1. enabledPlugins
// 2. PATH export in hooks
// 3. Stop hook with gt costs record (for autonomous)
// 4. gt nudge deacon session-started in SessionStart
// 3. gt nudge deacon session-started in SessionStart
// Note: Stop hook was removed (gt-quoj) - cost tracking is disabled
// Check enabledPlugins
if _, ok := actual["enabledPlugins"]; !ok {
@@ -359,10 +359,9 @@ func (c *ClaudeSettingsCheck) checkSettings(path, _ string) []string {
missing = append(missing, "deacon nudge")
}
// Check Stop hook exists with gt costs record (for all roles)
if !c.hookHasPattern(hooks, "Stop", "gt costs record") {
missing = append(missing, "Stop hook")
}
// Note: Stop hook with gt costs record was removed in gt-quoj.
// Cost tracking is disabled - Claude Code doesn't expose session costs.
// The Stop hook was causing 30s timeouts on session stop with no benefit.
return missing
}

View File

@@ -56,17 +56,6 @@ func createValidSettings(t *testing.T, path string) {
},
},
},
"Stop": []any{
map[string]any{
"matcher": "**",
"hooks": []any{
map[string]any{
"type": "command",
"command": "gt costs record --session $CLAUDE_SESSION_ID",
},
},
},
},
},
}
@@ -106,17 +95,6 @@ func createStaleSettings(t *testing.T, path string, missingElements ...string) {
},
},
},
"Stop": []any{
map[string]any{
"matcher": "**",
"hooks": []any{
map[string]any{
"type": "command",
"command": "gt costs record --session $CLAUDE_SESSION_ID",
},
},
},
},
},
}
@@ -156,9 +134,6 @@ func createStaleSettings(t *testing.T, path string, missingElements ...string) {
}
}
hookObj["hooks"] = filtered
case "Stop":
hooks := settings["hooks"].(map[string]any)
delete(hooks, "Stop")
}
}
@@ -374,33 +349,6 @@ func TestClaudeSettingsCheck_MissingDeaconNudge(t *testing.T) {
}
}
func TestClaudeSettingsCheck_MissingStopHook(t *testing.T) {
tmpDir := t.TempDir()
// Create stale settings missing Stop hook (at correct location)
mayorSettings := filepath.Join(tmpDir, "mayor", ".claude", "settings.json")
createStaleSettings(t, mayorSettings, "Stop")
check := NewClaudeSettingsCheck()
ctx := &CheckContext{TownRoot: tmpDir}
result := check.Run(ctx)
if result.Status != StatusError {
t.Errorf("expected StatusError for missing Stop hook, got %v", result.Status)
}
found := false
for _, d := range result.Details {
if strings.Contains(d, "Stop hook") {
found = true
break
}
}
if !found {
t.Errorf("expected details to mention Stop hook, got %v", result.Details)
}
}
func TestClaudeSettingsCheck_WrongLocationWitness(t *testing.T) {
tmpDir := t.TempDir()
rigName := "testrig"
@@ -468,7 +416,7 @@ func TestClaudeSettingsCheck_MultipleStaleFiles(t *testing.T) {
createStaleSettings(t, mayorSettings, "PATH")
deaconSettings := filepath.Join(tmpDir, "deacon", ".claude", "settings.json")
createStaleSettings(t, deaconSettings, "Stop")
createStaleSettings(t, deaconSettings, "deacon-nudge")
// Settings inside git repo (witness/rig/.claude/) are wrong location
witnessWrong := filepath.Join(tmpDir, rigName, "witness", "rig", ".claude", "settings.json")
@@ -1037,8 +985,7 @@ func TestClaudeSettingsCheck_TownRootSettingsWarnsInsteadOfKilling(t *testing.T)
"env": {"PATH": "/usr/bin"},
"enabledPlugins": ["claude-code-expert"],
"hooks": {
"SessionStart": [{"matcher": "", "hooks": [{"type": "command", "command": "gt prime"}]}],
"Stop": [{"matcher": "", "hooks": [{"type": "command", "command": "gt handoff"}]}]
"SessionStart": [{"matcher": "", "hooks": [{"type": "command", "command": "gt prime"}]}]
}
}`
if err := os.WriteFile(staleTownRootSettings, []byte(settingsContent), 0644); err != nil {

View File

@@ -1,29 +1,45 @@
description = """
Mayor's daemon patrol loop.
Mayor's daemon patrol loop - CONTINUOUS EXECUTION.
The Deacon is the Mayor's background process that runs continuously, handling callbacks, monitoring rig health, and performing cleanup. Each patrol cycle runs these steps in sequence, then loops or exits.
The Deacon is the Mayor's background process that runs CONTINUOUSLY in a loop:
1. Execute all patrol steps (inbox-check through context-check)
2. Wait for activity OR timeout (15-minute max)
3. Create new patrol wisp and repeat from step 1
**This is a continuous loop, not a one-shot execution.**
## Patrol Loop Flow
```
START → inbox-check → [all patrol steps] → loop-or-exit
await-signal (wait for activity)
create new wisp → START
```
## Plugin Dispatch
The plugin-run step scans $GT_ROOT/plugins/ for plugins with open gates and
dispatches them to dogs. With a 15-minute max backoff, plugins with 15m
cooldown gates will be checked at least once per interval.
## Idle Town Principle
**The Deacon should be silent/invisible when the town is healthy and idle.**
- Skip HEALTH_CHECK nudges when no active work exists
- Sleep 60+ seconds between patrol cycles (longer when idle)
- Let the feed subscription wake agents on actual events
- The daemon (10-minute heartbeat) is the safety net for dead sessions
This prevents flooding idle agents with health checks every few seconds.
- Sleep via await-signal (exponential backoff up to 15 min)
- Let the feed subscription wake on actual events
- The daemon is the safety net for dead sessions
## Second-Order Monitoring
Witnesses send WITNESS_PING messages to verify the Deacon is alive. This
prevents the "who watches the watchers" problem - if the Deacon dies,
Witnesses detect it and escalate to the Mayor.
The Deacon's agent bead last_activity timestamp is updated during each patrol
cycle. Witnesses check this timestamp to verify health."""
Witnesses detect it and escalate to the Mayor."""
formula = "mol-deacon-patrol"
version = 8
version = 9
[[steps]]
id = "inbox-check"
@@ -579,29 +595,48 @@ investigate why the Witness isn't cleaning up properly."""
[[steps]]
id = "plugin-run"
title = "Execute registered plugins"
title = "Scan and dispatch plugins"
needs = ["zombie-scan"]
description = """
Execute registered plugins.
Scan plugins and dispatch any with open gates to dogs.
Scan $GT_ROOT/plugins/ for plugin directories. Each plugin has a plugin.md with TOML frontmatter defining its gate (when to run) and instructions (what to do).
**Step 1: List plugins and check gates**
```bash
gt plugin list
```
See docs/deacon-plugins.md for full documentation.
For each plugin, check if its gate is open:
- **cooldown**: Time since last run (e.g., 15m) - check state.json
- **cron**: Schedule-based (e.g., "0 9 * * *")
- **condition**: Metric threshold (e.g., wisp count > 50)
- **event**: Trigger-based (e.g., startup, heartbeat)
Gate types:
- cooldown: Time since last run (e.g., 24h)
- cron: Schedule-based (e.g., "0 9 * * *")
- condition: Metric threshold (e.g., wisp count > 50)
- event: Trigger-based (e.g., startup, heartbeat)
**Step 2: Dispatch plugins with open gates**
```bash
# For each plugin with an open gate:
gt dog dispatch --plugin <plugin-name>
```
For each plugin:
1. Read plugin.md frontmatter to check gate
2. Compare against state.json (last run, etc.)
3. If gate is open, execute the plugin
This sends the plugin to an idle dog for execution. The dog will:
1. Execute the plugin instructions from plugin.md
2. Send DOG_DONE mail when complete (processed in next patrol's inbox-check)
Plugins marked parallel: true can run concurrently using Task tool subagents. Sequential plugins run one at a time in directory order.
**Step 3: Track dispatched plugins**
Record in state.json which plugins were dispatched this cycle:
```json
{
"plugins_dispatched": ["scout-patrol"],
"last_plugin_run": "2026-01-23T13:45:00Z"
}
```
Skip this step if $GT_ROOT/plugins/ does not exist or is empty."""
**If no plugins have open gates:**
Skip dispatch - all plugins are within their cooldown/schedule.
**If no dogs available:**
Log warning and skip dispatch this cycle. Dog pool maintenance step will spawn dogs.
See docs/deacon-plugins.md for full documentation."""
[[steps]]
id = "dog-pool-maintenance"
@@ -928,57 +963,89 @@ This enables the Deacon to burn and respawn cleanly."""
[[steps]]
id = "loop-or-exit"
title = "Burn and respawn or loop"
title = "Continuous patrol loop"
needs = ["context-check"]
description = """
Burn and let daemon respawn, or exit if context high.
Continue the patrol loop or exit for context refresh.
Decision point at end of patrol cycle:
**CRITICAL**: This is where the continuous patrol loop happens. The Deacon MUST
loop back and start a new patrol cycle. Do NOT wait for external triggers.
If context is LOW:
Use await-signal with exponential backoff to wait for activity:
## Decision Matrix
1. **Check context usage**: `gt context --usage`
2. **If context HIGH (>80%)**: Exit cleanly, daemon respawns fresh session
3. **If context LOW**: Continue to patrol loop below
## The Continuous Patrol Loop
When context is low, execute this loop:
```bash
# Step 1: Squash current patrol wisp (clean up)
gt mol squash
# Step 2: Wait for activity OR timeout (15-minute default)
gt mol step await-signal --agent-bead hq-deacon \
--backoff-base 60s --backoff-mult 2 --backoff-max 10m
--backoff-base 60s --backoff-mult 2 --backoff-max 15m
# Step 3: Reset idle counter if activity was detected
gt agents state hq-deacon --set idle=0
# Step 4: Create new patrol wisp
WISP_ID=$(bd mol wisp mol-deacon-patrol 2>&1 | grep -o 'hq-[a-z0-9]*')
# Step 5: Hook it and start executing
gt hook $WISP_ID
```
This command:
After hooking, immediately begin executing the new wisp from its first step
(inbox-check). The wisp is now on your hook, so just continue with patrol.
**IMPORTANT**: After await-signal returns (either signal or timeout), you MUST:
1. Squash the current wisp
2. Create a new patrol wisp
3. Hook it
4. Start executing from inbox-check
This IS the loop. There is no "return to inbox-check" command - you create a new
wisp and that wisp starts fresh from inbox-check.
## await-signal Behavior
The await-signal command:
1. Subscribes to `bd activity --follow` (beads activity feed)
2. Returns IMMEDIATELY when any beads activity occurs
3. If no activity, times out with exponential backoff:
- First timeout: 60s
- Second timeout: 120s
- Third timeout: 240s
- ...capped at 10 minutes max
- Third timeout: 240s (4 min)
- ...capped at 15 minutes max
4. Tracks `idle:N` label on hq-deacon bead for backoff state
**On signal received** (activity detected):
Reset the idle counter and start next patrol cycle:
```bash
gt agent state hq-deacon --set idle=0
```
Then return to inbox-check step.
**On timeout** (no activity):
The idle counter was auto-incremented. Continue to next patrol cycle
(the longer backoff will apply next time). Return to inbox-check step.
**Why this approach?**
- Any `gt` or `bd` command triggers beads activity, waking the Deacon
- Idle towns let the Deacon sleep longer (up to 10 min between patrols)
- Idle towns let the Deacon sleep longer (up to 15 min between patrols)
- Active work wakes the Deacon immediately via the feed
- No polling or fixed sleep intervals
- No fixed polling intervals - event-driven wake
If context is HIGH:
- Write state to persistent storage
- Exit cleanly
- Let the daemon orchestrator respawn a fresh Deacon
## Plugin Dispatch Timing
The daemon ensures Deacon is always running:
The plugin-run step (earlier in patrol) handles plugin dispatch:
- Scans $GT_ROOT/plugins/ for plugins with open gates
- Dispatches to dogs via `gt dog dispatch --plugin <name>`
- Dogs send DOG_DONE when complete (processed in next patrol's inbox-check)
With a 15-minute max backoff, plugins with 15m cooldown gates will be checked
at least once per interval when idle.
## Exit Path (High Context)
If context is HIGH (>80%):
```bash
# Daemon respawns on exit
gt daemon status
# Exit cleanly - daemon will respawn with fresh context
exit 0
```
This enables infinite patrol duration via context-aware respawning."""
The daemon ensures Deacon is always running. Exiting is safe - you'll be
respawned with fresh context and the patrol loop continues."""

View File

@@ -9,10 +9,13 @@ opportunities. The output is a set of beads capturing actionable findings.
You are a self-cleaning worker. You:
1. Receive work via your hook (pinned molecule + review scope)
2. Work through molecule steps using `bd ready` / `bd close <step>`
2. Work through molecule steps using `bd ready` / `gt mol step done <step>`
3. Complete and self-clean via `gt done` (submit findings + nuke yourself)
4. You are GONE - your findings are recorded in beads
**Fresh context:** Each `gt mol step done` respawns your session with fresh context.
This ensures each review step gets unbiased attention.
**Self-cleaning:** When you run `gt done`, you submit your findings, nuke your
sandbox, and exit. There is no idle state. Done means gone.

View File

@@ -9,10 +9,13 @@ standards, then approves, requests changes, or files followup beads.
You are a self-cleaning worker. You:
1. Receive work via your hook (pinned molecule + PR reference)
2. Work through molecule steps using `bd ready` / `bd close <step>`
2. Work through molecule steps using `bd ready` / `gt mol step done <step>`
3. Complete and self-clean via `gt done` (submit findings + nuke yourself)
4. You are GONE - your review is recorded in beads
**Fresh context:** Each `gt mol step done` respawns your session with fresh context.
This ensures each review step gets unbiased attention.
**Self-cleaning:** When you run `gt done`, you submit your findings, nuke your
sandbox, and exit. There is no idle state. Done means gone.

View File

@@ -9,10 +9,13 @@ crash after any step and resume from the last completed step.
You are a self-cleaning worker. You:
1. Receive work via your hook (pinned molecule + issue)
2. Work through molecule steps using `bd ready` / `bd close <step>`
2. Work through molecule steps using `bd ready` / `gt mol step done <step>`
3. Complete and self-clean via `gt done` (submit + nuke yourself)
4. You are GONE - Refinery merges from MQ
**Fresh context:** Each `gt mol step done` respawns your session with fresh context.
This ensures each step gets unbiased attention.
**Self-cleaning:** When you run `gt done`, you push your work, submit to MQ,
nuke your sandbox, and exit. There is no idle state. Done means gone.

View File

@@ -9,8 +9,49 @@ import (
"os/exec"
"path/filepath"
"strings"
"sync"
)
// Cached repo root for the current process.
// Since CLI commands are short-lived and the working directory doesn't change
// during a single invocation, caching this avoids repeated git subprocess calls
// that add ~50ms each and contend on .git/index.lock.
var (
cachedRepoRoot string
cachedRepoRootOnce sync.Once
cachedRepoRootErr error
)
// RepoRoot returns the root directory of the git repository containing the
// current working directory. The result is cached for the lifetime of the process.
// This avoids repeated git rev-parse calls that are expensive (~50ms each) and
// can cause lock contention when multiple agents are running.
func RepoRoot() (string, error) {
cachedRepoRootOnce.Do(func() {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
out, err := cmd.Output()
if err != nil {
cachedRepoRootErr = err
return
}
cachedRepoRoot = strings.TrimSpace(string(out))
})
return cachedRepoRoot, cachedRepoRootErr
}
// RepoRootFrom returns the root directory of the git repository containing the
// specified path. Unlike RepoRoot(), this is not cached because it depends on
// the input path. Use RepoRoot() when checking the current working directory.
func RepoRootFrom(path string) (string, error) {
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
cmd.Dir = path
out, err := cmd.Output()
if err != nil {
return "", err
}
return strings.TrimSpace(string(out)), nil
}
// GitError contains raw output from a git command for agent observation.
// ZFC: Callers observe the raw output and decide what to do.
// The error interface methods provide human-readable messages, but agents

View File

@@ -9,7 +9,6 @@ import (
"path/filepath"
"regexp"
"sort"
"sync"
"time"
"github.com/steveyegge/gastown/internal/beads"
@@ -108,87 +107,71 @@ func (m *Mailbox) listBeads() ([]*Message, error) {
return messages, nil
}
// queryResult holds the result of a single query.
type queryResult struct {
messages []*Message
err error
}
// listFromDir queries messages from a beads directory.
// Returns messages where identity is the assignee OR a CC recipient.
// Includes both open and hooked messages (hooked = auto-assigned handoff mail).
// If all queries fail, returns the last error encountered.
// Queries are parallelized for performance (~6x speedup).
// Uses a single consolidated query for performance (<100ms vs 10s+ for parallel queries).
func (m *Mailbox) listFromDir(beadsDir string) ([]*Message, error) {
// Get all identity variants to query (handles legacy vs normalized formats)
// Get all identity variants to match (handles legacy vs normalized formats)
identities := m.identityVariants()
// Build list of queries to run in parallel
type querySpec struct {
filterFlag string
filterValue string
status string
// Single query: get all messages of type=message (open and hooked, not closed)
// We use --all to include hooked status, then filter out closed in Go
args := []string{"list",
"--type", "message",
"--all",
"--limit", "0",
"--json",
}
var queries []querySpec
// Assignee queries for each identity variant in both open and hooked statuses
for _, identity := range identities {
for _, status := range []string{"open", "hooked"} {
queries = append(queries, querySpec{
filterFlag: "--assignee",
filterValue: identity,
status: status,
})
stdout, err := runBdCommand(args, m.workDir, beadsDir)
if err != nil {
return nil, fmt.Errorf("mailbox query failed: %w", err)
}
// Parse JSON output
var beadsMsgs []BeadsMessage
if err := json.Unmarshal(stdout, &beadsMsgs); err != nil {
// Empty result
if len(stdout) == 0 || string(stdout) == "null" {
return nil, nil
}
return nil, err
}
// CC queries for each identity variant (open only)
for _, identity := range identities {
queries = append(queries, querySpec{
filterFlag: "--label",
filterValue: "cc:" + identity,
status: "open",
})
// Build identity lookup set for fast matching
identitySet := make(map[string]bool, len(identities))
for _, id := range identities {
identitySet[id] = true
}
// Execute all queries in parallel
results := make([]queryResult, len(queries))
var wg sync.WaitGroup
wg.Add(len(queries))
for i, q := range queries {
go func(idx int, spec querySpec) {
defer wg.Done()
msgs, err := m.queryMessages(beadsDir, spec.filterFlag, spec.filterValue, spec.status)
results[idx] = queryResult{messages: msgs, err: err}
}(i, q)
}
wg.Wait()
// Collect results
seen := make(map[string]bool)
// Filter messages: (assignee match AND status in [open,hooked]) OR (cc match AND status=open)
var messages []*Message
var lastErr error
anySucceeded := false
for _, bm := range beadsMsgs {
// Skip closed messages
if bm.Status == "closed" {
continue
}
for _, r := range results {
if r.err != nil {
lastErr = r.err
} else {
anySucceeded = true
for _, msg := range r.messages {
if !seen[msg.ID] {
seen[msg.ID] = true
messages = append(messages, msg)
}
// Check if assignee matches any identity variant
assigneeMatch := identitySet[bm.Assignee]
// Check if any CC label matches identity variants
ccMatch := false
bm.ParseLabels()
for _, cc := range bm.GetCC() {
if identitySet[cc] {
ccMatch = true
break
}
}
}
// If ALL queries failed, return the last error
if !anySucceeded && lastErr != nil {
return nil, fmt.Errorf("all mailbox queries failed: %w", lastErr)
// Include if: (assignee match AND open/hooked) OR (cc match AND open)
if assigneeMatch && (bm.Status == "open" || bm.Status == "hooked") {
messages = append(messages, bm.ToMessage())
} else if ccMatch && bm.Status == "open" {
messages = append(messages, bm.ToMessage())
}
}
return messages, nil
@@ -210,39 +193,6 @@ func (m *Mailbox) identityVariants() []string {
return variants
}
// queryMessages runs a bd list query with the given filter flag and value.
func (m *Mailbox) queryMessages(beadsDir, filterFlag, filterValue, status string) ([]*Message, error) {
	args := []string{"list",
		"--type", "message",
		filterFlag, filterValue,
		"--status", status,
		"--json",
	}
	stdout, err := runBdCommand(args, m.workDir, beadsDir)
	if err != nil {
		return nil, err
	}
	// Parse JSON output
	var beadsMsgs []BeadsMessage
	if err := json.Unmarshal(stdout, &beadsMsgs); err != nil {
		// Empty inbox returns empty array or nothing
		if len(stdout) == 0 || string(stdout) == "null" {
			return nil, nil
		}
		return nil, err
	}
	// Convert to GGT messages - wisp status comes from beads issue.wisp field
	var messages []*Message
	for _, bm := range beadsMsgs {
		messages = append(messages, bm.ToMessage())
	}
	return messages, nil
}
func (m *Mailbox) listLegacy() ([]*Message, error) {
	file, err := os.Open(m.path)
	if err != nil {

View File

@@ -106,6 +106,122 @@ You are a **crew worker** - the overseer's (human's) personal workspace within t
**Key difference from polecats**: No one is watching you. You work directly with
the overseer, not as part of a transient worker pool.
### Crew Role: Goal Owner
You are a **goal owner** and **coordinator**. When the Mayor assigns you a goal
(an epic), you become its long-term owner and liaison.
**The Goal Ownership Pattern:**
1. **Receive goal assignment** - Mayor assigns you an epic/goal
2. **Gather requirements** - Discuss with Mayor (the overseer) to understand:
- What does "done" look like?
- What constraints apply (time, dependencies, approach)?
- What decisions need their input vs. your judgment?
3. **Own the goal long-term** - You are THE person responsible for this outcome
4. **Decompose into tasks** - Break the goal into polecat-sized pieces
5. **Dispatch to polecats** - Sling work to executors
6. **Coordinate completion** - Track progress, handle blockers, ensure quality
**You are a goal-specific mayor** - you own outcomes for your assigned goals,
achieving them through delegation and coordination.
### The Coordination Loop
Your day-to-day work:
1. **Research** - Understand the codebase and where changes belong
2. **Decompose** - Break goals into polecat-sized tasks
3. **Sling** - Dispatch implementation work to polecats
4. **Review** - Coordinate results, handle blockers, ensure quality
**Polecats execute. You think and coordinate.**
### When to Implement Yourself
Not everything needs delegation. Implement directly when:
- **Trivial fixes** - Typos, one-liners, obvious corrections
- **Exploratory spikes** - You need a tight feedback loop to understand the problem
- **High decomposition overhead** - Filing the bead would take longer than the fix
**Rule of thumb**: If explaining the task to a polecat takes longer than doing it,
just do it yourself.
## Delegating Work
Crew members can delegate work to polecats or other crew members. Before delegating,
think carefully about whether the task requires execution or judgment.
### Delegation Checklist
Before slinging work to a polecat:
1. **Is this execution or thinking?**
- Execution (clear spec, known approach) → Polecat
- Thinking (research, design, judgment calls) → Crew or handle yourself
2. **Include mail-back instruction** in sling message:
```bash
gt sling <bead> <target> -m "When complete, mail {{ .RigName }}/crew/{{ .Polecat }} with findings before gt done"
```
3. **Note convoy IDs** to check progress later
### Polecat vs Crew Decision Table
| Task Type | Delegate To | Why |
|-----------|-------------|-----|
| Implement from spec | Polecat | Pure execution, no judgment needed |
| Batch N similar items | N Polecats | Parallelizable, independent work |
| Research/investigation | Crew | Requires judgment, may pivot |
| Design decisions | Crew | Needs context and trade-off analysis |
| Code review | Crew | Requires nuanced feedback |
| Quick fix (<15 min) | Do it yourself | Overhead of delegation exceeds work |
### Sling Pattern
```bash
# Standard delegation with callback
gt sling <bead-id> <rig>/polecats -m "When complete, mail {{ .RigName }}/crew/{{ .Polecat }} with findings before gt done"
# Delegation to specific polecat
gt sling <bead-id> <rig>/polecats/<name> -m "Mail back when done"
# Delegation to another crew member
gt sling <bead-id> <rig>/crew/<name> -m "Please review and let me know your thoughts"
```
### ⚠️ Completion Notification Gap
**Known limitation**: Polecats run `gt done` and exit without notifying the delegating
agent. This means:
- You must **actively check** convoy progress
- Mail-back instructions in sling message are the workaround
- The polecat must explicitly mail you before `gt done`
This is a known workflow gap (see sc-g7bl3). Until fixed, always include explicit
mail-back instructions when delegating.
### Escalation Protocol
When stuck on delegated work or blocked:
1. **Try for 15-30 minutes** - Don't spin longer without action
2. **Mail mayor with context**:
```bash
gt mail send mayor/ -s "BLOCKED: <brief issue>" -m "
Issue: <bead-id>
Problem: <what's blocking>
Tried: <what you attempted>
Question: <what you need decided>"
```
3. **If completely blocked**, use `gt done --status=ESCALATED` to exit cleanly
**Don't guess when uncertain.** Escalating early is better than wasting hours or
making bad decisions.
## Gas Town Architecture
Gas Town is a multi-agent workspace manager:

View File

@@ -220,7 +220,7 @@ Use this format:
- Brief description of what's happening
- Box width ~65 chars
### End of Patrol Cycle
### End of Patrol Cycle - CONTINUOUS LOOP
At the end of each patrol cycle, print a summary banner:
@@ -231,21 +231,30 @@ At the end of each patrol cycle, print a summary banner:
═══════════════════════════════════════════════════════════════
```
Then squash and decide:
```bash
# Squash the wisp to a digest
bd mol squash <wisp-id> --summary="Patrol complete: checked inbox, scanned health, no issues"

# Option A: Loop (low context)
bd mol wisp mol-deacon-patrol
bd update <wisp-id> --status=hooked --assignee=deacon
# Continue to first step...

# Option B: Exit (high context)
# Just exit - daemon will respawn with fresh context
```
**CRITICAL**: This is a CONTINUOUS loop. You MUST loop back after each cycle.
```bash
# Step 1: Squash the wisp
gt mol squash

# Step 2: Wait for activity OR timeout (15-minute max)
gt mol step await-signal --agent-bead hq-deacon \
  --backoff-base 60s --backoff-mult 2 --backoff-max 15m

# Step 3: Reset idle counter
gt agents state hq-deacon --set idle=0

# Step 4: Create new patrol wisp and hook it
WISP_ID=$(bd mol wisp mol-deacon-patrol 2>&1 | grep -o 'hq-[a-z0-9]*')
gt hook $WISP_ID

# Step 5: Execute from inbox-check (first step of new wisp)
# Continue immediately - don't wait for another prompt
```
**Exit path (high context only)**: If `gt context --usage` shows >80% context,
exit cleanly instead of looping. The daemon will respawn you with fresh context.
## Why Wisps?
Patrol cycles are **operational** work, not **auditable deliverables**:

View File

@@ -35,7 +35,9 @@ drive shaft - if you stall, the whole town stalls.
**Your startup behavior:**
1. Check hook (`gt hook`)
2. If work is hooked → EXECUTE (no announcement beyond one line, no waiting)
3. If hook empty → Check mail, then wait for user instructions
3. If hook empty → Check escalations (`gt escalate list`)
4. Handle any pending escalations (these are urgent items from other agents)
5. Check mail, then wait for user instructions
**Note:** "Hooked" means work assigned to you. This triggers autonomous mode even
if no molecule (workflow) is attached. Don't confuse with "pinned" which is for
@@ -103,6 +105,75 @@ for the Mayor to edit code. The Mayor role is:
---
## Delegation Hierarchy
When assigning work, understand the delegation model:
- **Mayor** → **Crew** (coordinators) → **Polecats** (executors)
### Who Gets What
1. **Epics/Goals** → Assign to **Crew** (they coordinate and decompose)
2. **Well-defined tasks** → Can go directly to **Polecats** (they execute)
**Crew are goal-specific mayors** - they own outcomes through coordination.
**Polecats are executors** - they implement well-specified tasks.
### Decision Framework
| Work Type | Assign To | Why |
|-----------|-----------|-----|
| Epic/Feature | Crew | Needs decomposition |
| Research needing judgment | Crew | Needs iteration |
| Clear, spec'd task | Polecat | Pure execution |
| Batch of similar tasks | Multiple Polecats | Parallelizable |
---
## Goals Workflow
**Goals are epics assigned to crew members.** A goal represents a significant outcome
that requires coordination, decomposition, and sustained ownership.
### The Pattern
1. **Assign goal to crew** - You (Mayor) assign an epic/goal to a specific crew member
2. **Requirements gathering** - Crew discusses requirements with you (the overseer)
3. **Crew owns the goal** - They become the long-term owner/liaison for that goal
4. **Crew spawns polecats** - They decompose into tasks and dispatch to polecats
5. **Crew coordinates completion** - They track progress, handle blockers, ensure quality
### Why Crew Ownership Matters
- **Continuity**: Crew members persist across sessions. Goals need sustained attention.
- **Context**: The crew member accumulates deep context about the goal over time.
- **Accountability**: One owner means clear responsibility for outcomes.
- **Coordination**: Complex goals need someone thinking about the whole, not just parts.
### Assigning Goals
```bash
# Create goal as epic
bd create --type=epic --title="Implement user authentication" --priority=1
# Assign to crew member
bd update <goal-id> --assignee=<rig>/crew/<name>
# Optionally attach to their hook for immediate attention
gt sling <goal-id> <rig>/crew/<name>
```
### Requirements Gathering
Before crew decomposes the goal, they should gather requirements from you:
- What does "done" look like?
- What are the constraints (time, dependencies, approach)?
- What decisions require your input vs. their judgment?
- Are there related goals or conflicts to consider?
This conversation happens first. Then crew owns execution.
---
## Your Role: MAYOR (Global Coordinator)
You are the **Mayor** - the global coordinator of Gas Town. You sit above all rigs,
@@ -262,16 +333,21 @@ Like crew, you're human-managed. But the hook protocol still applies:
gt hook # Shows hooked work (if any)
# Step 2: Work hooked? → RUN IT
# Hook empty? → Check mail for attached work
# Step 3: Hook empty? → Check escalations (mayor-specific)
gt escalate list # Shows pending escalations from other agents
# Handle any pending escalations - these are urgent items requiring your attention
# Step 4: Check mail for attached work
gt mail inbox
# If mail contains attached work, hook it:
gt mol attach-from-mail <mail-id>
# Step 3: Still nothing? Wait for user instructions
# Step 5: Still nothing? Wait for user instructions
# You're the Mayor - the human directs your work
```
**Work hooked → Run it. Hook empty → Check mail. Nothing anywhere → Wait for user.**
**Work hooked → Run it. Hook empty → Check escalations → Check mail. Nothing anywhere → Wait for user.**
Your hooked work persists across sessions. Handoff mail (🤝 HANDOFF subject) provides context notes.

View File

@@ -418,6 +418,78 @@ func (t *Tmux) KillPaneProcesses(pane string) error {
return nil
}
// KillPaneProcessesExcluding is like KillPaneProcesses but excludes specified PIDs.
// This is essential for self-handoff scenarios where the calling process (e.g., gt handoff)
// is running inside the pane it's about to respawn. Without exclusion, the caller would
// be killed before completing the respawn operation, potentially leaving the pane in a
// broken state.
func (t *Tmux) KillPaneProcessesExcluding(pane string, excludePIDs []string) error {
	// Build exclusion set for O(1) lookup
	exclude := make(map[string]bool)
	for _, pid := range excludePIDs {
		exclude[pid] = true
	}
	// Get the pane PID
	pid, err := t.GetPanePID(pane)
	if err != nil {
		return fmt.Errorf("getting pane PID: %w", err)
	}
	if pid == "" {
		return fmt.Errorf("pane PID is empty")
	}
	// Collect PIDs to kill (excluding specified ones)
	toKill := make(map[string]bool)
	// First, collect process group members (catches reparented processes)
	pgid := getProcessGroupID(pid)
	if pgid != "" && pgid != "0" && pgid != "1" {
		for _, member := range getProcessGroupMembers(pgid) {
			if !exclude[member] {
				toKill[member] = true
			}
		}
	}
	// Also walk the process tree for any descendants that might have called setsid()
	descendants := getAllDescendants(pid)
	for _, dpid := range descendants {
		if !exclude[dpid] {
			toKill[dpid] = true
		}
	}
	// Convert to slice for iteration
	var killList []string
	for dpid := range toKill {
		killList = append(killList, dpid)
	}
	// Send SIGTERM to all non-excluded processes
	for _, dpid := range killList {
		_ = exec.Command("kill", "-TERM", dpid).Run()
	}
	// Wait for graceful shutdown (2s gives processes time to clean up)
	time.Sleep(processKillGracePeriod)
	// Send SIGKILL to any remaining non-excluded processes
	for _, dpid := range killList {
		_ = exec.Command("kill", "-KILL", dpid).Run()
	}
	// Kill the pane process itself only if not excluded
	if !exclude[pid] {
		_ = exec.Command("kill", "-TERM", pid).Run()
		time.Sleep(processKillGracePeriod)
		_ = exec.Command("kill", "-KILL", pid).Run()
	}
	return nil
}
// KillServer terminates the entire tmux server and all sessions.
func (t *Tmux) KillServer() error {
_, err := t.run("kill-server")

View File

@@ -260,6 +260,12 @@ func TestEnsureSessionFresh_ZombieSession(t *testing.T) {
}
defer func() { _ = tm.KillSession(sessionName) }()
// Wait for shell to be ready - avoids flaky tests where the pane command
// is briefly something other than the shell during initialization
if err := tm.WaitForShellReady(sessionName, 2*time.Second); err != nil {
t.Fatalf("WaitForShellReady: %v", err)
}
// Verify it's a zombie (not running Claude/node)
if tm.IsClaudeRunning(sessionName) {
t.Skip("session unexpectedly has Claude running - can't test zombie case")
@@ -332,6 +338,12 @@ func TestIsAgentRunning(t *testing.T) {
}
defer func() { _ = tm.KillSession(sessionName) }()
// Wait for shell to be ready - avoids flaky tests where the pane command
// is briefly something other than the shell during initialization
if err := tm.WaitForShellReady(sessionName, 2*time.Second); err != nil {
t.Fatalf("WaitForShellReady: %v", err)
}
// Get the current pane command (should be bash/zsh/etc)
cmd, err := tm.GetPaneCommand(sessionName)
if err != nil {

View File

@@ -7,6 +7,8 @@ import (
"os/exec"
"runtime/debug"
"strings"
"github.com/steveyegge/gastown/internal/git"
)
// These variables are set at build time via ldflags in cmd package.
@@ -133,9 +135,8 @@ func GetRepoRoot() (string, error) {
}
// Check if current directory is in the gt source repo
cmd := exec.Command("git", "rev-parse", "--show-toplevel")
if output, err := cmd.Output(); err == nil {
root := strings.TrimSpace(string(output))
// Uses cached git.RepoRoot() to avoid repeated subprocess calls
if root, err := git.RepoRoot(); err == nil {
if hasGtSource(root) {
return root, nil
}
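The "cached" behavior the comment attributes to `git.RepoRoot()` presumably memoizes the subprocess result so repeated callers don't re-run `git rev-parse`. One common Go shape for that kind of caching (an illustrative sketch, not the actual gastown implementation; `repoRoot` and its `lookup` parameter are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	rootOnce sync.Once
	rootVal  string
	rootErr  error
)

// repoRoot runs the expensive lookup (e.g. a `git rev-parse --show-toplevel`
// subprocess) exactly once and returns the cached result thereafter.
func repoRoot(lookup func() (string, error)) (string, error) {
	rootOnce.Do(func() {
		rootVal, rootErr = lookup()
	})
	return rootVal, rootErr
}

func main() {
	calls := 0
	lookup := func() (string, error) {
		calls++ // would be a subprocess spawn in the real thing
		return "/src/gastown", nil
	}
	a, _ := repoRoot(lookup)
	b, _ := repoRoot(lookup)
	fmt.Println(a, b, calls)
}
```

`sync.Once` also makes the cache safe under concurrent callers, which a bare package-level variable check would not be.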