docs: consolidate design docs into architecture.md and beads

Replaced 4 individual design docs with single master architecture doc.
Design content moved to bead descriptions (self-contained, no markdown refs).
Closed gt-cr0.

- Add docs/architecture.md: top-down Gas Town explanation
- Delete docs/{town,swarm-shutdown,polecat-beads-access,mayor-handoff}-design.md
- Update CLAUDE.md to point to architecture.md
- Update beads: gt-sd6, gt-f8v, gt-eu9, gt-gl2, gt-zx3, gt-e1y, gt-cjb, gt-082,
  gt-g2d, gt-sye, gt-vci, gt-82y, gt-l3c, gt-u82 now self-contained

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Steve Yegge
2025-12-15 20:52:49 -08:00
parent fbb8b1f040
commit 07f23c86de
7 changed files with 360 additions and 2093 deletions


@@ -1,10 +1,16 @@
{"id":"gt-082","title":"Worker cleanup: Beads sync on shutdown","description":"Update worker cleanup/decommission protocol:\n\nAdd to shutdown checklist:\n- bd sync before final merge\n- Verify beads committed\n\nHandle edge cases:\n- Uncommitted beads changes\n- Beads sync conflicts (rare with tombstones, but possible)","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:21.757756-08:00","updated_at":"2025-12-15T19:47:21.757756-08:00","dependencies":[{"issue_id":"gt-082","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.977804-08:00","created_by":"daemon"}]}
{"id":"gt-82y","title":"Design: Swarm shutdown and worker cleanup","description":"Gray area in handoff between Witness (pushes workers to finish) and Mayor (cleanup inspection). Need to design: session cycling for Witness, cleanup authority, state preservation (stashes/branches), and how this relates to Mayor session cycling and Refinery lifecycle.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T19:47:44.936374-08:00","updated_at":"2025-12-15T19:47:44.936374-08:00"}
{"id":"gt-082","title":"Worker cleanup: Beads sync on shutdown","description":"Add beads sync verification to worker cleanup checklist and Witness verification.\n\n## Update to Decommission Checklist (gt-sd6)\n\nAdd to pre-done verification:\n- bd sync --status must show 'Up to date'\n- git status .beads/ must show no changes\n\n## Beads Edge Cases\n\nUncommitted beads changes:\n bd sync\n git add .beads/\n git commit -m 'beads: final sync'\n\nBeads sync conflict (rare):\n git fetch origin main\n git checkout main -- .beads/\n bd sync --force\n git add .beads/\n git commit -m 'beads: resolve sync conflict'\n\n## Update to Witness Verification (gt-f8v)\n\nWhen capturing worker state:\n town capture \u003cpolecat\u003e \"bd sync --status \u0026\u0026 git status .beads/\"\n\nCheck for:\n- bd sync --status shows 'Up to date'\n- git status .beads/ shows no changes\n\nIf beads not synced, nudge:\n WITNESS CHECK: Beads not synced. Run 'bd sync' then commit .beads/. Signal done when complete.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:21.757756-08:00","updated_at":"2025-12-15T20:48:37.663168-08:00","dependencies":[{"issue_id":"gt-082","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.977804-08:00","created_by":"daemon"}]}
{"id":"gt-1le","title":"town handoff command (optional)","description":"CLI support for handoff generation (optional, can defer):\n- town handoff - generate interactively\n- town handoff --send - generate and mail to self\n- town resume - check for and display handoff\n\n**Design**: See docs/mayor-handoff-design.md - Integration with Town Commands section.","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T20:15:31.954724-08:00","updated_at":"2025-12-15T20:15:55.134746-08:00","dependencies":[{"issue_id":"gt-1le","depends_on_id":"gt-u82","type":"blocks","created_at":"2025-12-15T20:15:39.647043-08:00","created_by":"daemon"}]}
{"id":"gt-61o","title":"Review and audit all GGT beads","description":"Thorough review of all filed beads in gastown GGT repo. Check for: consistency, completeness, correct dependencies, accurate descriptions, proper prioritization. Ensure beads are self-contained and don't rely on external docs.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:24:07.152386-08:00","updated_at":"2025-12-15T20:24:07.152386-08:00"}
{"id":"gt-82y","title":"Design: Swarm shutdown and worker cleanup","description":"Design for graceful swarm shutdown, worker cleanup, and session cycling.\n\n## Key Decisions\n\n1. Pre-kill verification uses model intelligence (not framework rules)\n2. Witness can request restart when context filling (mail self, exit)\n3. Mayor NOT involved in per-worker cleanup (Witness responsibility)\n4. Clear responsibility boundaries between Mayor/Witness/Polecat\n\n## Subtasks (implementation)\n\n- gt-sd6: Polecat decommission checklist prompting\n- gt-f8v: Witness pre-kill verification protocol\n- gt-eu9: Witness session cycling and handoff\n- gt-gl2: Mayor vs Witness cleanup responsibilities\n\n**Design complete.** Each subtask has full specification in its description.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-15T19:47:44.936374-08:00","updated_at":"2025-12-15T20:49:22.849598-08:00","closed_at":"2025-12-15T20:12:05.441911-08:00","close_reason":"Design complete in docs/swarm-shutdown-design.md. Subtasks remain open for implementation."}
{"id":"gt-95x","title":"Remove stale migration docs from gastown-py","description":"The gastown-py repo has migration-related documentation that is now misinformation since we have made design decisions. Remove or clearly mark as obsolete: any docs about migration paths, old architecture assumptions, or superseded designs.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:24:08.642373-08:00","updated_at":"2025-12-15T20:24:08.642373-08:00"}
{"id":"gt-9a2","title":"Federation: Wasteland architecture for cross-town coordination","description":"## Overview\n\nA Wasteland is a federation of Towns across multiple machines. This enables:\n- Same project managed in multiple locations (local Mac + GCP VMs)\n- Cross-town swarm coordination\n- Unified UI for managing distributed work\n\n## Use Cases\n\n1. **Scale-out**: Spawn workers on cloud VMs when local resources exhausted\n2. **Geographic distribution**: Run workers close to data/services\n3. **Redundancy**: Continue work if one machine goes down\n\n## Design Questions\n\n- How do towns discover each other?\n- Cross-town mail routing protocol\n- Shared vs replicated beads databases\n- Conflict resolution for concurrent edits\n- Authentication between towns\n- UI for wasteland-level visibility\n\n## Placeholder Tasks\n\n- [ ] Design wasteland discovery/registration\n- [ ] Design cross-town mail protocol\n- [ ] Design beads synchronization strategy\n- [ ] Implement gt wasteland commands\n- [ ] Implement federation UI\n\n## Related\n\n- gt-f9x: Town \u0026 Rig Management (has federation basics)\n- Connection interface in town-design.md","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-15T19:21:32.462063-08:00","updated_at":"2025-12-15T19:21:32.462063-08:00"}
{"id":"gt-cjb","title":"Witness updates: Remove issue filing proxy","description":"Update Witness role since polecats can now file their own beads:\n\nRemove:\n- Processing polecat \"file issue\" mail requests\n- Issue filing on behalf of polecats\n\nKeep:\n- Monitoring polecat progress\n- Nudge protocol\n- Session lifecycle management","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:19.921561-08:00","updated_at":"2025-12-15T19:47:19.921561-08:00","dependencies":[{"issue_id":"gt-cjb","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.896691-08:00","created_by":"daemon"}]}
{"id":"gt-e1y","title":"Worker prompting: Beads write access","description":"Update polecat prompting to:\n1. Grant bd write access (create, update, close)\n2. Teach bd commands and when to use them\n3. Encourage filing discovered work outside their purview\n4. Explain beads repo configuration (where their beads go)\n\nRemove:\n- Mail-based issue filing proxy instructions\n- \"Cannot file beads\" restrictions","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:18.459363-08:00","updated_at":"2025-12-15T19:47:18.459363-08:00","dependencies":[{"issue_id":"gt-e1y","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.81183-08:00","created_by":"daemon"}]}
{"id":"gt-eu9","title":"Witness session cycling and handoff","description":"Witness should be able to request restart when context filling. Mail self handoff notes listing checked polecats, exit cleanly. Daemon spawns fresh session with handoff context. Add to Witness prompting.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:55.484911-08:00","updated_at":"2025-12-15T19:48:55.484911-08:00","dependencies":[{"issue_id":"gt-eu9","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.846443-08:00","created_by":"daemon"}]}
{"id":"gt-f8v","title":"Witness pre-kill verification protocol","description":"Before killing a polecat that signaled done, Witness should capture and assess git status, stash list, and unmerged commits. Only kill if clean; otherwise nudge with specific issues. Keeps intelligence in model, not framework.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:54.065679-08:00","updated_at":"2025-12-15T19:48:54.065679-08:00","dependencies":[{"issue_id":"gt-f8v","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.763378-08:00","created_by":"daemon"}]}
{"id":"gt-av8","title":"Update Mayor prompting in gastown-py","description":"The Mayor CLAUDE.md and related prompting in gastown-py (still in production use) needs to reflect current design decisions: session cycling, handoff protocol, cleanup responsibilities, beads access model. Sync prompting with GGT design work.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:24:09.953043-08:00","updated_at":"2025-12-15T20:24:09.953043-08:00"}
{"id":"gt-cjb","title":"Witness updates: Remove issue filing proxy","description":"Update Witness prompting to remove issue filing proxy, since polecats now have direct beads access.\n\n## Remove from Witness Prompting\n\nThe following is NO LONGER Witness responsibility:\n- Processing polecat 'file issue' mail requests\n- Creating issues on behalf of polecats\n- Forwarding issue creation requests\n\n## Add: Legacy Request Handling\n\nIf Witness receives an old-style 'please file issue' request:\n\n1. Respond with update:\n town inject \u003cpolecat\u003e \"UPDATE: You have direct beads access now. Use bd create to file issues yourself.\"\n\n2. Do not file the issue - let the polecat learn the new workflow.\n\n## Keep in Witness Prompting\n\n- Monitoring polecat progress\n- Nudge protocol\n- Pre-kill verification\n- Session lifecycle management","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:19.921561-08:00","updated_at":"2025-12-15T20:48:36.020922-08:00","dependencies":[{"issue_id":"gt-cjb","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.896691-08:00","created_by":"daemon"}]}
{"id":"gt-cr0","title":"Consolidate design docs into beads descriptions","description":"The markdown design docs (swarm-shutdown-design.md, polecat-beads-access-design.md, mayor-handoff-design.md) will decay. Extract key decisions and prompting templates into the beads descriptions themselves, then archive or remove the markdown files. Beads are the source of truth.","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-15T20:24:05.45131-08:00","updated_at":"2025-12-15T20:51:52.083465-08:00","closed_at":"2025-12-15T20:51:52.083465-08:00","close_reason":"Consolidated: created docs/architecture.md, moved design content into bead descriptions, deleted 4 individual design docs"}
{"id":"gt-d3d","title":"Design: Additional design issues (placeholder)","description":"Placeholder for additional design issues the user wants to raise and work through. Convert to specific subtasks as issues are identified.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T20:24:12.601585-08:00","updated_at":"2025-12-15T20:24:12.601585-08:00"}
{"id":"gt-e1y","title":"Worker prompting: Beads write access","description":"Add beads write access section to polecat AGENTS.md.template.\n\n## Beads Access Section for Prompting\n\n```markdown\n## Beads Access\n\nYou have **full beads access** - create, update, and close issues.\n\n### Quick Reference\n\n```bash\n# View work\nbd ready # Issues ready (no blockers)\nbd list # All open issues\nbd show \u003cid\u003e # Issue details\n\n# Create issues\nbd create --title=\"Fix bug\" --type=bug --priority=2\nbd create --title=\"Add feature\" --type=feature\n\n# Update issues\nbd update \u003cid\u003e --status=in_progress # Claim work\nbd close \u003cid\u003e # Mark complete\n\n# Sync (required before merge!)\nbd sync # Commit beads changes to git\n```\n\n### When to Create Issues\n\nCreate beads issues when you discover work that:\n- Is outside your current task scope\n- Would benefit from tracking\n- Should be done by someone else\n\n**Good examples**:\n```bash\nbd create --title=\"Race condition in auth\" --type=bug --priority=1\nbd create --title=\"Document API rate limits\" --type=task --priority=3\n```\n\n**Don't create for**:\n- Tiny fixes you can do in 2 minutes\n- Vague \"improvements\" with no scope\n- Work already tracked elsewhere\n\n### Beads Sync Protocol\n\n**CRITICAL**: Always sync beads before merging!\n\n```bash\nbd sync # Commits beads changes\ngit add .beads/\ngit commit -m \"beads: sync\"\n```\n\nIf you forget to sync, beads changes are lost when session ends.\n```\n\n## Implementation\n\nAdd to AGENTS.md.template polecat section.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:18.459363-08:00","updated_at":"2025-12-15T20:48:03.813068-08:00","dependencies":[{"issue_id":"gt-e1y","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.81183-08:00","created_by":"daemon"}]}
{"id":"gt-eu9","title":"Witness session cycling and handoff","description":"Add session cycling and handoff protocol to Witness CLAUDE.md template.\n\n## Session Cycling Protocol\n\n```markdown\n## Session Cycling\n\nYour context will fill over long swarms. Proactively cycle when:\n- Running for many hours\n- Losing track of which workers you've checked\n- Responses getting slower\n- About to start complex operation\n\n### Handoff Protocol\n\n1. **Capture current state**:\n```bash\ntown list . # Worker states\ntown all beads # Pending verifications \ntown inbox # Unprocessed messages\n```\n\n2. **Compose handoff note**:\n```\n[HANDOFF_TYPE]: witness_cycle\n[TIMESTAMP]: \u003cnow\u003e\n[RIG]: \u003crig\u003e\n\n## Active Workers\n\u003clist workers and status\u003e\n\n## Pending Verifications\n\u003cworkers signaled done but not verified\u003e\n\n## Recent Actions\n\u003clast 3-5 actions\u003e\n\n## Warnings/Notes\n\u003canything next session should know\u003e\n\n## Next Steps\n\u003cwhat should happen next\u003e\n```\n\n3. **Send handoff**:\n```bash\ntown mail send \u003crig\u003e/witness -s \"Session Handoff\" -m \"\u003cnote\u003e\"\n```\n\n4. **Exit cleanly**: End session, daemon spawns fresh one.\n\n### On Fresh Session Start\n\n1. Check for handoff: `town inbox | grep \"Session Handoff\"`\n2. If found, read it and resume from handoff state\n3. If not found, do full status check\n```\n\n## Implementation\n\nAdd to WITNESS_CLAUDE.md template.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:55.484911-08:00","updated_at":"2025-12-15T20:47:30.768506-08:00","dependencies":[{"issue_id":"gt-eu9","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.846443-08:00","created_by":"daemon"}]}
{"id":"gt-f8v","title":"Witness pre-kill verification protocol","description":"Add pre-kill verification protocol to Witness CLAUDE.md template.\n\n## Protocol for Witness Prompting\n\n```markdown\n## Pre-Kill Verification Protocol\n\nBefore killing any worker session, verify workspace is clean.\n\n### Verification Steps\n\nWhen a worker signals done:\n\n1. **Capture worker state**:\n```bash\ntown capture \u003cpolecat\u003e \"git status \u0026\u0026 git stash list \u0026\u0026 bd sync --status\"\n```\n\n2. **Assess the output** (use your judgment):\n- Is working tree clean?\n- Is stash list empty?\n- Is beads synced?\n\n3. **Decision**:\n- **CLEAN**: Proceed to kill session\n- **DIRTY**: Send nudge with specific issues\n\n### Nudge Templates\n\n**Uncommitted Changes**:\n```\ntown inject \u003cpolecat\u003e \"WITNESS CHECK: Uncommitted changes found. Please commit or discard: \u003cfiles\u003e. Signal done when clean.\"\n```\n\n**Beads Not Synced**:\n```\ntown inject \u003cpolecat\u003e \"WITNESS CHECK: Beads not synced. Run 'bd sync' then commit. Signal done when complete.\"\n```\n\n### Kill Sequence\n\nOnly after verification passes:\n```bash\ntown kill \u003cpolecat\u003e\ntown sleep \u003cpolecat\u003e\n```\n\n### Escalation\n\nIf worker fails verification 3+ times:\n```bash\ntown mail send mayor/ -s \"Escalation: \u003cpolecat\u003e stuck\" -m \"Cannot complete cleanup after 3 attempts. Issues: \u003clist\u003e.\"\n```\n```\n\n## Implementation\n\nAdd to WITNESS_CLAUDE.md template.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:54.065679-08:00","updated_at":"2025-12-15T20:47:30.415244-08:00","dependencies":[{"issue_id":"gt-f8v","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.763378-08:00","created_by":"daemon"}]}
{"id":"gt-f9x","title":"Town \u0026 Rig Management: install, doctor, federation","description":"Reify the Gas Town installation as a first-class concept.\n\n## Goals\n- Installable: gt install [path] creates complete installation\n- Diagnosable: gt doctor checks and fixes issues\n- Federable: Clone town to VMs with central control\n\n## Design Doc\nSee docs/town-design.md for full design.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T16:36:37.344283-08:00","updated_at":"2025-12-15T16:36:37.344283-08:00","dependencies":[{"issue_id":"gt-f9x","depends_on_id":"gt-u1j.1","type":"blocks","created_at":"2025-12-15T16:37:32.3363-08:00","created_by":"daemon"}]}
{"id":"gt-f9x.1","title":"Config package: Config, State types and JSON serialization","description":"Define workspace and rig config structures","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T16:36:50.163851-08:00","updated_at":"2025-12-15T16:36:50.163851-08:00","dependencies":[{"issue_id":"gt-f9x.1","depends_on_id":"gt-f9x","type":"parent-child","created_at":"2025-12-15T16:36:50.164178-08:00","created_by":"daemon"}]}
{"id":"gt-f9x.10","title":"Extended addressing: Parse [machine:]rig/polecat","description":"Support machine-prefixed polecat addresses","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T16:37:23.426567-08:00","updated_at":"2025-12-15T16:37:23.426567-08:00","dependencies":[{"issue_id":"gt-f9x.10","depends_on_id":"gt-f9x","type":"parent-child","created_at":"2025-12-15T16:37:23.426926-08:00","created_by":"daemon"}]}
@@ -16,10 +22,14 @@
{"id":"gt-f9x.7","title":"Connection interface: Protocol for local/remote ops","description":"Abstract interface for local vs SSH operations","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T16:37:07.764838-08:00","updated_at":"2025-12-15T16:37:07.764838-08:00","dependencies":[{"issue_id":"gt-f9x.7","depends_on_id":"gt-f9x","type":"parent-child","created_at":"2025-12-15T16:37:07.765169-08:00","created_by":"daemon"}]}
{"id":"gt-f9x.8","title":"LocalConnection: Local file/exec/tmux operations","description":"Implementation for local machine operations","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T16:37:19.879102-08:00","updated_at":"2025-12-15T16:37:19.879102-08:00","dependencies":[{"issue_id":"gt-f9x.8","depends_on_id":"gt-f9x","type":"parent-child","created_at":"2025-12-15T16:37:19.879451-08:00","created_by":"daemon"},{"issue_id":"gt-f9x.8","depends_on_id":"gt-f9x.7","type":"blocks","created_at":"2025-12-15T16:37:36.087392-08:00","created_by":"daemon"}]}
{"id":"gt-f9x.9","title":"Machine registry: Store and manage machine configs","description":"Registry for federation machine management","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-15T16:37:21.968099-08:00","updated_at":"2025-12-15T16:37:21.968099-08:00","dependencies":[{"issue_id":"gt-f9x.9","depends_on_id":"gt-f9x","type":"parent-child","created_at":"2025-12-15T16:37:21.968442-08:00","created_by":"daemon"},{"issue_id":"gt-f9x.9","depends_on_id":"gt-f9x.7","type":"blocks","created_at":"2025-12-15T16:37:36.174052-08:00","created_by":"daemon"}]}
{"id":"gt-gl2","title":"Clarify Mayor vs Witness cleanup responsibilities","description":"Document that Mayor is NOT involved in per-worker cleanup or session killing. Mayor handles: swarm dispatch, escalation handling, final integration, strategic decisions. Witness handles: nudges, pre-kill verification, session lifecycle.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:56.678724-08:00","updated_at":"2025-12-15T19:48:56.678724-08:00","dependencies":[{"issue_id":"gt-gl2","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.929877-08:00","created_by":"daemon"}]}
{"id":"gt-g2d","title":"Mayor session cycling prompting","description":"Add session cycling section to Mayor CLAUDE.md template.\n\n## When to Cycle\n\nCycle proactively when:\n- Running for several hours\n- Context feels crowded (losing track of earlier state)\n- Major phase completed\n- About to start complex new work\n\n## Composing Handoff Notes\n\n1. Gather information:\n town status # Overall health\n town rigs # Each rig state\n town inbox # Pending messages\n bd ready # Work items\n\n2. Compose note with this structure:\n\n[HANDOFF_TYPE]: mayor_cycle\n[TIMESTAMP]: \u003ccurrent time\u003e\n[SESSION_DURATION]: \u003chow long running\u003e\n\n## Active Swarms\n\u003cper-rig swarm status\u003e\n\n## Rig Status\n\u003ctable of rig health\u003e\n\n## Pending Escalations\n\u003cissues needing your decision\u003e\n\n## In-Flight Decisions\n\u003cdecisions being made\u003e\n\n## Recent Actions\n\u003clast 5-10 things you did\u003e\n\n## Delegated Work\n\u003cwork sent to refineries\u003e\n\n## User Requests\n\u003cpending user asks\u003e\n\n## Next Steps\n\u003cwhat next session should do\u003e\n\n## Warnings/Notes\n\u003ccritical info for next session\u003e\n\n3. Send handoff:\n town mail send mayor/ -s \"Session Handoff\" -m \"\u003cnote\u003e\"\n\n4. End session - next instance picks up from handoff.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:15:26.188561-08:00","updated_at":"2025-12-15T20:48:39.861022-08:00","dependencies":[{"issue_id":"gt-g2d","depends_on_id":"gt-u82","type":"blocks","created_at":"2025-12-15T20:15:39.361163-08:00","created_by":"daemon"}]}
{"id":"gt-gl2","title":"Clarify Mayor vs Witness cleanup responsibilities","description":"Document the cleanup authority model: Witness owns ALL per-worker cleanup, Mayor never involved.\n\n## The Rule\n\n**Witness handles ALL per-worker cleanup. Mayor is never involved.**\n\n## Why This Matters\n\n1. Separation of concerns: Mayor strategic, Witness operational\n2. Reduced coordination overhead: No back-and-forth for routine cleanup\n3. Faster shutdown: Witness kills workers immediately upon verification\n4. Cleaner escalation: Mayor only hears about problems\n\n## What Witness Handles\n\n- Verifying worker git state before kill\n- Nudging workers to fix dirty state\n- Killing worker sessions\n- Updating worker state (sleep/wake)\n- Logging verification results\n\n## What Mayor Handles\n\n- Receiving swarm complete notifications\n- Deciding whether to start new swarms\n- Handling escalations (stuck workers after 3 retries)\n- Cross-rig coordination\n\n## Escalation Path\n\nWorker stuck -\u003e Witness nudges (up to 3x) -\u003e Witness escalates to Mayor -\u003e Mayor decides: force kill, reassign, or human\n\n## Anti-Patterns\n\nDO NOT: Mayor asks Witness if worker X is clean\nDO: Witness reports swarm complete, all workers verified\n\nDO NOT: Mayor kills worker sessions directly\nDO: Mayor tells Witness to abort swarm, Witness handles cleanup\n\nDO NOT: Workers report done to Mayor\nDO: Workers report to Witness, Witness aggregates and reports up","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:56.678724-08:00","updated_at":"2025-12-15T20:48:12.068964-08:00","dependencies":[{"issue_id":"gt-gl2","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:05.929877-08:00","created_by":"daemon"}]}
{"id":"gt-iib","title":"Architecture: Decentralized rig structure with per-rig agents","description":"## Decision\n\nAdopt decentralized architecture where each rig contains all its agents (mayor/, witness/, refinery/, polecats/) rather than centralizing mayor clones at town level.\n\n## Town Level Structure\n\n```\n~/ai/ # Town root\n├── config/ # Town config (VISIBLE, not hidden)\n│ ├── town.json # {\"type\": \"town\"}\n│ ├── rigs.json # Registry of managed rigs\n│ └── federation.json # Wasteland config (future)\n│\n├── mayor/ # Mayor's HOME at town level\n│ ├── CLAUDE.md\n│ ├── mail/inbox.jsonl\n│ └── state.json\n│\n└── \u003crigs\u003e/ # Managed projects\n```\n\n## Rig Level Structure (e.g., wyvern)\n\n```\nwyvern/ # Rig = clone of project repo\n├── .git/info/exclude # Gas Town adds: polecats/ refinery/ witness/ mayor/\n├── .beads/ # Beads (if project uses it)\n├── [project files] # Clean project code on main\n│\n├── polecats/ # Worker clones\n│ └── \u003cname\u003e/ # Each is a git clone\n│\n├── refinery/\n│ ├── rig/ # Refinery's clone\n│ ├── state.json\n│ └── mail/inbox.jsonl\n│\n├── witness/ # NEW: Per-rig pit boss\n│ ├── rig/ # Witness's clone\n│ ├── state.json\n│ └── mail/inbox.jsonl\n│\n└── mayor/\n ├── rig/ # Mayor's clone for this rig\n └── state.json\n```\n\n## Key Decisions\n\n1. **Visible config dir**: `config/` not `.gastown/` (models don't find hidden dirs)\n2. **Witness per-rig**: Each rig has its own Witness (pit boss) with its own clone\n3. **Mayor decentralized**: Mayor's clones live IN each rig at `\u003crig\u003e/mayor/rig/`\n4. **Minimal invasiveness**: Only `.git/info/exclude` modified, no commits to project\n5. **Clone subdir name**: Keep `rig/` for consistency (refinery/rig/, witness/rig/, mayor/rig/)\n\n## Role Detection\n\n- Town root or mayor/ → Mayor (town level)\n- Rig root → Mayor (canonical main)\n- \u003crig\u003e/mayor/rig/ → Mayor (rig-specific)\n- \u003crig\u003e/refinery/rig/ → Refinery\n- \u003crig\u003e/witness/rig/ → Witness\n- \u003crig\u003e/polecats/\u003cname\u003e/ → Polecat\n\n## Migration from PGT\n\n- `mayor/rigs/\u003crig\u003e/` → `\u003crig\u003e/mayor/rig/`\n- `\u003crig\u003e/town/` → eliminated (rig root IS the clone)\n- Add `witness/` to each rig","status":"closed","priority":0,"issue_type":"task","created_at":"2025-12-15T19:21:19.913928-08:00","updated_at":"2025-12-15T19:21:40.461186-08:00","closed_at":"2025-12-15T19:21:40.461186-08:00","close_reason":"Design decision recorded","dependencies":[{"issue_id":"gt-iib","depends_on_id":"gt-u1j","type":"blocks","created_at":"2025-12-15T19:21:40.374551-08:00","created_by":"daemon"}]}
{"id":"gt-l3c","title":"Design: Polecat Beads write access","description":"## Background\n\nReversing the original decision to make polecats read-only for Beads. With v0.30.0's\ntombstone-based rearchitecture for deletions, we now have solid multi-agent support\neven at high loads.\n\n## Benefits\n- Simplifies design (no need for mail-based issue filing proxy)\n- Empowers polecats to file discovered work that's out of their purview\n- Beads solves the work-disavowal problem\n\n## Complications\n- For OSS projects where you're not a maintainer, polecats need to file beads\n in a separate repo (Beads supports this via --root)\n- Need per-rig beads repo configuration\n- Default: polecats file to rig's own .beads/\n- Option: polecats file to external beads repo (for OSS contributions)\n\n## Design Areas\n1. Per-rig beads configuration (which repo to use)\n2. Worker prompting updates (grant write access, teach bd commands)\n3. Witness plan updates (no longer needs to proxy issue filing)\n4. Worker cleanup code updates","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T19:37:42.191734-08:00","updated_at":"2025-12-15T19:47:01.468964-08:00"}
{"id":"gt-sd6","title":"Enhanced polecat decommission prompting","description":"Strengthen polecat prompting around decommission checklist. Make crystal clear: git status clean, stash list empty, bd sync complete, merge to main done BEFORE signaling done. Witness will verify and bounce back if dirty.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:57.911311-08:00","updated_at":"2025-12-15T19:48:57.911311-08:00","dependencies":[{"issue_id":"gt-sd6","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:06.008061-08:00","created_by":"daemon"}]}
{"id":"gt-j87","title":"Design: Swarm simulation and validation","description":"Before implementation, validate GGT designs through simulation. Options: 1) Dry-run simulations with Mayor walking through scenarios, 2) Real swarms in gastown-py to stress-test assumptions, 3) Edge case analysis. Goal: ensure design is robust and improves with model cognition.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T20:24:11.251841-08:00","updated_at":"2025-12-15T20:24:11.251841-08:00"}
{"id":"gt-l3c","title":"Design: Polecat Beads write access","description":"Design for granting polecats direct beads write access.\n\n## Background\n\nWith Beads v0.30.0 tombstone-based rearchitecture, we have solid multi-agent support. Reversing the original read-only decision.\n\n## Benefits\n\n- Simplifies architecture (no mail-based issue filing proxy)\n- Empowers polecats to file discovered work\n- Beads handles work-disavowal\n\n## Complications\n\nFor OSS projects where you cannot commit to project .beads/, need per-rig beads repo configuration.\n\n## Subtasks (implementation)\n\n- gt-zx3: Per-rig beads configuration schema\n- gt-e1y: Worker prompting updates for beads access\n- gt-cjb: Witness proxy removal\n- gt-082: Beads sync in decommission checklist\n\n**Design complete.** Each subtask has full specification in its description.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-15T19:37:42.191734-08:00","updated_at":"2025-12-15T20:49:24.598429-08:00","closed_at":"2025-12-15T20:14:04.174535-08:00","close_reason":"Design complete in docs/polecat-beads-access-design.md. Subtasks remain open for implementation."}
{"id":"gt-qh2","title":"Session cycling UX: smooth transitions via TUI wrapper","description":"## Problem\n\nCurrent CLI agent session cycling is painful:\n- Shell → CC starts → priming → context loads → ready → work → exit/crash → repeat\n- Each cycle is 30-60 seconds of cold boot\n- No continuity between shell and agent's inner state\n- Raw \"session not running, starting...\" loop is the baseline\n\n## GGT Advantages (already have)\n\n- Beads: Work state survives session death completely\n- Mail: Handoff notes from past-self to future-self \n- Prime commands: Structured context reload\n\n## Gap: Transition Mechanics\n\nIdeas to explore when actively using CLI:\n\n1. **In-band cycling** - `/restart` or `/cycle` command, agent handles own restart without dropping to shell\n\n2. **Hot standby** - TUI maintains pre-warmed session in background, switch to already-primed agent\n\n3. **Persistent wrapper** - Bubbletea TUI stays running across session cycles, CC sessions come/go inside it\n\n4. **Session pooling** - Keep 2-3 primed sessions ready, never wait for cold start\n\n## Deferred\n\nDeliberately P4 until we're actively using the simpler CLI and feel the pain firsthand.","status":"open","priority":4,"issue_type":"task","created_at":"2025-12-15T20:38:12.660716-08:00","updated_at":"2025-12-15T20:38:23.422132-08:00"}
{"id":"gt-sd6","title":"Enhanced polecat decommission prompting","description":"Add decommission checklist to polecat AGENTS.md.template. Make crystal clear: verify ALL before signaling done.\n\n## Checklist for AGENTS.md.template\n\n```markdown\n## Decommission Checklist\n\n**CRITICAL**: Before signaling done, you MUST complete this checklist.\nThe Witness will verify each item and bounce you back if dirty.\n\n### Pre-Done Verification\n\n```bash\n# 1. Git status - must be clean\ngit status\n# Expected: \"nothing to commit, working tree clean\"\n\n# 2. Stash list - must be empty\ngit stash list\n# Expected: (empty output)\n\n# 3. Beads sync - must be up to date\nbd sync --status\n# Expected: \"Up to date\" or \"Nothing to sync\"\n\n# 4. Branch merged - your work must be on main\ngit log main --oneline -1\ngit log HEAD --oneline -1\n# Expected: Same commit\n```\n\n### If Any Check Fails\n\n- **Uncommitted changes**: Commit them or discard if unnecessary\n- **Stashes**: Pop and commit, or drop if obsolete\n- **Beads out of sync**: Run `bd sync`\n- **Branch not merged**: Complete the merge workflow\n\n### Signaling Done\n\nOnly after ALL checks pass:\n\n```bash\nbd close \u003cissue-id\u003e\nbd sync\ntown mail send \u003crig\u003e/witness -s \"Work Complete\" -m \"Issue \u003cid\u003e done.\"\n```\n```\n\n## Implementation\n\nAdd to AGENTS.md.template in the polecat prompting section.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:48:57.911311-08:00","updated_at":"2025-12-15T20:47:30.062333-08:00","dependencies":[{"issue_id":"gt-sd6","depends_on_id":"gt-82y","type":"blocks","created_at":"2025-12-15T19:49:06.008061-08:00","created_by":"daemon"}]}
{"id":"gt-sye","title":"Mayor startup protocol prompting","description":"Add startup protocol to Mayor CLAUDE.md template.\n\n## On Session Start\n\n1. Check for handoff:\n town inbox | grep \"Session Handoff\"\n\n2. If handoff found:\n - Read it: town read \u003cmsg-id\u003e\n - Process pending escalations (highest priority)\n - Check status of noted swarms\n - Verify rig health matches notes\n - Continue with documented next steps\n\n3. If no handoff:\n town status # Overall health\n town rigs # Each rig\n bd ready # Work items\n town inbox # Any messages\n Build your own picture of current state.\n\n4. After processing handoff:\n - Archive or delete the handoff message\n - You now own the current state\n\n## Handoff Best Practices\n\n- Be specific: 'Toast has merge conflict in auth/middleware.go' not 'Toast is stuck'\n- Include context: Why decisions are pending, what you were thinking\n- Prioritize next steps: What is most urgent\n- Note time-sensitive items: Anything that might have changed since handoff","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:15:27.915484-08:00","updated_at":"2025-12-15T20:48:57.555724-08:00","dependencies":[{"issue_id":"gt-sye","depends_on_id":"gt-u82","type":"blocks","created_at":"2025-12-15T20:15:39.459108-08:00","created_by":"daemon"}]}
{"id":"gt-u1j","title":"Port Gas Town to Go","description":"Complete rewrite of Gas Town in Go for improved performance and single-binary distribution.\n\n## Goals\n- Single installable binary (gt)\n- All Python functionality ported\n- Federation support built-in\n- Improved performance\n\n## Phases\n1. Core infrastructure (config, workspace, git wrapper)\n2. Rig \u0026 polecat management\n3. Session \u0026 tmux operations\n4. Mail system\n5. CLI commands\n6. TUI (optional)","status":"open","priority":0,"issue_type":"epic","created_at":"2025-12-15T16:36:28.769343-08:00","updated_at":"2025-12-15T16:36:28.769343-08:00"}
{"id":"gt-u1j.1","title":"Go scaffolding: cmd/gt, go.mod, Cobra setup","description":"Set up Go project structure with CLI framework.\n\n**Stack:**\n- Cobra for command/flag handling\n- Lipgloss for styled terminal output\n\n**Deliverables:**\n- cmd/gt/main.go with Cobra root command\n- Basic subcommands: version, help\n- Lipgloss styles for status output (success, warning, error)\n- go.mod with dependencies","status":"open","priority":0,"issue_type":"task","created_at":"2025-12-15T16:36:48.376267-08:00","updated_at":"2025-12-15T16:51:36.774242-08:00","dependencies":[{"issue_id":"gt-u1j.1","depends_on_id":"gt-u1j","type":"parent-child","created_at":"2025-12-15T16:36:48.376622-08:00","created_by":"daemon"}]}
{"id":"gt-u1j.10","title":"CLI: core commands (status, prime, version, init)","description":"Essential CLI commands: gt status, gt prime, gt version, gt init.","status":"open","priority":0,"issue_type":"task","created_at":"2025-12-15T17:12:38.367667-08:00","updated_at":"2025-12-15T17:12:38.367667-08:00","dependencies":[{"issue_id":"gt-u1j.10","depends_on_id":"gt-u1j","type":"parent-child","created_at":"2025-12-15T17:12:38.368006-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.10","depends_on_id":"gt-u1j.5","type":"blocks","created_at":"2025-12-15T17:14:06.123332-08:00","created_by":"daemon"}]}
@@ -42,5 +52,6 @@
{"id":"gt-u1j.7","title":"Session management: start, stop, attach, capture","description":"Polecat session lifecycle. Start Claude in tmux, stop gracefully, attach for interaction, capture output.","status":"open","priority":0,"issue_type":"task","created_at":"2025-12-15T17:12:25.473674-08:00","updated_at":"2025-12-15T17:12:25.473674-08:00","dependencies":[{"issue_id":"gt-u1j.7","depends_on_id":"gt-u1j","type":"parent-child","created_at":"2025-12-15T17:12:25.473993-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.7","depends_on_id":"gt-u1j.4","type":"blocks","created_at":"2025-12-15T17:13:52.081053-08:00","created_by":"daemon"}]}
{"id":"gt-u1j.8","title":"Polecat management: add, remove, list, state","description":"Create/destroy polecats, list polecats in rig, track polecat state (awake/asleep).","status":"open","priority":0,"issue_type":"task","created_at":"2025-12-15T17:12:27.402824-08:00","updated_at":"2025-12-15T17:12:27.402824-08:00","dependencies":[{"issue_id":"gt-u1j.8","depends_on_id":"gt-u1j","type":"parent-child","created_at":"2025-12-15T17:12:27.403171-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.8","depends_on_id":"gt-u1j.5","type":"blocks","created_at":"2025-12-15T17:13:53.747126-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.8","depends_on_id":"gt-u1j.3","type":"blocks","created_at":"2025-12-15T17:13:53.831197-08:00","created_by":"daemon"}]}
{"id":"gt-u1j.9","title":"Witness daemon: heartbeat loop, spawn ephemeral agent","description":"Background daemon that monitors polecats, spawns ephemeral agents for notifications, heartbeat checks.","status":"open","priority":0,"issue_type":"task","created_at":"2025-12-15T17:12:29.389103-08:00","updated_at":"2025-12-15T17:12:29.389103-08:00","dependencies":[{"issue_id":"gt-u1j.9","depends_on_id":"gt-u1j","type":"parent-child","created_at":"2025-12-15T17:12:29.389428-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.9","depends_on_id":"gt-u1j.7","type":"blocks","created_at":"2025-12-15T17:14:04.353775-08:00","created_by":"daemon"},{"issue_id":"gt-u1j.9","depends_on_id":"gt-u1j.8","type":"blocks","created_at":"2025-12-15T17:14:04.440363-08:00","created_by":"daemon"}]}
{"id":"gt-u82","title":"Design: Mayor session cycling and handoff","description":"Mayor needs the same session cycling pattern as Witness and workers. When context fills or session ends, Mayor should produce structured handoff notes for next session. This is a core part of working with Gas Town but currently undocumented. Related: gt-82y (swarm shutdown), gt-eu9 (witness session cycling).","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-15T20:03:16.125725-08:00","updated_at":"2025-12-15T20:03:16.125725-08:00"}
{"id":"gt-zx3","title":"Per-rig beads repo configuration","description":"Design config schema for per-rig beads repository settings.\n\nOptions needed:\n- beads_repo: \"local\" (default) | \"\u003cgit-url\u003e\" | \"\u003cpath\u003e\"\n- beads_root: override for bd --root\n\nUse cases:\n1. Local project (Wyvern): polecats write directly to project's .beads/\n2. OSS contribution: polecats write to separate beads repo (e.g., ~/ai/my-oss-beads/)\n3. Shared team beads: polecats write to team's central beads repo","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:16.660049-08:00","updated_at":"2025-12-15T19:47:16.660049-08:00","dependencies":[{"issue_id":"gt-zx3","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.726502-08:00","created_by":"daemon"}]}
{"id":"gt-u82","title":"Design: Mayor session cycling and handoff","description":"Design for Mayor session cycling and structured handoff.\n\n## Overview\n\nMayor coordinates across all rigs and runs for extended periods. Needs session cycling pattern with structured handoff notes.\n\n## Key Elements\n\n1. Session cycling recognition (when to cycle)\n2. Handoff note format (structured state capture)\n3. Handoff delivery (mail to self)\n4. Fresh session startup (reading and resuming)\n\n## Subtasks (implementation)\n\n- gt-g2d: Mayor session cycling prompting\n- gt-sye: Mayor startup protocol prompting\n- gt-vci: Mayor handoff mail template\n- gt-1le: town handoff command (optional, P2)\n\n**Design complete.** Each subtask has full specification in its description.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-15T20:03:16.125725-08:00","updated_at":"2025-12-15T20:49:26.203276-08:00","closed_at":"2025-12-15T20:16:10.772149-08:00","close_reason":"Design complete in docs/mayor-handoff-design.md. 4 subtasks created for implementation."}
{"id":"gt-vci","title":"Mayor handoff mail template","description":"Add MAYOR_HANDOFF mail template to templates.py.\n\n## Template Function\n\ndef mayor_handoff(\n active_swarms: List[SwarmStatus],\n rig_status: Dict[str, RigStatus],\n pending_escalations: List[Escalation],\n in_flight_decisions: List[Decision],\n recent_actions: List[str],\n delegated_work: List[DelegatedItem],\n user_requests: List[str],\n next_steps: List[str],\n warnings: Optional[str] = None,\n session_duration: Optional[str] = None,\n) -\u003e Message:\n metadata = {\n 'template': 'MAYOR_HANDOFF',\n 'timestamp': datetime.utcnow().isoformat(),\n 'session_duration': session_duration,\n 'active_swarm_count': len(active_swarms),\n 'pending_escalation_count': len(pending_escalations),\n }\n # ... format sections ...\n return Message.create(\n sender='mayor/',\n recipient='mayor/',\n subject='Session Handoff',\n body=body,\n priority='high',\n )\n\n## Metadata Fields\n\n- template: MAYOR_HANDOFF\n- timestamp: ISO format\n- session_duration: Human readable\n- active_swarm_count: Number of active swarms\n- pending_escalation_count: Number of escalations\n\n## Mail Priority\n\nUse priority='high' to ensure handoff is seen on startup.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T20:15:30.26323-08:00","updated_at":"2025-12-15T20:48:59.550689-08:00","dependencies":[{"issue_id":"gt-vci","depends_on_id":"gt-u82","type":"blocks","created_at":"2025-12-15T20:15:39.554108-08:00","created_by":"daemon"}]}
{"id":"gt-zx3","title":"Per-rig beads repo configuration","description":"Add per-rig beads configuration to rig config schema.\n\n## Config Schema\n\nIn each rig's config.json:\n\n```json\n{\n \"version\": 1,\n \"name\": \"wyvern\",\n \"git_url\": \"https://github.com/steveyegge/wyvern\",\n \"beads\": {\n \"repo\": \"local\", // \"local\" | \"\u003cpath\u003e\" | \"\u003cgit-url\u003e\"\n \"root\": null, // Override bd --root (optional)\n \"prefix\": \"wyv\" // Issue prefix for this rig\n }\n}\n```\n\n## Repo Options\n\n| Value | Meaning | Use Case |\n|-------|---------|----------|\n| `\"local\"` | Use project's `.beads/` | Own projects, full commit access |\n| `\"\u003cpath\u003e\"` | Use beads at path | OSS contributions |\n| `\"\u003cgit-url\u003e\"` | Clone and use repo | Team shared beads |\n\n## Environment Injection\n\nWhen spawning polecats, Gas Town sets:\n```bash\nexport BEADS_ROOT=\"\u003cresolved-path\u003e\"\n```\n\n## Resolution Logic\n\n```go\nfunc ResolveBeadsRoot(rigConfig *RigConfig, rigPath string) (string, error) {\n beads := rigConfig.Beads\n switch {\n case beads.Root != \"\":\n return beads.Root, nil\n case beads.Repo == \"local\" || beads.Repo == \"\":\n return filepath.Join(rigPath, \".beads\"), nil\n case strings.HasPrefix(beads.Repo, \"/\"):\n return beads.Repo, nil\n case strings.Contains(beads.Repo, \"://\"):\n return cloneAndResolve(beads.Repo)\n default:\n return filepath.Join(rigPath, beads.Repo), nil\n }\n}\n```\n\n## Backwards Compatibility\n\nIf `beads` section missing, assume `\"repo\": \"local\"`.","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-15T19:47:16.660049-08:00","updated_at":"2025-12-15T20:48:02.122203-08:00","dependencies":[{"issue_id":"gt-zx3","depends_on_id":"gt-l3c","type":"blocks","created_at":"2025-12-15T19:47:35.726502-08:00","created_by":"daemon"}]}


@@ -8,7 +8,7 @@ This is the **Go port** of Gas Town, a multi-agent workspace manager.
- **Issue prefix**: `gt-`
- **Python version**: ~/ai/gastown-py (reference implementation)
- **Design docs**: docs/town-design.md
- **Architecture**: docs/architecture.md
## Development

docs/architecture.md Normal file

@@ -0,0 +1,337 @@
# Gas Town Architecture
Gas Town is a multi-agent workspace manager that coordinates AI coding agents working on software projects. It provides the infrastructure for running swarms of agents, managing their lifecycle, and coordinating their work through mail and issue tracking.
## Core Concepts
### Town
A **Town** is a complete Gas Town installation - the workspace where everything lives. A town contains:
- Town configuration (`config/` directory)
- Mayor's home (`mayor/` directory at town level)
- One or more **Rigs** (managed project repositories)
### Rig
A **Rig** is a managed project repository with its associated agents. Each rig is a git clone of a project that Gas Town manages. Within each rig:
- The project's actual code lives at the rig root
- Agent directories are git-ignored via `.git/info/exclude`
- Each rig has its own Witness, Refinery, and Polecats
- Mayor has a clone in each rig for rig-specific work
### Agents
Gas Town has four agent roles:
| Agent | Scope | Responsibility |
|-------|-------|----------------|
| **Mayor** | Town-wide | Global coordination, swarm dispatch, cross-rig decisions |
| **Witness** | Per-rig | Worker lifecycle, nudging, pre-kill verification, session cycling |
| **Refinery** | Per-rig | Merge queue processing, PR review, integration |
| **Polecat** | Per-rig | Implementation work on assigned issues |
### Mail
Agents communicate via **mail** - JSONL-based inboxes for asynchronous messaging. Each agent has an inbox at `mail/inbox.jsonl`. Mail enables:
- Work assignment (Mayor → Refinery → Polecat)
- Status reporting (Polecat → Witness → Mayor)
- Session handoff (Agent → Self for context cycling)
- Escalation (Witness → Mayor for stuck workers)
### Beads
**Beads** is the issue tracking system. Gas Town agents use beads to:
- Track work items (`bd ready`, `bd list`)
- Create issues for discovered work (`bd create`)
- Claim and complete work (`bd update`, `bd close`)
- Sync state to git (`bd sync`)
Polecats have direct beads write access and file their own issues.
## Directory Structure
### Town Level
```
~/ai/                       # Town root
├── config/                 # Town configuration (VISIBLE, not hidden)
│   ├── town.json           # {"type": "town", "name": "..."}
│   ├── rigs.json           # Registry of managed rigs
│   └── federation.json     # Remote machine config (future)
├── mayor/                  # Mayor's HOME at town level
│   ├── CLAUDE.md           # Mayor role prompting
│   ├── mail/inbox.jsonl    # Mayor's inbox
│   └── state.json          # Mayor state
├── wyvern/                 # A rig (project repository)
└── beads/                  # Another rig
```
### Rig Level
```
wyvern/                     # Rig = clone of project repo
├── .git/
│   └── info/exclude        # Contains: polecats/ refinery/ witness/ mayor/
├── .beads/                 # Beads (if project uses it)
├── [project files]         # Clean project code on main branch
├── polecats/               # Worker clones (gitignored)
│   ├── Nux/                # Each polecat has a full clone
│   ├── Toast/
│   └── Capable/
├── refinery/               # Refinery agent
│   ├── rig/                # Refinery's working clone
│   ├── state.json
│   └── mail/inbox.jsonl
├── witness/                # Witness agent (per-rig pit boss)
│   ├── rig/                # Witness's working clone
│   ├── state.json
│   └── mail/inbox.jsonl
└── mayor/                  # Mayor's presence in this rig
    ├── rig/                # Mayor's rig-specific clone
    └── state.json
```
### Why Decentralized?
Agents live IN rigs rather than in a central location:
- **Locality**: Each agent works in the context of its rig
- **Independence**: Rigs can be added/removed without restructuring
- **Parallelism**: Multiple rigs can have active swarms simultaneously
- **Simplicity**: Agent finds its context by looking at its own directory
## Agent Responsibilities
### Mayor
The Mayor is the global coordinator:
- **Swarm dispatch**: Decides which rigs need swarms, what work to assign
- **Cross-rig coordination**: Routes work between rigs when needed
- **Escalation handling**: Resolves issues Witnesses can't handle
- **Strategic decisions**: Architecture, priorities, integration planning
**NOT Mayor's job**: Per-worker cleanup, session killing, nudging workers
### Witness
The Witness is the per-rig "pit boss":
- **Worker monitoring**: Track polecat health and progress
- **Nudging**: Prompt workers toward completion
- **Pre-kill verification**: Ensure git state is clean before killing sessions
- **Session lifecycle**: Kill sessions, update worker state
- **Self-cycling**: Hand off to fresh session when context fills
- **Escalation**: Report stuck workers to Mayor
**Key principle**: Witness owns ALL per-worker cleanup. Mayor is never involved in routine worker management.
### Refinery
The Refinery manages the merge queue:
- **PR review**: Check polecat work before merging
- **Integration**: Merge completed work to main
- **Conflict resolution**: Handle merge conflicts
- **Quality gate**: Ensure tests pass, code quality maintained
### Polecat
Polecats are the workers that do actual implementation:
- **Issue completion**: Work on assigned beads issues
- **Self-verification**: Run decommission checklist before signaling done
- **Beads access**: Create issues for discovered work, close completed work
- **Clean handoff**: Ensure git state is clean for Witness verification
## Key Workflows
### Swarm Dispatch
```
Mayor                       Refinery                    Polecats
  │                            │                           │
  ├─── [dispatch swarm] ──────►│                           │
  │                            ├─── [assign issues] ──────►│
  │                            │                           │ (work)
  │                            │◄── [PR ready] ────────────┤
  │                            │    (review/merge)         │
  │◄── [swarm complete] ───────┤                           │
```
### Worker Cleanup (Witness-Owned)
```
Polecat                     Witness                      Mayor
  │                            │                           │
  │  (completes work)          │                           │
  ├─── [done signal] ─────────►│                           │
  │                            │  (capture git state)      │
  │                            │  (assess cleanliness)     │
  │◄── [nudge if dirty] ───────┤                           │
  │  (fixes issues)            │                           │
  ├─── [done signal] ─────────►│                           │
  │                            │  (verify clean)           │
  │                            │  (kill session)           │
  │                            │                           │
  │                            │  (if stuck 3x) ──────────►│
  │                            │                           │  (escalation)
```
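The Witness's "assess cleanliness" step amounts to inspecting output captured from the worker's session. A minimal sketch, assuming the check is a pure function over captured `git status` and `git stash list` output (the function name and structure are illustrative; the expected strings are git's actual output):

```go
package main

import (
	"fmt"
	"strings"
)

// isClean sketches the Witness's pre-kill verification: a worker is
// clean when the working tree has nothing to commit and no stashes
// remain. Real verification would also confirm the branch is merged
// and beads are synced.
func isClean(statusOut, stashOut string) bool {
	clean := strings.Contains(statusOut, "working tree clean")
	noStashes := strings.TrimSpace(stashOut) == ""
	return clean && noStashes
}

func main() {
	// Clean worker: safe to kill.
	fmt.Println(isClean("nothing to commit, working tree clean", ""))
	// Dirty worker: uncommitted edits, nudge it back.
	fmt.Println(isClean("Changes not staged for commit:", ""))
	// Dirty worker: leftover stash would be lost on cleanup.
	fmt.Println(isClean("nothing to commit, working tree clean", "stash@{0}: WIP on main"))
}
```

Only when every check passes does the Witness proceed to kill the session; otherwise it nudges the polecat and waits for a fresh done signal.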
### Session Cycling (Mail-to-Self)
When an agent's context fills, it hands off to its next session:
1. **Recognize**: Notice context filling (slow responses, losing track of state)
2. **Capture**: Gather current state (active work, pending decisions, warnings)
3. **Compose**: Write structured handoff note
4. **Send**: Mail handoff to own inbox
5. **Exit**: End session cleanly
6. **Resume**: New session reads handoff, picks up where old session left off
## Key Design Decisions
### 1. Witness Owns Worker Cleanup
**Decision**: Witness handles all per-worker cleanup. Mayor is never involved.
**Rationale**:
- Separation of concerns (Mayor strategic, Witness operational)
- Reduced coordination overhead
- Faster shutdown
- Cleaner escalation path
### 2. Polecats Have Direct Beads Access
**Decision**: Polecats can create, update, and close beads issues directly.
**Rationale**:
- Simplifies architecture (no proxy through Witness)
- Empowers workers to file discovered work
- Faster feedback loop
- Beads v0.30.0+ handles multi-agent conflicts
### 3. Session Cycling via Mail-to-Self
**Decision**: Agents mail handoff notes to themselves when cycling sessions.
**Rationale**:
- Consistent pattern across all agent types
- Timestamped and logged
- Works with existing inbox infrastructure
- Clean separation between sessions
### 4. Decentralized Agent Architecture
**Decision**: Agents live in rigs (`<rig>/witness/rig/`) not centralized (`mayor/rigs/<rig>/`).
**Rationale**:
- Agents work in context of their rig
- Rigs are independent units
- Simpler role detection
- Cleaner directory structure
### 5. Visible Config Directory
**Decision**: Use `config/` not `.gastown/` for town configuration.
**Rationale**: AI models often miss hidden directories. Visible is better.
## Configuration
### town.json
```json
{
  "type": "town",
  "version": 1,
  "name": "stevey-gastown",
  "created_at": "2024-01-15T10:30:00Z"
}
```
### rigs.json
```json
{
  "version": 1,
  "rigs": {
    "wyvern": {
      "git_url": "https://github.com/steveyegge/wyvern",
      "added_at": "2024-01-15T10:30:00Z"
    }
  }
}
```
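Both files map directly onto Go structs. A sketch of how the Go port might decode them (the struct and function names are assumptions, not the port's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TownConfig mirrors config/town.json.
type TownConfig struct {
	Type      string `json:"type"`
	Version   int    `json:"version"`
	Name      string `json:"name"`
	CreatedAt string `json:"created_at"`
}

// RigEntry and RigsConfig mirror config/rigs.json.
type RigEntry struct {
	GitURL  string `json:"git_url"`
	AddedAt string `json:"added_at"`
}

type RigsConfig struct {
	Version int                 `json:"version"`
	Rigs    map[string]RigEntry `json:"rigs"`
}

// parseConfigs decodes both config files from their JSON contents.
func parseConfigs(townJSON, rigsJSON string) (TownConfig, RigsConfig, error) {
	var tc TownConfig
	var rc RigsConfig
	if err := json.Unmarshal([]byte(townJSON), &tc); err != nil {
		return tc, rc, err
	}
	if err := json.Unmarshal([]byte(rigsJSON), &rc); err != nil {
		return tc, rc, err
	}
	return tc, rc, nil
}

func main() {
	town := `{"type":"town","version":1,"name":"stevey-gastown","created_at":"2024-01-15T10:30:00Z"}`
	rigs := `{"version":1,"rigs":{"wyvern":{"git_url":"https://github.com/steveyegge/wyvern","added_at":"2024-01-15T10:30:00Z"}}}`
	tc, rc, err := parseConfigs(town, rigs)
	if err != nil {
		panic(err)
	}
	fmt.Println(tc.Name, rc.Rigs["wyvern"].GitURL)
}
```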
### Per-Rig Beads Config
Each rig can configure where polecats file beads:
```json
{
  "beads": {
    "repo": "local",  // "local" | "/path/to/beads" | "git-url"
    "prefix": "wyv"
  }
}
```
- `"local"`: Use project's `.beads/` (default, for your own projects)
- Path: Use beads at specific location (for OSS contributions)
- Git URL: Clone and use shared team beads
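The three cases above can be sketched as a small resolution function that maps a rig's `beads.repo` setting to the directory polecats actually write to. A sketch under stated assumptions: the function name is illustrative, and cloning a shared git URL is elided:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveBeadsRoot maps a rig's beads.repo setting to a beads directory.
// An absent or "local" setting means the project's own .beads/; a git
// URL would be cloned first (elided here); anything else is taken as an
// explicit path, e.g. a separate beads repo for OSS contributions.
func resolveBeadsRoot(repo, rigPath string) string {
	switch {
	case repo == "" || repo == "local":
		return filepath.Join(rigPath, ".beads")
	case strings.Contains(repo, "://"):
		return "<clone of " + repo + ">" // placeholder: clone step elided
	default:
		return repo
	}
}

func main() {
	fmt.Println(resolveBeadsRoot("local", "/home/u/ai/wyvern"))
	fmt.Println(resolveBeadsRoot("/home/u/ai/my-oss-beads", "/home/u/ai/wyvern"))
}
```

The resolved path would then be exported (e.g. as a beads root override) into each polecat's environment when it is spawned.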
## CLI Commands
### Town Management
```bash
gt install [path] # Install Gas Town at path
gt doctor # Check workspace health
gt doctor --fix # Auto-fix issues
```
### Agent Operations
```bash
gt status # Overall town status
gt rigs # List all rigs
gt polecats <rig> # List polecats in a rig
```
### Communication
```bash
gt inbox # Check inbox
gt send <addr> -s "Subject" -m "Message"
gt inject <polecat> "Message" # Direct injection to session
gt capture <polecat> "<cmd>" # Run command in polecat session
```
### Session Management
```bash
gt spawn --issue <id> # Start polecat on issue
gt kill <polecat> # Kill polecat session
gt wake <polecat> # Mark polecat as active
gt sleep <polecat> # Mark polecat as inactive
```
## Future: Federation
Federation enables work distribution across multiple machines via SSH. Not yet implemented, but the architecture supports:
- Machine registry (local, ssh, gcp)
- Extended addressing: `[machine:]rig/polecat`
- Cross-machine mail routing
- Remote session management
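Extended addressing is mechanical to parse: an optional `machine:` prefix, then `rig/polecat`. A sketch of such a parser (the function name and return shape are assumptions; the real parser may accept more forms):

```go
package main

import (
	"fmt"
	"strings"
)

// parseAddress splits an extended address of the form [machine:]rig/polecat.
// An empty machine means the local town; an empty polecat means the
// address names a rig-level agent.
func parseAddress(addr string) (machine, rig, polecat string) {
	if i := strings.Index(addr, ":"); i >= 0 {
		machine, addr = addr[:i], addr[i+1:]
	}
	if i := strings.Index(addr, "/"); i >= 0 {
		rig, polecat = addr[:i], addr[i+1:]
	} else {
		rig = addr
	}
	return machine, rig, polecat
}

func main() {
	m, r, p := parseAddress("gcp-1:wyvern/Nux")
	fmt.Println(m, r, p) // remote machine, rig, polecat
	m, r, p = parseAddress("wyvern/Toast")
	fmt.Println(m, r, p) // local address: empty machine
}
```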
## Implementation Status
Gas Town is being ported from Python (gastown-py) to Go (gastown). The Go port (GGT) is in development:
- **Epic**: gt-u1j (Port Gas Town to Go)
- **Scaffolding**: gt-u1j.1 (Go scaffolding - blocker for implementation)
- **Management**: gt-f9x (Town & Rig Management: install, doctor, federation)
See beads issues with `bd list --status=open` for current work items.


@@ -1,493 +0,0 @@
# Mayor Session Cycling and Handoff Design
Design for Mayor session management, context cycling, and structured handoff.
**Epic**: gt-u82 (Design: Mayor session cycling and handoff)
## Overview
Mayor coordinates across all rigs and runs for extended periods. Like Witness,
Mayor needs to cycle sessions when context fills, producing structured handoff
notes for the next session.
## Key Differences from Witness
| Aspect | Witness | Mayor |
|--------|---------|-------|
| Scope | Single rig, workers | All rigs, refineries |
| State tracking | Worker status, pending verifications | Active swarms, rig status, escalations |
| Handoff recipient | Self (same rig Witness) | Self (Mayor) |
| Complexity | Medium | Higher (cross-rig coordination) |
| Daemon | Witness daemon respawns | No daemon (manual or cron restart) |
## Design Areas
1. **Session cycling recognition** - When Mayor should cycle
2. **Handoff note format** - Structured state capture
3. **Handoff delivery** - Mail to self
4. **Fresh session startup** - Reading and resuming from handoff
5. **Integration with town commands** - CLI support
---
## 1. Session Cycling Recognition
### When to Cycle
Mayor should cycle when:
- Context is noticeably filling (responses slowing, losing track of state)
- Major phase completed (swarm finished, integration done)
- User requests session end
- Extended idle period with no active work
### Proactive vs Reactive
**Proactive** (preferred):
- Mayor notices context filling and initiates handoff
- Clean state capture while still coherent
**Reactive** (fallback):
- Session times out or crashes
- Less clean, may lose state
### Recognition Cues (for prompting)
```markdown
## Session Cycling
Monitor your context usage throughout the session. Signs you should cycle:
- You've been running for several hours
- You're having trouble remembering earlier conversation context
- You've completed a major phase of work
- Responses are taking longer than usual
- You're about to start a complex new operation
When you notice these signs, proactively initiate handoff rather than
waiting for problems.
```
---
## 2. Handoff Note Format
### Structure
Mayor handoff captures cross-rig state:
```
[HANDOFF_TYPE]: mayor_cycle
[TIMESTAMP]: 2024-01-15T14:30:00Z
[SESSION_DURATION]: 3h 45m
## Active Swarms
<per-rig swarm status>
## Rig Status
<health/state of each rig>
## Pending Escalations
<issues awaiting Mayor decision>
## In-Flight Decisions
<decisions being made, context needed>
## Recent Actions
<last 5-10 significant actions>
## Delegated Work
<work sent to refineries, awaiting response>
## User Requests
<any pending user requests>
## Next Steps
<what the next session should do>
## Warnings/Notes
<anything critical for next session>
```
### Example Handoff Note
```markdown
[HANDOFF_TYPE]: mayor_cycle
[TIMESTAMP]: 2024-01-15T14:30:00Z
[SESSION_DURATION]: 3h 45m
## Active Swarms
### gastown
- Status: Active swarm on auth feature
- Refinery: gastown/refinery coordinating
- Workers: 3 active (Furiosa, Toast, Capable)
- Issues: gt-auth-1, gt-auth-2, gt-auth-3
- Expected completion: Soon (2/3 issues merged)
### beads
- Status: Idle, no active swarm
- Last activity: 2h ago (maintenance work)
## Rig Status
| Rig | Health | Last Contact | Notes |
|-----|--------|--------------|-------|
| gastown | Good | 5min ago | Swarm active |
| beads | Good | 2h ago | Idle |
## Pending Escalations
1. **gastown/Toast stuck** - Witness escalated at 14:15
- Issue: gt-auth-2 has merge conflict
- Awaiting decision: reassign or manual fix?
- Context: Toast tried 3 times, conflict in auth/middleware.go
## In-Flight Decisions
None currently.
## Recent Actions
1. 14:25 - Checked gastown swarm status
2. 14:20 - Received escalation re: Toast
3. 14:00 - Sent status request to beads/refinery
4. 13:30 - Dispatched auth swarm to gastown
5. 13:00 - Session started, read previous handoff
## Delegated Work
- gastown/refinery: Auth feature swarm (dispatched 13:30)
- Expecting completion report when done
## User Requests
- User asked for auth feature implementation (completed dispatch)
- No other pending requests
## Next Steps
1. **Resolve Toast escalation** - Decide on reassign vs manual fix
2. **Monitor gastown swarm** - Should complete soon
3. **Check beads rig** - Been quiet, verify health
## Warnings/Notes
- Toast merge conflict is blocking swarm completion
- Consider waking another polecat if reassignment needed
```
---
## 3. Handoff Delivery
### Mail to Self
Mayor mails handoff to own inbox:
```bash
town mail send mayor/ -s "Session Handoff" -m "<handoff-content>"
```
### Why Mail (not file)?
- Consistent with Witness pattern
- Timestamped and logged
- Works across potential Mayor instances
- Integrates with existing inbox check on startup
### Handoff Template Function
```python
def mayor_handoff(
    active_swarms: List[SwarmStatus],
    rig_status: Dict[str, RigStatus],
    pending_escalations: List[Escalation],
    in_flight_decisions: List[Decision],
    recent_actions: List[str],
    delegated_work: List[DelegatedItem],
    user_requests: List[str],
    next_steps: List[str],
    warnings: Optional[str] = None,
    session_duration: Optional[str] = None,
) -> Message:
    """Create Mayor session handoff note."""
    metadata = {
        "template": "MAYOR_HANDOFF",
        "timestamp": datetime.utcnow().isoformat(),
        "session_duration": session_duration,
        "active_swarm_count": len(active_swarms),
        "pending_escalation_count": len(pending_escalations),
    }
    # ... format sections ...
    return Message.create(
        sender="mayor/",
        recipient="mayor/",
        subject="Session Handoff",
        body=body,
        priority="high",  # Ensure it's seen
    )
```
---
## 4. Fresh Session Startup
### Startup Protocol
When Mayor session starts:
1. **Check for handoff**:
```bash
town inbox | grep "Session Handoff"
```
2. **If handoff exists**:
```bash
# Read most recent handoff
town read <latest-handoff-id>
# Resume from handoff state
# - Address pending escalations first
# - Check on in-flight work
# - Continue with next steps
```
3. **If no handoff** (fresh start):
```bash
# Full system status check
town status
town rigs
bd ready
# Check all rig inboxes for pending items
town inbox
```
### Handoff Processing
```markdown
## On Session Start
1. **Check inbox for handoff**:
```bash
town inbox
```
Look for "Session Handoff" messages.
2. **If handoff found**:
- Read the handoff note
- Process pending escalations (highest priority)
- Check status of noted swarms
- Verify rig health matches notes
- Continue with documented next steps
3. **If no handoff**:
- Do full status check: `town status`
- Check each rig: `town rigs`
- Check inbox for any messages
- Check beads for work: `bd ready`
4. **After processing handoff**:
- Archive or delete the handoff message
- You now own the current state
```
---
## 5. Integration with Town Commands
### New Commands (optional, can be deferred)
```bash
# Generate handoff note interactively
town handoff
# Generate and send in one step
town handoff --send
# Check for handoff on startup
town resume
```
### Implementation
For now, Mayor does this manually in prompting. Later can add CLI support:
```go
// cmd/gt/cmd/handoff.go
var handoffCmd = &cobra.Command{
    Use:   "handoff",
    Short: "Generate session handoff note",
    Run: func(cmd *cobra.Command, args []string) {
        // Gather state
        swarms := gatherActiveSwarms()
        rigs := gatherRigStatus()
        // ... etc

        // Format handoff
        note := formatHandoffNote(swarms, rigs, ...)

        if send {
            // Send to mayor inbox
            mail.Send("mayor/", "Session Handoff", note)
        } else {
            // Print for review
            fmt.Println(note)
        }
    },
}
```
---
## Subtasks
Based on this design, create these implementation subtasks:
### gt-u82.1: Mayor session cycling prompting
Add to Mayor CLAUDE.md:
- When to cycle recognition
- How to compose handoff note
- Handoff format specification
### gt-u82.2: Mayor startup protocol prompting
Add to Mayor CLAUDE.md:
- Check for handoff on start
- Process handoff content
- Fresh start fallback
### gt-u82.3: Mayor handoff mail template
Add to templates.py:
- MAYOR_HANDOFF template
- Parsing utilities
### gt-u82.4: (Optional) town handoff command
CLI support for handoff generation:
- `town handoff` - generate interactively
- `town handoff --send` - generate and mail
- `town resume` - check for and display handoff
---
## Prompting Additions
### Mayor CLAUDE.md - Session Management Section
```markdown
## Session Management
### Recognizing When to Cycle
Monitor your session health. Cycle proactively when:
- You've been running for several hours
- Context feels crowded (losing track of earlier state)
- Major phase completed (good stopping point)
- About to start complex new work
Don't wait for problems - proactive handoff produces cleaner state.
### Creating Handoff Notes
Before ending your session, capture current state:
1. **Gather information**:
```bash
town status # Overall health
town rigs # Each rig's state
town inbox # Pending messages
bd ready # Work items
```
2. **Compose handoff note** with this structure:
```
[HANDOFF_TYPE]: mayor_cycle
[TIMESTAMP]: <current time>
[SESSION_DURATION]: <how long you've been running>
## Active Swarms
<list each rig with active swarm, workers, progress>
## Rig Status
<table of rig health>
## Pending Escalations
<issues needing your decision>
## In-Flight Decisions
<decisions you were making>
## Recent Actions
<last 5-10 things you did>
## Delegated Work
<work sent to refineries>
## User Requests
<any pending user asks>
## Next Steps
<what next session should do>
## Warnings/Notes
<critical info for next session>
```
3. **Send handoff**:
```bash
town mail send mayor/ -s "Session Handoff" -m "<your handoff note>"
```
4. **End session** - next instance will pick up from handoff.
### On Session Start
1. **Check for handoff**:
```bash
town inbox | grep "Session Handoff"
```
2. **If found, read it**:
```bash
town read <msg-id>
```
3. **Process in priority order**:
- Pending escalations (urgent)
- In-flight decisions (context-dependent)
- Check noted swarm status (may have changed)
- Continue with next steps
4. **If no handoff**:
```bash
town status
town rigs
bd ready
town inbox
```
Build your own picture of current state.
### Handoff Best Practices
- **Be specific** - "Toast has merge conflict in auth/middleware.go" not "Toast is stuck"
- **Include context** - Why decisions are pending, what you were thinking
- **Prioritize next steps** - What's most urgent
- **Note time-sensitive items** - Anything that might have changed since handoff
```
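The bracketed `[KEY]: value` lines above are meant to be machine-parseable. A minimal Python sketch of a parser, for illustration only (the function name and regex are assumptions, not part of the design):

```python
import re

# Matches metadata lines like "[HANDOFF_TYPE]: mayor_cycle"
METADATA_RE = re.compile(r"^\[([A-Z_]+)\]:\s*(.*)$")

def parse_handoff_metadata(note: str) -> dict:
    """Extract the [KEY]: value header lines from a handoff note."""
    meta = {}
    for line in note.splitlines():
        m = METADATA_RE.match(line.strip())
        if m:
            meta[m.group(1)] = m.group(2)
    return meta

note = """[HANDOFF_TYPE]: mayor_cycle
[TIMESTAMP]: 2024-01-15T10:30:00Z

## Active Swarms
- gastown: 3 workers
"""
meta = parse_handoff_metadata(note)  # {'HANDOFF_TYPE': 'mayor_cycle', 'TIMESTAMP': '2024-01-15T10:30:00Z'}
```

Section bodies (`## Active Swarms` etc.) stay free-form; only the header lines need structure.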
---
## Implementation Checklist
- [ ] Create subtasks (gt-u82.1 through gt-u82.4)
- [ ] Add session management section to Mayor CLAUDE.md template
- [ ] Add MAYOR_HANDOFF template to templates.py
- [ ] Update startup instructions in Mayor prompting
- [ ] (Optional) Implement town handoff command


@@ -1,448 +0,0 @@
# Polecat Beads Write Access Design
Design for granting polecats direct beads write access.
**Epic**: gt-l3c (Design: Polecat Beads write access)
## Background
Originally, polecats were read-only for beads to prevent multi-agent conflicts.
With Beads v0.30.0's tombstone-based rearchitecture for deletions, we now have
solid multi-agent support even at high concurrent load.
## Benefits
1. **Simplifies architecture** - No need for mail-based issue filing proxy via Witness
2. **Empowers polecats** - Can file discovered work that's out of their purview
3. **Beads handles work-disavowal** - Workers can close issues they didn't start
4. **Faster feedback** - No round-trip through Witness for issue creation
## Complications
For OSS projects where you're not a maintainer:
- Can't commit to the project's `.beads/` directory
- Need to file beads in a separate repo
- Beads supports this via `--root` flag
## Subtask Designs
---
### gt-zx3: Per-Rig Beads Configuration
#### Config Location
Per-rig configuration lives in the rig's config:
**Option A: In rig state.json** (simpler)
```
<rig>/config.json (or state.json)
```
**Option B: In town-level rigs.json** (centralized)
```
config/rigs.json
```
Recommend **Option A** - each rig owns its config, easier to manage.
#### Config Schema
```json
// <rig>/config.json
{
  "version": 1,
  "name": "wyvern",
  "git_url": "https://github.com/steveyegge/wyvern",
  "beads": {
    // Where polecats file beads
    // Options: "local" | "<path>" | "<git-url>"
    "repo": "local",
    // Override bd --root (optional, derived from repo if not set)
    "root": null,
    // Issue prefix for this rig (used by bd create)
    "prefix": "wyv"
  }
}
```
#### Repo Options
| Value | Meaning | Use Case |
|-------|---------|----------|
| `"local"` | Use project's `.beads/` | Own projects, full commit access |
| `"<path>"` | Use beads at path | OSS contributions, external beads |
| `"<git-url>"` | Clone and use repo | Team shared beads |
#### Examples
**Local project (default)**:
```json
{
  "beads": {
    "repo": "local",
    "prefix": "wyv"
  }
}
```
**OSS contribution** (can't commit to project):
```json
{
  "beads": {
    "repo": "/home/user/my-beads/react-contributions",
    "prefix": "react"
  }
}
```
**Team shared beads**:
```json
{
  "beads": {
    "repo": "https://github.com/myteam/shared-beads",
    "prefix": "team"
  }
}
```
#### Environment Variable Injection
When spawning polecats, Gas Town sets:
```bash
export BEADS_ROOT="<resolved-path>"
# Polecats use bd normally; it respects BEADS_ROOT
```
Or pass explicit flag in spawn:
```bash
# Gas Town wraps bd calls internally
bd --root "$BEADS_ROOT" create --title="..."
```
#### Resolution Logic
```go
func ResolveBeadsRoot(rigConfig *RigConfig, rigPath string) (string, error) {
    beads := rigConfig.Beads
    switch {
    case beads.Root != "":
        // Explicit root override
        return beads.Root, nil
    case beads.Repo == "local" || beads.Repo == "":
        // Use project's .beads/
        return filepath.Join(rigPath, ".beads"), nil
    case strings.HasPrefix(beads.Repo, "/") || strings.HasPrefix(beads.Repo, "~"):
        // Absolute path
        return expandPath(beads.Repo), nil
    case strings.Contains(beads.Repo, "://"):
        // Git URL - need to clone
        return cloneAndResolve(beads.Repo)
    default:
        // Relative path from rig
        return filepath.Join(rigPath, beads.Repo), nil
    }
}
```
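For illustration, the same resolution rules can be sketched in Python (the function name is illustrative, and the git-URL clone step is stubbed out):

```python
import os

def resolve_beads_root(repo: str, root: str, rig_path: str) -> str:
    """Mirror of ResolveBeadsRoot above; the git-URL clone case is stubbed."""
    if root:
        return root                              # explicit override wins
    if repo in ("", "local"):
        return os.path.join(rig_path, ".beads")  # project's own beads
    if repo.startswith(("/", "~")):
        return os.path.expanduser(repo)          # absolute or home-relative path
    if "://" in repo:
        raise NotImplementedError("git URL: clone, then resolve")
    return os.path.join(rig_path, repo)          # relative to the rig

resolve_beads_root("local", "", "/town/wyvern")  # → "/town/wyvern/.beads"
```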
---
### gt-e1y: Worker Prompting - Beads Write Access
Add to polecat CLAUDE.md template (AGENTS.md.template):
```markdown
## Beads Access
You have **full beads access** - you can create, update, and close issues.
### Quick Reference
```bash
# View available work
bd ready # Issues ready to work (no blockers)
bd list # All open issues
bd show <id> # Issue details
# Create issues
bd create --title="Fix login bug" --type=bug --priority=2
bd create --title="Add dark mode" --type=feature
# Update issues
bd update <id> --status=in_progress # Claim work
bd close <id> # Mark complete
bd close <id> --reason="Duplicate of <other>"
# Sync (required before merge!)
bd sync # Commit beads changes to git
bd sync --status # Check if sync needed
```
### When to Create Issues
Create beads issues when you discover work that:
- Is outside your current task scope
- Would benefit from tracking
- Should be done by someone else (or future you)
**Good examples**:
```bash
# Found a bug while implementing feature
bd create --title="Race condition in auth middleware" --type=bug --priority=1
# Noticed missing documentation
bd create --title="Document API rate limits" --type=task --priority=3
# Tech debt worth tracking
bd create --title="Refactor legacy payment module" --type=task --priority=4
```
**Don't create issues for**:
- Tiny fixes you can do in 2 minutes (just do them)
- Vague "improvements" with no clear scope
- Work that's already tracked elsewhere
### Issue Lifecycle
```
┌─────────┐      ┌─────────────┐      ┌──────────┐
│  open   │────► │ in_progress │────► │  closed  │
└─────────┘      └─────────────┘      └──────────┘
     │                                      ▲
     └──────────────────────────────────────┘
                (can close directly)
```
You can close issues without claiming them first.
Useful for quick fixes or discovered duplicates.
### Beads Sync Protocol
**CRITICAL**: Always sync beads before merging to main!
```bash
# Before your final merge
bd sync # Commits beads changes
git status # Should show .beads/ changes
git add .beads/
git commit -m "beads: sync"
# Then proceed with merge to main
```
If you forget to sync, your beads changes will be lost when your session ends.
### Your Beads Repo
Your beads are configured for this rig. You don't need to specify --root.
Just use `bd` commands normally.
To check where your beads go:
```bash
bd config show root
```
```
---
### gt-cjb: Witness Updates - Remove Issue Filing Proxy
Update Witness CLAUDE.md to remove proxy responsibilities:
**REMOVE from Witness prompting**:
```markdown
## Issue Filing Proxy (REMOVED)
The following is NO LONGER your responsibility:
- Processing polecat "file issue" mail requests
- Creating issues on behalf of polecats
- Forwarding issue creation requests
Polecats now have direct beads write access and file their own issues.
```
**KEEP in Witness prompting** (from swarm-shutdown-design.md):
- Monitoring polecat progress
- Nudge protocol
- Pre-kill verification
- Session lifecycle management
**UPDATE**: If Witness receives an old-style "please file issue" request:
```markdown
### Legacy Issue Filing Requests
If you receive a mail asking you to file an issue on a polecat's behalf:
1. **Respond with update**:
```bash
town inject <polecat> "UPDATE: You have direct beads access now. Use 'bd create --title=\"...\" --type=...' to file issues yourself."
```
2. **Don't file the issue yourself** - let the polecat learn the new workflow.
```
---
### gt-082: Worker Cleanup - Beads Sync on Shutdown
This integrates with swarm-shutdown-design.md decommission checklist.
**Update to decommission checklist** (addition to gt-sd6):
```markdown
## Decommission Checklist (Updated)
### Pre-Done Verification
```bash
# 1. Git status - must be clean
git status
# Expected: "nothing to commit, working tree clean"
# 2. Stash list - must be empty
git stash list
# Expected: (empty)
# 3. Beads sync - MUST be synced
bd sync --status
# Expected: "Up to date" or "Nothing to sync"
# If not: run 'bd sync' first!
# 4. Beads committed - verify in git
git status
# Expected: .beads/ should NOT show changes
# If it does: git add .beads/ && git commit -m "beads: sync"
# 5. Branch merged to main
git log main --oneline -1
git log HEAD --oneline -1
# Expected: Same commit
```
### Beads Edge Cases
**Uncommitted beads changes**:
```bash
bd sync # Commits to .beads/
git add .beads/
git commit -m "beads: final sync"
```
**Beads sync conflict** (rare):
```bash
# If bd sync fails with conflict:
git fetch origin main
git checkout main -- .beads/
bd sync --force # Re-apply your changes
git add .beads/
git commit -m "beads: resolve sync conflict"
```
```
**Update to Witness pre-kill verification** (addition to gt-f8v):
```markdown
### Beads-Specific Verification
When capturing worker state, also check beads:
```bash
town capture <polecat> "bd sync --status && git status .beads/"
```
**Check for**:
- `bd sync --status` shows "Up to date"
- `git status .beads/` shows no changes
**If beads not synced**:
```
town inject <polecat> "WITNESS CHECK: Beads not synced. Run 'bd sync' then 'git add .beads/ && git commit -m \"beads: sync\"'. Signal done again when complete."
```
```
---
## Config File Examples
### Rig with local beads (default)
```json
// gastown/config.json
{
  "version": 1,
  "name": "gastown",
  "git_url": "https://github.com/steveyegge/gastown",
  "beads": {
    "repo": "local",
    "prefix": "gt"
  }
}
```
### Rig contributing to OSS project
```json
// react/config.json
{
  "version": 1,
  "name": "react",
  "git_url": "https://github.com/facebook/react",
  "beads": {
    "repo": "/home/steve/my-beads/react",
    "prefix": "react"
  }
}
```
### Rig with team shared beads
```json
// internal-app/config.json
{
  "version": 1,
  "name": "internal-app",
  "git_url": "https://github.com/mycompany/internal-app",
  "beads": {
    "repo": "https://github.com/mycompany/team-beads",
    "prefix": "app"
  }
}
```
---
## Migration Notes
### For Existing Rigs
1. Add `beads` section to rig config.json
2. Default to `"repo": "local"` if not specified
3. Update polecat CLAUDE.md templates
4. Remove Witness proxy code
### Backwards Compatibility
- If `beads` section missing, assume `"repo": "local"`
- Old-style "file issue" mail requests get redirect nudge
- No breaking changes for polecats already using bd read commands
---
## Implementation Checklist
- [ ] Add beads config schema to rig config (gt-zx3)
- [ ] Update polecat CLAUDE.md template with bd write access (gt-e1y)
- [ ] Update Witness CLAUDE.md to remove proxy, add redirect (gt-cjb)
- [ ] Update decommission checklist with beads sync (gt-082)
- [ ] Update Witness verification to check beads sync (gt-082)
- [ ] Add BEADS_ROOT environment injection to spawn logic


@@ -1,554 +0,0 @@
# Swarm Shutdown Design
Design for graceful swarm shutdown, worker cleanup, and session cycling.
**Epic**: gt-82y (Design: Swarm shutdown and worker cleanup)
## Key Decisions (from ultrathink)
1. **Pre-kill verification uses model intelligence** - Witness assesses git status output, not framework rules
2. **Witness can request restart** - Mail self handoff notes, exit cleanly when context filling
3. **Mayor NOT involved in per-worker cleanup** - That's Witness's domain
4. **Polecats verify themselves first** - Decommission checklist in prompting, Witness double-checks
## Responsibility Boundaries (gt-gl2)
### Mayor Responsibilities
- Swarm dispatch and strategic planning
- Cross-rig coordination
- Escalation handling (when Witness reports blocked workers)
- Final integration decisions
- **NOT**: Per-worker cleanup, session killing, nudging
### Witness Responsibilities
- Monitor worker health and progress
- Nudge workers toward completion
- Pre-kill verification (capture & assess git status)
- Session lifecycle (kill, restart workers)
- Self session cycling (mail handoff, exit)
- Report blocked workers to Mayor for escalation
- **NOT**: Implementation work, cross-rig coordination
### Polecat Responsibilities
- Complete assigned work
- Self-verify before signaling done (decommission checklist)
- Respond to Witness nudges
- **NOT**: Killing own session, coordinating with other polecats directly
## Subtask Designs
---
### gt-sd6: Enhanced Polecat Decommission Prompting
Add to polecat CLAUDE.md template (AGENTS.md.template):
```markdown
## Decommission Checklist
**CRITICAL**: Before signaling you are done, you MUST complete this checklist.
The Witness will verify each item and bounce you back if anything is dirty.
### Pre-Done Verification
Run these commands and verify ALL are clean:
```bash
# 1. Git status - must be clean (no uncommitted changes)
git status
# Expected: "nothing to commit, working tree clean"
# 2. Stash list - must be empty (no forgotten stashes)
git stash list
# Expected: (empty output)
# 3. Beads sync - must be up to date
bd sync --status
# Expected: "Up to date" or "Nothing to sync"
# 4. Branch merged - your work must be on main
git log main --oneline -1
git log HEAD --oneline -1
# Expected: Same commit (your branch is merged)
```
### If Any Check Fails
- **Uncommitted changes**: Commit them or discard if truly unnecessary
- **Stashes**: Pop and commit, or drop if obsolete
- **Beads out of sync**: Run `bd sync`
- **Branch not merged**: Complete the merge workflow
### Signaling Done
Only after ALL checks pass:
```bash
# Close your issue
bd close <issue-id>
# Final sync
bd sync
# Signal ready for decommission
town mail send <rig>/witness -s "Work Complete" -m "Issue <id> done. Checklist verified."
```
The Witness will capture your git state and verify before killing your session.
If anything is dirty, you'll receive a nudge with specific issues to fix.
```
---
### gt-f8v: Witness Pre-Kill Verification Protocol
Add to Witness CLAUDE.md template:
```markdown
## Pre-Kill Verification Protocol
Before killing any worker session, you MUST verify their workspace is clean.
Use your judgment on the output - don't rely on pattern matching.
### Verification Steps
When a worker signals done:
1. **Capture worker state**:
```bash
# Attach and capture git status
town capture <polecat> "git status && git stash list && git log --oneline -3"
```
2. **Assess the output** (use your judgment):
- Is working tree clean? (no modified/untracked files that matter)
- Is stash list empty? (or only contains intentional stashes)
- Does recent history show their work is committed?
3. **Decision**:
- **CLEAN**: Proceed to kill session
- **DIRTY**: Send nudge with specific issues
### Nudge Templates
**Uncommitted Changes**:
```
town inject <polecat> "WITNESS CHECK: You have uncommitted changes. Please commit or discard: <list files>. Signal done again when clean."
```
**Stash Not Empty**:
```
town inject <polecat> "WITNESS CHECK: You have stashed changes. Please pop and commit, or drop if obsolete: <stash list>. Signal done again when clean."
```
**Work Not Merged**:
```
town inject <polecat> "WITNESS CHECK: Your commits are not on main. Please complete merge workflow. Signal done again when merged."
```
**Multiple Issues**:
```
town inject <polecat> "WITNESS CHECK: Multiple issues found:
1. <issue 1>
2. <issue 2>
Please resolve all and signal done again."
```
### Kill Sequence
Only after verification passes:
```bash
# Log the verification
echo "[$(date)] Verified clean: <polecat>" >> witness/verification.log
# Kill the session
town kill <polecat>
# Update state
town sleep <polecat>
```
### Escalation
If a worker fails verification 3+ times or becomes unresponsive:
```bash
town mail send mayor/ -s "Escalation: <polecat> stuck" -m "Worker <polecat> cannot complete cleanup after 3 attempts. Issues: <list>. Requesting guidance."
```
```
---
### gt-eu9: Witness Session Cycling and Handoff
Add to Witness CLAUDE.md template:
```markdown
## Session Cycling
Your context will fill over long swarms. When you notice significant context usage
or feel you're losing track of state, proactively cycle your session.
### Recognizing When to Cycle
Signs you should cycle:
- You've been running for many hours
- You're losing track of which workers you've checked
- Responses are getting slower or less coherent
- You're about to start a complex operation
### Handoff Protocol
1. **Capture current state**:
```bash
# Check all worker states
town list .
# Check pending verifications
town all beads
# Check your inbox for unprocessed messages
town inbox
```
2. **Compose handoff note**:
```bash
town mail send <rig>/witness -s "Session Handoff" -m "$(cat <<EOF
[HANDOFF_TYPE]: witness_cycle
[TIMESTAMP]: $(date -Iseconds)
[RIG]: <rig>
## Active Workers
<list workers and their current status>
## Pending Verifications
<workers who signaled done but not yet verified>
## Recent Actions
<last 3-5 actions taken>
## Warnings/Notes
<anything the next session should know>
## Next Steps
<what should happen next>
EOF
)"
```
3. **Exit cleanly**:
```bash
# Ensure no pending operations
# Then simply end your session - the daemon will spawn a fresh one
```
### Handoff Note Format
The handoff note uses metadata format for parseability:
```
[HANDOFF_TYPE]: witness_cycle
[TIMESTAMP]: 2024-01-15T10:30:00Z
[RIG]: gastown
## Active Workers
- Furiosa: working on gt-abc1 (spawned 2h ago)
- Toast: idle, awaiting assignment
- Capable: signaled done, pending verification
## Pending Verifications
- Capable: signaled done at 10:25, not yet verified
## Recent Actions
1. Verified and killed Nux (gt-xyz9 complete)
2. Spawned Furiosa on gt-abc1
3. Received done signal from Capable
## Warnings/Notes
- Furiosa has been quiet for 30min, may need nudge
- Integration branch has 3 merged PRs
## Next Steps
1. Verify Capable's workspace
2. Check on Furiosa's progress
3. Report status to Refinery if all workers done
```
### On Fresh Session Start
When you start (or restart after cycling):
1. **Check for handoff**:
```bash
town inbox | grep "Session Handoff"
```
2. **If handoff exists, read it**:
```bash
town read <handoff-msg-id>
```
3. **Resume from handoff state** - pick up pending verifications, check noted workers
4. **If no handoff** - do full status check:
```bash
town list .
town all beads
```
```
---
### gt-gl2: Mayor vs Witness Cleanup Documentation
This goes in the main Gas Town documentation or CLAUDE.md templates.
```markdown
## Cleanup Authority Model
Gas Town uses a clear separation of cleanup responsibilities:
### The Rule
**Witness handles ALL per-worker cleanup. Mayor is never involved.**
### Why This Matters
1. **Separation of concerns**: Mayor thinks strategically, Witness thinks operationally
2. **Reduced coordination overhead**: No back-and-forth for routine cleanup
3. **Faster shutdown**: Witness can kill workers immediately upon verification
4. **Cleaner escalation**: Mayor only hears about problems, not routine operations
### What "Cleanup" Means
Witness handles:
- Verifying worker git state before kill
- Nudging workers to fix dirty state
- Killing worker sessions
- Updating worker state (sleep/wake)
- Logging verification results
Mayor handles:
- Receiving "swarm complete" notifications
- Deciding whether to start new swarms
- Handling escalations (stuck workers after multiple retries)
- Cross-rig coordination if workers need to hand off
### Escalation Path
```
Worker stuck -> Witness nudges (up to 3x) -> Witness escalates to Mayor
-> Mayor decides: force kill, reassign, or human intervention
```
### Anti-Patterns
**DON'T**: Have Mayor ask Witness "is worker X clean?"
**DO**: Have Witness report "swarm complete, all workers verified and killed"
**DON'T**: Have Mayor kill worker sessions directly
**DO**: Have Mayor tell Witness "abort swarm" and let Witness handle cleanup
**DON'T**: Have workers report done to Mayor
**DO**: Have workers report done to Witness, Witness aggregates and reports to Refinery/Mayor
```
---
## Mail Templates (additions to templates.py)
### WORKER_DONE (Worker -> Witness)
```python
def worker_done(
    sender: str,
    rig: str,
    issue_id: str,
    checklist_verified: bool = True,
) -> Message:
    """Worker signals completion to Witness."""
    metadata = {
        "template": "WORKER_DONE",
        "rig": rig,
        "issue": issue_id,
        "checklist_verified": checklist_verified,
    }
    body = f"""Work complete on {issue_id}.
{_format_metadata(metadata)}
Decommission checklist {'verified' if checklist_verified else 'NOT verified - please check'}.
Ready for verification and session termination.
"""
    return Message.create(
        sender=sender,
        recipient=f"{rig}/witness",
        subject=f"Work Complete: {issue_id}",
        body=body,
    )
```
### VERIFICATION_FAILED (Witness -> Worker, via inject)
```python
def verification_failed(
    worker: str,
    issues: List[str],
) -> str:
    """Generate nudge text for failed verification (injected, not mailed)."""
    issues_text = "\n".join(f"  - {issue}" for issue in issues)
    return f"""WITNESS VERIFICATION FAILED
The following issues must be resolved before decommission:
{issues_text}
Please fix these issues and signal done again.
"""
```
### WITNESS_HANDOFF (Witness -> Witness)
```python
def witness_handoff(
    sender: str,
    rig: str,
    active_workers: List[Dict],
    pending_verifications: List[str],
    recent_actions: List[str],
    warnings: Optional[str] = None,
    next_steps: Optional[List[str]] = None,
) -> Message:
    """Witness session handoff note."""
    metadata = {
        "template": "WITNESS_HANDOFF",
        "rig": rig,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "active_worker_count": len(active_workers),
        "pending_verification_count": len(pending_verifications),
    }
    # Format workers
    workers_text = "\n".join(
        f"- {w['name']}: {w['status']}" for w in active_workers
    ) or "None"
    # Format pending
    pending_text = "\n".join(f"- {p}" for p in pending_verifications) or "None"
    # Format actions
    actions_text = "\n".join(f"{i+1}. {a}" for i, a in enumerate(recent_actions[-5:]))
    body = f"""Session handoff for {rig} Witness.
{_format_metadata(metadata)}
## Active Workers
{workers_text}
## Pending Verifications
{pending_text}
## Recent Actions
{actions_text}
## Warnings
{warnings or "None"}
## Next Steps
{chr(10).join(f"- {s}" for s in (next_steps or ["Check pending verifications"]))}
"""
    return Message.create(
        sender=sender,
        recipient=f"{rig}/witness",
        subject="Session Handoff",
        body=body,
    )
```
### ESCALATION (Witness -> Mayor)
```python
def worker_escalation(
    sender: str,
    rig: str,
    worker: str,
    issue_id: str,
    attempts: int,
    unresolved_issues: List[str],
) -> Message:
    """Witness escalates stuck worker to Mayor."""
    metadata = {
        "template": "WORKER_ESCALATION",
        "rig": rig,
        "worker": worker,
        "issue": issue_id,
        "verification_attempts": attempts,
    }
    issues_text = "\n".join(f"  - {i}" for i in unresolved_issues)
    body = f"""Worker {worker} cannot complete cleanup.
{_format_metadata(metadata)}
After {attempts} verification attempts, the following issues remain:
{issues_text}
Requesting guidance:
1. Force kill and abandon changes?
2. Reassign to another worker?
3. Escalate to human?
"""
    return Message.create(
        sender=sender,
        recipient="mayor/",
        subject=f"Escalation: {worker} stuck on {issue_id}",
        body=body,
        priority="high",
    )
```
---
## Implementation Notes
### Verification State Tracking
Witness should track verification attempts in memory (or state.json):
```json
{
  "pending_verifications": {
    "Furiosa": {
      "issue_id": "gt-abc1",
      "signaled_at": "2024-01-15T10:25:00Z",
      "attempts": 1,
      "last_issues": ["uncommitted changes in src/foo.py"]
    }
  }
}
```
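A minimal Python sketch of the attempt-tracking logic, assuming the three-nudge escalation threshold described above (the names `record_failure` and `ESCALATE_AFTER` are illustrative, not part of the design):

```python
from datetime import datetime, timezone

ESCALATE_AFTER = 3  # verification attempts before escalating to Mayor

def record_failure(pending: dict, worker: str, issue_id: str, issues: list) -> str:
    """Record a failed verification and return the Witness's next action."""
    entry = pending.setdefault(worker, {
        "issue_id": issue_id,
        "signaled_at": datetime.now(timezone.utc).isoformat(),
        "attempts": 0,
        "last_issues": [],
    })
    entry["attempts"] += 1
    entry["last_issues"] = issues
    return "escalate" if entry["attempts"] >= ESCALATE_AFTER else "nudge"

pending = {}
action = record_failure(pending, "Furiosa", "gt-abc1",
                        ["uncommitted changes in src/foo.py"])  # first failure → "nudge"
```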
### Nudge vs Mail
- **Nudge (inject)**: For immediate attention - verification failures, progress checks
- **Mail**: For async communication - handoffs, escalations, status reports
### Timeout Handling
If a worker doesn't respond to a nudge within a reasonable time:
1. First: Re-nudge with more urgency
2. Second: Capture their session state for diagnostics
3. Third: Escalate to Mayor
---
## Checklist for Implementation
- [ ] Update AGENTS.md.template with decommission checklist (gt-sd6)
- [ ] Create WITNESS_CLAUDE.md template with verification protocol (gt-f8v)
- [ ] Add session cycling to Witness prompting (gt-eu9)
- [ ] Document cleanup authority in main docs (gt-gl2)
- [ ] Add mail templates to templates.py
- [ ] Add verification state to Witness state.json schema


@@ -1,586 +0,0 @@
# Town Management Design
Design for `gt install`, `gt doctor`, and federation in the Gas Town Go port.
## Overview
A **Town** is a complete Gas Town installation containing:
- Town config (`config/` directory - VISIBLE, not hidden)
- Mayor's home (`mayor/` directory)
- Rigs (managed project clones)
- Per-rig agents (witness/, refinery/, polecats/, mayor/)
- Mail system
- Beads integration
## Architecture Decision: Decentralized Agents (gt-iib)
Each rig contains ALL its agents rather than centralizing at town level:
- Mayor's clone lives at `<rig>/mayor/rig/` (not `mayor/rigs/<rig>/`)
- Witness (pit boss) at `<rig>/witness/rig/` - NEW in GGT
- Refinery at `<rig>/refinery/rig/`
- Polecats at `<rig>/polecats/<name>/`
## Directory Structure
### Town Level
```
~/ai/                        # Town root (e.g., stevey-gastown repo)
├── config/                  # Town config (VISIBLE, not hidden!)
│   ├── town.json            # {"type": "town", "name": "..."}
│   ├── rigs.json            # Registry of managed rigs
│   └── federation.json      # Wasteland config (future)
├── mayor/                   # Mayor's HOME at town level
│   ├── CLAUDE.md            # Mayor role context
│   ├── mail/
│   │   └── inbox.jsonl      # Mayor's inbox
│   └── state.json
├── wyvern/                  # Rig (see below)
└── beads/                   # Another rig
```
### Rig Level (e.g., wyvern)
```
wyvern/                      # Rig = clone of project repo
├── .git/
│   └── info/exclude         # Gas Town adds: polecats/ refinery/ witness/ mayor/
├── .beads/                  # Beads (if project uses it)
├── [project files]          # Clean project code on main branch
├── polecats/                # Worker clones (gitignored via exclude)
│   ├── Nux/                 # git clone of wyvern
│   └── Toast/
├── refinery/                # Refinery agent
│   ├── rig/                 # Refinery's clone
│   ├── state.json
│   └── mail/inbox.jsonl
├── witness/                 # Witness agent (per-rig pit boss) - NEW
│   ├── rig/                 # Witness's clone
│   ├── state.json
│   └── mail/inbox.jsonl
└── mayor/                   # Mayor's presence in this rig
    ├── rig/                 # Mayor's clone for rig-specific edits
    └── state.json
```
### Minimal Rig Invasiveness
Gas Town is a harness OVER projects. When adding a rig:
1. Clone project to town: `gt rig add <git-url>`
2. Add to `.git/info/exclude`: `polecats/`, `refinery/`, `witness/`, `mayor/`
3. Create agent directories
**The project repo is NEVER modified.** No commits needed.
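Step 2 can be sketched as a small idempotent helper (Python sketch; the function name is illustrative):

```python
import os
import tempfile

AGENT_DIRS = ["polecats/", "refinery/", "witness/", "mayor/"]

def add_exclude_entries(rig_path: str) -> list:
    """Append Gas Town agent dirs to .git/info/exclude, skipping existing ones."""
    exclude = os.path.join(rig_path, ".git", "info", "exclude")
    os.makedirs(os.path.dirname(exclude), exist_ok=True)
    existing = set()
    if os.path.exists(exclude):
        with open(exclude) as f:
            existing = {line.strip() for line in f}
    added = [d for d in AGENT_DIRS if d not in existing]
    with open(exclude, "a") as f:
        for d in added:
            f.write(d + "\n")
    return added

# Demo against a throwaway directory
rig = tempfile.mkdtemp()
first = add_exclude_entries(rig)   # all four entries added
second = add_exclude_entries(rig)  # [] - already present
```

Because `.git/info/exclude` is local-only, the exclusions never appear in the project's history.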
## Config Files
### config/town.json
```json
{
  "type": "town",
  "version": 1,
  "name": "stevey-gastown",
  "created_at": "2024-01-15T10:30:00Z"
}
```
### config/rigs.json
```json
{
  "version": 1,
  "rigs": {
    "wyvern": {
      "git_url": "https://github.com/steveyegge/wyvern",
      "added_at": "2024-01-15T10:30:00Z"
    },
    "beads": {
      "git_url": "https://github.com/steveyegge/beads",
      "added_at": "2024-01-15T10:30:00Z"
    }
  }
}
```
### config/federation.json (future)
```json
{
  "version": 1,
  "wasteland": null,
  "peers": []
}
```
### Agent state.json (refinery/, witness/, mayor/)
```json
{
  "version": 1,
  "state": "stopped",
  "awake": false,
  "created_at": "2024-01-15T10:30:00Z",
  "last_started": null,
  "last_stopped": null,
  "last_wake": null,
  "last_sleep": null
}
```
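A minimal sketch of how a harness might update this file when starting an agent (Python; `mark_started` is an illustrative name, not a designed API):

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def mark_started(path: str) -> dict:
    """Flip an agent's state.json to running and stamp last_started."""
    with open(path) as f:
        state = json.load(f)
    state["state"] = "running"
    state["awake"] = True
    state["last_started"] = datetime.now(timezone.utc).isoformat()
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state

# Demo against a throwaway state.json
demo = os.path.join(tempfile.mkdtemp(), "state.json")
with open(demo, "w") as f:
    json.dump({"version": 1, "state": "stopped", "awake": False, "last_started": None}, f)
state = mark_started(demo)
```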
## gt install
### Command
```bash
gt install [path] # Default: current directory
gt install ~/ai
```
### Behavior
1. **Check if already installed**: Look for `mayor/` or `.gastown/` with `type: "workspace"`
2. **Create workspace structure**:
- `mayor/config.json` - workspace identity
- `mayor/state.json` - workspace state
- `mayor/mail/` - mail directory
- `mayor/boss/state.json` - boss state
- `mayor/rigs/` - empty, populated when rigs are added
3. **Create .gitignore** - ignore ephemeral state, polecat clones
4. **Create CLAUDE.md** - Mayor instructions
5. **Initialize git** if not present
### Implementation (Go)
```go
// pkg/workspace/install.go
package workspace
type InstallOptions struct {
    Path  string
    Force bool // Overwrite existing
}

func Install(opts InstallOptions) (*Workspace, error) {
    path := opts.Path
    if path == "" {
        path = "."
    }

    // Resolve and validate
    absPath, err := filepath.Abs(path)
    if err != nil {
        return nil, fmt.Errorf("invalid path: %w", err)
    }

    // Check existing
    if ws, _ := Find(absPath); ws != nil && !opts.Force {
        return nil, ErrAlreadyInstalled
    }

    // Create structure
    mayorDir := filepath.Join(absPath, "mayor")
    if err := os.MkdirAll(mayorDir, 0755); err != nil {
        return nil, err
    }

    // Write config
    config := Config{
        Type:      "workspace",
        Version:   1,
        CreatedAt: time.Now().UTC(),
    }
    if err := writeJSON(filepath.Join(mayorDir, "config.json"), config); err != nil {
        return nil, err
    }

    // ... create state, mail, boss, rigs
    return &Workspace{Path: absPath, Config: config}, nil
}
```
## gt doctor
### Command
```bash
gt doctor # Check workspace health
gt doctor --fix # Auto-fix issues
gt doctor <rig> # Check specific rig
```
### Checks
#### Workspace Level
1. **Workspace exists**: `mayor/` or `.gastown/` directory
2. **Valid config**: `config.json` has `type: "workspace"`
3. **State file**: `state.json` exists and is valid JSON
4. **Mail directory**: `mail/` exists
5. **Boss state**: `boss/state.json` exists
6. **Rigs directory**: `rigs/` exists
#### Per-Rig Checks
1. **Refinery directory**: `<rig>/refinery/` exists
2. **Refinery README**: `refinery/README.md` exists
3. **Refinery state**: `refinery/state.json` exists
4. **Refinery lock**: `refinery/state.json.lock` exists
5. **Refinery clone**: `refinery/rig/` has valid `.git`
6. **Boss rig clone**: `mayor/rigs/<rig>/` has valid `.git`
7. **Gitignore entries**: workspace `.gitignore` has rig patterns
### Output Format
```
$ gt doctor
Workspace: ~/ai
✓ Workspace config valid
✓ Workspace state valid
✓ Mail directory exists
✓ Boss state valid
Rig: gastown
✓ Refinery directory exists
✓ Refinery README exists
✓ Refinery state valid
✗ Missing refinery/rig/ clone
✓ Mayor rig clone exists
✓ Gitignore entries present
Rig: beads
✓ Refinery directory exists
✓ Refinery README exists
✓ Refinery state valid
✓ Refinery clone valid
✓ Mayor rig clone exists
✓ Gitignore entries present
Issues: 1 found, 0 fixed
Run with --fix to auto-repair
```
### Implementation (Go)
```go
// pkg/doctor/doctor.go
package doctor
type CheckResult struct {
    Name    string
    Status  Status // Pass, Fail, Warn
    Message string
    Fixable bool
}

type DoctorOptions struct {
    Fix     bool
    Rig     string // Empty = all rigs
    Verbose bool
}

func Run(ws *workspace.Workspace, opts DoctorOptions) (*Report, error) {
    report := &Report{}

    // Workspace checks
    report.Add(checkWorkspaceConfig(ws))
    report.Add(checkWorkspaceState(ws))
    report.Add(checkMailDir(ws))
    report.Add(checkBossState(ws))
    report.Add(checkRigsDir(ws))

    // Per-rig checks
    rigs, _ := ws.ListRigs()
    for _, rig := range rigs {
        if opts.Rig != "" && rig.Name != opts.Rig {
            continue
        }
        report.AddRig(rig.Name, checkRig(rig, ws, opts.Fix))
    }
    return report, nil
}

func checkRefineryHealth(rig *rig.Rig, fix bool) []CheckResult {
    var results []CheckResult
    refineryDir := filepath.Join(rig.Path, "refinery")

    // Check refinery directory
    if !dirExists(refineryDir) {
        r := CheckResult{Name: "Refinery directory", Status: Fail, Fixable: true}
        if fix {
            if err := os.MkdirAll(refineryDir, 0755); err == nil {
                r.Status = Fixed
            }
        }
        results = append(results, r)
    }

    // Check README.md
    readmePath := filepath.Join(refineryDir, "README.md")
    if !fileExists(readmePath) {
        r := CheckResult{Name: "Refinery README", Status: Fail, Fixable: true}
        if fix {
            if err := writeRefineryReadme(readmePath); err == nil {
                r.Status = Fixed
            }
        }
        results = append(results, r)
    }

    // ... more checks
    return results
}
```
## Workspace Detection
Find the workspace root by walking up from the current directory:
```go
// pkg/workspace/find.go
func Find(startPath string) (*Workspace, error) {
current, _ := filepath.Abs(startPath)
for current != filepath.Dir(current) {
// Check both "mayor" and ".gastown" directories
for _, dirName := range []string{"mayor", ".gastown"} {
configDir := filepath.Join(current, dirName)
configPath := filepath.Join(configDir, "config.json")
if fileExists(configPath) {
var config Config
if err := readJSON(configPath, &config); err != nil {
continue
}
if config.Type == "workspace" {
return &Workspace{
Path: current,
ConfigDir: dirName,
Config: config,
}, nil
}
}
}
current = filepath.Dir(current)
}
return nil, ErrNotFound
}
```
## Minimal Federation Protocol
Federation enables work distribution across multiple machines via SSH.
### Core Abstractions
#### Connection Interface
```go
// pkg/connection/connection.go
type Connection interface {
// Command execution
Execute(ctx context.Context, cmd string, opts ExecOpts) (*Result, error)
// File operations
ReadFile(path string) ([]byte, error)
WriteFile(path string, data []byte) error
AppendFile(path string, data []byte) error
FileExists(path string) (bool, error)
ListDir(path string) ([]string, error)
MkdirAll(path string) error
// Tmux operations
TmuxSend(session string, text string) error
TmuxCapture(session string, lines int) (string, error)
TmuxHasSession(session string) (bool, error)
// Health
IsHealthy() bool
}
```
#### LocalConnection
```go
// pkg/connection/local.go
type LocalConnection struct{}
func (c *LocalConnection) Execute(ctx context.Context, cmd string, opts ExecOpts) (*Result, error) {
	// Run directly via the local shell; no transport involved.
	// (Result fields are sketched; exact shape TBD.)
	out, err := exec.CommandContext(ctx, "sh", "-c", cmd).CombinedOutput()
	return &Result{Output: out}, err
}
func (c *LocalConnection) ReadFile(path string) ([]byte, error) {
return os.ReadFile(path)
}
```
#### SSHConnection
```go
// pkg/connection/ssh.go
type SSHConnection struct {
Host string
User string
KeyPath string
client *ssh.Client
}
func (c *SSHConnection) Execute(ctx context.Context, cmd string, opts ExecOpts) (*Result, error) {
session, err := c.client.NewSession()
if err != nil {
return nil, err
}
defer session.Close()
	// Run the command over the SSH session
	// (Result fields are sketched; exact shape TBD)
	out, err := session.CombinedOutput(cmd)
	return &Result{Output: out}, err
}
```
### Machine Registry
```go
// pkg/federation/registry.go
type Machine struct {
Name string
Type string // "local", "ssh", "gcp"
Workspace string // Remote workspace path
SSHHost string
SSHUser string
SSHKeyPath string
GCPProject string
GCPZone string
GCPInstance string
}
type Registry struct {
machines map[string]*Machine
conns map[string]Connection
}
func (r *Registry) GetConnection(name string) (Connection, error) {
if conn, ok := r.conns[name]; ok {
return conn, nil
}
machine, ok := r.machines[name]
if !ok {
return nil, ErrMachineNotFound
}
var conn Connection
switch machine.Type {
case "local":
conn = &LocalConnection{}
case "ssh", "gcp":
conn = NewSSHConnection(machine.SSHHost, machine.SSHUser, machine.SSHKeyPath)
}
r.conns[name] = conn
return conn, nil
}
```
### Extended Addressing
Polecat addresses support an optional machine prefix:
```
[machine:]rig/polecat
Examples:
beads/happy # Local machine (default)
gcp-west:beads/happy # Remote machine
```
```go
// pkg/identity/address.go
type PolecatAddress struct {
Machine string // Default: "local"
Rig string
Polecat string
}
func ParseAddress(addr string) (*PolecatAddress, error) {
	machine := "local"
	rest := addr
	if parts := strings.SplitN(addr, ":", 2); len(parts) == 2 {
		// machine:rig/polecat
		machine = parts[0]
		rest = parts[1]
	}
	// rig/polecat (machine defaults to "local")
	rigPolecat := strings.SplitN(rest, "/", 2)
	if len(rigPolecat) != 2 || rigPolecat[0] == "" || rigPolecat[1] == "" {
		return nil, fmt.Errorf("invalid polecat address: %q", addr)
	}
	return &PolecatAddress{Machine: machine, Rig: rigPolecat[0], Polecat: rigPolecat[1]}, nil
}
```
### Mail Routing
For federations under ~50 agents, route all mail centrally through the Mayor's machine:
```go
// pkg/mail/router.go
type MailRouter struct {
registry *federation.Registry
}
func (r *MailRouter) Deliver(msg *Message) error {
	addr, err := identity.ParseAddress(msg.Recipient)
	if err != nil {
		return err
	}
conn, err := r.registry.GetConnection(addr.Machine)
if err != nil {
return err
}
mailboxPath := filepath.Join(addr.Rig, addr.Polecat, "mail", "inbox.jsonl")
return conn.AppendFile(mailboxPath, msg.ToJSONL())
}
```
## Implementation Plan
### Subtasks for gt-evp2
1. **Config package** - Config, State types and JSON serialization
2. **Workspace detection** - Find() walking up directory tree
3. **gt install command** - Create workspace structure
4. **Doctor framework** - Check interface, Result types, Report
5. **Workspace doctor checks** - Config, state, mail, boss, rigs
6. **Rig doctor checks** - Refinery health, clones, gitignore
7. **Connection interface** - Define protocol for local/remote ops
8. **LocalConnection** - Local file/exec/tmux operations
9. **Machine registry** - Store and manage machine configs
10. **Extended addressing** - Parse `[machine:]rig/polecat`
### Deferred (Federation Phase 2)
- SSHConnection implementation
- GCPConnection with gcloud integration
- Cross-machine mail routing
- Remote session management
- Worker pool across machines
## CLI Commands Summary
```bash
# Installation
gt install [path] # Install workspace at path
gt install --force # Overwrite existing
# Diagnostics
gt doctor # Check workspace health
gt doctor --fix # Auto-fix issues
gt doctor <rig> # Check specific rig
gt doctor --verbose # Show all checks (not just failures)
# Future (federation)
gt machine list # List machines
gt machine add <name> # Add machine
gt machine status # Check all machine health
```